15,076 results for University of Canterbury Library

Winter is coming: An environmental monitoring and spatiotemporal modelling approach for better understanding of respiratory disease (COPD)
Marek L; Campbell; Kingham; Epton M; Storer M (2017)
Conference Contributions – Other
View record details
Supply Chain Management in New Zealand: Practices, Strategy and Performance.
Donovan J; Castka P; Hanna M (2017)
Reports
Supply chain management is an important part of the New Zealand (NZ) economy, yet relatively little empirical evidence is available about the practices of NZ firms and their impact on supply chain performance. In this study, we aim to fill this gap. We partnered with two associations in NZ, NZPICS (Association of Operations & Supply Chain Professionals) and NZMEA (New Zealand Manufacturers and Exporters Association), and asked their members to provide us with data on their firms and supply chains: their locations, industry sector, customer bases, outsourcing activities, competitive priorities, supply chain management practices (such as information sharing) and the performance of their supply chain. We collected the data through a survey in July–September 2013 and received 145 responses. In order for supply chain networks to compete effectively, they must share information so that they can jointly make decisions and solve problems, and this must be done with an external perspective that includes supply chain partners. The results from this survey found that high-performing companies are using collaborative supply chain practices to improve their supply chain management capabilities in quality, flexibility and delivery. These performance capabilities are seen to be “customer centric” outcomes that reflect an organisation’s objective of appealing to a target customer segment that is not necessarily cost focused or price-sensitive. Apart from this relation between supply chain practices and performance, we also provide descriptive statistics on the current status of supply chain practices in New Zealand.
View record details 
Improved Shortest Path Algorithms for Nearly Acyclic Graphs
Saunders, Shane; Takaoka, Tadao (2002)
Doctoral thesis
Dijkstra’s algorithm solves the single-source shortest path problem on any directed graph in O(m + n log n) time when a Fibonacci heap is used as the frontier set data structure. Here n is the number of vertices and m is the number of edges in the graph. If the graph is nearly acyclic, other algorithms can achieve a time complexity lower than that of Dijkstra’s algorithm. Abuaiadh and Kingston gave a single-source shortest path algorithm for nearly acyclic graphs with O(m + n log t) time complexity, where the new parameter, t, is the number of delete-min operations performed in priority queue manipulation. If the graph is nearly acyclic, then t is expected to be small, and the algorithm outperforms Dijkstra’s algorithm. Takaoka, using a different definition of acyclicity, gave an algorithm with O(m + n log k) time complexity. In this algorithm, the new parameter, k, is the maximum cardinality of the strongly connected components in the graph. The generalised single-source (GSS) problem allows an initial distance to be defined at each vertex in the graph. Decomposing a graph into r trees allows the GSS problem to be solved within O(m + r log r) time. This paper presents a new all-pairs algorithm with a time complexity of O(mn + nr log r), where r is the number of acyclic parts resulting when the graph is decomposed into acyclic parts. The acyclic decomposition used is setwise unique and can be computed in O(mn) time. If the decomposition has been precalculated, then GSS can be solved within O(m + r log r) time whenever edge costs in the graph change. A second new all-pairs algorithm is presented, with O(mn + nr^2) worst-case time complexity, where r is the number of vertices in a precalculated feedback vertex set for the nearly acyclic graph. For certain graphs, these new algorithms offer an improvement on the time complexity of the previous algorithms.
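For reference, the baseline these algorithms improve on can be sketched in a few lines. The sketch below uses Python's binary heap (`heapq`) rather than a Fibonacci heap, so its bound is O(m log n) rather than O(m + n log n); the `heappop` call is the delete-min operation counted by the parameter t above.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths via Dijkstra's algorithm.

    graph: dict mapping vertex -> list of (neighbour, edge_cost) pairs.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    frontier = [(0, source)]              # binary heap stands in for the Fibonacci heap
    while frontier:
        d, u = heapq.heappop(frontier)    # the delete-min operation
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry; skip it
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(frontier, (nd, v))
    return dist
```

On a nearly acyclic graph this still pays the full priority-queue cost, which is exactly the overhead the surveyed algorithms avoid.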
View record details 
Assessment and Validation of the Fire Brigade Intervention Model for use within New Zealand and Performance-Based Fire Engineering
Claridge, Ed (2010)
Doctoral thesis
The Fire Brigade Intervention Model (FBIM) has been in use for over a decade and is used regularly throughout Australia and, to a lesser extent, in New Zealand. Since November 2008, the FBIM has been referenced within the New Zealand compliance document C/AS1 and is accepted by the New Zealand Fire Service (NZFS) as a suitable methodology to demonstrate the performance requirements of the New Zealand Building Code (NZBC) relating to fire brigade operations. However, the FBIM currently has no New Zealand data available to reflect NZFS operations. At present, building designs are using Australian data which is potentially dated and which has only undergone limited validation for New Zealand conditions. An analysis of building consent applications submitted to the NZFS for review has been undertaken, with specific emphasis on quantifying the impact of alternative fire engineering designs and firefighting facilities. This statistical review indicated that up to 67% of all the fire reports reviewed contained insufficient information to demonstrate compliance with the requirements of the NZBC. For new buildings that contained alternative fire engineering designs, the NZFS made recommendations specific to firefighting facilities in 63% of the reports reviewed. A review of international performance-based building codes is provided to compare international performance requirements and the expectations placed on responding firefighters by overseas codes. The NZBC and prescriptive requirements are also discussed for their requirements and implications for firefighting. This project presents data collected from a number of sources, including specifically designed exercises, NZFS incident statistics, incident video footage, and attendance and observation at emergency incidents. Validation of this data has been undertaken against fire ground field experiments and real emergency incidents attended during this research.
An FBIM is provided based on the data presented in this research, using a probabilistic risk-based approach and Monte Carlo analysis methods, considering a high-rise building scenario. This identifies some of the advantages of using probabilistic methods and the FBIM rather than the traditional percentile approach. An FBIM analysis allows the building designer to factor in the effects of firefighters on the building design and to identify areas of the building design that may need further consideration.
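The probabilistic approach described above amounts to sampling each intervention task time from a distribution and summing, rather than adding fixed percentile values. A minimal sketch of that idea follows; the task names, distributions and parameters here are entirely hypothetical, not NZFS data from the thesis.

```python
import random

def simulate_intervention_time(n_trials=10_000, seed=1):
    """Monte Carlo sketch of a fire-brigade intervention timeline.

    Each task time (seconds) is drawn from an illustrative distribution;
    all parameters are hypothetical placeholders. Returns the median and
    95th-percentile total intervention time over n_trials samples.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        call_handling = rng.triangular(30, 120, 60)
        turnout = rng.triangular(60, 180, 90)
        travel = rng.lognormvariate(5.5, 0.3)   # skewed travel time
        setup = rng.triangular(120, 600, 240)
        totals.append(call_handling + turnout + travel + setup)
    totals.sort()
    return totals[n_trials // 2], totals[int(0.95 * n_trials)]
```

A percentile-based FBIM would instead add the 95th percentile of each task, which overstates the tail of the total; the sampled distribution keeps the correlation structure of the whole timeline.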
View record details 
Effectiveness of Automatic Fire Sprinklers in High Ceiling Areas & the Impact of Sprinkler Skipping
Dyer, J. W. (2008)
Doctoral thesis
There is a misconception that sprinklers will offer little value in non-storage areas with high ceilings, such as seating areas in theatres, atria in high-rise buildings, auditoriums, sports arenas, school and university gymnasiums, meeting rooms in convention centres and hotels, exhibition halls, movie and television studios, casinos, concert halls and backstage areas of theatres or auditoriums. This project examines that misconception, with the goal of determining whether sprinklers are effective in these areas. It also examines the issue of sprinkler skipping, which fire testing has shown to be more pronounced in areas with higher ceiling clearances, and the effect that sprinkler skipping has on the effectiveness of sprinklers in such areas.
View record details 
Analysis of FDS Predicted Sprinkler Activation Times with Experiments
Bittern, Adam (2004)
Doctoral thesis
Fire Dynamics Simulator (FDS) is a computational fluid dynamics model used to calculate fire phenomena. The use of computer models such as FDS is becoming more widespread within the fire engineering community. Fire engineers are using computer models to demonstrate compliance with building codes. The computer models are continuously being developed as fire science and computing technology advance. It is essential that these models are validated to the point where the fire engineering community can have confidence in their use. This research report compares FDS-predicted sprinkler activation times with actual sprinkler activation times from a series of chair fires in an 8 x 4 x 2.4 metre gypsum wallboard compartment. The experiments consisted of a series of chair fires in which the mass loss rate and sprinkler activation times were recorded, as well as temperature data. The fire data, compartment details and sprinkler head details were then modelled in FDS. The research shows that the c-factor value used by the sprinkler activation model in FDS has a significant influence: it changed the predicted sprinkler activation times by as much as 50%. FDS predicted sprinkler activation times with varying degrees of success, depending on the sprinkler head type modelled and the position of the fire. The grid size used for the simulation affected the sensitivity of the comparison.
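The c-factor enters FDS's sprinkler activation model as a conduction term in the standard link-temperature equation (Heskestad's RTI model with a conductivity correction). A minimal sketch of that model with simple Euler integration is below; the boundary conditions in the test are illustrative, not the report's experimental data.

```python
import math

def sprinkler_activation_time(t_gas, u_gas, rti=100.0, c_factor=0.5,
                              t_activation=68.0, t_ambient=20.0, dt=0.1):
    """Integrate the sprinkler-link temperature equation used by FDS:

        dT_link/dt = sqrt(u)/RTI * (T_gas - T_link) - C/RTI * (T_link - T_mount)

    t_gas, u_gas: functions of time giving gas temperature (deg C) and gas
    speed (m/s) at the link. Returns activation time in seconds, or None
    if the link never reaches t_activation within 600 s.
    """
    t_link, t = t_ambient, 0.0
    while t < 600.0:
        u = max(u_gas(t), 0.0)
        dT = (math.sqrt(u) / rti) * (t_gas(t) - t_link) \
             - (c_factor / rti) * (t_link - t_ambient)  # conduction loss to mount
        t_link += dT * dt
        t += dt
        if t_link >= t_activation:
            return t
    return None
```

The conduction term shows why the c-factor matters: a larger C bleeds heat from the link to the pipework, delaying activation, which is consistent with the 50% sensitivity reported above.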
View record details 
Earthquake Damage to Passive Fire Protection Systems in Tall Buildings and its Impact on Fire Safety
Sharp, Geoffrey (2003)
Doctoral thesis
New Zealand is a country which is extremely prone to seismic activity. One of the many impacts an earthquake may have is to cause fires. If a fire were to start in a damaged multi-storey structure, the safety of the occupants would undoubtedly be in question. During an earthquake, tall buildings experience large lateral forces, which in turn cause deformations. It is these deformations that can damage various parts of the structure. One very important component of any structure is its passive fire protection; unfortunately, passive protection systems such as gypsum plasterboard walls are very vulnerable to earthquake damage. Discovering the extent to which this reduces the fire safety of buildings is the primary objective of this project. Currently in New Zealand there are no legislative design criteria for the event of fire following an earthquake. Another aim of this research is to gain a further understanding of this gap between the design of tall buildings for the demands of earthquake and the demands of fire. A greater understanding of the risks posed by post-earthquake fire is to be gained by addressing the vulnerability of tall buildings. To determine the level of risk associated with post-earthquake fire, the topic was split into two parts. The first part involved developing models to calculate a factor of safety for burning buildings as a ratio of available and actual escape times. The second part looked at how damage to plasterboard walls protecting escape paths would affect the fire safety of the building. By considering the results of these two parts, an overall assessment of the risk associated with post-earthquake fire was made.
It was found that, for fire following an earthquake in buildings greater than ten storeys in which the sprinklers do not operate, the occupants may be unsafe because the expected escape time is greater than the expected failure time of the fire-rated walls surrounding the escape route.
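The factor of safety described above is a simple ratio, which can be stated directly (an illustrative formulation; the thesis's models compute both times from fire and egress modelling):

```python
def post_earthquake_fire_safety_factor(wall_failure_time, escape_time):
    """Factor of safety for a burning, earthquake-damaged building: the
    time before fire-rated walls protecting the escape route fail,
    divided by the time occupants need to escape (same units, e.g.
    minutes). A value below 1.0 means occupants may still be inside
    when the escape route is lost.
    """
    if escape_time <= 0:
        raise ValueError("escape_time must be positive")
    return wall_failure_time / escape_time
```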
View record details 
Sequential Estimation of Variance in SteadyState Simulation
Schmidt, Adriaan; Pawlikowski, Krzysztof; McNickle, Don (2008)
Doctoral thesis
Today, many studies of communication networks rely on simulation to assess their performance. Steady-state simulation is used to draw conclusions about the long-run behaviour of stable systems. The current methodology for analysis of output data from steady-state simulation focuses almost exclusively on the offline estimation of the steady-state means of the parameters under investigation. Thus, the literature on “variance estimation” mostly deals with the estimation of the variance of the mean, which is needed to construct a confidence interval for the estimated mean values. So far, little work has been done on the estimation of the steady-state variance of simulated processes. In the performance analysis of communication networks, we find applications where the packet delay variation, or jitter, is of interest. In audio or video streaming applications, network packets should take approximately the same time to arrive at their destination; the delay itself is less important (see e.g. Tanenbaum, 2003). To find the jitter of a communication link, the variance of the packet delay times needs to be estimated. The theoretical background of this research includes sequential steady-state simulation, stochastic processes, basic results on the estimation of the steady-state mean, and stochastic properties of the variance. These are briefly summarised in Chapter 2. The aim of this research is the sequential (online) estimation of the steady-state variance, along with the variance of the variance, which is used to construct a confidence interval for the estimate. To this end, we propose and evaluate several variance estimators in Chapter 4.
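A standard building block for such online estimation is Welford's one-pass recursion for the running mean and variance; it processes each observation (e.g. a packet delay) as it arrives, without storing the sample. This is the generic textbook algorithm, not the specific estimators the report proposes in Chapter 4.

```python
class OnlineVariance:
    """Welford's online algorithm: numerically stable one-pass running
    mean and variance, suitable for sequential (online) analysis of
    simulation output data.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0      # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Unbiased sample variance (requires at least two observations)."""
        return self.m2 / (self.n - 1)
```

For jitter estimation, the observations fed to `add` would be per-packet delay times, and `variance()` is the quantity whose own confidence interval the sequential procedure must then control.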
View record details 
Pattern Matching in Compressed Text and Images
Bell, Tim; Adjeroh, Don; Mukherjee, Amar (2001)
Doctoral thesis
View record details
Grid Computing: the Current State and Future Trends (in general and from the University of Canterbury’s perspective)
Roxburgh, Andrew; Pawlikowski, Krzysztof; McNickle, Donald C. (2004)
Doctoral thesis
The term ‘Grid Computing’ is relatively new and means a lot of different things to a lot of different people [19]. It has been used as a buzzword for any new technology to do with computing, especially computer networking, and therefore it has been over-hyped as the solution to just about every computing problem. One of the goals of this paper is to give a clear definition of what Grid Computing is and why it is required. Grid Computing, or Network Computing, is intended to provide computational power that is accessible in the same way that electricity is available from the electricity grid: you simply plug into it and do not need to worry about where the power is coming from or how it got there. The idea of Grid Computing is the same: if more computing power is required, spare cycles on other computers are used. This means that supercomputer-type power is accessible without the huge costs of supercomputing, and that CPU cycles that would otherwise be wasted are put to good use. In fact, one of the major researchers into Grid Computing, Ian Foster from the University of Chicago, says “grids are above all a mechanism for sharing resources” [13]. This means primarily sharing CPU time, but also other things such as data files. Although this description sounds simple, there are a number of problems with creating Grid systems: how do you access computers with different operating systems, how do you find those computers to access, and how do you make sure that you can trust others to run code on your machine? In fact, how do you encourage people to let others run code on their machines in the first place? These questions, and many others, need to be answered for Grid Computing to succeed, and they are also discussed in this paper. Grid Computing is no longer just a concept to be discussed but is something that is actually used every day. There are many Grids around the world, and many researchers investigating how to do Grid Computing better.
These current Grids and some of the current Grid research topics are also discussed in this report. There is also significant potential for Grid Computing to be used at the University of Canterbury. There are several projects which are very well suited to Grid Computing, and it is likely that others would emerge were a Grid system available. The potential for Grid Computing and some of the tools that could be used for this are discussed as well. The layout of this paper is as follows: Section 2 discusses why Grid Computing is needed at all. Section 3 discusses what makes up a Grid system, and Section 4 discusses some current Grids and Grid technologies. Section 5 discusses some of the current issues that need to be addressed in Grid Computing, Section 6 discusses the possibility of Grid Computing at the University of Canterbury, and finally Section 7 concludes.
View record details 
Parallel Sequential Estimation of Quantiles During Steady State Simulation
Eickhoff, M; Pawlikowski, K; McNickle, D (2012)
Doctoral thesis
Simulation results are often limited to mean values, even though this provides very limited information about the analyzed systems' performance. Quantile analysis provides much deeper insights into the performance of the simulated system of interest. A set of quantiles can be used to approximate a cumulative distribution function, providing full information about a given performance characteristic of the simulated system. In this paper, we present two methods for parallel sequential estimation of steady-state quantiles. The quantiles are estimated using simulation output data from concurrently executed independent replications. They are calculated sequentially and online, i.e. during simulation, to ensure that the results are produced to a specified accuracy. The set of quantiles to be estimated can be automatically determined using efficient estimation as a criterion.
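The point-estimation step behind such methods can be sketched very simply: pool the observations from the independent replications and read quantiles off the sorted pooled sample. This is only the core idea, not the paper's sequential accuracy-controlled procedure.

```python
def pooled_quantiles(replications, probs):
    """Point estimates of steady-state quantiles from independent
    replications: pool the (post-warm-up) observations, sort once, and
    take the order statistic nearest each probability level.

    replications: list of lists of observations, one list per replication.
    probs: probability levels in [0, 1).
    """
    pooled = sorted(x for rep in replications for x in rep)
    n = len(pooled)
    return [pooled[min(n - 1, int(p * n))] for p in probs]
```

Estimating several quantiles at once from the same pooled sample is what makes the CDF approximation mentioned above nearly free once the data are sorted.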
View record details 
A Survey and Empirical Comparison of Modern Pseudo-Random Number Generators for Distributed Stochastic Simulations
Schoo, Marcus; Pawlikowski, Krzysztof; McNickle, Donald C. (2005)
Doctoral thesis
Distributed stochastic simulation has become a popular tool for evaluating and testing complex stochastic dynamic systems. However, there is some concern about the credibility of the final results of such simulation studies [11]. One of the important issues which needs to be properly addressed to ensure validity of the final results from any simulation study is the application of an appropriate source of randomness. In the case of distributed stochastic simulation, the quality of the final results is highly dependent on the underlying parallel Pseudo-Random Number Generator (PRNG). Parallel PRNGs (PPRNGs) with the required empirical, analytical and deterministic properties are not trivial to find [9, 23, 10]. However, much research has resulted in several generators which we consider to be of high quality [6, 23, 24, 28, 32]. The effectiveness of simulations, however, depends not only on their accuracy but also on their efficiency, and so simulations are also reliant on the speed and flexibility of these PPRNGs. In this paper, without loss of generality, we examine the required features of modern PPRNGs from the point of view of their possible applications in the Multiple Replications in Parallel (MRIP) paradigm of stochastic simulation. Having surveyed the most recommended generators of this class, we test their implementations in C and C++. The generators considered include: the combined multiple recursive generator MRG32k3a [6, 31], dynamic creation of Mersenne Twisters [28] and the SPRNG Multiplicative Lagged-Fibonacci Generator (MLFG) [24]. For the purpose of comparison, we also test a pLab combined Explicit Inverse Congruential Generator (cEICG) [9, 10]. Their performance is compared from the point of view of their initialization and generation times. Our tests show that initialization is completed most quickly by the MLFG and most slowly by Dynamic Creation.
Generation of random numbers was done most quickly by Dynamic Creation’s Mersenne Twisters and most slowly by the cEICG.
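The shape of such a timing comparison is easy to reproduce. The rough harness below uses Python's built-in Mersenne Twister as a stand-in generator (the paper's measurements were of C/C++ implementations of the generators named above); it separates the two costs the survey compares, initialization (seeding many generator instances, as MRIP requires one stream per replication) and generation.

```python
import random
import time

def time_prng(seed_count=100, draws=100_000):
    """Rough timing harness: returns (seconds spent initializing
    seed_count generator instances, seconds spent drawing `draws`
    numbers from one instance). Illustrative only; wall-clock
    results vary by machine.
    """
    t0 = time.perf_counter()
    gens = [random.Random(s) for s in range(seed_count)]   # initialization cost
    t1 = time.perf_counter()
    g = gens[0]
    for _ in range(draws):                                 # generation cost
        g.random()
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1
```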
View record details 
Collaborative Software Engineering: An Annotated Bibliography
Cook, Carl (2004)
Doctoral thesis
This work is intended to be a useful starting point for those interested in researching the field of Collaborative Software Engineering (CSE). We list current CSE tools, models, and discussion papers, as well as important papers in related fields. As this bibliography is a broad survey of many research areas, it should prove useful for most aspects of software engineering research.
View record details 
Supporting OO Design Heuristics
Churcher, Neville; Frater, Sarah; Huynh, Cong Phuoc; Irwin, Warwick (2006)
Doctoral thesis
Heuristics have long been recognised as a way to tackle problems which are intractable because of their size or complexity. They have been used in software engineering for purposes such as identification of favourable regions of design space. Some heuristics in software engineering can be expressed in high-level abstract terms while others are more specific. Heuristics tend to be couched in terms which make them hard to automate. In our previous work we have developed robust semantic models of software in order to support the computation of metrics and the construction of visualisations which allow their interpretation by developers. In this paper, we show how software engineering heuristics can be supported by a semantic model infrastructure. Examples from our current work illustrate the value of combining the rigour of a semantic model with the human mental models associated with heuristics.
View record details 
Hybrid Random Early Detection Algorithm for Improving End-to-End Congestion Control in TCP/IP Networks
Haider, Aun; Sirisena, Harsha; Pawlikowski, Krzysztof (2005)
Doctoral thesis
The successful operation of the present Internet depends mainly upon TCP/IP, which employs end-to-end congestion control mechanisms built into the end hosts. To further enhance this paradigm of end-to-end control, the Random Early Detection (RED) algorithm has been proposed, which starts to mark or drop packets at the onset of congestion. The paper addresses issues related to the choice of queue length indication parameter for packet marking/dropping decisions in RED-type algorithms under varying traffic conditions. Two modifications to RED are proposed: (i) use of both the instantaneous queue size and its EWMA for packet marking/dropping, and (ii) reducing the effect of the EWMA queue size value when the queue size is less than min_th for a certain number of consecutive packet arrivals. The newly developed Hybrid RED algorithm can effectively improve the performance of TCP/IP-based networks while working in a control loop formed by either dropping or marking packets at congestion epochs. New guidelines are developed for better marking/dropping of packets to achieve a faster response of RED-type algorithms. The Hybrid RED algorithm has been tested using ns-2 simulations, which show better utilization of network bandwidth and a lower packet loss rate.
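One plausible way to combine the two queue-length signals from modification (i) is sketched below: use the larger of the EWMA and the instantaneous queue size when computing the standard RED marking probability, so a sudden burst is reacted to before the slow-moving average catches up. The parameter values are illustrative, and the paper's exact combination rule may differ in detail.

```python
class HybridRED:
    """Sketch of a RED variant that considers both the EWMA and the
    instantaneous queue size for the marking/dropping decision.
    """
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def mark_probability(self, queue_len):
        # standard RED EWMA update of the average queue size
        self.avg += self.weight * (queue_len - self.avg)
        # hybrid rule: react to whichever signal indicates more congestion
        q = max(self.avg, queue_len)
        if q < self.min_th:
            return 0.0
        if q >= self.max_th:
            return 1.0
        # linear ramp between the two thresholds, as in classic RED
        return self.max_p * (q - self.min_th) / (self.max_th - self.min_th)
```

With the EWMA alone, a burst that fills the queue past max_th would initially be marked with probability near zero; the `max` makes the response immediate while keeping RED's averaging behaviour for steady load.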
View record details 
Survey of simulators of Next Generation Networks for studying service availability and resilience
Begg, L; Liu, W; Pawlikowski, K; Perera, S; Sirisena, H (2004)
Doctoral thesis
It is expected that discrete-event simulation will be an important method in our study of service availability and resilience in Next Generation Networks (NGNs). Thus, application of an efficient simulator, which can allow gradual addition of the required elements of NGNs and their functionalities, for evaluating the quality of services the NGNs could offer, is an important part of this project. Such a discrete-event simulator, used in modelling and evaluation studies of service availability and resiliency mechanisms in NGNs, will be further referred to as an NGN simulator. This document presents the results of a survey of the most popular simulators of telecommunication networks, including simulators suggested by other teams participating in this NGN project. A custom-built simulator is also included. The results of this survey should help to select the most appropriate simulation tool(s) needed in research leading to Outcome 4 (Measurements), Outcome 6 (Service Availability Model Definition), Outcome 7 (Network Component Selection) and Outcome 8 (Prototype Development). This report does not include discrete-event simulators/emulators with "software-in-loop", which have been separately surveyed by our colleagues from the University of Waikato. First, we present the criteria used in our evaluation of selected simulators, and then the results of our survey in which these criteria have been applied.
View record details 
Heuristic Rules for Improving Quality of Results from Sequential Stochastic Discrete-Event Simulation
Pawlikowski, K; McNickle, D; Lee, J. S. R. (2012)
Doctoral thesis
Sequential analysis of output data during stochastic discrete-event simulation is a very effective practical way of controlling the statistical errors of final simulation results. Such stochastic sequential simulation evolves along a sequence of consecutive checkpoints at which the accuracy of estimates is assessed, usually conveniently measured by the relative statistical error, defined as the ratio of the half-width of a given confidence interval (at an assumed confidence level) to the point estimate. The simulation is stopped when the error reaches a satisfactorily low value. One of the problems with this simulation scenario is that the inherently random nature of the output data produced during a stochastic simulation can lead to accidental, temporary satisfaction of the stopping rule. Such premature stopping of simulations is one of the causes of inaccurate final results, producing biased point estimates with confidence intervals that do not contain the exact theoretical values. In this paper we consider a number of rules of thumb that can enhance the quality of the results from sequential stochastic simulation despite the fact that some simulations can be prematurely stopped. The effectiveness of these rules of thumb is quantitatively assessed on the basis of experimental results obtained from fully automated simulations aimed at estimation of steady-state mean values. Keywords: coverage of confidence intervals, sequential stopping rules, statistical errors of results, stochastic discrete-event simulation
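One such rule of thumb can be sketched directly: instead of stopping the first time the relative statistical error dips below the threshold, require it to stay below the threshold at several consecutive checkpoints. The sketch below implements the relative-error definition given above; the specific rule and parameter values are an illustration, not necessarily those evaluated in the paper.

```python
import math

def relative_error(data, z=1.96):
    """Relative statistical error as defined above: the half-width of a
    normal-approximation confidence interval divided by the point estimate."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return half_width / abs(mean)

def stop_sequential(stream, threshold=0.05, consecutive=3, checkpoint=100):
    """Guard against premature stopping: stop only after the relative
    error is below `threshold` at `consecutive` checkpoints in a row.
    Returns the number of observations consumed.
    """
    data, hits = [], 0
    for x in stream:
        data.append(x)
        if len(data) % checkpoint == 0:
            hits = hits + 1 if relative_error(data) < threshold else 0
            if hits >= consecutive:
                return len(data)
    return len(data)
```

A single lucky dip of the error below the threshold no longer terminates the run, which is exactly the failure mode the abstract describes.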
View record details 
ASPIRE: Functional Specification and Architectural Design
Mitrovic, Antonija; Martin, Brent; Suraweera, Pramuditha; Zakharov, Konstantin; Milik, Nancy; Holland, Jay (2005)
Doctoral thesis
This document reports the work done during the initial two months of the ASPIRE project, funded by the eLearning Collaborative Development Fund grant 502. In this project, we will develop a Web-enabled authoring system called ASPIRE for building intelligent learning agents for use in e-learning courses. ASPIRE will support the process of developing intelligent educational agents by automating the tasks involved, thus making it possible for tertiary teachers with little computer background to develop systems for their courses. In addition, we will develop several intelligent agents using the authoring system, so that we can evaluate its effectiveness and efficiency. The resulting e-learning courses will overcome the deficiencies of existing distance learning courses and support deep learning. The proposed project will dramatically extend the capability of the tertiary education system in the area of e-learning. In this first report on the ASPIRE project, we start by presenting the background for the project, and then describe our previous work. Section 1.2 presents the basic features of constraint-based tutors, while Section 1.3 presents WETAS, a prototype of a tutoring shell. ASPIRE will be based on these foundations. The first project milestone is the architecture of ASPIRE, which is discussed in Section 2. We first present the overall architecture of ASPIRE, and then turn to details of ASPIRE-Tutor, the tutoring server, followed by a similar discussion of ASPIRE-Author, the authoring server. Section 3 presents the data model and discusses individual classes. We then present the functionality of the system in terms of user stories in Section 4. Section 5 presents the results of the second project milestone – designing the knowledge representation language used to generate domain models. The third milestone, the Session Manager, is presented in the last section.
View record details 
Image Coding Using Orthogonal Basis Functions
Hunt, Oliver (2004)
Doctoral thesis
The transform properties of several orthogonal basis functions are analysed in detail in this report, and their performance is compared using a set of grayscale test images containing both natural and artificial scenes. Well-defined image quality measures are used to determine the types of images that are most suitable for compression with a given basis function. The particular transforms examined are the Discrete Cosine Transform, the Discrete Tchebichef Transform, the Walsh-Hadamard Transform and the Haar Transform. We have found that the Discrete Cosine Transform and Discrete Tchebichef Transform provide the greatest energy compactness for images containing natural scenes. For images with significant inter-pixel variations, the Discrete Tchebichef Transform and Haar Transform provide the best performance. The Walsh-Hadamard Transform proved to be significantly less effective than either the Discrete Cosine or Discrete Tchebichef Transforms.
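Energy compactness, the property compared across transforms above, is easy to demonstrate with a naive one-dimensional DCT: for a smooth (natural-scene-like) signal, almost all of the energy lands in the first few coefficients. This is a from-scratch O(N^2) sketch, not an optimised implementation.

```python
import math

def dct_ii(x):
    """Naive orthonormal DCT-II, the transform underlying JPEG-style coding."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def energy_compactness(coeffs, keep):
    """Fraction of total signal energy captured by the first `keep`
    transform coefficients (total energy is preserved by orthonormality)."""
    total = sum(c * c for c in coeffs)
    return sum(c * c for c in coeffs[:keep]) / total
```

For a linear ramp, two coefficients already capture over 99% of the energy, which is why coarse quantization of high-frequency coefficients costs so little visual quality for natural scenes.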
View record details 
Investigating the Effectiveness of Problem Templates on Learning in Intelligent Tutoring Systems
Mathews, Moffat (2006)
Doctoral thesis
Deliberate practice within a coached environment is required for skill acquisition and mastery. Intelligent Tutoring Systems (ITSs) provide such an environment. A goal in ITS development is to find means to maximise effective learning; this provides the motivation for the project presented here. This paper proposes the notion of problem templates. These mental constructs extend the idea of memory templates, and allow experts in a domain to store vast amounts of domain-specific information that is easily accessible when faced with a problem. This research aims to examine the validity of such a construct and investigate its role with regard to effective learning within ITSs. After extensive background research, an evaluation study was performed at the University of Canterbury. Physical representations of problem templates were formed in Structured Query Language (SQL). These were used to model students, select problems, and provide customised feedback in the experimental version of SQL-Tutor, an Intelligent Tutoring System. The control group used the original version of SQL-Tutor, where pedagogical (problem selection and feedback) and modelling decisions were based on constraints. Preliminary results show that such a construct could exist; furthermore, it could be used to help students attain high levels of expertise within a domain. Students using the template-based ITS showed high levels of learning within short periods of time. The author suggests further evaluation studies to investigate the extent and detail of its effect on learning.
View record details