
  • Risk Management of Risk Under the Basel Accord: A Bayesian Approach to Forecasting Value-at-Risk of VIX Futures

    Casarin, R.; Chang, C.L.; Jiménez-Martín, J.; McAleer, M.; Pérez-Amaral, T. (2011)

    Discussion / Working Papers
    University of Canterbury Library

    RePEc Working Papers Series: No. 26/2011

  • On-line networks, social capital and social integration: a case study of on-line communities in Malaysia.

    Wan Jaafar, Wan Munira (2011)

    Doctoral thesis
    University of Canterbury Library

    In 1996, Malaysia developed a national ICT policy intended to establish on-line community networks amongst all citizens as part of the agenda to prepare the nation to become a mainstream knowledge-based society and economy. In a country that has historically experienced uneasy inter-ethnic relationships, this research explores whether on-line social networking affects the forms of social capital and social integration found amongst diverse on-line ethnic communities (Malay, Chinese and Indian) in Malaysia. Six on-line communities were selected as case studies and the research was carried out in two stages. The first stage involved interviewing three different groups of participants: on-line community administrators, Government representatives and the general public; the second stage was a web-based survey of on-line participants. The findings suggest that the six selected on-line communities show great potential for enhancing social networks and social capital across members of different ethnicities. However, these gains are not significant enough to create social integration across all ethnic communities. Instead, three different trends of bonding and bridging social capital emerged across the six selected on-line communities. The first trend shows bridging social capital throughout both on-line and off-line activities in MalaysiaMAYA.com (social networking site), SARA (residential-based) and FamilyPlace.com (parenting and children). The second trend indicates that bridging networks were limited to on-line communication, as seen in both residentially-based communities (USJ Subang Jaya and PJNet). In contrast, VirtualFriends.net (social networking site) demonstrates only bonding social capital, developed in both on-line and off-line social networking. Considering these diverse patterns, it is argued that transferring bridging social capital from an on-line medium to an off-line medium is challenging. Factors of cultural capital such as language use and cultural and religious observances are highlighted as significant in shaping community networking patterns. Overall, ethnic integration in the context of on-line communities in Malaysia remains, at best, a challenge for the formation of on-line/off-line social capital.

  • Student 'belief effects' in remedial reading.

    Kirk, Judy A. (2001)

    Doctoral thesis
    University of Canterbury Library

    This study investigated the word-recognition difficulties, the strategies used for word recognition, and the self-beliefs about reading ability and reading behaviour of six severely reading-disabled Year Nine and Year Ten adolescents in a New Zealand coeducational secondary school. Each student was given a year-long, individualised, one-on-one reading programme, which taught phonological processing skills, letter-sound knowledge, and the strategies to apply, and to monitor the application of, that letter-sound knowledge. The programme also encouraged the students to adopt or maintain very positive self-beliefs about their ability to decipher words and about the effectiveness of applying the strategies they were being taught consistently, persistently and with the flexibility to change if their initial attempts were unsuccessful. Reading-disabled adolescents who experience continual failure are said to come to believe that they do not have the ability to succeed and do not have control over their progress. As a result they do not believe that with effort they can achieve. They become passive learners with a range of avoidance behaviours: they become learned helpless. As a consequence they fail to generalise the skills, knowledge and strategies they possess to new tasks. When they entered the programme the participating students had difficulty deciphering most words of two or more syllables. They used incomplete and inaccurate letter information both in their attempts to decipher unfamiliar words and when deciphering one- and two-syllable, high-frequency words that they had read correctly on previous occasions. In addition, each had difficulty integrating contextual meaning with letter information as they read. The study has shown that each student had their own particular pattern of beliefs about their ability to read and the reading strategies they used. Some students held a mastery pattern of beliefs. They made accelerated progress of up to three age-equivalent years in word recognition in the year. They were very optimistic about their ability to read and would tackle text that was, for them, very difficult to decipher. They were consistent and persistent in applying the strategies. Those students who made the most progress learned to be flexible and to change their strategy use if they were initially unsuccessful. The students who held maladaptive patterns of beliefs made progress of only one age-equivalent year or less. The learned-helpless students increased their beliefs in the effectiveness of the programme teaching as the year progressed, but they formed and changed their beliefs about their ability to decipher as a result of their classroom experiences. When they changed their beliefs about their ability, they changed their reading behaviours in line with the programme teaching, because they believed in its effectiveness: they became more consistent and persistent in their use of the strategies they were being taught. One student with a maladaptive pattern of beliefs was not learned helpless but instead held too high a belief in the effectiveness of his reading strategies, which led to a dysfunctional pattern of repeatedly reapplying them. The study concluded, first, that the severe reading problems of the participating students resulted from their difficulties with using accurate and complete letter-sound information and with integrating this information with contextual meaning to decipher words. These students were capable of using strategies successfully. Whether each student's achievement gains were accelerated or more limited depended on their self-beliefs about their reading ability and their strategy use. Second, the study concluded that it is effective to teach a comprehensive word-recognition programme which includes letter-sound information and the strategies to apply that knowledge, and which encourages the students to hold positive self-beliefs about their ability to decipher words and about their strategy use. It is important that such a programme runs for sufficient time to allow changes in ability beliefs and beliefs about strategy use, time for these changes in beliefs to result in changes in strategy use, and time for the changes in strategy use to result in changes in rates of achievement. It is suggested that good liaison between the classroom teacher and the remedial teacher, encouraging students to believe they have control over their learning, and using stimulating reading material can hasten changes in ability beliefs and motivation to read.

  • The distribution of instructional leadership in eLearning clusters: an ecological perspective

    Stevens, Kerry Maxwell (2011)

    Masters thesis
    University of Canterbury Library

    This study explores educational leadership within and across two of NZ’s eLearning clusters. Two complementary perspectives of educational leadership are used to frame the investigation: instructional leadership and distributed leadership. The research was conducted approximately nine months after the cessation of a two-year Ministry subsidy for the employment of 12 ePrincipals, and at a time when Ultrafast Broadband was imminent for nearly all NZ schools. The literature review explores aspects of two areas related to eLearning leadership: conventional educational leadership in ‘bricks-and-mortar’ schooling contexts, and eLearning/eTeaching in virtual schooling contexts. Data was gathered from semi-structured interviews with twelve school-based research participants (ePrincipals, eTeachers, Site Supervisors and Principals) across two of NZ’s eLearning clusters, and with four National Officials with responsibilities for wider forms of eLearning. The findings are presented in a manner that attempts to capture directly the research participants’ voices, while still maintaining confidentiality and anonymity. The findings are discussed using an ecological perspective of eLearning as the unifying framework to explore leadership across nested and interacting layers, from the micro-level of an eLearning class to the macro-level of NZ’s system for secondary education. The major findings from the study indicate that educational leadership in eLearning clusters is complex, relies heavily on goodwill and collaboration, and occurs in a challenging environment. Within an eLearning cluster, the leadership of eLearning/eTeaching is distributed primarily across the ePrincipal, eTeachers and Site Supervisors, who each assume complementary leadership roles. A raft of recommendations, across all ecosystem levels of eLearning, is proposed for leaders to consider when initiating change to strengthen their practices and policies with respect to enhancing eLearning and eTeaching.

  • Decomposition of Rayleigh fading dispersive channels

    Baas, Nicholas J. (2001)

    Doctoral thesis
    University of Canterbury Library

    This thesis identifies, develops and applies methods for the decomposition of fading dispersive channels. Such channels arise in wireless communication as a result of multipath and the relative motion of the transmitter, scatterers and receiver. The decompositions considered are the f-power series and Karhunen-Loève (KL) expansions. For the KL expansion, generalisations to rapid time variation are possible, with the separate options of single-spread and double-spread decomposition. The single-spread decomposition involves a model of the instantaneous channel transfer function with time variation supported by sample-spaced coefficients. The double-spread decomposition employs a model of each received pulse and requires symbol-spaced coefficients. The decompositions are applied to pulse shaping, channel modelling for equalisation and the determination of performance limits for linear modulation over fading dispersive channels. The results on pulse shaping show that, with moderate bandwidth expansion and appropriate design, it is possible to significantly lower the complexity of a mobile receiver. The approach suggests a way to move complexity and power consumption away from the mobile unit and into the base station. The effects of diversity on performance are investigated by assuming a single pulse from a linear modulation format. This removes the need to consider intersymbol interference and allows conclusions about the impact of fading and dispersion on the probability of error and the average mutual information.
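
    For orientation, the Karhunen-Loève expansion referred to here has the following generic form (notation assumed for illustration; the thesis defines its own symbols). A zero-mean random channel process x(t) on [0, T] is written as

        x(t) = \sum_k x_k \varphi_k(t), \qquad \int_0^T R(t,s)\,\varphi_k(s)\,ds = \lambda_k \varphi_k(t),

    so the basis functions are eigenfunctions of the channel autocovariance R(t,s) and the coefficients are uncorrelated, E[x_j x_k^*] = \lambda_k \delta_{jk}. That decorrelation is what makes the expansion convenient for deriving error probabilities and mutual information over fading dispersive channels.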

  • Performance of code division multiple access on multipath channels - an exact analysis

    Cowl, David J. (1994)

    Doctoral thesis
    University of Canterbury Library

    This thesis presents an exact analysis approach to the investigation of the average probability of error performance for a code division multiple access (CDMA) system operating over multipath fading channels. The system in question has K users transmitting direct sequence spread spectrum signals over the same bandwidth. The receiver employs a selection diversity algorithm to select the path with the strongest desired signal at any given time. Performance results are presented for three classes of channel model, those being the single path non-fading channel, the single path fading channel and the multiple path fading channel. The performance is also presented for a system where ideal power control is applied. The channel model for the single path fading channel is a single path whose gain has a Rayleigh distribution. The channel model for the multiple path fading channel is a tapped delay line model, where each tap is equally spaced in time, and has a gain with a Rayleigh distribution. The average gain for each tap is specified by an average delay profile. The exact analysis approach involves finding the probability density functions (pdf's) for the per-user multi-user interference and the inter-symbol interference from the desired user. These probability density functions are derived from the pdf's of each contributing factor to the interference, including the empirical cross-correlation function pdf. The characteristic functions of the per-user multi-user interference and the inter-symbol interference are found from the pdf's, and the characteristic function of the total interference is given by the product of the inter-symbol interference characteristic function and the K - 1 multi-user interference characteristic functions. The average probability of error is calculated using the characteristic function of the total interference. Conclusions on the performance of the selection diversity algorithm and the power control algorithm are drawn, based on the results.
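
    The final step of the exact analysis can be summarised in one line (symbols assumed for illustration, not taken from the thesis). If \Phi_I(\omega) is the characteristic function of the inter-symbol interference and \Phi_{M,k}(\omega) that of the interference from the k-th of the K - 1 other users, then, by independence of the contributions,

        \Phi_{\mathrm{total}}(\omega) = \Phi_I(\omega) \prod_{k=1}^{K-1} \Phi_{M,k}(\omega),

    and the average probability of error follows by numerically inverting \Phi_{\mathrm{total}} over the decision statistic, rather than by invoking a Gaussian approximation to the interference.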

  • Modelling of power system transformers in the complex conjugate harmonic space

    Daza, Enrique Acha (1988)

    Doctoral thesis
    University of Canterbury Library

    Magnetizing harmonics in power systems have received limited attention. The general belief is that they do not reach harmful levels in interconnected networks. Moreover, the modelling of non-linearities is not a straightforward procedure, and so there has been little motivation to develop appropriate methodologies that allow a thorough investigation to take place. In this thesis the problem of magnetizing harmonics in power systems is investigated. The results obtained show that, contrary to expectations, magnetizing currents can give rise to considerable harmonic distortion in the voltage waveform of power networks operating under loaded conditions. The method adopted in this research linearizes each magnetic non-linearity around a base operating point. The linearization takes place in the complex-conjugate harmonic space, and the individual linearized equations may be interpreted as harmonic Norton equivalents. These equations combine easily with each other and with the transfer admittances representing the linear part of the network. The overall process may be seen as a linearization of the entire network and can also be interpreted as a multi-nodal, polyphase harmonic Norton equivalent. The problem is non-linear and the harmonic solution is reached by an iterative process. A re-linearization of the network takes place at each iterative step, and so the solution is found through a Newton-type procedure. Several iterative strategies are tested, including unified and sequential solutions with either single or multi-evaluated Jacobians. A hitherto neglected problem which also receives attention is the harmonic modelling of non-homogeneous transmission lines. A novel approach to the modelling of the frequency-dependent part of the transmission line is also presented. The equations proposed are shown to be the fastest to date and yet maintain a high degree of accuracy.
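
    Schematically, the linearization step can be written as follows (generic notation, assumed for illustration). Let I = f(V) be a magnetizing branch characteristic expressed in the complex-conjugate harmonic space, with I and V vectors of harmonic phasors. A first-order expansion about a base operating point (V_b, I_b) gives

        I \approx I_b + J(V - V_b) = J\,V + I_N, \qquad J = \left.\frac{\partial f}{\partial V}\right|_{V_b}, \quad I_N = I_b - J\,V_b,

    which is exactly a harmonic Norton equivalent: an admittance matrix J in parallel with a harmonic current source I_N. Re-evaluating J as the voltage estimate changes at each iteration yields the Newton-type procedure described above.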

  • Improved congestion control for packet switched data networks and the Internet

    Haider, Aun (2004)

    Doctoral thesis
    University of Canterbury Library

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilisation of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also helps in updating the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked fine to date, but networks have evolved and the traffic volume has increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services or offering different qualities of service to different users. Any real congestion episode due to demand greater than the available bandwidth, or congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions to congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can be in the form of lost packets, changes in round-trip time and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter approach depends on changes in the round-trip time, as in TCP Vegas. The protocols employing the second approach are still in their infancy, as they cannot co-exist safely with protocols employing the first approach, whereas TCP Reno and its variants, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, thus inherently leading to wastage of time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system, with a response time of one round-trip time, congestion information being delivered to the end hosts to reduce data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control: to adapt the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalised optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1) proportionally fair algorithm and the generalised algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work done in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and adjusts the average value during periods of low traffic. This algorithm improves the link utilisation and packet loss rate as compared to basic RED. We further propose a control-theory-based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) control algorithms for AQM. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed and their performance is compared with that of existing algorithms. Later we transform the RED and PI algorithms into their adaptive versions using the well-known square-root-of-p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
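
    For readers unfamiliar with RED, the sketch below shows the classic marking decision that the Hybrid and Auto-tuning variants in this thesis build on. It is a minimal illustration following the original RED formulation; the parameter values and the omitted idle-period handling are assumptions, and the thesis's own algorithms differ as described above.

        import random

        class RedQueue:
            """Classic RED gateway: mark/drop probabilistically as the EWMA
            of the queue size moves between two thresholds."""

            def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, wq=0.002):
                self.min_th, self.max_th = min_th, max_th
                self.max_p, self.wq = max_p, wq
                self.avg = 0.0    # exponentially weighted moving average queue size
                self.count = -1   # packets since the last mark/drop

            def on_arrival(self, q_now):
                """Return True if the arriving packet should be marked/dropped."""
                self.avg = (1 - self.wq) * self.avg + self.wq * q_now
                if self.avg < self.min_th:
                    self.count = -1
                    return False
                if self.avg >= self.max_th:
                    self.count = 0
                    return True
                # Between thresholds: the probability ramps linearly with avg,
                # and the count term spreads marks evenly between drops.
                self.count += 1
                pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                pa = pb / max(1e-9, 1.0 - self.count * pb)
                if random.random() < min(1.0, max(0.0, pa)):
                    self.count = 0
                    return True
                return False

    The Hybrid RED described above additionally consults the instantaneous queue size q_now in the marking decision and adjusts avg downwards during low-traffic periods; the Auto-tuning variant adjusts RED's parameters so that avg tracks a reference queue length.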

  • Coding and equalisation for fixed-access wireless systems

    Holdsworth, Katharine Ormond (2000)

    Doctoral thesis
    University of Canterbury Library

    This thesis considers the design of block-coded signalling formats employing spectrally efficient modulation schemes. They are intended for high-integrity, fixed-access wireless systems on line-of-sight microwave radio channels. Multidimensional multilevel block coded modulations employing quadrature amplitude modulation are considered. An approximation to their error performance is described and compared to simulation results. This approximation is shown to be a very good estimate at moderate to high signal-to-noise ratio. The effect of parallel transitions is considered and the trade-off between distance and the error coefficient is explored. The advantages of soft- or hard-decision decoding of each component code are discussed. A simple approach to combined decoding and equalisation of multilevel block coded modulation is also developed. This approach is shown to have better performance than conventional independent equalisation and decoding. The proposed structure uses a simple iterative scheme, based on decision feedback, to decode and equalise multilevel block coded modulations. System performance is evaluated via computer simulation. It is shown that the combined decoding and equalisation scheme gives a performance gain of up to 1 dB at a bit error rate of 10⁻⁴ over conventional, concatenated equalisation and decoding.
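
    As background to the decision-feedback structure mentioned above, the sketch below shows a bare-bones decision feedback equaliser: a feedforward filter on received samples plus a feedback filter on past symbol decisions. It is a generic illustration under assumed, already-designed tap vectors, not the combined decoder-equaliser of this thesis, which additionally feeds decoded decisions back across the multilevel code structure.

        import numpy as np

        def dfe(received, ff, fb, constellation):
            """Decision-feedback equalisation of a sampled symbol stream.
            ff: feedforward taps, fb: feedback taps (both assumed given)."""
            ff = np.asarray(ff, dtype=complex)
            fb = np.asarray(fb, dtype=complex)
            fwd = np.zeros(len(ff), dtype=complex)   # received-sample window
            past = np.zeros(len(fb), dtype=complex)  # past symbol decisions
            decisions = []
            for r in received:
                fwd = np.roll(fwd, 1)
                fwd[0] = r
                z = ff @ fwd - fb @ past                          # cancel post-cursor ISI
                d = min(constellation, key=lambda s: abs(s - z))  # nearest-symbol slicer
                decisions.append(d)
                past = np.roll(past, 1)
                past[0] = d
            return decisions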

  • Echo cancellation on communication circuits

    Kelly, Mervyn William (1979)

    Doctoral thesis
    University of Canterbury Library

    Echoes present on telephone circuits are a significant problem on long circuits where the round-trip delay is greater than 100 ms. This is particularly so on satellite circuits, where the round-trip delay is of the order of 500 ms for each satellite hop. An echo canceller has been designed for New Zealand conditions which is less complex and, therefore, should be less expensive to produce than current overseas models. Although particularly applicable to New Zealand conditions, the techniques used have wider application. The general problems of echo canceller design are considered and some specific proposals are made. The two cancellers described here are non-adaptive and are based on a digital transversal filter. The latter canceller employs Random Access Memory (RAM), uses pseudo-random noise for system identification, and compands the speech samples and filter coefficients in a true logarithmic format, using multiplexing techniques to share bulky and costly components.
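
    The core cancellation operation is simple enough to state in a few lines. The sketch below is a generic transversal-filter echo canceller under an assumed, already-identified echo path; the thesis's hardware design, its pseudo-random-noise identification procedure and its logarithmic companding of samples and coefficients are all additional to this.

        import numpy as np

        def cancel_echo(far_end, near_end, h):
            """Subtract a transversal-filter echo estimate from the near-end
            signal. h is the identified echo-path impulse response; in a
            non-adaptive canceller such as this thesis's, h is measured once
            (e.g. by injecting pseudo-random noise and correlating)."""
            echo_estimate = np.convolve(far_end, h)[:len(near_end)]
            return near_end - echo_estimate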

  • Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Östring, Sven Andrew Mark (2001)

    Doctoral thesis
    University of Canterbury Library

    Communication network architectures are in the process of being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic which the algorithms are directly controlling with the Quality of Service (QoS) requirements of other classes of traffic which will be encountered in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources within the network. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both examples aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of other traffic which is currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the particular characteristic of network traffic to remain bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that occur within the traffic. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm, called PERC (Predictive Explicit Rate Control), is proposed in this thesis which is targeted at the ABR service in ATM networks. By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delay. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when these algorithms are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic. This has excellent performance compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. Research work described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithms with regard to the Hurst parameter. In contrast, PERC is insensitive to estimates of the mean, using the sample mean estimator, and to estimates of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated. An alternative robust design approach to the standard maximum entropy approach is proposed in this thesis, which uses the maximum likelihood function to develop the predictive rate controller. The likelihood function defines the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown, through simulations, that the resulting system performance is close to the optimal performance obtained by maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that use only short-term correlations perform nearly as well as algorithms that utilise long-term correlations. It is noted that predictors based on LRD still out-perform ones which use short-term correlations, but that there is potential simplification in the design of predictors, since traffic predictability can be achieved using short-term correlations. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms has been considered from a number of perspectives. A number of different research avenues lead on from this work, and these are outlined.
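
    The building block underneath PERC (not PERC itself, whose design is given in the thesis) is a linear minimum mean-square-error predictor whose weights come from the LRD correlation structure. A minimal sketch, assuming fractional Gaussian noise as the traffic model with Hurst parameter H:

        import numpy as np

        def fgn_autocov(k, hurst, var=1.0):
            """Autocovariance of fractional Gaussian noise at integer lag k."""
            k = np.abs(np.asarray(k, dtype=float))
            return 0.5 * var * ((k + 1) ** (2 * hurst)
                                - 2 * k ** (2 * hurst)
                                + np.abs(k - 1) ** (2 * hurst))

        def predictor_weights(hurst, p):
            """Weights of the best linear one-step-ahead predictor that uses
            the last p samples (normal equations R w = r)."""
            lags = np.arange(p)
            R = fgn_autocov(lags[:, None] - lags[None, :], hurst)
            r = fgn_autocov(np.arange(1, p + 1), hurst)
            return np.linalg.solve(R, r)

        # Example: with H = 0.8 the weights decay slowly, so samples far in
        # the past still contribute: the predictability that LRD provides.
        w = predictor_weights(0.8, 16)
        # Prediction: x_hat[t+1] = w @ (x[t], x[t-1], ..., x[t-15])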

  • Turbo codes: convergence phenomena & non-binary constructions

    Reid, Andrew Carey (2002)

    Doctoral thesis
    University of Canterbury Library

    The introduction of turbo codes in 1993 provided a code structure that could approach Shannon-limit performance whilst remaining practically decodable. Much subsequent work has focused on this remarkable structure, attempting to explain its performance and to extend or modify it. This thesis builds on this research, providing insights into the convergence behaviour of the iterative decoder for turbo codes and examining the potential of turbo codes constructed from non-binary component codes. The first chapter of this thesis gives a brief history of coding theory, providing context for the work. Chapter two explains in detail both the turbo encoding and decoding structures considered. Chapter three presents new work on convergence phenomena observed in the iterative decoding process. These results emphasise the dynamic nature of the decoder and allow both a stopping criterion and an ARQ scheme to be proposed. Chapters four and five present the work on non-binary turbo codes. First the problem of choosing good component codes is discussed and an achievability bound on the dominant parameter affecting their performance is derived. Searches for good component codes over a number of small rings are then conducted, and simulation results presented. The new results, and suggestions for further work, are summarised in the conclusion in Chapter six.

  • Site investigations for residential development on the Port Hills, Christchurch.

    McDowell, Barry John (1989)

    Masters thesis
    University of Canterbury Library

    Three site investigations for residential development on the Port Hills gave a chance to document remedial measures in volcanic bedrock (McCormacks Bay) and cut-and-fill operations in loess (Westmorland), and to carry out detailed logging and index testing, as well as strength testing in loess (Westmorland and Coleridge Tce). A design-as-you-go approach was adopted for remedial measures in blast-damaged volcanic bedrock at McCormacks Bay Quarry Subdivision because of potential difficulties in obtaining detailed sub-surface information. Remedial measures included: (a) removal of loose blocks, (b) reinforced concrete buttressing, (c) a gabion basket retaining wall, and (d) a vegetation programme. Engineering geological mapping and face logging are important for delineating and subdividing rock and soil units, as well as active and inactive areas of erosion and slope instability. Geotechnical testing programmes, remedial measures and earthworks should only proceed after completion and interpretation of engineering geological plans, sections and face logs. Index tests carried out on loess from Westmorland and Coleridge Tce included: (a) grainsize distribution, (b) Atterberg limits, (c) in situ dry density and moisture content, (d) pinhole erosion, and (e) the crumb test for clay dispersion. Grainsize distribution and Atterberg limits are important tests for identifying a material as loess, but show little variation within loess. Dry density, pinhole erosion and detailed field descriptions from a fresh face allow for the division of in situ loess into layers that represent primary airfall and reworked loess, as well as modification by soil/fragipan-forming processes. Total strength parameters (c, φ) were obtained for loess by triaxial testing (UU test) of 35 mm diameter tube samples. The maximum strength measured was c = 178 kPa, φ = 30° (w = 8.5%), with a minimum of c = 0 kPa, φ = 30° (w = 19%). A comparison of field density tests on loess fill showed that physical tests (tube samples, balloon densometer, sand replacement) are directly comparable, while results from a nuclear densometer require simple correction factors to be comparable with physical tests.
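
    For context, the total-stress parameters (c, φ) quoted above parameterise the standard Mohr-Coulomb failure criterion (general soil mechanics, not specific to this thesis):

        \tau_f = c + \sigma_n \tan\varphi

    so with c = 178 kPa and φ = 30°, for example, a normal stress of 100 kPa gives a shear strength of about 178 + 100 × tan 30° ≈ 236 kPa.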

  • On concatenated single parity check codes and bit interleaved coded modulation.

    Tee, James Seng Khien (2001)

    Doctoral thesis
    University of Canterbury Library

    In recent years, the invention of Turbo codes has spurred much interest in the coding community. Turbo codes are capable of approaching channel capacity closely at a decoding complexity much lower than previously thought possible. Although decoding complexity is relatively low, Turbo codes are still too complex to implement in many practical systems. This work is focused on low-complexity channel coding schemes with Turbo-like performance. The issue of complexity is tackled by using single parity check (SPC) codes, arguably the simplest codes known. The SPC codes are used as component codes in multiple parallel and multiple serial concatenated structures to achieve high performance. An elegant technique for improving error performance by increasing the dimensionality of the code without changing the block length and code rate is presented. For high bandwidth efficiency applications, concatenated SPC codes are combined with 16-QAM Bit Interleaved Coded Modulation (BICM) to achieve excellent performance. Analytical and simulation results show that concatenated SPC codes are capable of achieving Turbo-like performance at roughly one-tenth the complexity of a 16-state Turbo code. A simple yet accurate generalised bounding method is derived for BICM systems employing large signal constellations. This bound works well over a wide range of SNRs for common signal constellations in the independent Rayleigh fading channel. Moreover, the bounding method is independent of the type and code rate of the channel coding scheme. In addition to the primary aim of the research, an improved decoder structure for serially concatenated codes has been designed, and a sub-optimal, soft-in-soft-out iterative technique for decoding systematic binary algebraic block codes has been developed.
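
    As a concrete picture of how little machinery an SPC component code needs, the sketch below encodes a two-dimensional SPC array: information bits arranged in a square with an even-parity bit on every row and column. This is an assumed toy arrangement for illustration; the concatenated structures in this thesis interpose interleavers between dimensions and decode iteratively with soft information.

        import numpy as np

        def spc_encode_2d(bits, k):
            """Encode k*k information bits with single-parity checks on every
            row and column. Code rate is k^2 / (k^2 + 2k)."""
            a = np.asarray(bits, dtype=int).reshape(k, k)
            row_parity = a.sum(axis=1) % 2   # one check bit per row
            col_parity = a.sum(axis=0) % 2   # one check bit per column
            return a, row_parity, col_parity

        # Example: 16 information bits, 8 parity bits, rate 2/3.
        info = np.random.default_rng(0).integers(0, 2, 16)
        a, rp, cp = spc_encode_2d(info, 4)

    Each individual check is just an XOR, which is consistent with the order-of-magnitude complexity saving over a 16-state Turbo code quoted above.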

  • Energy management engineering: a predictive energy management system incorporating an adaptive neural network for the direct heating of domestic and industrial fluid mediums.

    Wezenberg, Herman (2000)

    Doctoral thesis
    University of Canterbury Library

    The objective of this research project is to improve the control and provide a more cost-efficient operation in the direct heating of stored domestic or industrial fluid mediums, such to be achieved by means of an intelligent automated energy management system. For the residential customer this system concept applies to the hot water supply as stored in the familiar hot water cylinder; for the industrial or commercial customer the scope is considerably greater, with larger quantities and varieties of fluid mediums. Both areas can obtain significant financial savings with improved energy management. Both consumers and power supply and distribution companies will benefit from increased utilisation of cheaper 'off-peak' electricity, reducing costs and spreading the system load demand. The project has focussed on domestic energy management with a definite view to the wider field of industrial applications. Domestic energy control methodology and equipment have not significantly altered for decades. However, computer hardware and software have since flourished to unprecedented proportions and become relatively cheap and versatile; these factors pave the way for the application of computer technology in this area of great potential. The technology allows the implementation of a 'hot water energy management system', which makes a forecast of the hot water demand for the next 24 hours and proceeds to provide this demand in the most efficient manner possible. In the near future the system, known as FEMS (Fluid Energy Management System), will be able to take advantage of, and in fact will promote the use of, a retail 'dynamic spot price tariff'. FEMS is a combination of hardware and software developed to replace the existing cylinder thermostat, take care of the necessary data acquisition and control the cylinder's total energy instead of its (single-point) temperature. Besides heating cost reduction, this provides greater accuracy, a degree of flexibility, improved feedback, legionella inhibition, and a diagnostic capability. To the domestic consumer the latter three items are of greatest relevance. The crux of the system lies in its predictive ability. Having explored the more conventional alternatives, a suitable solution was found in the utilisation of Elman recurrent neural networks, which focus on the temporal characteristics of the hot water demand time series and are able to adapt to changing environments, coping with the presence of any non-linearity and noise in the data. Prior to developing FEMS, a study was made of the basic fluid behaviour in medium- and high-pressure domestic hot water cylinders, an area not well covered to date and of interest to engineers and manufacturers alike. For this step, data acquisition equipment and software were purpose-built. The control software and equipment were combined into a fully automated test system with minimal operator input, allowing a large amount of data to be gathered over a period measured in months. A similar system was subsequently used to collect actual hot water demand data from a residential family, and in fact forms the basis for FEMS. Finally, an enhanced version of FEMS is discussed and it is shown how the system is able to output multiple predictions and utilise varying tariff rates.
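
    For readers unfamiliar with the architecture, the sketch below is a minimal Elman network forward pass: a hidden layer whose previous activations are fed back in as context, which is what lets the network track the temporal structure of the hot-water-demand series. Layer sizes, initialisation and the absent training loop are assumptions for illustration; the FEMS network itself is specified in the thesis.

        import numpy as np

        class ElmanNet:
            """Minimal Elman recurrent network (forward pass only)."""

            def __init__(self, n_in, n_hidden, n_out, seed=0):
                rng = np.random.default_rng(seed)
                self.Wx = rng.normal(0.0, 0.1, (n_hidden, n_in))
                self.Wh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context weights
                self.Wo = rng.normal(0.0, 0.1, (n_out, n_hidden))
                self.h = np.zeros(n_hidden)  # context units hold the previous state

            def step(self, x):
                """Consume one input vector, return one output vector."""
                self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
                return self.Wo @ self.h

        # Example (hypothetical data): feed recent demand samples through the
        # net; iterating on its own outputs would extend this to a 24-hour forecast.
        net = ElmanNet(n_in=1, n_hidden=8, n_out=1)
        for sample in [0.2, 0.0, 0.1, 0.7, 0.9, 0.3]:
            forecast = net.step(np.array([sample]))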

  • Adolescent Methylone Exposure and its Effects on Behavioural Development in Adulthood

    Daniel, Jollee Jaye (2011)

    Masters thesis
    University of Canterbury Library

    Originally developed as an anti-depressant and later available as a ‘party-pill’ in New Zealand, methylone is currently classed as an illegal drug, owing to findings that its chemical structure is similar to that of Ecstasy (MDMA). Methylone is a relatively new drug on which little research has been conducted. Consequently, no known study has investigated the long-term effects on behavioural development arising from exposure during adolescence. The present thesis therefore aimed to identify long-term effects of chronic adolescent exposure to methylone on adult anxiety-like behaviours. This was achieved by using 80 rats (40 male, 40 female) and exposing them to either a methylone or a saline treatment for ten consecutive days. Two treatment age groups (early versus late adolescence) were examined and, to ensure adequate comparisons could be made, two control groups were utilised. All rats were tested during adulthood in four specifically selected anxiety-measure tests: the open field, preference for the light side of a light-dark box, acoustic startle, and responsiveness to the novel arm of a Y-maze. The results suggested methylone-exposed rats displayed more anxiolytic behaviours than saline-treated rats. In the open field, methylone-exposed rats exhibited less ambulation than controls; those treated in early adolescence defecated more, while rats treated in late adolescence occupied the corners of the apparatus more, exhibiting higher anxiety-like behaviours. Exploratory behaviours in the Y-maze were decreased in methylone-treated rats, and those exposed in early adolescence entered the novel arm less often. However, acoustic startle results suggested methylone-exposed rats were less anxious, as evidenced by a lower startle amplitude than controls. Overall, the results suggested differences in anxiety-like behaviours between methylone-exposed rats and controls. It did not appear that exposure to methylone in early adolescence resulted in vastly different anxiety-like behaviours compared with exposure beginning in late adolescence.

  • The Ohakuri pyroclastic deposits and the evolution of the Rotorua-Ohakuri volcanotectonic depression.

    Gravley, Darren McClurg (2004)

    Doctoral thesis
    University of Canterbury Library

    The caldera-forming Ohakuri pyroclastic deposits (~100 km³ magma) were erupted at ~240 ka from the newly defined Ohakuri caldera, which is located within the central Taupo Volcanic Zone (TVZ) of New Zealand. The Ohakuri pyroclastic deposits are remarkable for their widespread lateral and vertical lithofacies variation, which is attributed to phreatomagmatic eruption dynamics and a variable depositional environment. The Ohakuri pyroclastic deposits overlie three precursor airfall units erupted from a source within what was to become the Ohakuri caldera. The third of these fall units, which directly underlies the Ohakuri deposits, is a plinian-style deposit (unit 3) that is interbedded with the distal Mamaku ignimbrite (>145 km³ magma) from the Rotorua caldera, ~25 km north of the Ohakuri caldera. It is thus inferred that these two major eruptions must have overlapped, such that the Ohakuri pyroclastic deposits and Mamaku ignimbrite were erupted at most weeks to months apart. The complex Ohakuri deposits, previously documented in part as sediments, are here described in terms of pyroclastic lithofacies that are grouped into five geographically distinct lithofacies associations. Each Ohakuri lithofacies association thus represents a distinctive style of deposition in an eruption that was highly variable in time and space. One lithofacies association consists of giant dune bedforms with wavelengths up to 42 m that are characterised by an anomalously high fraction of fine ash. Two other lithofacies associations are related to the interaction between primary pyroclastic density currents and a wet depositional environment that triggered several episodes of secondary hydroeruptions. These hydroeruptions and their deposits provide evidence for time breaks in the Ohakuri eruption sequence that could not otherwise be discerned from the thick accumulations of structureless ignimbrite. Geochemically, the Ohakuri pumice compositions range from silicic type 1 and 2 compositions to a dacitic type 3 composition. The distribution of the three pumice types, matched with the distribution of Ohakuri lithofacies associations, reveals an eruption sequence that can be divided into two main events. The Ohakuri and Mamaku eruptions, together with the syn-volcanic subsidence of the central Kapenga area (>100 km²), formed what is defined here as the Rotorua-Ohakuri volcanotectonic depression. With respect to the central Kapenga area, paleogeographic reconstruction from fieldwork and age data shows that >250 m of vertical displacement occurred on its western margin (the Horohoro Fault scarp) in one large 'superfault' event. The subsidence of this region was induced by lateral withdrawal of magma, via a NE-SW trending conduit system, which was then erupted from the Ohakuri caldera.

  • Kinematics of the Paparoa Metamorphic Core Complex, West Coast, South Island, New Zealand.

    Schulte, Daniel (2011)

    Masters thesis
    University of Canterbury Library

    The Paparoa Metamorphic Core Complex developed in the Mid-Cretaceous as continental extension conditioned the crust for the eventual breakup of the Gondwana Pacific margin, which separated Australia and New Zealand. It has two detachment systems: the top-NE-displacing Ohika Detachment at the northern end of the complex and the top-SW-displacing Pike Detachment at the southern end. The structure is rather unusual for core complexes worldwide, which are commonly characterised by a single detachment system. Few suggestions for the kinematics of the core complex development have been made so far. In this study, structural, micrographic and fission-track analyses were applied to investigate the bivergent character and to constrain the kinematics of the core complex. The new results, combined with reinterpretations of previous workers’ observations, reveal a detailed sequence of the core complex exhumation and the subsequent development. Knowledge of the influence and the timing of the two respective detachments is critical for understanding the structural evolution of the core complex. The syntectonic Buckland Granite plays a key role in determining the importance of the two detachment systems. Structural evidence shows that the Pike Detachment is responsible for most of the exhumation, while the Ohika Detachment is a mere complexity. In contrast to earlier opinions, the southwestern normal fault system predates the northeastern one. The Buckland Pluton records the ceasing pervasive influence of the Pike Detachment, while activity on the Ohika Detachment affected the surface about ~8 Ma later. Most fission-track ages are not related to the core complex stage, but reflect the younger Late Cretaceous history. They show post-core-complex burial and renewed exhumation in two phases, which are regionally linked to the development of the adjacent Paparoa Basin and the Paparoa Coal Measures to the southwest and, in a larger context, to the inception of seafloor spreading in the Tasman Sea.

  • The evolution of Maroa Volcanic Centre, Taupo Volcanic Zone, New Zealand

    Leonard, Graham S. (2003)

    Doctoral thesis
    University of Canterbury Library

    Maroa Volcanic Centre (Maroa) is located within the older Whakamaru caldera, central Taupo Volcanic Zone, New Zealand. Dome lavas make up the majority of the Maroa volume, with the large Maroa West and East Complexes (MWC and MEC, respectively) erupted mostly over a short 29 kyr period starting at 251 ± 17 ka. The five mappable Maroa pyroclastic deposits are discussed in detail. The Korotai (283 ± 11 ka), Atiamuri (229 ± 12 ka), and Pukeahua (~229-196 ka) pyroclastics are all ≤1 km³ and erupted from (a) northern Maroa, (b) a vent below Mandarin Dome and (c) Pukeahua Dome Complex vents, respectively. The Putauaki (272 ± 10 ka) and Orakonui (256 ± 12 ka) pyroclastics total ~4 km³ from a petrologically and geographically very similar central Maroa source. The ~220 ka Mokai pyroclastics outcrop partly within Maroa but their source remains unclear, whereas the ~240 ka Ohakuri pyroclastics appear to have come from a caldera just north of Maroa. The ages of the Mamaku, Ohakuri and Mokai pyroclastics are equivocal. The Mamaku and Ohakuri pyroclastics appear to be older (~240 ka) than the age previously accepted for the Mamaku pyroclastics. Maroa lavas are all plagioclase-orthopyroxene bearing, commonly with lesser quartz. Hornblende ± biotite are sometimes present and their presence is correlated with geochemical variation. All Maroa deposits are rhyolites (apart from two high-silica dacite analyses) and are peraluminous and calcic. They all have the trace element signatures of arc-related rocks typical of TVZ deposits. Maroa deposits fall geochemically into three magma types based on Rb and Sr content: M (Rb 80-123 ppm, Sr 65-88 ppm), T (Rb 80-113 ppm, Sr 100-175 ppm) and N (Rb 120-150 ppm, Sr 35-100 ppm). The geochemical distinction of these types is also seen in the concentrations of most other elements. Based on the spatial, chronological and petrological similarities of the MWC/MEC and Pukeahua eastern magma associations (termed (1) and (2)), a further four magma associations are determined ((3) through (6)). These six associations account for almost all Maroa deposits. Two end-member models are proposed for the sources of each of the Maroa magma associations: (a) a single relatively shallow magma source feeding spatially clustered eruptions, and (b) a deeper source feeding multiple shallower offshoots over a wider area. Sources for the Maroa magma associations probably lie on a continuum between these two model end members. The distinction between the Maroa and Taupo Volcanic Centres is somewhat arbitrary and is best considered to be the easting directly north of Ben Lomond, north of which most volcanism is older than 100 ka and of M and N type, and south of which most volcanism is younger than 100 ka and of T type. The remaining boundaries (north to include Ngautuku, west to include Mokauteure and east to include Whakapapa domes) are arbitrary, and include the farthest domes linked closely, spatially and magmatically, to the other Maroa domes. From 230 to 64 ka there was a hiatus in caldera-forming ignimbrite eruptions. Maroa and the Western Dome Belt (WDB) constitute the largest concentrated volume of eruptions (as relatively gentle lava extrusion) during this period. The rate of Maroa volcanism has decreased exponentially from a maximum prior to 200 ka. In contrast, volcanism at Taupo and Okataina has increased from ~64 ka to the present. The oldest Maroa dome (305 ± 17 ka) constrains the maximum rate of infilling of the Whakamaru caldera to 39-17 km³/kyr. This highlights the extraordinarily fast rate of infilling common at silicic calderas and is in agreement with international case studies, except where post-collapse structural resurgence has continued for more than 100 kyr. The majority of the caldera fill, representing voluminous eruption deposits in the first tens of thousands of years post-collapse, is buried and only accessible via drilling. The WDB and Maroa are petrologically distinct from one another in terms of some or all of Rb, Sr, Ba and Zr content, despite eruption over a similar period. Magma sources for Maroa and the WDB may have been partly or wholly derived from the Whakamaru caldera magma system(s), but petrological distinctions among all three mean that Maroa and the WDB cannot be considered a simple magmatic resurgence of the Whakamaru caldera. Maroa's distinct Thorpe Rd Fault is in fact a fossil feature which has not been active in almost 200 kyr. In addition, the graben across Tuahu Dome was likely created by shallow blind diking. Several recent studies across the TVZ show structural features with some associated dike intrusion/eruption. Such volcano-tectonic interaction is rarely highlighted in the TVZ but may be relatively common, lying on a continuum between dike-induced faulting and dikes following structural features. Although rates of volcanism are now low in Maroa, magmatic intrusion appears to remain high. This raises the possibility of a causative link between faulting and volcanism, in contrast to traditional views of volcanism controlled by rates of magmatic ascent. Probable future eruptions from Maroa are likely to be of similar scale (<0.1 km³) and frequency (every ~14,000 years) to most of those over the last 100 ka. Several towns lie in a range of zones of Maroa volcanic hazard, from total destruction to possible ash fall. However, the probability of a future eruption is only ~0.6% in an 80-year lifetime.

  • An ecological study of Ulva lactuca L. and other benthic algae on the Avon-Heathcote Estuary, Christchurch

    Steffensen, Dennis Arthur (1974)

    Doctoral thesis
    University of Canterbury Library

    This study comprises investigations into the ecology of the benthic algae of the Avon-Heathcote Estuary, with special attention to the influence of sewage discharge on the principal species. Ulva lactuca, the most important, has been described in greatest detail. The work was carried out in two parts, in the field and in the laboratory. In the field, from May 1971 to May 1973, monthly sampling was used to relate the spatial and seasonal variation in algal abundance to relevant environmental factors, such as nutrient concentrations, exposure times, substrate availability, current velocities and grazing by Zediloma subrostrata. Algal abundance was measured from per cent cover and dry weight per area. Aerial photography was employed to map the overall distribution of the algae. In the laboratory, the effects of phosphate and nitrate additions and variation in temperature on the growth of Ulva lactuca were studied. The dominant alga was Ulva lactuca L., which occurred as a small attached winter plant, a larger ribbon-like spring plant, a sheet-like detached summer plant and a smaller bullate detached plant. The latter had previously been described as Ulva laingii Chapman but appears to be the result of low salinity acting on aging U. lactuca thalli. The distribution of Ulva and Enteromorpha ramulosa was restricted to the mid-tide zone due to intolerance of long exposure and low salinities. Attached algae were restricted to stable substrates, while drift algae accumulated in areas with low current velocities. Eutrophication resulting from sewage discharge promoted productivity in the western region of the Estuary and appeared to be the major cause of the increased productivity over the last 40 years. Temperature emerged as the main factor determining seasonal growth patterns. Aerial infrared photography was a successful survey tool and allowed the areas of active algae to be detected.
