27,434 results for ResearchSpace@Auckland

  • Symbol Ranking Text Compression

    Fenwick, P. (1996-06)

    Report
    The University of Auckland Library

    In his work on the information content of English text in 1951, Shannon described a method of recoding the input text, a technique which has apparently lain dormant for the ensuing 45 years. Whereas traditional compressors exploit symbol frequencies and symbol contexts, Shannon’s method adds the concept of “symbol ranking”, as in ‘the next symbol is the one 3rd most likely in the present context’. This report describes an implementation of his method and shows that it forms the basis of a good text compressor. The recent “acb” compressor of Buyanovsky is shown to belong to the general class of symbol ranking compressors.

    View record details
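
    A minimal sketch of the symbol-ranking idea in the abstract above, assuming a simple order-1 context and frequency-ordered ranking (Fenwick's implementation details differ):

        from collections import defaultdict

        def symbol_rank_recode(text, order=1):
            # Recode text as ranks: for each context, emit the position of
            # the actual next symbol in that context's frequency-ordered
            # symbol list (most frequent first).  Novel symbols escape as
            # (rank, literal) pairs.
            freq = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count
            out = []
            for i, sym in enumerate(text):
                ctx = text[max(0, i - order):i]
                ranking = sorted(freq[ctx], key=lambda s: -freq[ctx][s])
                if sym in ranking:
                    out.append(ranking.index(sym))        # small ranks dominate
                else:
                    out.append((len(ranking), sym))       # escape: new symbol
                freq[ctx][sym] += 1
            return out

    Because the ranks are heavily skewed towards 0 in real text, a final statistical coder compresses the recoded stream well.
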
  • Height data from gradient fields

    Klette, Reinhard; Schluns, Karsten (1996-08)

    Report
    The University of Auckland Library

    The paper starts with a review of integration techniques for calculating height maps from dense gradient fields. There exist a few proposals for local integration methods (Coleman/Jain 1982, Healey/Jain 1984, Wu/Li 1988, Rodehorst 1993), and two proposals for global optimization (Horn/Brooks 1986 and Frankot/Chellappa 1988). Several experimental evaluations of such integration techniques are discussed in this paper. The examined algorithms performed well on curved objects but poorly on polyhedral shapes. Locally adaptive approaches are suggested to cope with complex shapes in general.

    View record details
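
    As a toy illustration of the local integration the review covers: the simplest scheme integrates the gradients p = dz/dx and q = dz/dy along two scan paths and averages them (none of the cited methods is this naive):

        import numpy as np

        def integrate_gradients(p, q):
            # Recover a height map z from dense gradient fields by cumulative
            # summation along two scan paths, then average the two estimates.
            rows, cols = p.shape
            z1 = np.zeros((rows, cols))     # path 1: down column 0, then along rows
            z1[1:, 0] = np.cumsum(q[1:, 0])
            z1[:, 1:] = z1[:, :1] + np.cumsum(p[:, 1:], axis=1)
            z2 = np.zeros((rows, cols))     # path 2: along row 0, then down columns
            z2[0, 1:] = np.cumsum(p[0, 1:])
            z2[1:, :] = z2[:1, :] + np.cumsum(q[1:, :], axis=0)
            return 0.5 * (z1 + z2)

    Simple path integration propagates noise along the scan direction; the local methods cited above average many more paths, and the locally adaptive approaches the paper suggests go further still.
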
  • When Virtual Memory Isn't Enough

    Thomborson, C. (1996-11)

    Report
    The University of Auckland Library

    Virtual memory, even on the largest and fastest contemporary computers, is neither large enough nor fast enough for all applications. Some data structures must be held in the file system, and some ‘performance hints’ must be given to the memory-management runtime routines. For these reasons, most large-memory application codes are littered with system-specific names, constants, file-migration policies, pragmas and hints. Such codes are very difficult to develop, maintain, and port. I propose a new paradigm for the design of large-memory codes, providing many of the performance advantages and few of the drawbacks of system-specific coding techniques. In my paradigm, programmers must organise their data structures into a series of (nesting) data blocks, with blocksizes B_h increasing in a fractal (power-law) progression [see formula in pdf]. Furthermore, the larger blocks must be referenced much less frequently than the smaller blocks. I argue that efficient algorithms for several important problems on workstations and PCs are found at ∂ ≈ 3/2 and R = 8. I sketch a model of memory performance that explains why non-hierarchical large-memory codes require system-specific tuning for efficient execution.

    View record details
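
    The blocksize formula itself is only given in the report's PDF; purely as a hypothetical reading of a fractal (power-law) progression with ratio R, the blocksizes might grow geometrically:

        def block_sizes(b0, R=8, levels=5):
            # Hypothetical progression B_h = b0 * R**h.  This is an assumed
            # illustration only; the report's actual formula (involving the
            # exponent of roughly 3/2) appears only in the PDF.
            return [b0 * R ** h for h in range(levels)]

        print(block_sizes(4096))   # [4096, 32768, 262144, 2097152, 16777216]
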
  • Applying the Principle of Literate Programming to Constraint Satisfaction

    Guesgen, H.W.; Loerch, U. (1996-12)

    Report
    The University of Auckland Library

    When, more than two decades ago, David Waltz presented his now well-known filtering algorithm for labeling three-dimensional line diagrams, one could hardly have expected that the basic principle he introduced, namely the technique of constraint satisfaction, would become the basis of an emerging research area which has already outgrown the field of artificial intelligence and started to influence many other disciplines. In the last few years many AI problems have been formulated as constraint satisfaction systems. Usually such a system offers a proper syntax for specifying constraint satisfaction problems. However, the syntax may sometimes be complex or awkward, and may require some effort to become familiar with. In this paper, we argue that it is more appropriate to specify a constraint satisfaction problem by using a graphic editor and to translate the output of the editor into a notation that can be used as input for the constraint satisfaction system. Moreover, we aim to use the same output for documentation purposes. In other words, we are applying Knuth's idea of literate programming to constraint satisfaction.

    View record details
  • Construction of Time-Relaxed Minimal Broadcast Networks

    Dinneen, Michael; Ventura, Jose; Wilson, Mark; Zakeri, Golbon (1997-02)

    Report
    The University of Auckland Library

    In broadcasting, or one-to-all communication, a message originally held in one node of the network must be transmitted to all the other nodes. A minimal broadcast network is a communication network that can transmit a message originated at any node to all other nodes of the network in minimum time. In this paper, we present a compound method to construct sparse, time-relaxed, minimal broadcast networks (t-mbn), in which broadcasting can be accomplished in slightly more than the minimum time. The proposed method generates a new network by connecting a subset of nodes from several copies of a t₁-mbn using the structure of another t₂-mbn. The objective is to construct a network as sparse as possible satisfying the desired broadcasting time constraint. Computational results illustrate the effectiveness of the proposed method.

    View record details
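
    Computing the minimum broadcast time of a general network is NP-hard, so a short sketch can only bound it: the greedy simulation below, in which every informed node informs one uninformed neighbour per round, yields a valid schedule and hence an upper bound (an illustration only, not the paper's compound construction):

        def greedy_broadcast_time(adj, origin):
            # Upper bound on broadcast time from `origin`; adj maps each node
            # to its neighbours.  Each round, an informed node may pass the
            # message to at most one uninformed neighbour.
            informed = {origin}
            rounds = 0
            while len(informed) < len(adj):
                newly = set()
                for u in informed:
                    for v in adj[u]:
                        if v not in informed and v not in newly:
                            newly.add(v)      # u spends this round informing v
                            break
                if not newly:
                    return None               # graph is disconnected
                informed |= newly
                rounds += 1
            return rounds

    Since the informed set can at most double each round, the optimum is at least ceil(log2 n); a minimal broadcast network achieves exactly that, and a t-mbn may exceed it by t.
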
  • A Fast, Constant-Order, Symbol Ranking Text Compressor

    Fenwick, P. (1997-04)

    Report
    The University of Auckland Library

    Recent work on “symbol ranking” text compressors included a version which combines moderate compression with high speed and has been described in a poster at the 1997 Data Compression Conference. That work is extended here, especially to produce a compressor capable of efficient hardware implementation. The compressor is based on a conventional set-associative cache, with LRU update. To process a symbol, a context of the three preceding symbols is hashed to access a particular cache line. If the access “hits”, the LRU index is emitted as the recoded symbol; otherwise the symbol itself is emitted and the LRU status updated. Several versions are described, with some improving the performance by maintaining a Move-To-Front list of symbols and emitting some symbols as “short” 5-bit indices into this table, or by encoding runs of correctly-predicted symbols. The final performance is in the range of 3.5–4 bits per byte over the Calgary Corpus, with software speeds of up to 1 Mbyte/s. The best of the hardware designs should run at 100 Mbyte/s with discrete components, or 200–300 Mbyte/s with a completely integrated design.

    View record details
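
    A software sketch of the recoder described above, with illustrative choices for the line count, associativity, and hash (the report's hardware designs differ in detail):

        def cache_rank_encode(data, lines=4096, ways=4):
            # Set-associative cache with LRU update: a hash of the three
            # preceding symbols selects a line; a hit emits the symbol's LRU
            # index (0..ways-1), a miss emits ('miss', literal symbol).
            cache = [[] for _ in range(lines)]     # each line is MRU-first
            out = []
            ctx = (0, 0, 0)                        # three preceding symbols
            for sym in data:
                line = cache[(ctx[0] * 65599 + ctx[1] * 257 + ctx[2]) % lines]
                if sym in line:
                    out.append(line.index(sym))    # hit: emit LRU index
                    line.remove(sym)
                else:
                    out.append(('miss', sym))      # miss: emit the literal
                    if len(line) >= ways:
                        line.pop()                 # evict least recently used
                line.insert(0, sym)                # update LRU status
                ctx = (ctx[1], ctx[2], sym)
            return out

    The Move-To-Front short indices and run coding mentioned in the abstract would be further passes over this output.
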
  • A New Data Structure for Cumulative Probability Tables: An Improved Frequency-to-Symbol Algorithm

    Fenwick, P. (1995-03)

    Report
    The University of Auckland Library

    A recent paper presented a new efficient data structure, the “Binary Indexed Tree”, for maintaining the cumulative frequencies for arithmetic data compression. While algorithms were presented for all of the necessary operations, significant problems remained with the method for determining the symbol corresponding to a known frequency. This report corrects that deficiency.

    View record details
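
    The frequency-to-symbol search in a Binary Indexed Tree is now standard; a rendering of the bit-by-bit descent (with an inline tree builder so the sketch is self-contained; whether it matches the report's exact formulation is not guaranteed):

        def fenwick_build(freqs):
            # Build a 1-based Binary Indexed Tree over symbol frequencies.
            n = len(freqs)
            tree = [0] * (n + 1)
            for i, f in enumerate(freqs, 1):
                j = i
                while j <= n:
                    tree[j] += f
                    j += j & -j
            return tree

        def find_symbol(tree, n, f):
            # Return the largest idx with cumulative_frequency(idx) <= f by
            # descending one index bit at a time: O(log n) total, with no
            # separate prefix-sum queries.  The decoded symbol is idx + 1.
            idx, bit = 0, 1
            while bit * 2 <= n:
                bit *= 2
            while bit:
                nxt = idx + bit
                if nxt <= n and tree[nxt] <= f:
                    idx = nxt
                    f -= tree[nxt]
                bit //= 2
            return idx

        tree = fenwick_build([3, 1, 4, 1, 5])
        print(find_symbol(tree, 5, 8))   # 3: symbols 1..3 account for frequency 8
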
  • The Gray Code

    Doran, R.W. (1996)

    Report
    The University of Auckland Library

    This report is a self-contained summary of properties and algorithms concerning the Gray code. Descriptions are given of the Gray code definition, algorithms and circuits for generating the code and for conversion between binary and Gray code, for incrementing, counting, and adding Gray code words. Some interesting applications of the code are also treated.

    View record details
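
    The two conversions at the heart of the report are the standard reflected-binary forms:

        def binary_to_gray(b: int) -> int:
            # Adjacent integers map to Gray codewords differing in one bit.
            return b ^ (b >> 1)

        def gray_to_binary(g: int) -> int:
            # Inverse conversion: XOR-fold the Gray word down to binary.
            b = 0
            while g:
                b ^= g
                g >>= 1
            return b

        assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))
        assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count('1') == 1
                   for i in range(1023))
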
  • The Block Sorting Data Compression Algorithm

    Fenwick, P. (1995-04)

    Report
    The University of Auckland Library

    This report presents some preliminary work on a recently described “Block Sorting” lossless text compression algorithm. While having little apparent relationship to established techniques, it has a performance which places it definitely among the best known compressors. The original paper did little more than present the algorithm, with strong advice for efficient implementation. Here, the algorithm is restated in data compression terms and various measurements are made on aspects of its operation. Consideration of the possible efficiency of text compression leads to the revival of ideas by Shannon as the basis of a text compressor, and then to the classification of the Block Sorting compressor as an example of this “new” type. Finally, this work leads to a reconsideration of the meaning of escape codes in PPM-style compressors and a suggested technique for better estimating escape probabilities.

    View record details
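
    For reference, the transform the report studies (due to Burrows and Wheeler) can be stated in a few lines; this naive rotation sort is for illustration only, and the original paper's advice on efficient implementation replaces it with suffix sorting:

        def block_sort_transform(s: bytes):
            # Sort all rotations of s; output the last column of the sorted
            # rotation matrix plus the row index of the original string.
            n = len(s)
            rotations = sorted(range(n), key=lambda i: s[i:] + s[:i])
            last_column = bytes(s[(i - 1) % n] for i in rotations)
            return last_column, rotations.index(0)

        print(block_sort_transform(b"banana"))   # (b'nnbaaa', 3)

    The last column groups each symbol with others sharing the same following context, which is what makes the permuted text so compressible.
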
  • Quasi Convolutional Smoothing of Polyhedra

    Lobb, R.J. (1995-05)

    Report
    The University of Auckland Library

    This paper describes a system for the representation and rendering of polyhedral scenes in which individual components can have different user-specified amounts of rounding applied to edges and corners. Objects are represented by structures similar to CSG trees but with arithmetic operators at internal nodes rather than set membership operators. An object’s smoothing attribute specifies the radius of a spherical smoothing filter, and the smoothed object’s surface is defined by an iso-density surface after low-pass filtering. The filtering is an approximation to true convolutional filtering, but allows rapid determination of iso-density surfaces. The rounded surfaces are similar to those achieved by use of filleting and surface blending techniques during modelling, but are much easier to specify, far more economical in storage, and simpler to compute. By varying the smoothing radii, a wide range of effects can be obtained, from near-perfect polyhedra through to “blobby models”.

    View record details
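
    A heavily simplified sketch of the density-field idea: a generic soft-CSG evaluation with a linear ramp standing in for the paper's quasi-convolutional filter, which this does not reproduce exactly:

        import numpy as np

        def half_space_density(point, normal, offset, r):
            # Density of the half-space {x : normal.x <= offset} after a
            # crude low-pass filter of radius r: a linear ramp from 1 inside
            # to 0 outside across a band of width 2r around the plane.
            d = np.dot(normal, point) - offset    # signed distance
            return np.clip(0.5 - d / (2.0 * r), 0.0, 1.0)

        def polyhedron_density(point, faces, r):
            # Arithmetic (product) combination of face densities; the
            # smoothed surface is the iso-density set density == 0.5.
            result = 1.0
            for normal, offset in faces:
                result *= half_space_density(point, np.asarray(normal), offset, r)
            return result

        # Unit cube, rounding radius 0.1: a corner falls below the 0.5
        # iso-level, so the iso-surface rounds it off.
        faces = [((1, 0, 0), 1), ((-1, 0, 0), 0), ((0, 1, 0), 1),
                 ((0, -1, 0), 0), ((0, 0, 1), 1), ((0, 0, -1), 0)]
        print(polyhedron_density(np.array([1.0, 1.0, 1.0]), faces, 0.1))  # 0.125
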
  • Improvements to the Block Sorting Text Compression Algorithm

    Fenwick, P. (1995)

    Report
    The University of Auckland Library

    This report presents some further work on the recently described “Block Sorting” lossless text compression algorithm. It is already known that it is a context-based compressor of unbounded order, but those contexts are completely restructured by the sort phase of the compression. The report examines the effects of those context changes. It is shown that the requirements on the final compression stage are quite different from those in compressors of more conventional design. The report then presents several different approaches to improving the compression performance, eventually yielding a compressor which is among the best so far presented and is actually based on fundamental work by Shannon in 1951. It is shown that the block-sorting technique compares well with other compressors in terms of compression factor, compression speed and working memory.

    View record details
  • Heuristics for Fuzzy Constraint Satisfaction

    Guesgen, H.W.; Philpott, A. (1995-07)

    Report
    The University of Auckland Library

    Work in the field of AI over the past twenty years has shown that many problems can be represented as constraint satisfaction problems and efficiently solved by constraint satisfaction algorithms. However, constraint satisfaction in its pure form is not always suitable for real-world problems, as they often tend to be inconsistent, which means the corresponding constraint satisfaction problems do not have solutions. A way to handle inconsistent constraint satisfaction problems is to make them fuzzy. The idea is to associate fuzzy values with the constraints in a reasonable way, i.e., a way that directly corresponds to how crisp constraint satisfaction problems are handled. The purpose of this paper is to briefly introduce a framework for fuzzy constraint satisfaction problems and to discuss some heuristics for solving them efficiently.

    View record details
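
    Under the usual min/max semantics for fuzzy constraint satisfaction (assumed here; the paper's framework may differ in detail), each constraint returns a degree in [0, 1] and a solution maximises the minimum degree. A brute-force solver makes this concrete:

        from itertools import product

        def solve_fuzzy_csp(domains, constraints):
            # Each constraint maps a full assignment to a degree in [0, 1];
            # an assignment's quality is its minimum degree, and we return
            # the assignment maximising that quality.
            variables = list(domains)
            best, best_degree = None, -1.0
            for values in product(*(domains[v] for v in variables)):
                assignment = dict(zip(variables, values))
                degree = min(c(assignment) for c in constraints)
                if degree > best_degree:
                    best, best_degree = assignment, degree
            return best, best_degree

        # Even if no assignment satisfies everything crisply, fuzziness
        # degrades the solution quality instead of making it unsolvable.
        domains = {"x": [1, 2, 3], "y": [1, 2, 3]}
        constraints = [lambda a: 1.0 if a["x"] < a["y"] else 0.3,
                       lambda a: 1.0 - abs(a["x"] - 2) / 2.0]
        print(solve_fuzzy_csp(domains, constraints))   # ({'x': 2, 'y': 3}, 1.0)

    The heuristics the paper discusses aim at avoiding exactly this kind of exhaustive enumeration.
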
  • Block Sorting Text Compression -- Final Report

    Fenwick, P. (1996)

    Report
    The University of Auckland Library

    A recent development in text compression is a “block sorting” algorithm which permutes the input text according to a special sort procedure and then processes the permuted text with Move-to-Front and a final statistical compressor. The technique combines good speed with excellent compression performance. This report investigates the block sorting compression algorithm, in particular trying to understand its operation and limitations. Various approaches are investigated in an attempt to improve the compression with block sorting, most of which involve a hierarchy of coding models to allow fast adaptation to local contexts. The best technique involves a new “structured” coding model, especially designed for compressing data with skew symbol distributions. Block sorting compression is found to be related to work by Shannon in 1951 on the prediction of English text. The work confirms block-sorting as a good text compression technique, with a compression approaching that of the currently best compressors while being much faster than other compressors of comparable performance.

    View record details
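
    The Move-to-Front stage mentioned above is small enough to state exactly; the skew, zero-heavy output it produces after block sorting is what the report's “structured” coding model is designed for:

        def move_to_front_encode(data: bytes) -> list:
            # Emit each byte's current position in a recency list, then move
            # it to the front; repeated symbols become runs of zeros.
            table = list(range(256))
            out = []
            for b in data:
                i = table.index(b)
                out.append(i)
                table.pop(i)
                table.insert(0, b)
            return out

        print(move_to_front_encode(b"nnbaaa"))   # [110, 0, 99, 99, 0, 0]
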
  • Upwards and Downwards Accumulations on Trees

    Gibbons, J. (1992-06)

    Report
    The University of Auckland Library

    An accumulation is a higher-order operation over structured objects of some type; it leaves the ‘shape’ of an object unchanged, but replaces each element of that object with some accumulated information about the other elements. Upwards and downwards accumulations on trees are two instances of this scheme; they replace each element of a tree with some function – in fact, some homomorphism – of that element's descendants and of its ancestors, respectively. These two operations can be thought of as passing information up and down the tree. We describe these two accumulations, and show how together they solve the so-called ‘prefix sums’ problem.

    View record details
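
    A concrete instance of the two accumulations, instantiating the homomorphism with addition (the (value, [children]) representation is illustrative):

        # A tree is (value, [children]); both accumulations keep the shape.

        def up_accumulate(tree):
            # Upwards: replace each node by a function of its descendants
            # (here, the sum of its subtree).
            value, children = tree
            new_children = [up_accumulate(c) for c in children]
            return (value + sum(c[0] for c in new_children), new_children)

        def down_accumulate(tree, acc=0):
            # Downwards: replace each node by a function of its ancestors
            # (here, the sum along the path from the root).
            value, children = tree
            path = acc + value
            return (path, [down_accumulate(c, path) for c in children])

        t = (1, [(2, []), (3, [(4, [])])])
        print(up_accumulate(t))     # (10, [(2, []), (7, [(4, [])])])
        print(down_accumulate(t))   # (1, [(3, []), (4, [(8, [])])])

    On a degenerate list-shaped tree, the downwards accumulation with addition is precisely the ‘prefix sums’ problem the abstract mentions.
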
  • Efficient Parallel Algorithms for Tree Accumulations

    Gibbons, J.; Cai, W.; Skillicorn, D. (1993-03)

    Report
    The University of Auckland Library

    Accumulations are higher-order operations on structured objects: they leave the shape of an object unchanged, but replace elements of that object with accumulated information about other elements. Upwards and downwards accumulations on trees are two such operations; they form the basis of many tree algorithms. We present two EREW PRAM algorithms for computing accumulations on trees, taking O(log n) time on O(n/log n) processors, which is optimal.

    View record details
  • Linear-time Breadth-First Tree Algorithms: An exercise in the arithmetic of folds and zips

    Jones, G.; Gibbons, J. (1993-05)

    Report
    The University of Auckland Library

    This paper is about an application of the mathematics of the zip, reduce (fold) and accumulate (scan) operations on lists. It gives an account of the derivation of a linear-time breadth-first tree traversal algorithm, and of a subtle and efficient breadth-first tree labelling algorithm.

    View record details
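
    The flavour of the derivation can be suggested by the classic levels function: a tree's levels are the root's singleton level followed by the ‘long zip’ (pointwise concatenation) of its children's levels, and breadth-first order is the concatenation of the levels. A Python rendering of that idea (the paper works in a fold/zip calculus rather than in code like this):

        from itertools import zip_longest

        def levels(tree):
            # Levels of a (value, [children]) tree: the root's own level,
            # then the pointwise concatenation of the children's level lists.
            value, children = tree
            child_levels = [levels(c) for c in children]
            zipped = [sum(level, []) for level in
                      zip_longest(*child_levels, fillvalue=[])]
            return [[value]] + zipped

        def breadth_first(tree):
            # Breadth-first traversal as the concatenation of the levels.
            return [v for level in levels(tree) for v in level]

        t = (1, [(2, [(4, [])]), (3, [(5, []), (6, [])])])
        print(breadth_first(t))   # [1, 2, 3, 4, 5, 6]
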
  • The Circuit Model for Parallel Algorithms

    Doran, R.W.; Thomas, I. (1993-06)

    Report
    The University of Auckland Library

    View record details
  • Deriving Tidy Drawings of Trees

    Gibbons, J. (1993-11)

    Report
    The University of Auckland Library

    The tree-drawing problem is to produce a ‘tidy’ mapping of elements of a tree to points in the plane. In this paper, we derive an efficient algorithm for producing tidy drawings of trees. The specification, the starting point for the derivations, consists of a collection of intuitively appealing criteria satisfied by tidy drawings. The derivation shows constructively that these criteria completely determine the drawing. Indeed, the criteria completely determine a simple but inefficient algorithm for drawing a tree, which can be transformed into an efficient algorithm using just standard techniques and a small number of inventive steps. The algorithm consists of an upwards accumulation followed by a downwards accumulation on the tree, and is further evidence of the utility of these two higher-order tree operations.

    View record details
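
    The upwards-then-downwards shape of the derived algorithm can be illustrated with a much simpler (and less tidy) layout: an upward pass computes subtree widths, a downward pass assigns x-coordinates. This sketch shows only the two-pass structure, not the paper's criteria or algorithm:

        def layout(tree):
            # Upwards pass: each subtree's width in leaf units.
            def width(t):
                _, kids = t
                return max(1, sum(width(k) for k in kids))

            # Downwards pass: emit (value, x, depth), centring each node
            # over the horizontal span occupied by its subtree.
            def place(t, x0, depth, out):
                v, kids = t
                out.append((v, x0 + width(t) / 2.0, depth))
                for k in kids:
                    place(k, x0, depth + 1, out)
                    x0 += width(k)
                return out

            return place(tree, 0.0, 0, [])

        for node in layout((1, [(2, []), (3, [(4, []), (5, [])])])):
            print(node)   # (1, 1.5, 0), (2, 0.5, 1), (3, 2.0, 1), (4, 1.5, 2), (5, 2.5, 2)
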
  • Using change descriptions to maintain consistency across multiple representations

    Hosking, J.G.; Grundy, J.C. (1995-02)

    Report
    The University of Auckland Library

    We describe a technique for dealing with partial mappings between different representations, both formal and informal, of an evolving software system. This technique uses discrete "change descriptions" to propagate changes between related views. These change descriptions may be used to automatically modify affected views, or to annotate a view to indicate that manual intervention is required to maintain consistency.

    View record details
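
    A generic sketch of the mechanism described above (all names invented for illustration): edits are reified as change-description objects and broadcast to related views, each of which either applies the change automatically or records an annotation asking for manual repair:

        class ChangeDescription:
            # A discrete record of one edit, propagated to related views.
            def __init__(self, target, kind, detail):
                self.target, self.kind, self.detail = target, kind, detail

        class View:
            # One representation of the system.  Changes the view can map
            # are applied automatically; others become annotations.
            def __init__(self, name, auto_targets):
                self.name = name
                self.auto_targets = auto_targets
                self.annotations = []

            def receive(self, change):
                if change.target in self.auto_targets:
                    print(f"{self.name}: auto-applied {change.kind} of {change.target}")
                else:
                    self.annotations.append(change)   # manual intervention needed

        code_view = View("code", auto_targets={"ClassA"})
        diagram_view = View("diagram", auto_targets=set())
        change = ChangeDescription("ClassA", "rename", "ClassA -> ClassB")
        for view in (code_view, diagram_view):
            view.receive(change)
        print(diagram_view.annotations[0].detail)   # left for the developer
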
  • Integrated OO System Development Using SPE

    Grundy, J.C.; Hosking, J.G. (1993-02)

    Report
    The University of Auckland Library

    SPE is a programming environment for OO programming which supports multiple textual and graphical views of a program. Views are kept consistent with one another using a mechanism of update records. SPE is useful throughout all phases of the software development lifecycle, providing support for conceptual-level diagram construction, visual and textual programming, hypertext-based browsing, and visual debugging, together with a modification history. SPE is implemented as a specialisation of an object-oriented framework. This framework can be readily extended or tailored to support other languages and graphical notations.

    View record details