2,282 results for Report

  • Properties of Optimal Prefix-Free Machines as Instantaneous Codes

    Tadaki, K. (2010)

    Report
    The University of Auckland Library

    The optimal prefix-free machine U is a universal decoding algorithm used to define the notion of program-size complexity H(s) for a finite binary string s. Since the set of all halting inputs for U is chosen to form a prefix-free set, the optimal prefix-free machine U can be regarded as an instantaneous code for a noiseless source coding scheme. In this paper, we investigate the properties of optimal prefix-free machines as instantaneous codes. In particular, we investigate the properties of the set U^{-1}(s) of codewords associated with a symbol s. Namely, we investigate the number of codewords in U^{-1}(s) and the distribution of codewords in U^{-1}(s) for each symbol s, using the toolkit of algorithmic information theory. (The notions involved are recapped after this record.)

    View record details
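
    For orientation, the standard AIT notions behind this abstract (a recap of textbook definitions, not text from the report): the program-size complexity of s and its set of codewords are

      H(s) = \min\{\, |p| : p \in \{0,1\}^*,\ U(p) = s \,\}, \qquad U^{-1}(s) = \{\, p : U(p) = s \,\},

    and since \mathrm{dom}(U) is prefix-free the Kraft inequality \sum_{p \in \mathrm{dom}(U)} 2^{-|p|} \le 1 holds, which is exactly what lets U be read as an instantaneous code.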
  • A Multi-Criteria Metric Algorithm for Recommender Systems

    Akhtarzada, A; Calude, C.S.; Hosking, J. (2011)

    Report
    The University of Auckland Library

    Information overload and an abundance of choices create situations where selecting one option becomes extremely difficult or, even worse, a guessing game. Collaborative ranking systems are widely used to alleviate this problem by creating intelligent rankings of items based on an aggregation of user opinions. Current ranking systems can still be improved in a number of areas, including accuracy, transparency and flexibility. This paper presents a multi-criteria ranking algorithm that can be used on a non-rigid set of criteria. The system implementing the algorithm fares well with respect to the above qualities.

    View record details
  • P Systems in Stereo Matching (extended version)

    Gimel'farb, G.; Nicolescu, R.; Ragavan, S. (2011)

    Report
    The University of Auckland Library

    Designing parallel versions of sequential algorithms has attracted renewed attention, due to recent hardware advances, including various general-purpose multi-core and many-core processors, as well as special-purpose FPGA implementations. P systems consist of networks of autonomous cells, such that each cell transforms its input signals in accord with its symbol-rewriting rules and feeds the output results into its immediate neighbours. Inherent massive intra- and inter-cell parallelisms make P systems a prospective theoretical testbed for designing efficient parallel and parallel-sequential algorithms. This paper discusses the ability of P systems to implement the symmetric dynamic programming stereo (SDPS) matching algorithm, which explicitly accounts for binocular or monocular visibility of 3D surface points. Given enough cells, the P system implementation speeds up the inner algorithm loop from O(nd) to O(n + d), where n is the width of a stereo image and d is the disparity range. The implementation also gives an insight into a more general SDPS algorithm that allows a possible multiplicity of solutions to the ill-posed optimal stereo matching problem.

    View record details
  • A Forward-Parsing Randomness Test Based on the Expected Codeword Length of T-codes

    Speidel, U. (2011)

    Report
    The University of Auckland Library

    This paper proposes an algorithm for randomness testing. It is a variant of an algorithm presented earlier by the author for the construction of random sequences. The algorithm exploits two facts. Firstly, given a complete finite prefix code C_i over an alphabet A, every semi-infinite sequence s of symbols x_0, x_1, x_2, ... from A starts with a codeword w_i ∈ C_i, and if one presumes that s is random, one can compute the expected length h_i of w_i. Secondly, w_i can be used to extend C_i to yield a larger code C_{i+1}. In this case, the actual length |w_i| of w_i determines the growth in the expected codeword length h_{i+1} for the codeword w_{i+1} ∈ C_{i+1} that follows in s after the end of w_i. If |w_i| > h_i, then h_{i+1} grows less relative to h_i than if |w_i| ≤ h_i. By cumulatively comparing expected codeword lengths with actual codeword lengths, one may assess the randomness of s: if the hypothesis that s is random holds, the cumulative values should closely mirror each other. (A code sketch follows this record.)

    View record details
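
    A minimal Python sketch of the cumulative comparison described above, assuming a binary alphabet and a complete finite prefix code given as a set of codeword strings. The names expected_length, parse_next and extend_code are illustrative, and the T-code augmentation step itself is deliberately left as a placeholder.

      def expected_length(code):
          # Under the randomness hypothesis a codeword w occurs with probability
          # 2**(-len(w)); for a complete binary prefix code these sum to 1.
          return sum(len(w) * 2.0 ** (-len(w)) for w in code)

      def parse_next(code, s, pos):
          # A prefix code admits at most one codeword starting at position pos.
          for w in code:
              if s.startswith(w, pos):
                  return w
          raise ValueError("input ends inside a codeword")

      def cumulative_comparison(s, code, extend_code):
          pos, expected, actual = 0, 0.0, 0
          while pos < len(s):
              expected += expected_length(code)   # h_i, computed before reading w_i
              w = parse_next(code, s, pos)        # the actual codeword w_i
              actual += len(w)
              pos += len(w)
              code = extend_code(code, w)         # C_{i+1} from C_i and w_i (T-augmentation, not shown)
          return expected, actual                 # close values are consistent with randomness

    For instance, with code = {'0', '10', '11'} the expected first-codeword length is 1/2 + 2/4 + 2/4 = 1.5.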
  • Inductive Complexity Measures for Mathematical Problems

    Burgin, M.; Calude, C.S.; Calude, E. (2011)

    Report
    The University of Auckland Library

    An algorithmic, uniform method to measure the complexity of finitely refutable statements [6, 7, 8] was used to classify famous/interesting mathematical statements like Fermat’s last theorem, Hilbert’s tenth problem, the four colour theorem and the Riemann hypothesis [9, 13, 14]. Working with inductive Turing machines of various orders [1] instead of classical computations, we propose a class of inductive complexity measures and inductive complexity classes for mathematical statements which generalise the previous method. In particular, the new method is capable of classifying statements of the form ∀n∃m R(n,m), where R(n,m) is a computable binary predicate. As illustrations, we evaluate the inductive complexity of the Collatz conjecture and the twin prime conjecture, which cannot be evaluated with the original method. (The ∀∃ form is illustrated after this record.)

    View record details
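
    As a worked instance of the \forall n\, \exists m\; R(n,m) form mentioned above (a standard reformulation, not taken from the report): the Collatz conjecture asserts

      \forall n \ge 1\ \exists m\ \ C^{(m)}(n) = 1, \qquad C(n) = \begin{cases} n/2, & n \text{ even},\\ 3n+1, & n \text{ odd}, \end{cases}

    so R(n,m), reading "the m-th iterate of C starting from n equals 1", is a computable binary predicate.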
  • The ideal generated by $\sigma$-nowhere dense sets

    Cao, Jiling; Greenwood, Sina (2004-11)

    Report
    The University of Auckland Library

    In this paper, we consider the ideal $\mathscr{I}_\sigma$ generated by all $\sigma$-nowhere dense sets in a topological space. Properties of this ideal and its relations with the Volterra property are explored. We show that $\mathscr{I}_\sigma$ is compatible with the topology for any given space, an analogue of the Banach category theorem. Some applications of this result and of the Banach category theorem are also given. (The background definitions are recalled after this record.)

    View record details
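
    Background definitions, in the standard terminology (the report's own definitions are the authoritative ones): a set $A \subseteq X$ is nowhere dense if $\mathrm{int}(\overline{A}) = \emptyset$, and the ideal generated by a family $\mathscr{A}$ of subsets of $X$ consists of the sets covered by finitely many members of $\mathscr{A}$:

      \mathscr{I} = \{\, B \subseteq X : B \subseteq A_1 \cup \dots \cup A_k \text{ for some } A_1, \dots, A_k \in \mathscr{A} \,\}.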
  • Von Neumann Normalisation of a Quantum Random Number Generator

    Abbott, A.A.; Calude, C.S. (2010)

    Report
    The University of Auckland Library

    In this paper we study von Neumann un-biasing normalisation for ideal and real quantum random number generators, operating on finite strings or infinite bit sequences. In the ideal cases one can obtain the desired un-biasing. This relies critically on the independence of the source, a notion we rigorously define for our model. In real cases, affected by imperfections in measurement and hardware, one cannot achieve true un-biasing, but, if the bias “drifts sufficiently slowly”, the result can be arbitrarily close to un-biased. For infinite sequences, normalisation can either increase or decrease the (algorithmic) randomness of the generated sequences. A successful application of von Neumann normalisation (in fact, of any un-biasing transformation) does exactly what it promises, namely un-biasing, which is only one among infinitely many symptoms of randomness; it will not produce “true” randomness. (The classical procedure is sketched after this record.)

    View record details
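
    The classical von Neumann un-biasing procedure underlying the abstract, as a minimal Python sketch (the textbook construction, not the paper's quantum model): bits are read in non-overlapping pairs, 01 emits 0, 10 emits 1, and 00/11 are discarded. For independent bits with fixed bias p this is unbiased, since P(01) = P(10) = p(1-p).

      def von_neumann(bits):
          # bits: iterable of 0/1 values; returns the un-biased output list.
          out = []
          it = iter(bits)
          for a in it:
              b = next(it, None)
              if b is None:
                  break               # odd leftover bit is dropped
              if a != b:
                  out.append(a)       # 01 -> 0, 10 -> 1; 00 and 11 are discarded
          return out

      print(von_neumann([1, 1, 1, 0, 0, 1, 1, 0]))   # pairs 11, 10, 01, 10 -> [1, 0, 1]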
  • Understanding the Quantum Computational Speed-up via De-quantisation

    Abbott, A.A.; Calude, C.S. (2010)

    Report
    The University of Auckland Library

    While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine whether a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that would allow the algorithm to be efficiently classically simulated) and to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, together with a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser-formalism de-quantisations due to the Gottesman-Knill theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits and limits of the different methods are discussed, and the importance of utilising a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover and highlight several important open questions in the area.

    View record details
  • New Solutions for Disjoint Paths in P Systems

    Nicolescu, Radu; Wu, Huiling (2012)

    Report
    The University of Auckland Library

    We propose faster algorithms to identify maximum cardinality sets of edge- and node-disjoint paths between a source cell and a target cell in P systems. Previously, Dinneen et al. presented two algorithms, based on classical depth-first search (DFS), which run in O(md) P steps, where m is the number of edges and d is the outdegree of the source node. Combining Cidon's distributed DFS and our new result, Theorem 6, we propose two improved DFS-based algorithms, which run in O(nd) P steps, where n is the number of nodes. We also propose two improved algorithms, based on breadth-first search (BFS), with O(nf) runtime complexity, where f is the number of disjoint paths. All our algorithms are fixed-size, i.e. the number of P rules does not depend on the number of cells in the underlying P system. Empirically, for randomly generated digraphs of various sizes, our DFS-based edge-disjoint algorithm is 39% faster than Dinneen et al.'s edge-disjoint algorithm and our BFS-based edge-disjoint algorithm is 80% faster, in terms of P steps. Our improvements can be considered for any distributed DFS implementation (i.e. not only for P systems). (A sequential analogue of the BFS approach is sketched after this record.)

    View record details
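
    A sequential analogue of the BFS-based approach, sketched in Python under classical assumptions (the unit-capacity augmenting-path method; the report's distributed P-system algorithms are not reproduced here): each BFS finds one more source-to-target path in the residual digraph, so f edge-disjoint paths cost f searches.

      from collections import deque

      def edge_disjoint_paths(adj, s, t):
          # adj: dict mapping node -> set of successors (a digraph).
          residual = {u: set(vs) for u, vs in adj.items()}
          count = 0
          while True:
              parent = {s: None}
              q = deque([s])
              while q and t not in parent:          # BFS for an augmenting path
                  u = q.popleft()
                  for v in residual.get(u, ()):
                      if v not in parent:
                          parent[v] = u
                          q.append(v)
              if t not in parent:
                  return count
              v = t                                  # flip the path's edges so a
              while parent[v] is not None:           # later search may reroute them
                  u = parent[v]
                  residual[u].discard(v)
                  residual.setdefault(v, set()).add(u)
                  v = u
              count += 1

      print(edge_disjoint_paths({'s': {'a', 'b'}, 'a': {'t'}, 'b': {'t'}}, 's', 't'))   # -> 2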
  • A New Universality Result on P Systems

    Dinneen, M.J.; Kim, Y.B. (2012)

    Report
    The University of Auckland Library

    In this paper, we present a new universality result for P systems, by providing a method to construct a P system Π_M with two cells that simulates an arbitrary register machine M. The novelty of our approach is to utilize a very simple register machine that supports the reading of binary data. In our simple construction of the system Π_M: (i) cell states of a “simulator” cell in Π_M model the instruction lines of M, (ii) symbols (and their multiplicities) of Π_M model the registers (and their values) of M, (iii) evolution rules of Π_M model the execution of the instructions of M, and (iv) communication to/from a “data” cell of Π_M is used to model the reading of the data of M.

    View record details
  • Efficiency Frontiers of XML Cardinality Constraints

    Ferrarotti, F.; Hartmann, S.; Link, S. (2012)

    Report
    The University of Auckland Library

    XML has gained widespread acceptance as a premier format for publishing, sharing and manipulating data through the web. While the semi-structured nature of XML provides a high degree of syntactic flexibility there are significant shortcomings when it comes to specifying the semantics of XML data. For the advancement of XML applications it is therefore a major challenge to discover natural classes of constraints that can be utilized effectively by XML data engineers. This endeavor is ambitious given the multitude of intractability results that have been established. We investigate a class of XML cardinality constraints that is precious in the sense that it keeps the right balance between expressiveness and efficiency of maintenance. In particular, we characterize the associated implication problem axiomatically and develop a low-degree polynomial time algorithm that can be readily applied for deciding implication. Our class of constraints is chosen near-optimal as already minor extensions of its expressiveness cause potential intractability. Finally, we transfer our findings to establish a precious class of soft cardinality constraints on XML data. Soft cardinality constraints need to be satisfied on average only, and thus permit violations in a controlled manner. Soft constraints are therefore able to tolerate exceptions that frequently occur in practice, yet can be reasoned about efficiently.

    View record details
  • Design by Example for SQL Table Definitions with Functional Dependencies

    Kirchberg, M.; Hartmann, S.; Link, S. (2012)

    Report
    The University of Auckland Library

    A database is C-Armstrong for a given set of constraints in a class C if it satisfies every constraint of the set and violates every constraint in C not implied by the set. Therefore, Armstrong databases are test data that perfectly illustrate the current perceptions about the semantics of a schema. We extend the existing theory of Armstrong relations to a toolbox of Armstrong tables. That is, we investigate structural and computational properties of Armstrong tables for the class of functional dependencies (FDs) over SQL tables. Relations are special instances of SQL tables with no duplicate rows and no null value occurrences. While FDs do not enjoy Armstrong tables, the combined class of standard FDs and NOT NULL constraints does enjoy Armstrong tables. The problem of finding an Armstrong table is shown to be precisely exponential for this combined class. However, we establish an algorithm that computes Armstrong tables with a size at most quadratic in that of a minimum-sized Armstrong table. Our resulting toolbox of Armstrong tables can be applied by data engineers to concisely visualize constraints on SQL data. Such support can lead to designs that guarantee efficient data management in practice. (A toy instance follows this record.)

    View record details
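
    A toy instance of the Armstrong idea for plain relations, the null-free special case mentioned above (attribute names are illustrative): the four rows below form an Armstrong relation for Σ = {A → B} over {A, B, C}, satisfying A → B while violating representative FDs not implied by Σ. A small Python check:

      from itertools import combinations

      rows = [(0, 0, 0), (1, 0, 0), (0, 0, 1), (2, 1, 0)]
      attrs = {'A': 0, 'B': 1, 'C': 2}

      def satisfies(rows, lhs, rhs):
          # FD lhs -> rhs: any two rows agreeing on lhs must agree on rhs.
          for r, s in combinations(rows, 2):
              if all(r[attrs[x]] == s[attrs[x]] for x in lhs) and \
                 any(r[attrs[y]] != s[attrs[y]] for y in rhs):
                  return False
          return True

      assert satisfies(rows, 'A', 'B')                      # the given FD holds
      for lhs, rhs in [('B', 'A'), ('C', 'A'), ('A', 'C'),  # non-implied FDs all fail
                       ('B', 'C'), ('C', 'B'), ('AB', 'C'), ('BC', 'A')]:
          assert not satisfies(rows, lhs, rhs)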
  • Generalized Number Derivatives

    Stay, M (2004-08)

    Report
    The University of Auckland Library

    We generalize the concept of a number derivative, and examine one particular instance of a deformed number derivative for finite field elements. We find that the derivative is linear when the deformation is a Frobenius map and go on to examine some of its basic properties. (The classical definition is recalled after this record.)

    View record details
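
    For background, the classical number derivative (arithmetic derivative) that the report generalizes is the map on positive integers determined by

      p' = 1 \text{ for every prime } p, \qquad (mn)' = m'n + mn' \quad \text{(the Leibniz rule)},

    so, for example, 6' = 2' \cdot 3 + 2 \cdot 3' = 3 + 2 = 5 and 12' = (2 \cdot 6)' = 2' \cdot 6 + 2 \cdot 6' = 6 + 10 = 16.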
  • The Implication Problem of Data Dependencies over SQL Table Definitions: Axiomatic, Algorithmic and Logical Characterizations

    Hartmann, S.; Link, S. (2012)

    Report
    The University of Auckland Library

    We investigate the implication problem for classes of data dependencies over SQL table definitions. Under Zaniolo’s “no information” interpretation of null markers we establish an axiomatization and algorithms to decide the implication problem for the combined class of functional and multivalued dependencies in the presence of NOT NULL constraints. The resulting theory subsumes three previously orthogonal frameworks. We further show that the implication problem of this class is equivalent to that in a propositional fragment of Cadoli and Schaerf’s family of para-consistent S-3 logics. In particular, S is the set of variables that correspond to attributes declared NOT NULL. We also show how our equivalences for multivalued dependencies can be extended to Delobel’s class of full first-order hierarchical decompositions, and the equivalences for functional dependencies can be extended to arbitrary Boolean dependencies. These dualities allow us to transfer several findings from the propositional fragments to the corresponding classes of data dependencies, and vice versa. We show that our results also apply to Codd’s null interpretation "value unknown at present", but not to Imielinski’s or-relations utilizing Levene and Loizou’s weak possible world semantics. Our findings establish NOT NULL constraints as an effective mechanism to balance not only the certainty in database relations but also the expressiveness with the efficiency of entailment relations. They also control the degree by which the implication of data dependencies over total relations is soundly approximated in SQL table definitions.

    View record details
  • An initial-algebra approach to directed acyclic graphs

    Gibbons, Jeremy (1995-04)

    Report
    The University of Auckland Library

    The initial-algebra approach to modelling datatypes consists of giving constructors for building larger objects of that type from smaller ones, and laws identifying different ways of constructing the same object. The recursive decomposition of objects of the datatype leads directly to a recursive pattern of computation on those objects, which is very helpful for both functional and parallel programming. We show how to model a particular kind of directed acyclic graph using this initial-algebra approach. (A toy code illustration follows this record.)

    View record details
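
    A minimal Python illustration of the initial-algebra style on a toy datatype (not the report's DAG algebra, which has more constructors plus laws identifying equal graphs): constructors build values, and a single generic fold realises the recursive pattern of computation the abstract describes.

      def leaf(x):     return ('leaf', x)
      def node(l, r):  return ('node', l, r)

      def fold(t, on_leaf, on_node):
          # Recursive decomposition: replace each constructor by a function.
          if t[0] == 'leaf':
              return on_leaf(t[1])
          return on_node(fold(t[1], on_leaf, on_node),
                         fold(t[2], on_leaf, on_node))

      t = node(leaf(1), node(leaf(2), leaf(3)))
      size  = fold(t, lambda x: 1, lambda l, r: l + r)   # -> 3 leaves
      total = fold(t, lambda x: x, lambda l, r: l + r)   # -> 6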
  • Branchwidth, Branch Decompositions and b-parses

    Probert, A; Dinneen, MJ (2013)

    Report
    The University of Auckland Library

    In this paper we present an easy-to-use data structure for representing graphs of bounded branchwidth, called b-parses. Many hard problems that can be represented as graph problems can be more easily solved when a decomposition of the graph is taken into account. This is particularly true where the input graph can be seen to be treelike in some form. An example of such a treelike structure is branch decomposition, where the edges of a graph are arranged as leaves around a tree and the internal nodes of the tree represent connectivity between subsets of the edges of the original graph. This is similar in concept to the idea of tree decomposition, which views the input graph vertices as forming a treelike structure of bounded-sized vertex separators. However, branch decompositions may be simpler to work with than tree decompositions for appropriate problems because of the structure (and possibly smaller width) of the tree that is formed. In this paper an algebraic representation of branch decompositions (b-parse) is proposed as an alternative to the t-parse representation for tree decompositions. An application of this data structure to the known hard problems Minimum Vertex Cover and 3-Coloring is examined. Finally, possible benefits of using b-parses from the parallelism perspective are given.

    View record details
  • Another Example of Higher Order Randomness

    Becher, V; Chaitin, G.J (2002-05)

    Report
    The University of Auckland Library

    We consider the notion of randomness relative to an oracle: a real number is random in A if and only if its initial segments are algorithmically incompressible in a self-delimiting universal machine equipped with an oracle A. We prove that the probability that a program for infinite computations outputs a cofinite set is random in the second jump of the halting problem. (The formal definition is recalled after this record.)

    View record details
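
    In standard notation, the definition the abstract paraphrases reads as follows (a recap, writing H^A for program-size complexity on a self-delimiting universal machine with oracle A): a real x is random in A iff

      \exists c\ \forall n\ \ H^A(x_1 x_2 \dots x_n) \ge n - c,

    i.e. no initial segment of x is compressible by more than a fixed constant, even with access to A.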
  • Phase Transition and Strong Predictability

    Tadaki, K (2013)

    Report
    The University of Auckland Library

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former work [K. Tadaki, Local Proceedings of CiE 2008, pp.425–434, 2008], where we introduced the notion of thermodynamic quantities into AIT. These quantities are real functions of temperature T > 0. The values of all the thermodynamic quantities diverge when T exceeds 1. This phenomenon corresponds to phase transition in statistical mechanics. In this paper we introduce the notion of strong predictability for an infinite binary sequence and then apply it to the partition function Z(T), which is one of the thermodynamic quantities in AIT. We then reveal a new computational aspect of the phase transition in AIT by showing the critical difference of the behavior of Z(T) between T = 1 and T < 1 in terms of the strong predictability for the base-two expansion of Z(T). (The partition function Z(T) is recalled after this record.)

    View record details
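
    For context, the partition function referred to is, in Tadaki's statistical-mechanical formulation of AIT (recalled from that earlier work, not from this abstract),

      Z(T) = \sum_{p \in \mathrm{dom}(U)} 2^{-|p|/T},

    where U is an optimal prefix-free machine; the sum converges for T \le 1 (with Z(1) equal to Chaitin's halting probability \Omega) and diverges for T > 1, which is the phase transition the abstract refers to.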
  • Solving SAT with Bilateral Computing

    Arulanandham, J.J; Calude, C.S; Dinneen, Michael (2002-12)

    Report
    The University of Auckland Library

    We solve a simple instance of the SAT problem using a natural physical computing system based on fluid mechanics. The natural system functions in a way that avoids the combinatorial explosion which generally arises from the exponential number of assignments to be examined. The solution may be viewed as part of a more general type of natural computation called Bilateral Computing. The paper will also describe this new computing paradigm and will compare it with Reversible Computing. Our approach is informal: the emphasis is on motivation, ideas and implementation rather than a formal description.

    View record details
  • Effective Recognition and Visualization of Semantic Requirements by Perfect SQL Samples

    Tran Le, VB; Link, S; Ferrarotti, F (2013)

    Report
    The University of Auckland Library

    SQL designs result from methodologies such as UML or Entity-Relationship models, description logics, or relational normalization. Independently of the methodology, sample data is promoted by academia and industry to visualize and consolidate the designs produced. SQL table definitions are a standard-compliant encoding of their designers' perception about the semantics of an application domain. Armstrong sample data visualize these perceptions. We present a tool that computes Armstrong samples for different classes of SQL constraints. Exploiting our tool, we then investigate empirically how these Armstrong samples help design teams recognize domain semantics. New measures empower us to compute the distance between constraint sets in order to evaluate the usefulness of our tool. Extensive experiments confirm that users of our tool are likely to recognize domain semantics they would overlook otherwise. The tool thereby effectively complements existing design methodologies in finding quality schemata that process data efficiently.

    View record details