Properties of Optimal Prefix-Free Machines as Instantaneous Codes
Tadaki, K. (2010)
Report
The University of Auckland Library
The optimal prefix-free machine U is a universal decoding algorithm used to define the notion of program-size complexity H(s) for a finite binary string s. Since the set of all halting inputs for U is chosen to form a prefix-free set, the optimal prefix-free machine U can be regarded as an instantaneous code for a noiseless source coding scheme. In this paper, we investigate the properties of optimal prefix-free machines as instantaneous codes. In particular, we investigate the properties of the set U⁻¹(s) of codewords associated with a symbol s. Namely, we investigate the number of codewords in U⁻¹(s) and the distribution of codewords in U⁻¹(s) for each symbol s, using the toolkit of algorithmic information theory.
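The abstract treats the halting inputs of U as an instantaneous, i.e. prefix-free, code. As background, here is a minimal Python sketch (illustrative only, not from the paper) of the two defining checks: the prefix-free property, and the Kraft sum, which is at most 1 for any binary prefix-free code and exactly 1 for a complete one.

```python
def is_prefix_free(codewords):
    """Check that no codeword is a proper prefix of another."""
    for u in codewords:
        for v in codewords:
            if u != v and v.startswith(u):
                return False
    return True

def kraft_sum(codewords):
    """Kraft sum of a binary code; at most 1 for a prefix-free code."""
    return sum(2.0 ** -len(w) for w in codewords)

code = ["0", "10", "110", "111"]
assert is_prefix_free(code)
print(kraft_sum(code))  # 0.5 + 0.25 + 0.125 + 0.125 = 1.0, so the code is complete
```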
A Multi-Criteria Metric Algorithm for Recommender Systems
Akhtarzada, A; Calude, C.S.; Hosking, J. (2011)
Report
Information overload and an abundance of choices create situations where selecting one option becomes extremely difficult or, even worse, a guessing game. Collaborative ranking systems are widely used to alleviate this problem by creating intelligent rankings of items based on an aggregation of user opinions. Current ranking systems can still be improved in a number of areas, including accuracy, transparency and flexibility. This paper presents a multi-criteria ranking algorithm that can be used on a non-rigid set of criteria. The system implementing the algorithm fares well with respect to the above qualities.
P Systems in Stereo Matching (extended version)
Gimel'farb, G.; Nicolescu, R.; Ragavan, S. (2011)
Report
Designing parallel versions of sequential algorithms has attracted renewed attention, due to recent hardware advances, including various general-purpose multi-core and many-core processors, as well as special-purpose FPGA implementations. P systems consist of networks of autonomous cells, such that each cell transforms its input signals in accord with its symbol-rewriting rules and feeds the output results into its immediate neighbours. Inherent massive intra- and inter-cell parallelism makes P systems a prospective theoretical testbed for designing efficient parallel and parallel-sequential algorithms. This paper discusses the ability of P systems to implement the symmetric dynamic programming stereo (SDPS) matching algorithm, which explicitly accounts for binocular or monocular visibility of 3D surface points. Given enough cells, the P system implementation speeds up the inner algorithm loop from O(nd) to O(n + d), where n is the width of a stereo image and d is the disparity range. The implementation also gives an insight into a more general SDPS algorithm that allows a possible multiplicity of solutions to the ill-posed optimal stereo matching problem.
A Forward-Parsing Randomness Test Based on the Expected Codeword Length of T-codes
Speidel, U. (2011)
Report
This paper proposes an algorithm for randomness testing. It is a variant of an algorithm presented earlier by the author for the construction of random sequences. The algorithm exploits two facts. Firstly, given a complete finite prefix code Ci over an alphabet A, every semi-infinite sequence s of symbols x0, x1, x2, ... from A starts with a codeword wi ∈ Ci, and if one presumes that s is random, one can compute the expected length hi of wi. Secondly, wi can be used to extend Ci to yield a larger code Ci+1. In this case, the actual length |wi| of wi determines the growth in the expected codeword length hi+1 for the codeword wi+1 ∈ Ci+1 that follows in s after the end of wi. If |wi| > hi, then hi+1 grows less compared to hi than if |wi| ≤ hi. By comparing expected codeword lengths and actual codeword lengths cumulatively, one may assess the randomness of s: if the hypothesis that s is random holds, then the cumulative values should closely mirror each other.
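The first fact the test exploits, that a sequence over A starts with a unique codeword of a complete prefix code whose expected length is computable under the randomness hypothesis, can be sketched for the binary case as follows (illustrative Python; the code set and names are hypothetical, not the author's T-code construction):

```python
def first_codeword(code, bits):
    """Return the unique codeword of a prefix code that `bits` starts with."""
    for n in range(1, len(bits) + 1):
        if bits[:n] in code:
            return bits[:n]
    return None  # cannot happen for a complete code and long enough input

def expected_length(code):
    """Expected codeword length h when the source bits are fair coin flips:
    a complete prefix code hits codeword w with probability 2^-|w|."""
    return sum(len(w) * 2.0 ** -len(w) for w in code)

code = {"0", "10", "110", "111"}
print(first_codeword(code, "1101001"))  # "110"
print(expected_length(code))            # 1*0.5 + 2*0.25 + 3*0.125 + 3*0.125 = 1.75
```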
Inductive Complexity Measures for Mathematical Problems
Burgin, M.; Calude, C.S.; Calude, E. (2011)
Report
An algorithmic uniform method to measure the complexity of finitely refutable statements [6, 7, 8] was used to classify famous/interesting mathematical statements like Fermat's last theorem, Hilbert's tenth problem, the four colour theorem and the Riemann hypothesis [9, 13, 14]. Working with inductive Turing machines of various orders [1] instead of classical computations, we propose a class of inductive complexity measures and inductive complexity classes for mathematical statements which generalise the previous method. In particular, the new method is capable of classifying statements of the form ∀n∃m R(n,m), where R(n,m) is a computable binary predicate. As illustrations, we evaluate the inductive complexity of the Collatz conjecture and the twin prime conjecture, which cannot be evaluated with the original method.
The ideal generated by $\sigma$-nowhere dense sets
Cao, Jiling; Greenwood, Sina (2004-11)
Report
In this paper, we consider the ideal $\mathscr{I}_\sigma$ generated by all $\sigma$-nowhere dense sets in a topological space. Properties of this ideal and its relations with the Volterra property are explored. We show that $\mathscr{I}_\sigma$ is compatible with the topology for any given space, an analogue of the Banach category theorem. Some applications of this result and the Banach category theorem are also given.
Von Neumann Normalisation of a Quantum Random Number Generator
Abbott, A.A.; Calude, C.S. (2010)
Report
In this paper we study von Neumann unbiasing normalisation for ideal and real quantum random number generators, operating on finite strings or infinite bit sequences. In the ideal cases one can obtain the desired unbiasing. This relies critically on the independence of the source, a notion we rigorously define for our model. In real cases, affected by imperfections in measurement and hardware, one cannot achieve a true unbiasing, but, if the bias “drifts sufficiently slowly”, the result can be arbitrarily close to unbiasing. For infinite sequences, normalisation can either increase or decrease the (algorithmic) randomness of the generated sequences. A successful application of von Neumann normalisation—in fact, any unbiasing transformation—does exactly what it promises, unbiasing, which is one (among infinitely many) symptoms of randomness; it will not produce “true” randomness.
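Von Neumann's classical unbiasing procedure, which the paper analyses for quantum sources, is simple to state: read the bit stream in non-overlapping pairs, output the first bit of each discordant pair, and discard concordant pairs. A minimal sketch (illustrative Python, not the paper's formal model):

```python
def von_neumann(bits):
    """Von Neumann unbiasing: scan non-overlapping pairs of input bits;
    emit '0' for the pair '01', '1' for '10', drop '00' and '11'.
    A trailing unpaired bit is ignored."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # '01' -> '0', '10' -> '1'
    return "".join(out)

print(von_neumann("0110001011"))  # pairs 01,10,00,10,11 -> "011"
```

For an independent source with a fixed bias, each discordant pair is '01' or '10' with equal probability, so conditional on a bit being emitted it is fair; this is the ideal-case unbiasing the abstract refers to.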
Understanding the Quantum Computational Speedup via Dequantisation
Abbott, A.A.; Calude, C.S. (2010)
Report
While it seems possible that quantum computers may allow for algorithms offering a computational speedup over classical algorithms for some problems, the issue is poorly understood. We explore this computational speedup by investigating the ability to dequantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of dequantisation helps formulate conditions to determine if a quantum algorithm provides a real speedup over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated) and to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of dequantisation are presented, as well as a general formal definition of dequantisation. Dequantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is dequantisable. These methods are contrasted with the stabiliser formalism dequantisations due to the Gottesman-Knill Theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits and limits of the different methods are discussed, and the importance of utilising a range of techniques is emphasised. We further discuss some features of quantum algorithms which current dequantisation methods do not cover and highlight several important open questions in the area.
New Solutions for Disjoint Paths in P Systems
Nicolescu, Radu; Wu, Huiling (2012)
Report
We propose faster algorithms to identify maximum cardinality sets of edge- and node-disjoint paths between a source cell and a target cell in P systems. Previously, Dinneen et al. presented two algorithms, based on classical depth-first search (DFS), which run in O(md) P steps, where m is the number of edges and d is the outdegree of the source node. Combining Cidon's distributed DFS and our new result, Theorem 6, we propose two improved DFS-based algorithms, which run in O(nd) P steps, where n is the number of nodes. We also propose two improved algorithms, based on breadth-first search (BFS), with O(nf) runtime complexity, where f is the number of disjoint paths. All our algorithms are fixed size, i.e. the number of P rules does not depend on the number of cells in the underlying P system. Empirically, for randomly generated digraphs of various sizes, our DFS-based edge-disjoint algorithm is 39% faster than Dinneen et al.'s edge-disjoint algorithm and our BFS-based edge-disjoint algorithm is 80% faster, in terms of P steps. Our improvements can be considered for any distributed DFS implementation (i.e. not only for P systems).
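The sequential analogue of the problem solved here, counting edge-disjoint paths between a source and a target, is classically handled as unit-capacity maximum flow with BFS augmentation (Edmonds-Karp). A sketch for digraphs follows (illustrative Python; this is the standard centralized algorithm, not the distributed P-system one):

```python
from collections import deque

def edge_disjoint_paths(n, edges, s, t):
    """Maximum number of edge-disjoint s-t paths in a digraph on
    vertices 0..n-1, computed as unit-capacity max flow with
    repeated BFS augmentation."""
    cap = {}                       # residual capacities
    adj = [[] for _ in range(n)]   # adjacency including residual arcs
    for u, v in edges:
        if (u, v) not in cap:
            adj[u].append(v)
            adj[v].append(u)       # residual direction
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:      # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t                              # augment along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

print(edge_disjoint_paths(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))  # 2
```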
A New Universality Result on P Systems
Dinneen, M.J.; Kim, Y.B. (2012)
Report
In this paper, we present a new universality result in P systems, by providing a method to construct a P system ∏M with two cells that simulates an arbitrary register machine M. The novelty of our approach is to utilize a very simple register machine that supports reading of binary data. In our simple construction of the system ∏M: (i) cell states of a "simulator" cell in ∏M model the instruction lines of M, (ii) symbols (and their multiplicities) of ∏M model registers (and their values) of M, (iii) evolution rules of ∏M model the execution of instructions of M and (iv) communication to/from a "data" cell of ∏M is used to model the reading of data by M.
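For readers unfamiliar with the model being simulated, a minimal Minsky-style register machine interpreter can be sketched as follows (illustrative Python; the instruction syntax is hypothetical and this is not the specific machine of the paper, which additionally supports reading binary data):

```python
def run_register_machine(program, registers):
    """Interpret a minimal register machine. Instructions (hypothetical
    encoding, for illustration only):
      ('INC', r, j)      -- increment register r, jump to line j
      ('DECJZ', r, j, k) -- if r > 0: decrement r, jump to j; else jump to k
      ('HALT',)          -- stop
    Returns the final register contents."""
    pc = 0
    while program[pc][0] != 'HALT':
        ins = program[pc]
        if ins[0] == 'INC':
            registers[ins[1]] += 1
            pc = ins[2]
        else:  # 'DECJZ'
            if registers[ins[1]] > 0:
                registers[ins[1]] -= 1
                pc = ins[2]
            else:
                pc = ins[3]
    return registers

# Compute r1 := r1 + r0 (destroying r0):
prog = [('DECJZ', 0, 1, 2),  # 0: if r0 > 0, decrement and go to 1, else halt
        ('INC', 1, 0),       # 1: increment r1, loop back to 0
        ('HALT',)]           # 2: done
print(run_register_machine(prog, [3, 4]))  # [0, 7]
```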
Efficiency Frontiers of XML Cardinality Constraints
Ferrarotti, F.; Hartmann, S.; Link, S. (2012)
Report
XML has gained widespread acceptance as a premier format for publishing, sharing and manipulating data through the web. While the semistructured nature of XML provides a high degree of syntactic flexibility, there are significant shortcomings when it comes to specifying the semantics of XML data. For the advancement of XML applications it is therefore a major challenge to discover natural classes of constraints that can be utilized effectively by XML data engineers. This endeavor is ambitious given the multitude of intractability results that have been established. We investigate a class of XML cardinality constraints that is precious in the sense that it keeps the right balance between expressiveness and efficiency of maintenance. In particular, we characterize the associated implication problem axiomatically and develop a low-degree polynomial time algorithm that can be readily applied for deciding implication. Our class of constraints is chosen near-optimal as already minor extensions of its expressiveness cause potential intractability. Finally, we transfer our findings to establish a precious class of soft cardinality constraints on XML data. Soft cardinality constraints need to be satisfied on average only, and thus permit violations in a controlled manner. Soft constraints are therefore able to tolerate exceptions that frequently occur in practice, yet can be reasoned about efficiently.
Design by Example for SQL Table Definitions with Functional Dependencies
Kirchberg, M.; Hartmann, S.; Link, S. (2012)
Report
A database is C-Armstrong for a given set of constraints in a class C if it satisfies every constraint of the set and violates every constraint in C not implied by the set. Therefore, Armstrong databases are test data that perfectly illustrate the current perceptions about the semantics of a schema. We extend the existing theory of Armstrong relations to a toolbox of Armstrong tables. That is, we investigate structural and computational properties of Armstrong tables for the class of functional dependencies (FDs) over SQL tables. Relations are special instances of SQL tables with no duplicate rows and no null value occurrences. While FDs do not enjoy Armstrong tables, the combined class of standard FDs and NOT NULL constraints does enjoy Armstrong tables. The problem of finding an Armstrong table is shown to be precisely exponential for this combined class. However, we establish an algorithm that computes Armstrong tables with a size at most quadratic in that of a minimum-sized Armstrong table. Our resulting toolbox of Armstrong tables can be applied by data engineers to concisely visualize constraints on SQL data. Such support can lead to designs that guarantee efficient data management in practice.
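The notion at work can be made concrete for the null-free special case of relations: a table satisfies an FD lhs -> rhs when any two rows that agree on all lhs columns also agree on all rhs columns, and an Armstrong table deliberately violates every FD not implied by the given set. A small sketch (illustrative Python; the column names are hypothetical):

```python
def satisfies_fd(rows, lhs, rhs):
    """Check whether a table (list of dict rows, no nulls) satisfies the
    functional dependency lhs -> rhs: rows agreeing on lhs must agree on rhs."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

# Two rows suffice to exhibit the violation of emp -> dept,
# in the spirit of Armstrong sample data:
table = [{"emp": "Ann", "dept": "CS"},
         {"emp": "Ann", "dept": "Math"}]
print(satisfies_fd(table, ["emp"], ["dept"]))  # False
```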
Generalized Number Derivatives
Stay, M. (2004-08)
Report
We generalize the concept of a number derivative, and examine one particular instance of a deformed number derivative for finite field elements. We find that the derivative is linear when the deformation is a Frobenius map and go on to examine some of its basic properties.
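The classical number derivative being generalized here is the arithmetic derivative on positive integers, determined by 1' = 0, p' = 1 for primes p, and the Leibniz rule (ab)' = a'b + ab'. A short sketch (illustrative Python; the paper's deformed, finite-field version is not shown):

```python
def number_derivative(n):
    """Classical arithmetic derivative on positive integers:
    1' = 0, p' = 1 for primes, and (ab)' = a'b + ab'.
    Equivalently n' = n * sum(e/p) over the factorization n = prod p^e."""
    result, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            result += n // p  # each prime factor p contributes n/p
            m //= p
        p += 1
    if m > 1:                 # leftover prime factor
        result += n // m
    return result

print([number_derivative(n) for n in [1, 2, 3, 4, 6, 8]])  # [0, 1, 1, 4, 5, 12]
```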
The Implication Problem of Data Dependencies over SQL Table Definitions: Axiomatic, Algorithmic and Logical Characterizations
Hartmann, S.; Link, S. (2012)
Report
We investigate the implication problem for classes of data dependencies over SQL table definitions. Under Zaniolo's "no information" interpretation of null markers we establish an axiomatization and algorithms to decide the implication problem for the combined class of functional and multivalued dependencies in the presence of NOT NULL constraints. The resulting theory subsumes three previously orthogonal frameworks. We further show that the implication problem of this class is equivalent to that in a propositional fragment of Cadoli and Schaerf's family of paraconsistent S3 logics. In particular, S is the set of variables that correspond to attributes declared NOT NULL. We also show how our equivalences for multivalued dependencies can be extended to Delobel's class of full first-order hierarchical decompositions, and the equivalences for functional dependencies can be extended to arbitrary Boolean dependencies. These dualities allow us to transfer several findings from the propositional fragments to the corresponding classes of data dependencies, and vice versa. We show that our results also apply to Codd's null interpretation "value unknown at present", but not to Imielinski's or-relations utilizing Levene and Loizou's weak possible world semantics. Our findings establish NOT NULL constraints as an effective mechanism to balance not only the certainty in database relations but also the expressiveness with the efficiency of entailment relations. They also control the degree by which the implication of data dependencies over total relations is soundly approximated in SQL table definitions.
An initial-algebra approach to directed acyclic graphs
Gibbons, Jeremy (1995-04)
Report
The initial-algebra approach to modelling datatypes consists of giving constructors for building larger objects of that type from smaller ones, and laws identifying different ways of constructing the same object. The recursive decomposition of objects of the datatype leads directly to a recursive pattern of computation on those objects, which is very helpful for both functional and parallel programming. We show how to model a particular kind of directed acyclic graph using this initial-algebra approach.
Branchwidth, Branch Decompositions and b-parses
Probert, A; Dinneen, MJ (2013)
Report
In this paper we present an easy-to-use data structure for representing graphs of bounded branchwidth, called b-parses. Many hard problems that can be represented as graph problems can be more easily solved when a decomposition of the graph is taken into account. This is particularly true where the input graph can be seen to be tree-like in some form. An example of such a tree-like structure is a branch decomposition, where the edges of a graph are arranged as leaves around a tree and the internal nodes of the tree represent connectivity between subsets of the edges of the original graph. This is similar in concept to the idea of tree decomposition, which views the input graph vertices as forming a tree-like structure of bounded-size vertex separators. However, branch decompositions may be simpler to work with than tree decompositions for appropriate problems because of the structure (and possibly smaller width) of the tree that is formed. In this paper an algebraic representation of branch decompositions (b-parses) is proposed as an alternative to the t-parse representation for tree decompositions. An application of this data structure to the known hard problems Minimum Vertex Cover and 3-Coloring is examined. Finally, possible benefits of using b-parses from the parallelism perspective are given.
Another Example of Higher Order Randomness
Becher, V.; Chaitin, G.J. (2002-05)
Report
We consider the notion of randomness relative to an oracle: a real number is random in A if and only if its initial segments are algorithmically incompressible in a self-delimiting universal machine equipped with an oracle A. We prove that the probability that a program for infinite computations outputs a cofinite set is random in the second jump of the halting problem.
Phase Transition and Strong Predictability
Tadaki, K (2013)
Report
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former work [K. Tadaki, Local Proceedings of CiE 2008, pp. 425–434, 2008], where we introduced the notion of thermodynamic quantities into AIT. These quantities are real functions of temperature T > 0. The values of all the thermodynamic quantities diverge when T exceeds 1. This phenomenon corresponds to phase transition in statistical mechanics. In this paper we introduce the notion of strong predictability for an infinite binary sequence and then apply it to the partition function Z(T), which is one of the thermodynamic quantities in AIT. We then reveal a new computational aspect of the phase transition in AIT by showing the critical difference of the behavior of Z(T) between T = 1 and T < 1 in terms of the strong predictability of the base-two expansion of Z(T).
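For orientation, the partition function Z(T) mentioned above has, in Tadaki's statistical-mechanical interpretation of AIT, the following standard form (reproduced here as background, not taken from this abstract; U is an optimal prefix-free machine and p ranges over its halting programs):

```latex
% Partition function of AIT at temperature T:
Z(T) = \sum_{p \in \operatorname{dom} U} 2^{-|p|/T}
```

At T = 1 this sum is Chaitin's halting probability Ω, and its divergence for T > 1 is the phase-transition phenomenon the abstract refers to.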
Solving SAT with Bilateral Computing
Arulanandham, J.J.; Calude, C.S.; Dinneen, Michael (2002-12)
Report
We solve a simple instance of the SAT problem using a natural physical computing system based on fluid mechanics. The natural system functions in a way that avoids the combinatorial explosion which generally arises from the exponential number of assignments to be examined. The solution may be viewed as part of a more general type of natural computation called Bilateral Computing. The paper will also describe this new computing paradigm and will compare it with Reversible Computing. Our approach is informal: the emphasis is on motivation, ideas and implementation rather than a formal description.
Effective Recognition and Visualization of Semantic Requirements by Perfect SQL Samples
Tran Le, VB; Link, S; Ferrarotti, F (2013)
Report
SQL designs result from methodologies such as UML or Entity-Relationship models, description logics, or relational normalization. Independently of the methodology, sample data is promoted by academia and industry to visualize and consolidate the designs produced. SQL table definitions are a standard-compliant encoding of their designers' perceptions about the semantics of an application domain. Armstrong sample data visualize these perceptions. We present a tool that computes Armstrong samples for different classes of SQL constraints. Exploiting our tool, we then investigate empirically how these Armstrong samples help design teams recognize domain semantics. New measures empower us to compute the distance between constraint sets in order to evaluate the usefulness of our tool. Extensive experiments confirm that users of our tool are likely to recognize domain semantics they would overlook otherwise. The tool thereby effectively complements existing design methodologies in finding quality schemata that process data efficiently.