3 results for Green, Peter, Thesis

  • An introduction to agent-based modelling

    Green, Peter (2007)

    Masters thesis
    University of Otago

    Agent-based models give us a way to model the aggregation of heterogeneous agents, a feat that is nearly impossible in a deductive framework. Because these models cannot be solved exactly, they are often explored using computer simulations. Computers are an important tool in this field, but they are not central to the methodology. A simple model like Schelling's can be investigated using a few toys from the games cupboard. The ability to program simulated models is a lower barrier to entry than the ability to build tractable analytical models. As a result, the field is tending towards breadth rather than depth. Creating a new model ex nihilo is more straightforward and more rewarding than adapting an existing model (Axelrod 2003). This creative "anarchy" makes it difficult to compare and to replicate results (Leombruni et al. 2006). The methodology itself is promising, though, and a more disciplined approach could make significant contributions to economics. This review presents the motivation behind agent-based modelling and its epistemological justification. [extract from Introduction]

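The Schelling model mentioned in this abstract is simple enough to illustrate the point about low barriers to entry: agents of two types sit on a grid, and any agent with too few like-typed neighbours moves to a vacant cell. The Python sketch below is one minimal way to set such a model up; the grid size, vacancy rate, and 30% similarity threshold are illustrative choices, not values taken from the thesis.

import random

SIZE = 20          # the grid is SIZE x SIZE
EMPTY_FRAC = 0.2   # fraction of cells left vacant
THRESHOLD = 0.3    # an agent is content if >= 30% of its neighbours share its type

def make_grid(rng):
    """Fill a toroidal grid with two agent types and some vacant cells."""
    cells = []
    for _ in range(SIZE * SIZE):
        r = rng.random()
        if r < EMPTY_FRAC:
            cells.append(None)
        else:
            cells.append("A" if r < (1 + EMPTY_FRAC) / 2 else "B")
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, i, j):
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                cell = grid[(i + di) % SIZE][(j + dj) % SIZE]  # wrap at the edges
                if cell is not None:
                    out.append(cell)
    return out

def unhappy(grid, i, j):
    agent = grid[i][j]
    if agent is None:
        return False
    nbrs = neighbours(grid, i, j)
    return bool(nbrs) and sum(n == agent for n in nbrs) / len(nbrs) < THRESHOLD

def step(grid, rng):
    """Move every unhappy agent to a random vacant cell; return how many moved."""
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE) if unhappy(grid, i, j)]
    vacant = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
    rng.shuffle(movers)
    for i, j in movers:
        if not vacant:
            break
        ni, nj = vacant.pop(rng.randrange(len(vacant)))
        grid[ni][nj], grid[i][j] = grid[i][j], None
        vacant.append((i, j))  # the vacated cell becomes available again
    return len(movers)

if __name__ == "__main__":
    rng = random.Random(0)
    grid = make_grid(rng)
    for t in range(100):
        if step(grid, rng) == 0:   # stop once every agent is content
            print(f"settled after {t} steps")
            break
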
  • Agent-based modelling of monopsony and the minimum wage

    Green, Peter (2007)

    Masters thesis
    University of Otago

    A simple supply and demand argument apparently shows that minimum wage policy, ironically, hurts the workers it is ostensibly aimed at helping, by increasing their chances of unemployment. Stigler (1946) claims that economists should be “outspoken, and singularly agreed” on the issue. While the profession happily achieves the former, it is a long way from the latter (Klein & Dompe 2007). Early last century, Sidney Webb (1912) claimed that minimum wage laws had increased productivity growth, both by drawing employers’ attention away from cost-cutting and towards productivity improvements, and by providing a relative advantage to high-wage firms. Today, this is backed up by mathematical models (e.g. Cahuc & Michel 1996, Acemoglu 2001). Recent studies — the “new economics” of the minimum wage — have provided more ambiguous evidence about employment effects. Monopsony models have become fashionable since they were used to account for increases in employment in Card & Krueger’s (1995a) Myth and Measurement. Although the word “monopsony” initially referred to markets with a single buyer, the modern usage refers to models where individual buyers face upward-sloping supply curves. Despite the shift in meaning, the term still carries some stigma, especially if it is used in contexts where the assumption of one buyer would not be credible (Boal & Ransom 1997). This project investigates whether a simple agent-based model is better described by a competitive model or by a monopsony model, and what implications this has for minimum wage policy. Two models were built. The first is a toy model which simply reproduces a competitive model in simulation form. The second, based on search models of labour markets, exhibits behaviour similar to a monopsony model. [extract from Introduction]

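The abstract describes two agent-based models, the second built on labour-market search. The sketch below is a speculative toy of that general flavour only, not the thesis's specification: firms post wages, each worker samples just a couple of offers per period (the search friction that gives an individual firm an upward-sloping supply of labour), and a wage floor can be switched on. The agent counts, productivities, reservation wages, and wage-adjustment rules are all invented for illustration.

import random

N_FIRMS, N_WORKERS, PERIODS = 20, 200, 200

def simulate(min_wage=0.0, sample_size=2, seed=1):
    """Run the toy labour market; return final-period employment and mean posted wage."""
    rng = random.Random(seed)
    productivity = [rng.uniform(0.5, 1.5) for _ in range(N_FIRMS)]
    wages = [0.5 * p for p in productivity]            # initial posted wages
    reservation = [rng.uniform(0.1, 0.6) for _ in range(N_WORKERS)]
    employed = 0
    for _ in range(PERIODS):
        # offers below the floor are withdrawn rather than paid
        offers = [(w, f) for f, w in enumerate(wages) if w >= min_wage]
        hires = [0] * N_FIRMS
        employed = 0
        for worker in range(N_WORKERS):
            if not offers:
                break
            sampled = rng.sample(offers, min(sample_size, len(offers)))
            best_wage, best_firm = max(sampled)        # take the best offer seen
            if best_wage >= reservation[worker]:
                employed += 1
                hires[best_firm] += 1
        # crude wage adjustment: firms no one joined raise pay (up to productivity),
        # firms with hires shade their offer down
        for f in range(N_FIRMS):
            if hires[f] == 0:
                wages[f] = min(productivity[f], wages[f] * 1.05)
            else:
                wages[f] *= 0.99
    return employed, sum(wages) / N_FIRMS

if __name__ == "__main__":
    for floor in (0.0, 0.4, 0.6):
        emp, mean_wage = simulate(min_wage=floor)
        print(f"wage floor {floor:.1f}: employed {emp}/{N_WORKERS}, mean posted wage {mean_wage:.2f}")

Setting sample_size equal to N_FIRMS makes search effectively frictionless, which is one crude way to compare a competitive benchmark with the monopsony-like frictional regime inside the same sketch.
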
  • Towards a Fast Bayesian Climate Reconstruction

    Green, Peter (2016)

    Doctoral thesis
    University of Otago

    To understand global climate prior to the availability of widespread instrumental data, we need to reconstruct temperatures using natural proxies such as tree rings. For reconstructions of a temperature field with multiple proxies, the currently preferred method is RegEM (Schneider, 2001). However, this method has problems with speed, convergence, and interpretation. In this thesis we show how one variant of RegEM can be replaced by the monotone EM algorithm (Liu, 1999). This method is much faster, especially in suitably designed pseudoproxy simulation experiments. Multi-proxy reconstructions can be large, with thousands of variables and millions of parameters. We describe how monotone EM can be implemented efficiently for problems on this scale. RegEM has been interpreted in a Bayesian context as a multivariate normal model with an inverse Wishart prior. We extend this interpretation, noting its empirical Bayesian aspects and the implications of the prior for the variance loss problem, and we use posterior predictive checks for model criticism. The Bayesian interpretation leads us to suggest a novel prior. Simulated reconstructions with this prior show promising performance compared with the usual prior, particularly in terms of low sensitivity to the tuning parameter.

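For orientation on the machinery this abstract refers to: RegEM treats the combined proxy/temperature matrix as incomplete multivariate normal data and iterates regularised regression imputations. The sketch below is a bare-bones EM loop for a multivariate normal with missing values, in that spirit only; the small ridge added before inverting the observed block, the starting values, and the function name em_mvn_impute are illustrative assumptions, not Schneider's regularisation or the thesis's monotone EM.

import numpy as np

def em_mvn_impute(X, n_iter=50, ridge=1e-3):
    """EM imputation for a multivariate normal model; missing entries in X are NaN."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)                      # start from column means
    Xhat = np.where(miss, mu, X)
    Sigma = np.cov(Xhat, rowvar=False) + np.eye(p)  # inflated starting covariance

    for _ in range(n_iter):
        S = np.zeros((p, p))                        # accumulates E[x x^T]
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if m.any():
                Soo = Sigma[np.ix_(o, o)] + ridge * np.eye(int(o.sum()))
                Smo = Sigma[np.ix_(m, o)]
                B = Smo @ np.linalg.inv(Soo)
                # conditional mean of the missing block given the observed block
                Xhat[i, m] = mu[m] + B @ (Xhat[i, o] - mu[o])
                # the conditional covariance only contributes to the missing-missing block
                S[np.ix_(m, m)] += Sigma[np.ix_(m, m)] - B @ Smo.T
            S += np.outer(Xhat[i], Xhat[i])
        mu = Xhat.mean(axis=0)                      # M-step: update mean and covariance
        Sigma = S / n - np.outer(mu, mu)
    return Xhat, mu, Sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    full = rng.multivariate_normal([0.0, 1.0, 2.0], np.eye(3) + 0.5, size=300)
    data = full.copy()
    # knock out ~30% of the entries in the last two columns (first column stays complete)
    data[:, 1:][rng.random((300, 2)) < 0.3] = np.nan
    _, mu_hat, _ = em_mvn_impute(data)
    print("estimated means:", np.round(mu_hat, 2))

The monotone EM algorithm mentioned in the abstract exploits structure in the missingness pattern to speed this kind of computation up; the loop above is only the generic version.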