1,730 results for Working or discussion paper

  • What Drives the Cross-Country Growth and Inequality Correlation?

    Bandyopadhyay, Debasis; Basu, Parantap (2002)

    Working or discussion paper
    The University of Auckland Library

    We present a neo-classical model that explores the determinants of the growth-inequality correlation and attempts to reconcile the seemingly conflicting evidence on the nature of the growth-inequality relationship. The initial distribution of human capital determines the long-run income distribution and the growth rate by influencing the occupational choice of agents. The steady-state proportion of adults who innovate and update human capital is path-dependent. The output elasticity of skilled labour, barriers to knowledge spillovers, and the degree of redistribution determine the range of steady-state equilibria. From a calibration experiment we report that a combination of a skill-intensive technology, low barriers to knowledge spillovers, and a high degree of redistribution characterizes the group of countries with a positive growth-inequality relationship. A negative relationship arises in the group with the opposite characteristics.

    View record details
  • The Mortensen Rule and Efficient Coordination Unemployment

    Julien, Benoit; Kennes, John; King, Ian (2002)

    Working or discussion paper
    The University of Auckland Library

    We study the implementation of constrained-efficient allocations in labour markets where a basic coordination problem leads to an equilibrium matching function. We argue that these allocations can be achieved in equilibrium if wages are determined by ex post bidding. This holds true even in finite-sized markets, where the equilibrium matching function has decreasing returns to scale and the Hosios rule does not apply, both with and without heterogeneity. This wage determination mechanism is similar to the one proposed by Mortensen (1982) in a different setting.

    View record details
  • A LAD Regression Under Non-Standard Conditions

    Rogers, Alan (1997)

    Working or discussion paper
    The University of Auckland Library

    Most work on the asymptotic properties of least absolute deviations (LAD) estimators makes use of the assumption that the common distribution of the disturbances has a density which is finite and positive at zero. We consider the implications of weakening this assumption in a regression setting. The results obtained are similar in flavor to those obtained in a least squares context when the disturbance variance is allowed to be infinite: both the shape of the limiting distribution and the rate of convergence to it are affected in reasonably simple and intuitive ways. As well as conventional regression models, we outline results for some simple autoregressive models which may have a unit root and/or infinite error variance.
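
    The robustness that motivates LAD can be illustrated with a small simulation: under Cauchy disturbances (density finite and positive at zero, but infinite variance), the median-regression estimator remains well behaved while OLS is dominated by extreme draws. A minimal sketch, not taken from the paper; the model and all parameters are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
# Cauchy disturbances: density is finite and positive at zero,
# but the variance is infinite
e = rng.standard_cauchy(n)
y = 1.0 + 2.0 * x + e

# LAD: minimise the sum of absolute deviations over (intercept, slope)
lad = minimize(lambda b: np.abs(y - b[0] - b[1] * x).sum(),
               x0=[0.0, 0.0], method="Nelder-Mead")
a_lad, b_lad = lad.x

# OLS for comparison: heavily influenced by the extreme Cauchy draws
b_ols = np.polyfit(x, y, 1)[0]
```

    With this design the LAD slope sits close to the true value of 2, whereas the OLS slope can be pulled far away by a handful of outliers.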

    View record details
  • Forecasting Volatility: Evidence from the German Stock Market

    Bluhm, Hagen; Yu, Jun (2001)

    Working or discussion paper
    The University of Auckland Library

    In this paper we compare two basic approaches to forecast volatility in the German stock market. The first approach uses various univariate time series techniques while the second approach makes use of volatility implied in option prices. The time series models include the historical mean model, the exponentially weighted moving average (EWMA) model, four ARCH-type models and a stochastic volatility (SV) model. Based on the utilization of volatility forecasts in option pricing and Value-at-Risk (VaR), various forecast horizons and forecast error measurements are used to assess the accuracy of the volatility forecasts. We show that the model rankings are sensitive to the error measurements as well as the forecast horizons. The results indicate that it is difficult to state which method is the clear winner. However, when option pricing is the primary interest, the SV model and implied volatility should be used. On the other hand, when VaR is the objective, the ARCH-type models are useful. Furthermore, a trading-strategy exercise suggests that the time series models do not outperform implied volatility in predicting volatility.
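
    Of the time-series candidates, the EWMA model is the simplest to reproduce. A minimal sketch of a RiskMetrics-style EWMA variance recursion; the decay factor, the seeding rule, and the simulated returns are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """One-step-ahead EWMA variance forecasts:
    var[t] = lam * var[t-1] + (1 - lam) * returns[t-1]**2.
    Seeded with the full-sample variance (an arbitrary choice)."""
    var = np.empty(len(returns))
    var[0] = returns.var()
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return var

rng = np.random.default_rng(42)
r = 0.01 * rng.standard_normal(500)   # simulated daily returns
vol_forecast = np.sqrt(ewma_variance(r))
```

    With lam = 0.94 roughly 99% of the weight falls on the most recent 75 or so observations, which is why EWMA forecasts react much faster to volatility shocks than the historical mean model.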

    View record details
  • Agency Theory Meets Social Capital: The Failure of the 1984-91 New Zealand Economic Revolution

    Hazledine, Tim (2000)

    Working or discussion paper
    The University of Auckland Library

    The failure of the New Zealand Economic Revolution of 1984-91 to generate improved economic performance is puzzling and important, since the reforms enacted then have often been cited as a 'textbook' example of how to liberalise an economy, and since the preconditions for success (such as good government, secure property rights and stable capitalist institutions) were all in place, in contrast to the economies of the former Soviet bloc. This paper first documents the extent of the failure, and then attempts to explain it theoretically. This is the story: the reform program can be seen as a massive application (or mis-application) of Principal/Agent Theory. The Principal is the small group of economic revolutionaries. The Agents are the people of NZ. The Principal's sole object is economic efficiency. The Agents enjoy the fruits of efficiency, but also enjoy other things ('slack') which conflict with efficient behaviour. The Principal introduces policies (deregulation, liberalisation, commercialisation) which raise the opportunity cost of non-efficient behaviour in both private and public sectors. Unfortunately, the Principal has the 'wrong model' of how the economy functions. Slack does not just enter Agents' utility functions; it is also an input into production, where it appears as 'Forbearance', the flow variable associated with the stock concept known as Social Capital (the ability of agents to achieve mutually beneficial outcomes through trusting and trustworthy behaviour). Thus, the Reforms actually reduced economic efficiency, for two reasons: (1) they forced noncooperative behaviour on agents, and (2) they incurred direct costs of monitoring and enforcement to bring agents' behaviour into line with the Principal's objectives. And the total welfare costs exceed the loss of economic efficiency (GDP), since disproportionately more utility-enhancing slack, or forbearance, is wiped out.
The prediction of increased resources devoted to transaction cost activities, in particular management, is tested in a comparison of New Zealand and Australia (which did not go through such a radical reform process). The data do indeed show a substantial increase in the number of managers in NZ, relative to Australia.

    View record details
  • Compulsory Licensing of Technology and the Essential Facilities Doctrine

    Aoki, Reiko; Small, John (2002)

    Working or discussion paper
    The University of Auckland Library

    We look at compulsory licensing of intellectual property as a remedy for anti-competitive practice. We identify aspects of intellectual property that warrant a different remedy from those based on general definitions of, and remedies for, an essential facility. Based on the analysis, we present a characterisation of optimal compulsory licensing for a simple market.

    View record details
  • How Financial Development Caused Economic Growth in the APEC: Financial Integration with FDI OR Privatisation without FDI

    Bandyopadhyay, Debasis (2004)

    Working or discussion paper
    The University of Auckland Library

    Politicians fashionably argue in favour of financial development to promote economic growth, following the seminal study of King and Levine (1993a, 1993b). Financial development, however, could come through alternative channels that are sometimes not compatible in small open economies. A relatively popular channel promotes privatisation of domestic financial intermediaries but with restrictions on foreign ownership. The other competing channel works through foreign direct investment (FDI), requiring foreign ownership of national assets. Until the last decade of globalisation, from the sixties through the early nineties, in many APEC countries and especially in East Asia, privatisation of national banks went hand in hand with a regime of financial repression. Under that regime governments kept the domestic interest rate above the world rate by imposing barriers against FDI. The recent trend of globalisation creates a political tension between those who welcome FDI and those who oppose it. This paper evaluates the relative contribution of those two alternative channels of financial development to economic growth. The model of analysis builds on King and Levine (1993b) but restricts its attention to small open economies of the APEC. Contrary to the previous findings, privatisation of the domestic financial sector alone turns out to have a negative impact on the growth of efficiency, measured by the growth of total factor productivity. This discrepancy could possibly be rationalised by a special characteristic of the APEC sample, where a negative effect on efficiency came from the regimes of financial repression that blocked FDI. Financial integration led by FDI does bring the prospect of lower economic growth due to increased business fluctuations, especially for small open economies. Nevertheless, it is surprising to find that a significant improvement in efficiency and growth came to the APEC nations through the international channel with the flow of FDI. Consequently, barriers to globalisation erected out of purely nationalist concerns may be ill-fated even for small open economies.

    View record details
  • Asymptotic Power Advantages of Long-Horizon Regressions

    Mark, Nelson; Sul, Donggyu (2002)

    Working or discussion paper
    The University of Auckland Library

    Local asymptotic power advantages are available for testing the hypothesis that the slope coefficient is zero in regressions of y_{t+k} - y_t on x_t for k > 1, when {y_t} ~ I(0) and {x_t} ~ I(0). The advantages of these long-horizon regression tests accrue in empirically relevant regions of the admissible parameter space. In Monte Carlo experiments, small sample power advantages to long-horizon regression tests accrue in a region of the parameter space that is larger than that predicted by the asymptotic analysis.
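
    The mechanics can be illustrated by simulation: when the regressor is persistent, the population slope in the k-period-difference regression grows in magnitude with the horizon, which is the source of the power gain. A sketch under an illustrative data-generating process (y_t = x_t + noise with an AR(1) regressor; the process and parameters are assumptions for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
T, phi = 50_000, 0.9

# Persistent but stationary regressor: x_t = phi * x_{t-1} + eps_t
eps = rng.standard_normal(T)
x = np.empty(T)
x[0] = eps[0]
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

y = x + rng.standard_normal(T)  # I(0) dependent variable

def lh_slope(y, x, k):
    """OLS slope in the regression of y_{t+k} - y_t on x_t."""
    dy = y[k:] - y[:-k]
    xk = x[:-k]
    dy, xk = dy - dy.mean(), xk - xk.mean()
    return (xk @ dy) / (xk @ xk)

# Under this DGP the population slope is phi**k - 1,
# whose magnitude increases with the horizon k
slopes = {k: lh_slope(y, x, k) for k in (1, 4, 8)}
```

    The growing slope magnitude at longer horizons is what gives the long-horizon t-test its extra power against the null of a zero coefficient.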

    View record details
  • New Unit Root Asymptotics in the Presence of Deterministic Trends

    Phillips, Peter (1998)

    Working or discussion paper
    The University of Auckland Library

    Recent work by the author (1998) has shown that stochastic trends can be validly represented in empirical regressions in terms of deterministic functions of time. These representations offer an alternative mechanism for modelling stochastic trends. It is shown here that the alternative representations affect the asymptotics of all commonly used unit root tests in the presence of trends. In particular, the critical values of unit root tests diverge when the number of deterministic regressors K → ∞ as the sample size n → ∞. In such circumstances, use of conventional critical values based on fixed K will lead to rejection of the null of a unit root in favour of trend stationarity with probability one when the null is true. The results can be interpreted as saying that serious attempts to model trends by deterministic functions will always be successful, and that these functions can validly represent stochastically trending data even when lagged variables are present in the regressor set, thereby undermining conventional unit root tests.

    View record details
  • Stacked generalization: when does it work?

    Ting, Kai Ming; Witten, Ian H. (1997-01)

    Working or discussion paper
    University of Waikato

    Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we resolve two crucial issues which have been considered to be a ‘black art’ in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms, and also for combining models of the same type derived from a single learning algorithm in a multiple-data-batches scenario. We also compare the performance of stacked generalization with published results of arcing and bagging.
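
    The setup is straightforward to reproduce with modern tooling. A sketch of stacking three heterogeneous base learners whose cross-validated class probabilities feed a linear meta-model, in the spirit of (though not identical to) the paper's recommendation of probability-based attributes with a multi-response linear generalizer; scikit-learn, the dataset, and all parameters here are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three different types of level-0 learner, combined by a level-1
# model trained on their cross-validated class probabilities
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

    The `stack_method="predict_proba"` choice mirrors the paper's finding that class-probability attributes, rather than bare class predictions, are the useful input to the level-1 generalizer.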

    View record details