This page contains information about events held in the centre in the academic year 2001-2. Some of the talks have slides available, which can be downloaded by clicking on the PDF icon next to the talk's title.
The events held were:
Evolution in Finance
10 May 2002
Dr Igor Evstigneev
School of Economic Studies, University of Manchester
Evolutionary Finance: Market Selection and Survival of Investment Strategies
This work analyses the random evolution of an incomplete financial market with short-lived assets. The main focus is on the model where investors adopt fixed-mix strategies, which prescribe investing their wealth in the assets according to constant, time-independent proportions. It is shown that, in this setting, the Kelly-Breiman strategy ("betting one's beliefs") is dominant: those traders who adopt it eventually accumulate total market wealth. If only one investor uses this strategy, then he or she is the single survivor of the market selection process. Some analogous results concerning models with more general, not necessarily fixed-mix, strategies are discussed.
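The dominance of the Kelly-Breiman strategy can be illustrated with a toy simulation (a sketch under simplifying assumptions, not the paper's model): two fixed-mix investors split their wealth across two short-lived assets, only one of which pays off each period, and the investor whose proportions match the true state probabilities ends up with essentially all the market wealth.

```python
import random

random.seed(0)

# Two short-lived assets; each period exactly one pays off.
# State k occurs with probability p[k]; total payout equals total market wealth.
p = [0.6, 0.4]

# Fixed-mix proportions: the Kelly investor "bets her beliefs" (proportions = p),
# while the rival splits wealth evenly.
kelly, rival = [0.6, 0.4], [0.5, 0.5]
w = [1.0, 1.0]  # initial wealth of (Kelly investor, rival)

for _ in range(5000):
    k = 0 if random.random() < p[0] else 1
    total = sum(w)
    invested = kelly[k] * w[0] + rival[k] * w[1]  # total stake on the paying asset
    # The payout is shared in proportion to each investor's stake in asset k.
    w = [kelly[k] * w[0] / invested * total,
         rival[k] * w[1] / invested * total]

kelly_share = w[0] / sum(w)
print(round(kelly_share, 4))  # approaches 1 as the number of periods grows
```

The log of the wealth ratio drifts upward at a rate equal to the Kullback-Leibler divergence between the true probabilities and the rival's proportions, which is why the Kelly investor's share tends to one.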
Professor M A H Dempster
Centre for Financial Research, Judge Business School
Cambridge Systems Associates Limited
Intraday FX Trading: An Evolutionary Reinforcement Learning Approach
We have previously described trading systems based on unsupervised learning approaches, such as reinforcement learning and genetic algorithms, which take as input a collection of commonly used technical indicators and generate profitable trading decisions in terms of them. These contrast with traditional supervised learning approaches, which use labelled trading data. In general, there are two distinct approaches to solving the reinforcement learning problem: searching in value function space or searching in policy space. Temporal difference reinforcement learning methods and evolutionary algorithms are well-known examples of these approaches, and both were explored in our previous work. This article demonstrates the advantages of applying evolutionary algorithms to the reinforcement learning problem, emphasizing a hybrid credit assignment approach. We previously found that temporal difference reinforcement learning suffered from overfitting the in-sample data, which motivated the present work: an attempt to introduce generalisation by creating a hybrid evolutionary-based RL system. Technical analysis has been shown to have predictive value regarding future movements of foreign exchange prices, and this article presents methods for automated high-frequency FX trading based on evolutionary reinforcement learning of signals from a variety of technical indicators. We apply these approaches to the GBPUSD, USDCHF and USDJPY exchange rates at various frequencies. We find that the evolutionary reinforcement learning approach is indeed able to outperform the standard RL approach consistently. Statistically significant profits are made consistently at transaction costs of up to 4bp for the hybrid system, while the standard RL is only able to trade profitably up to about 2bp slippage per trade.
It is also shown that, at non-zero slippage, a system that allows a neutral out-of-market position to be held consistently outperforms one in which the trading system is always in the market.
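The evolutionary side of such a system can be sketched as a minimal (1+1) evolution strategy that mutates the parameters of a trading rule and keeps a mutation only when in-sample profit improves. Everything below (the random-walk prices, the moving-average rule and its parameters) is an illustrative assumption, not the authors' system:

```python
import random

random.seed(1)

# Synthetic price series (random walk) standing in for FX data.
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] + random.gauss(0, 1))

def profit(window, band):
    """In-sample P&L of a toy rule: long when price exceeds its moving
    average by `band`, short when below by `band`, flat otherwise."""
    pnl, pos = 0.0, 0
    for t in range(window, len(prices) - 1):
        ma = sum(prices[t - window:t]) / window
        if prices[t] > ma + band:
            pos = 1
        elif prices[t] < ma - band:
            pos = -1
        else:
            pos = 0
        pnl += pos * (prices[t + 1] - prices[t])
    return pnl

# (1+1) evolution strategy: mutate the rule, keep only improvements.
window, band = 20, 1.0
best = profit(window, band)
start = best
for _ in range(50):
    w2 = max(2, window + random.choice([-2, -1, 1, 2]))
    b2 = max(0.0, band + random.gauss(0, 0.2))
    f = profit(w2, b2)
    if f > best:
        window, band, best = w2, b2, f

print(window, band, round(best, 2))
```

Because only improving mutations are accepted, the evolved fitness can never fall below its starting value; the overfitting risk the abstract mentions is exactly that this in-sample improvement need not carry over out of sample.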
26 April 2002
Ruben D Cohen
Corporate Finance, Structured Products, Citigroup, London
An Objective Approach to Relative Valuation
A fundamental approach to describing the behaviour of the equity price index is presented. The method centres on the contention that, under a constant discount rate and in a market that is efficient and in equilibrium, the forward-looking risk premium and dividend yield tend to zero. Extending this special-case scenario to one that involves time-wise variations in the discount rate leads to a certain co-ordinate transformation (or mapping), which addresses how the index should behave correspondingly. Applying the same principle to both corporate earnings and the nominal gross domestic product (GDP) leads to a similar transformation. This, consequently, makes way for objective comparisons between the equity index, corporate earnings and the GDP, thereby raising the notion of relative valuation in this context. A practical demonstration is then provided for the US, UK and Japanese economies and equity markets. Finally, a further potential application of the model is illustrated, which relates to computing the equity duration. The proposed approach is shown to circumvent the difficulties generally associated with calculating equity duration.
Dr Phelim Boyle
University of Waterloo, Ontario
Asset allocation using quasi Monte Carlo methods
Suppose an investor wishes to select assets so as to maximize expected utility of end-of-period wealth and/or consumption over time. The optimal asset allocation decision is of long-standing interest to finance scholars and has direct practical relevance. In a complete market the modern procedure for computing the optimal portfolio weights is known as the martingale approach, and it was laid out by Cox and Huang (and other authors). Recently, alternative implementations of the martingale approach based on Monte Carlo methods have been proposed. This paper describes one of these methods, which involves the numerical computation of stochastic integrals. It is often possible to improve the efficiency of these computations by using deterministic numbers rather than random numbers. These deterministic numbers are known as quasi-random numbers, and they are selected so that they are well dispersed throughout the region of interest. The paper implements a method for computing the optimal portfolio weights that exploits a particular feature of quasi-random numbers.
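The idea behind quasi-random numbers can be seen in a few lines (a generic illustration, not the paper's implementation): Halton points fill the unit square far more evenly than pseudo-random points, so estimates of a smooth integral converge faster.

```python
import random

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Estimate the integral of f(x, y) = x * y over the unit square (exact value 0.25)
# with N pseudo-random points and N two-dimensional Halton points (bases 2 and 3).
random.seed(0)
N = 1000
mc = sum(random.random() * random.random() for _ in range(N)) / N
qmc = sum(halton(i, 2) * halton(i, 3) for i in range(1, N + 1)) / N

print(abs(mc - 0.25), abs(qmc - 0.25))
```

The Monte Carlo error shrinks like N^(-1/2), while low-discrepancy sequences achieve roughly (log N)^d / N for smooth integrands, which is the efficiency gain the paper exploits.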
15 February 2002
Dr Jessica James
Econophysics: A new science, or a black art?
The term 'econophysics' was coined to describe the innovative research done in finance by researchers from outside the field. While its refreshing approach has yielded insights, inevitably some have leapt before looking, and misconceptions can occur. We present a selection of ideas from the econophysics menu, and explain where this new approach differs from more traditional financial research.
Dr Neil Johnson
Oxford Centre for Computational Finance
Multi-agent models of market dynamics: an Econophysics approach to finance
My group's research uses the development of multi-agent market models to present a unified approach to the joint questions of how financial market movements may be simulated, forecast and hedged against. The results of agent-based market simulations, in which traders equipped with simple buy/sell strategies and limited information compete in speculative trading, are presented, and the effects of different market clearing mechanisms examined. I show that implementation of a simple Walrasian auction leads to unstable market dynamics, and then show that a more realistic out-of-equilibrium clearing process leads to dynamics that closely resemble real financial movements, with fat-tailed price increments, clustered volatility and high volume autocorrelation. Replacing the "synthetic" price history used by these simulations with data taken from real financial time series leads to the remarkable result that the agents can collectively learn to identify moments in the market where profit is attainable. Hence on real financial data, the system as a whole can perform better than random. Our recent work has focussed on understanding the trader dynamics underlying large market movements, from the perspective of agent-based models. In particular, I discuss the precursors of large movements - drawdowns, corrections and crashes - that emerge from these studies. Finally I discuss the implementation of the generalised risk-control formalism of Bouchaud and Sornette in conjunction with agent-based models. Risk cannot be eliminated in the implementation of any practical trading strategy, and I note that the risk of option writing is greatly increased by the presence of transaction costs. I find that this risk, and the costs, can be reduced through the use of a delta-hedging strategy with a modified, time-dependent volatility structure.
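A minimal example of such a multi-agent market model is the minority game, in which agents holding small random strategy tables and sharing a short public memory repeatedly choose to buy or sell, and those on the minority side win. The sketch below (all parameter choices are illustrative, not taken from the talk) shows the basic machinery:

```python
import random

random.seed(2)

N, M, S = 101, 3, 2           # agents, memory bits, strategies per agent
H = 2 ** M                    # number of possible history strings

# Each strategy maps each of the H histories to an action (+1 buy, -1 sell).
agents = [[[random.choice([1, -1]) for _ in range(H)] for _ in range(S)]
          for _ in range(N)]
scores = [[0] * S for _ in range(N)]
history = random.randrange(H)

attendance = []
for _ in range(500):
    # Each agent plays its currently best-scoring strategy.
    actions = []
    for a in range(N):
        s = max(range(S), key=lambda j: scores[a][j])
        actions.append(agents[a][s][history])
    A = sum(actions)          # net "attendance": buyers minus sellers
    attendance.append(A)
    # The minority side wins; reward strategies that would have chosen it.
    win = -1 if A > 0 else 1  # N is odd, so A is never zero
    for a in range(N):
        for j in range(S):
            if agents[a][j][history] == win:
                scores[a][j] += 1
    history = ((history << 1) | (1 if win == 1 else 0)) % H

print(max(attendance), min(attendance))
```

The attendance series plays the role of excess demand; richer clearing mechanisms and real price histories, as described in the abstract, are layered on top of exactly this kind of agent population.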
Dynamic Financial Analysis
1 February 2002
Converium, Zurich & University of Zurich
Dynamic Financial Analysis: Origins, Present State and Objectives
Portfolio Optimization and Risk
18 January 2002
Barclays Global Investors, London
Volatility and Correlation Modelling Applications to Portfolio Optimization and Risk
Latest models in volatility and correlation modelling:
● Decay factor for the variance/covariance matrix: Risk Metrics framework
● Multivariate GARCH models and application to correlation estimation and forecasting
● Solutions: Enhancing speed, efficiency and accuracy of correlation estimation
● Comparing the different models within an absolute and relative risk measure
● Examining the impact of models within a portfolio optimization framework
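The first bullet, the RiskMetrics decay-factor scheme, is a one-line recursion: the covariance matrix is an exponentially weighted moving average of outer products of the return vectors. A minimal sketch (lambda = 0.94 is the standard RiskMetrics daily decay factor; the returns are made up):

```python
# RiskMetrics-style EWMA update: cov_t = lam * cov_{t-1} + (1 - lam) * r_t r_t'
lam = 0.94

def ewma_update(cov, r):
    n = len(r)
    return [[lam * cov[i][j] + (1 - lam) * r[i] * r[j] for j in range(n)]
            for i in range(n)]

# Feed a constant return vector: the estimate converges to its outer product,
# since after n steps cov_n = (1 - lam**n) * r r'.
cov = [[0.0, 0.0], [0.0, 0.0]]
r = [0.01, 0.02]
for _ in range(200):
    cov = ewma_update(cov, r)

print(cov[0][0], cov[0][1], cov[1][1])
```

Multivariate GARCH models, the second bullet, generalise this recursion by giving each variance and covariance its own persistence and shock parameters, which is where the speed and accuracy trade-offs in the later bullets arise.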
Professor Michael Dempster
Cambridge Systems Associates and Centre for Financial Research
Global Asset Liability Modelling
This talk will describe the construction of a strategic global asset liability management system for Pioneer Investments. The generation of global asset returns, pension fund liability structures, model design and management under various attitudes to risk, optimization algorithms and solution visualization will be treated.
Valuation of Complex Assets
30 November 2001
Dr Steve Kou
Modelling Growth Stocks via Size Distribution
Growth stocks, such as biotechnology and internet stocks, typically have no predictable earnings, leading to high volatility in their share prices as well as making it difficult to apply the traditional valuation methods, such as the net present value approach. This paper attempts to show that the high volatility in share prices can nevertheless be used in building a model that leads to a particular size distribution, which can then be used to price a growth stock relative to its peers. The model focuses on both transient and steady state behaviour of the market capitalization of the stock, which in turn is modelled as a birth-death process. In addition, the model gives an explanation for an empirical observation that the market capitalization of internet stocks tends to be a power function of their relative ranks.
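The birth-death machinery can be illustrated with the simplest possible case (not the paper's model): a chain on the non-negative integers with constant birth probability p and death probability q, p < q, has a geometric stationary distribution, pi_n proportional to (p/q)^n, which is the kind of steady-state size distribution such models produce.

```python
import random

random.seed(3)

p, q = 0.3, 0.5   # birth and death probabilities (p < q, so the chain is stable)
n, visits0, steps = 0, 0, 200_000

for _ in range(steps):
    u = random.random()
    if u < p:
        n += 1                # birth
    elif u < p + q and n > 0:
        n -= 1                # death (state 0 is reflecting)
    if n == 0:
        visits0 += 1

# Detailed balance gives pi_n = pi_0 * (p/q)**n, so pi_0 = 1 - p/q = 0.4.
print(visits0 / steps)
```

In the paper the birth and death rates depend on the market capitalization itself, which is what turns the geometric law into the power-law rank behaviour observed for internet stocks.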
Dr Vadim Linetsky
Exact Pricing of Asian Options: An Eigenfunction Expansion Approach
Asian or average price (rate) options deliver payoffs based on the average underlying price or financial variable over a pre-specified time period. Asian-style derivatives have a wide range of applications in currency, equity, interest rate, commodity, energy and insurance markets. We derive two closed-form analytical formulae for the price of the arithmetic Asian option when the underlying asset price follows geometric Brownian motion. Our derivation relies on the spectral theory of singular Sturm-Liouville (Schrodinger) operators and associated eigenfunction expansions. The first formula is an infinite series of terms involving Whittaker functions. The second formula is a single real integral of an expression involving Whittaker functions. The two formulae allow exact computation of Asian option prices. Along the way we will see how the problem of pricing Asian options is intimately connected with a number of problems in probability (Brownian exponential functionals, Wong's diffusion), analysis (the real-variable approach to eigenfunction expansions), quantum physics (the Schrodinger equation with Morse potential) and geometry (the Maass Laplacian on the hyperbolic plane).
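The Whittaker-function series and integral do not fit a short sketch, but a plain Monte Carlo estimate computes the same quantity the exact formulae deliver: the discounted expected payoff of the arithmetic-average call under geometric Brownian motion (all parameter values below are illustrative):

```python
import math
import random

random.seed(4)

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
drift = (r - 0.5 * sigma ** 2) * dt
vol = sigma * math.sqrt(dt)

total = 0.0
for _ in range(n_paths):
    s, path_sum = S0, 0.0
    for _ in range(n_steps):
        s *= math.exp(drift + vol * random.gauss(0, 1))
        path_sum += s
    avg = path_sum / n_steps          # arithmetic average price over the path
    total += max(avg - K, 0.0)

price = math.exp(-r * T) * total / n_paths
print(round(price, 3))
```

Because the arithmetic average of lognormals has no simple distribution, such simulation (or the eigenfunction expansion of the talk) is needed; the eigenfunction formulae have the advantage of being exact rather than statistical.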
New Option Valuation Techniques
16 November 2001
Dr Dorje Brody
Imperial College, London and Churchill College, Cambridge
Dynamical State Models for Financial Assets and Derivative Pricing
We introduce a state variable model for financial economics, which is applied to price contingent claims. The underlying variable that represents the random economic state of the market is an element of a Hilbert space, and the price process is determined by the conditional expectation of a price operator in the given state. The dynamical evolution of the state is given by a stochastic law such that the induced price process does not entail arbitrage opportunities and the state asymptotically approaches one of the designated eigenstates of the price operator.
Dr Thomas Gustafsson
Algorithmica Research AB and Financial Mathematics Group, Uppsala University
State Transition Density Option Pricing
This paper outlines a method to calculate option prices using Green's function and Gaussian quadrature. Multi-dimensional formulae for analytical transition probabilities are presented. We derive an algorithm that intuitively can be described as a generalized multinomial algorithm or as an approximating Markov chain. For speed we change variables in a special way and sample points using Gaussian quadrature. The efficiency of the method is assessed by comparing the numerical results to that of other methods such as finite differences and binomial lattices. We address the American put problem, basket options and the effects of stochastic volatility.
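The quadrature idea for a single transition step can be sketched with hard-coded 5-point Gauss-Hermite nodes: integrate the discounted payoff against the lognormal transition density of the underlying. With so few nodes and a kinked payoff the answer is only rough, which is one reason the paper changes variables and samples more carefully (the European call and its parameters are illustrative):

```python
import math

# 5-point Gauss-Hermite nodes and weights (for integrals against exp(-x^2)).
nodes = [-2.020182870456086, -0.9585724646138185, 0.0,
         0.9585724646138185, 2.020182870456086]
weights = [0.019953242059045913, 0.3936193231522412, 0.9453087204829419,
           0.3936193231522412, 0.019953242059045913]

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Under GBM, log S_T = log S0 + (r - sigma^2/2) T + sigma sqrt(T) Z, Z ~ N(0,1).
# Substituting z = sqrt(2) x turns the expected payoff into a Gauss-Hermite sum.
total = 0.0
for x, w in zip(nodes, weights):
    z = math.sqrt(2.0) * x
    sT = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    total += w * max(sT - K, 0.0)

price = math.exp(-r * T) * total / math.sqrt(math.pi)
print(round(price, 3))  # coarse: the payoff kink limits accuracy at 5 nodes
```

Chaining such one-step quadratures across time steps gives the approximating Markov chain the abstract describes, with the Green's function playing the role of the transition density.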
Initial Public Offerings
2 November 2001
Professor Alexander Ljungquist
Stern School of Business, NYU
IPO Allocations: Discriminatory or Discretionary?
We estimate the structural links between IPO allocations, pre-market information production, and initial underpricing returns, within the context of theories of bookbuilding. Using a sample of both U.S. and international IPOs we find evidence of the following:
● IPO allocation policies favour institutional investors, both in the U.S. and worldwide.
● Constraints on the discretion bankers exercise in the allocation of IPO shares reduce institutional allocations.
● Constraints on allocation discretion result in offer prices that deviate less from the indicative price range established prior to bankers' efforts to gauge demand among institutional investors. We interpret this as indicative of diminished information production.
● Initial returns, which reflect a significant indirect cost of going public, are directly related to this measure of information production and inversely related to the fraction of shares allocated to institutional investors.
19 October 2001
Dr C M Jones
Gensec International Asset Management
Hedge Funds: A Look Behind the Screen
Hedge funds - investment vehicles that utilise leverage, short-selling and other 'non-standard' investment techniques - have been hitting the news headlines with some frequency since the summer of 1998. In the financial press, hedge funds have been treated as homogeneous entities that are a new phenomenon. However, 'hedge fund' is an umbrella term that covers a remarkably wide range of investment strategies, markets and levels of risk, with some of the lower risk strategies dating back to the 1940s. Some efforts have been made by the academic world to unbundle the numerous strategies in the 'hedge fund' canon, but these studies have been almost exclusively 'bottom-up', using performance data to draw conclusions about the risk and exposure characteristics of the funds themselves. In this talk, I will take a top-down view based on over 100 interviews with hedge fund managers, aiming to explain the true nature of the numerous hedge fund strategies, and so better understand the latent risks within. Furthermore, I will discuss the macro and market factors that influence such strategies, and the complementary relationships between them. Finally, I apply the above work to the construction of optimal fund of funds portfolios and investigate the benefits and drawbacks of accessing the asset class through such 'multi-manager' funds.
Dr Narayan Y. Naik
London Business School
Characterizing Systematic Risk of Hedge Funds with Buy-and-Hold and Option-Based Strategies
Hedge funds are known to exhibit non-linear, option-like exposures to standard asset classes. Thus, traditional linear factor models offer limited help in evaluating their risk-return tradeoffs. We propose to employ a combination of buy-and-hold and option-based strategies to characterize the systematic risks of equity-oriented hedge funds. Although, in practice, hedge funds can follow a myriad of dynamic trading strategies, we believe that adding a few simple option-based strategies to the traditional linear factor model can capture a significant part of their non-linear return characteristics. We investigate the general applicability of our approach by extending it to mutual funds and examine the differences between these two types of managed portfolios. Our approach can provide insights helpful in addressing issues such as risk management, benchmark design and manager compensation involving hedge funds.
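The buy-and-hold-plus-option factor idea reduces to an ordinary regression of fund returns on the market return and an option-like payoff of it. The sketch below recovers known loadings from synthetically generated returns (all data, factor definitions and coefficients are made up for illustration, not taken from the paper):

```python
import random

random.seed(5)

# Synthetic monthly market returns and a call-like factor max(mkt - 0.01, 0).
mkt = [random.gauss(0.005, 0.04) for _ in range(120)]
opt = [max(m - 0.01, 0.0) for m in mkt]
# A "fund" with known exposures: alpha 0.1%, 0.5 to the market, 0.3 to the option factor.
fund = [0.001 + 0.5 * m + 0.3 * o for m, o in zip(mkt, opt)]

# Ordinary least squares via the normal equations (X'X) b = X'y.
X = [[1.0, m, o] for m, o in zip(mkt, opt)]
k = 3
xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
xty = [sum(row[i] * y for row, y in zip(X, fund)) for i in range(k)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting.
for col in range(k):
    piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
    xtx[col], xtx[piv] = xtx[piv], xtx[col]
    xty[col], xty[piv] = xty[piv], xty[col]
    for r in range(col + 1, k):
        f = xtx[r][col] / xtx[col][col]
        for c in range(col, k):
            xtx[r][c] -= f * xtx[col][c]
        xty[r] -= f * xty[col]
beta = [0.0] * k
for r in range(k - 1, -1, -1):
    beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]

print([round(b, 4) for b in beta])  # recovers [0.001, 0.5, 0.3]
```

A significant loading on the option-like factor is exactly the non-linear exposure that a purely linear factor model would miss.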
5 October 2001
Professor Philippe Artzner
Université Louis Pasteur, Strasbourg
Coherent Multiperiod Risk Measurement
We explain why and how to deal with the definition, the acceptability and the management of risk in a genuinely multitemporal way. Acceptable value processes are primitive objects and the measure of risk of a value process is the initial extra capital which makes it acceptable. Coherence axioms then provide a representation of risk-adjusted valuation as the minimum expected value of a Stieltjes integral with respect to random measures. Some special cases allowing for recursive computations are presented.
Professor Michael Dempster
Centre for Financial Research, Judge Business School, University of Cambridge
Cambridge Systems Associates Ltd
Measuring Extreme Risk
Many questions regarding economic capital allocation for the operational risk of a financial institution, as proposed in the new Basel Capital Accord, continue to be open. Existing quantitative models that compute value at risk for market and credit risk do not take into account operational risk. They also make various assumptions about 'normality' and so exclude extreme and rare events. We formalise the definition of operational risk and apply extreme value theory for the purpose of calculating the economic capital requirement against unexpected operational losses. A case study of trading through the Russian Crisis of 1998 will be used to illustrate the ideas, which involve Bayesian hierarchical parameter estimation on the basis of limited internal data.
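The extreme value step can be sketched with the Hill estimator, a standard estimator of a heavy tail index from the largest order statistics (the Pareto data, the choice k = 500 and the quantile level are illustrative; the talk's Bayesian hierarchical estimation on limited internal data is more involved):

```python
import math
import random

random.seed(6)

# Simulated operational losses with a Pareto tail: P(X > x) = x**(-alpha), alpha = 2.
alpha = 2.0
losses = [random.random() ** (-1.0 / alpha) for _ in range(10000)]

# Hill estimator of the tail index from the k largest losses.
k = 500
top = sorted(losses)[-k - 1:]           # k largest plus the threshold order statistic
threshold = top[0]
hill = k / sum(math.log(x / threshold) for x in top[1:])

# Tail quantile (VaR-style) extrapolated beyond the data using the fitted tail.
p = 0.001                               # exceedance probability
n = len(losses)
var = threshold * (k / (n * p)) ** (1.0 / hill)

print(round(hill, 2), round(var, 2))
```

The point of the extreme value machinery is the last line: the capital charge is set by a quantile well beyond the range where 'normal' assumptions, or even the observed data, are reliable.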