This page is an archive of information about events held in the centre before the academic year starting in September 2000. Some of the talks from 1999-2000 have slides available in pdf format. To download them, click on the pdf icon next to the talk.


A new jump-diffusion model of security price evolution is proposed. The model posits that asset price processes can be decomposed into a deterministic drift, a Wiener process, and two compound Poisson processes representing discontinuous price movements due to the arrivals of "good" and "bad" news during periods of economic expansion and contraction. Expansionary periods--Bull Markets--are characterized by the more frequent arrival of "good news" that causes large price increases (jumps). Contractionary periods--Bear Markets--are characterized by the more frequent arrival of "bad news" that causes large price decreases. Market conditions change at random points in time, causing a change of regime. During each market epoch, the jump magnitudes are determined by random draws from stationary distributions. This form of information dynamics permits a simple form of returns predictability, where, conditional on the current phase of the market, the relative frequency and the direction of price movements may be anticipated. We derive the option pricing formula for this new price process and investigate the pricing bias due to misspecification error. We then generalize the model to allow the rate of news arrival and the time of regime change to vary stochastically over time.
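As a rough illustration of the dynamics described above (a sketch, not taken from the talk; every parameter value below is hypothetical), a regime-switching jump-diffusion path can be simulated with nothing but the standard library:

```python
import math
import random

def poisson(rng, mean):
    # Knuth's method; adequate for the small per-step means used here
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_regime_jump_diffusion(S0=100.0, T=1.0, n=252, seed=0):
    # Drift + Brownian noise + "good" and "bad" news jumps whose Poisson
    # intensities depend on a two-state (bull/bear) regime that switches
    # at exponentially distributed times. All numbers are illustrative.
    rng = random.Random(seed)
    mu, sigma = 0.05, 0.2
    lam = {"bull": (1.5, 0.5), "bear": (0.5, 1.5)}  # (good, bad) news rates
    switch_rate = 1.0                               # regime changes per year
    dt = T / n
    regime, next_switch = "bull", rng.expovariate(switch_rate)
    path, S, t = [S0], S0, 0.0
    for _ in range(n):
        t += dt
        if t >= next_switch:
            regime = "bear" if regime == "bull" else "bull"
            next_switch += rng.expovariate(switch_rate)
        lam_good, lam_bad = lam[regime]
        # diffusion part (geometric Brownian motion step)
        S *= math.exp((mu - 0.5 * sigma ** 2) * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        # compound-Poisson jump parts: multiplicative jumps keep prices positive
        for _ in range(poisson(rng, lam_good * dt)):
            S *= math.exp(abs(rng.gauss(0.03, 0.01)))   # good news: jump up
        for _ in range(poisson(rng, lam_bad * dt)):
            S *= math.exp(-abs(rng.gauss(0.03, 0.01)))  # bad news: jump down
        path.append(S)
    return path
```

Because the jumps are multiplicative, the simulated price stays strictly positive, matching the spirit of the decomposition in the abstract.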

The introduction of new technology has raised a number of new and interesting theoretical and practical issues in the modelling and optimization of the design of telecommunication networks. France Telecom has recently decided to adopt a so-called ring architecture for all their local, main and sectorial networks. Here we consider the problem of assigning telecommunication traffic on such a network, made of a cable of optical fibers that visits telecommunication centers in a circular fashion. Given a set of centers in a city or conglomeration linked together by individual circular optical fibers called rings, with an essentially unlimited availability of rings of fixed capacity on the network, and given the expected demands between pairs of centers, the task is to assign demands to the rings subject to the ring capacities. On each ring it is then necessary to install an add/drop multiplexer at every center incident to one of the assigned demands. Given the high cost of the multiplexers, the objective is to find an assignment requiring the smallest total number of multiplexers.

In practice, this problem is solved using fast and effective heuristics (i.e. approximate solution methods). Speed and simplicity are important both because it is a subproblem of a much larger design problem (the subproblem treated here is solved very frequently), and because the heuristics are to be used by the operators managing the networks. However, given the significance of the multiplexer costs, it is also important to have performance guarantees for some real-life subproblems.

In this presentation, we explain how the quality of the heuristics was validated using optimization methods. We show that problems of practical size can be solved to optimality using a decomposition approach and a column generation procedure. Different variants of the problems are considered so as to compare the cost of alternative safety requirements for the design of the local networks. The methodological aspects of this research will be explained in simple terms.

Many production line systems (and other similar systems) can be modelled as a system of finite queues in series. Consequently, queueing theory provides a powerful tool for studying how to design such systems to increase their efficiency.

When the processing times at the respective stations have substantial variability (as is common in assembly lines), a number of interesting questions arise regarding how the production line should be designed to maximize its production rate. Some years ago, Hillier and Boling discovered the bowl phenomenon, whereby, under certain circumstances, the optimal allocation of work to the stations follows the shape of a bowl, with the most work assigned to the end stations and the least to the center stations.

The recent research of Hillier and So has further extended this concept of deliberately unbalancing the production line in a certain way in order to improve its efficiency. The design variables being considered include the amount of buffer storage space and the number of servers to allocate to the respective stations. The effect of breakdowns (or other down time) at individual stations also is considered. This lecture surveys our recent results and conclusions.
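To give a flavour of the kind of model involved, the sketch below (an illustration, not the speakers' code) estimates the throughput of a saturated tandem line with zero intermediate buffers and blocking after service, using the standard departure-time recursion: a job starts at a station once it has arrived from upstream and the station is free, and it can leave only once the next station has been vacated.

```python
import random

def throughput(work, n_jobs=100_000, seed=1):
    # work[j] = mean (exponential) service time at station j.
    # Saturated input at station 0; zero buffers between stations;
    # blocking after service. Returns jobs completed per unit time.
    rng = random.Random(seed)
    m = len(work)
    prev = [0.0] * (m + 1)   # departure times of the previous job (dummy station m)
    for _ in range(n_jobs):
        cur = [0.0] * (m + 1)
        for j in range(m):
            arrive = cur[j - 1] if j > 0 else 0.0
            # start when arrived AND station free; serve; then hold (blocked)
            # until the previous job has vacated the next station
            finish = max(arrive, prev[j]) + rng.expovariate(1.0 / work[j])
            cur[j] = max(finish, prev[j + 1])
        prev = cur
    return n_jobs / prev[m - 1]
```

For a balanced three-station line with mean-1 exponential service, the simulated throughput comes out near the known analytical value of about 0.56; re-running with a bowl-shaped allocation (slightly more work at the end stations) illustrates the small throughput gain the bowl phenomenon predicts.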

The stochastic volatility model will be described and its relevance to financial time series discussed. A review of estimation procedures for the model will be given. A new test for stochastic volatility will then be presented and compared with various alternatives.

The recently completed Chase Manhattan neural network market forecasting project will be described and critically reviewed.

There are now many models of interest rate markets available. Short-rate models, whole yield models (HJM), LIBOR models (BGM), potential processes (Rogers), and supermartingale models (Flesaker-Hughston) have all been suggested. In fact, these models can all be seen as particular ways of notating a general interest rate model which satisfies some basic properties. Up to a point, all interest rate models are exactly the same. This talk will survey current interest rate models, and place them in a universal framework, as well as producing some possibly surprising facts along the way.

A general model is presented for a multi-currency economy, extending the Amin-Jarrow framework. In this model a system of interest rates is associated with each currency, and conditions are prescribed that ensure in a natural way that these rates are positive. The analysis is conducted in the 'natural' measure so as to bring out the inherent symmetries of such an economy and to establish a pricing methodology that is equally well suited to derivatives valuation and real-world scenario simulation.

Minimax optimisation is presented as a computational tool for risk management under uncertainty. In situations where decision making based on expected value optimisation does not yield acceptable solutions, minimax optimisation offers an alternative approach whereby decision making is based on the minimisation of the possible effect of the worst case scenario.

This paper presents a new simulation methodology for quantitative risk analysis of large multi-currency portfolios. The model discretizes the multivariate distribution of market variables into a limited number of scenarios. This results in a high degree of computational efficiency when there are many sources of risk and numerical accuracy dictates a large Monte Carlo sample. Both market and credit risk are incorporated. The model has broad applications in financial risk management, including Value at Risk. Numerical examples are provided to illustrate some of its practical applications.

The talk will start with an overview of product structure and applications: definitions of credit derivatives and their users, a market overview, a discussion of certain structures and applications, and the risks of using credit derivatives. Topics discussed will include market potential (the outlook for the industry and its growth) and regulatory issues: dealing banks' accounting, CAD requirements, and papers from UK and US regulators.

This talk will focus on the computation and analysis of credit derivatives: in particular, the compound option approach, rating transition analysis, default intensity analysis, and stochastic credit spreads. The discussion will include the problems with each approach, the pricing of bond swaps in practice, and theoretical challenges, with a view to identifying potential research topics.

Consider the performance of an options writer who mis-specifies the dynamics of the price process of the underlying asset by overestimating asset price volatility. When does he overprice the option? If he follows the hedging strategy suggested by his model, when does the terminal value of his strategy dominate the option payout?

Starting from a model where volatility is allowed to vary freely within a band, σ_0 < σ_t < σ_1, we express the value of a contingent claim according to the worst-case volatility scenario for the security. This worst-case valuation is sub-additive for any portfolio of contingent claims. Thus, it is possible to minimise the volatility risk of an exotic security through the addition of liquid vanilla options to the portfolio. The minimisation is cost-efficient, that is, it strikes the right balance between diminished volatility risk and the market's (bid/offer) cost of the option hedge.
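A minimal sketch of worst-case valuation under a volatility band is the Black-Scholes-Barenblatt equation solved by explicit finite differences: at each grid node the volatility is chosen from the band to maximise the value (the seller's worst case), i.e. the high volatility where the discrete gamma is positive and the low one where it is negative. The grid sizes and parameters below are illustrative, not from the talk.

```python
import math

def bs_call(S, K, r, sigma, T):
    # Black-Scholes European call, stdlib only (normal CDF via erf)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def worst_case_call(K, r, sig_lo, sig_hi, T,
                    S_max=300.0, n_space=150, n_time=4000):
    # Explicit FD for the Black-Scholes-Barenblatt equation; the time step
    # is small enough for stability given sig_hi and S_max as chosen here.
    dS, dt = S_max / n_space, T / n_time
    S = [i * dS for i in range(n_space + 1)]
    V = [max(s - K, 0.0) for s in S]          # call payoff at expiry
    for _ in range(n_time):
        Vn = V[:]
        for i in range(1, n_space):
            gamma = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / dS ** 2
            sig = sig_hi if gamma > 0.0 else sig_lo   # worst-case choice
            delta = (V[i + 1] - V[i - 1]) / (2.0 * dS)
            Vn[i] = V[i] + dt * (0.5 * sig ** 2 * S[i] ** 2 * gamma
                                 + r * S[i] * delta - r * V[i])
        Vn[0] = V[0] * (1.0 - r * dt)          # S = 0: value decays at rate r
        Vn[-1] = 2.0 * Vn[-2] - Vn[-3]         # far boundary: linear in S
        V = Vn
    return V  # worst-case values at t = 0 on the grid
```

For a vanilla call the gamma is positive everywhere, so the worst-case value collapses to the Black-Scholes price at the upper volatility; the scheme becomes genuinely two-valued only for portfolios whose gamma changes sign, which is exactly where adding vanilla hedges reduces the volatility risk.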

Using a recursive modelling procedure, which generalizes existing methods by simulating investors' search in 'real time' for a model that can forecast stock returns, we demonstrate the extent to which monthly stock returns in the UK were predictable during the period 1970-1993. Due to a set of unique historical circumstances, UK stock returns were extremely volatile in 1974-1975, and we discuss how to design a modelling approach which aims at accounting for this episode. We find evidence of both long-term and short-term predictability in UK stock returns, which could have been exploited by investors to improve on the risk-return trade-off offered by a passive strategy in the market portfolio.

In this lecture the paradigms of on-line optimization will be explained. It concerns optimization under complete uncertainty. Problem input data are revealed to the decision maker one by one, and decisions are to be taken based on the information hitherto obtained, possibly taking into account what may happen in the future. The decisions taken are irrevocable or, if a time-scale plays a role, they are irrevocable up to the current decision moment.

Problems of this type can be thought of as two-person games wherein one person is taking the decision actions and the other person is providing the data. A little thought should make it clear that we can easily come up with problem instances for which any on-line algorithm will never be able to arrive at the optimal solution to the corresponding off-line problem.

Algorithmic performance is typically analysed from a worst-case point of view. The ratio between the algorithmic solution and the optimal solution of the off-line problem instance is called the *competitive ratio*. In view of the above, lower bounds on this ratio can be derived for specific problem classes. After this the search is on for algorithms whose competitive ratio matches this lower bound in the worst case.

To illustrate the ideas, problems with a time scale are used. The first one could be called the *on-line travelling salesman problem*. Starting at time 0 at some fixed point of a metric space, other points in the space that are to be visited are revealed over time while travelling (think of a courier with a mobile phone who gets his orders while serving requests already known). The overall time to visit all requested points, assuming unit travel speed, is to be minimized.

The other example concerns a single machine scheduling problem in which jobs characterized by their processing time are presented over time starting at time 0, and the sum of the completion times of the jobs is to be minimized. If time allows, some challenging open problems will be presented.
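The flavour of competitive analysis can be illustrated on this scheduling problem. The sketch below (an illustration, not the speaker's algorithm) compares a simple on-line greedy rule, which whenever the machine is free starts the shortest job released so far, against the preemptive shortest-remaining-processing-time schedule, which is optimal off-line for the preemptive variant and hence a lower bound for the non-preemptive optimum; their ratio on a given instance bounds the empirical competitive ratio from above.

```python
import heapq

def online_spt(jobs):
    # On-line, non-preemptive: whenever the machine is free, start the
    # shortest job released so far; decisions are irrevocable.
    jobs = sorted(jobs)                       # (release, processing) pairs
    t, i, total, avail = 0, 0, 0, []
    while avail or i < len(jobs):
        if not avail and t < jobs[i][0]:
            t = jobs[i][0]                    # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(avail, jobs[i][1]); i += 1
        t += heapq.heappop(avail); total += t
    return total                              # sum of completion times

def srpt(jobs):
    # Off-line benchmark: preemptive shortest remaining processing time,
    # optimal for 1|r_j, pmtn|sum C_j.
    jobs = sorted(jobs)
    t, i, total, avail = 0, 0, 0, []
    while avail or i < len(jobs):
        if not avail and t < jobs[i][0]:
            t = jobs[i][0]
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(avail, jobs[i][1]); i += 1
        rem = heapq.heappop(avail)
        nxt = jobs[i][0] if i < len(jobs) else float("inf")
        if t + rem <= nxt:
            t += rem; total += t              # finishes before next release
        else:
            rem -= nxt - t; t = nxt           # preempt at the next release
            heapq.heappush(avail, rem)
    return total
```

On the instance [(0, 3), (1, 1)] the greedy rule commits to the long job at time 0 and pays a total of 7, while SRPT preempts and achieves 6: a small adversarial instance of exactly the kind the two-person-game view of the previous paragraph produces.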

For an interbank options dealer, volatility is the key variable. For vanilla FX options the talk will consider: how options dealers trade volatility; the dependence of vanilla options on the market price of volatility (Vega) and on actual spot moves (Gamma); volatility estimation through GARCH analysis; and the pricing and significance of skewness in the spot distribution through risk-reversals.

Pricing and hedging of interest rate derivatives has been the main motivation for the development of arbitrage-based models of yield curve dynamics. Nevertheless, practical considerations receive scant treatment in theoretical papers. This presentation identifies practical criteria for model selection, and discusses briefly available evidence on a number of the issues involved. Techniques are described for: calibrating models to market data; pricing various derivative contracts; and measuring risk. Implications for model selection are drawn out.

It is possible to specify a model for interest rates in various different ways, by giving the dynamics of the spot rate, or of the forward rates, for example.

A less well developed approach is to specify the law of the state-price density process directly. In the abstract, the state-price density process is a positive supermartingale, and the theory of Markov processes provides a rich framework for the generation of examples of such processes. We will show how this can be done, and provide simple examples (some familiar, some new) where prices of derivatives can be computed very easily. One benefit of the potential approach is that it becomes very easy to model the yield curves of many countries at once, together with the exchange rates between them.

Both in insurance and finance there has recently been an increase in the use of extreme value theory. In the case of insurance, the modelling and pricing of multi-line, multi-year high excess reinsurance products necessitates the statistical analysis of rare events: large claims, low frequency. Within finance (risk management), extreme quantile estimation, such as the estimation of Value-at-Risk (VaR), poses a challenging task to finance experts and statisticians alike. Besides offering methods for such estimation problems, modern extreme value theory also yields results for estimation 'beyond VaR': given that a loss beyond VaR occurs, estimate the distribution of that excess (also called the shortfall).
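For concreteness, empirical versions of the two quantities mentioned, VaR as a high quantile of the loss distribution and the shortfall as the mean loss beyond it, can be computed as follows; this is the naive empirical estimator, not the extreme-value-theory refinement the talk describes, and the quantile convention is one of several in use.

```python
def var_es(losses, level=0.95):
    # Empirical Value-at-Risk and expected shortfall of a loss sample.
    # VaR: the order statistic below which a fraction `level` of the
    # observations fall; ES: the mean of the losses beyond VaR.
    xs = sorted(losses)
    k = int(level * len(xs))      # number of observations at or below VaR
    var = xs[k - 1]
    tail = xs[k:]                 # the 'loss beyond VaR' observations
    es = sum(tail) / len(tail)
    return var, es
```

On the losses 1, 2, ..., 100 this gives a 95% VaR of 95 and an expected shortfall of 98, the mean of the five tail observations.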

A series of major catastrophes, culminating in Hurricane Andrew in 1992, provided wake-up calls to both the insurance industry and the capital markets. To the insurance industry, that the capital supporting the business may prove inadequate and new sources may be needed. To the capital markets, that there may be opportunities to adapt some of the instruments used in the financial markets to the management of insurance risk. Since then there has been much endeavour, principally by investment bankers, reinsurers and reinsurance brokers. This paper will analyse the progress achieved so far, present reasons for some of the failures and suggest likely ways forward.

While American calls on non-dividend paying stocks may be valued as European, there is no completely explicit exact solution for the values of American puts. We introduce a novel technique called randomization to value American puts and calls on dividend-paying stocks. This technique yields a new semi-explicit approximation for American option values in the Black Scholes model. Numerical results indicate that the approximation is both accurate and computationally efficient.

We describe the application of multigrid methods to the fast numerical solution of two-factor American put options in which the volatility is modelled as an additional stochastic process. The no-arbitrage pricing of such options leads to a time-dependent partial differential inequality in two "space" dimensions; this is approximated using a finite-difference approach with an accurate implicit time-discretisation requiring the solution of a linear complementarity problem (LCP) at each timestep. A multigrid approach to the solution of this LCP is described and shown to be significantly faster and more robust than conventional iterative methods. The required degree of numerical accuracy is obtained using adaptive time-integration and co-ordinate stretching transformations which provide high resolution in the regions of interest. The adaptive time-integration reduces the time complexity to significantly better than linear for long-term options, and the spatial complexity is reliably quadratic since multigrid gives convergence rates which are independent of the number of spatial mesh points. Consequently this approach can produce reliable 2-factor put prices in seconds on a Pentium PC. The approach extends to three factors (the spatial complexity is now cubic) and preliminary results will be described.

Value-at-Risk models can generally be categorized into three types: 'covariance methods' for linear portfolios, 'Monte Carlo simulation' for options portfolios, and 'historic simulation' for all portfolios. This talk presents the main advantages and limitations of each method, and describes ways in which problems can be overcome. In particular new methods for generating large, positive definite covariance matrices, the use of neural networks to generate covariance matrices from leptokurtic distributions, and the use of multivariate embedding of time series data to improve speed and accuracy in historical simulation will be treated.

This talk presents MIDAS (Manager's Intelligent Debt Advisory System), a multi-currency debt management decision support system which supports model-based, hierarchical strategic financial planning for a Canadian Crown-owned public utility. The system provides selected models appropriate to a decision process; configures and solves the models on user request; explains, interprets and refines their results interactively; and assists the user in evaluating alternative borrowing plans. These functions are precisely those required of more general firm-wide risk management systems, and as such MIDAS represents an early implementation of a limited market risk management system.

Much of the basic theory of pricing and hedging derivative financial products makes the assumption that markets are complete with no counter-party default risk. Real markets are more imperfect. This talk will describe methods of pricing options in the presence of some realistic imperfections.

There now exist in the market classes of products such as Limited Recourse Notes and Default Swaps, the purpose of which is to enable market participants to hedge defaultable instruments such as emerging market debt. This talk will describe what these products are and discuss issues of pricing and hedging, including some simple optimization problems for hedging portfolios.

The market for products based essentially on the credit worthiness of a name, be they Eurobonds, asset swaps or credit derivatives, has been maturing and becoming more complex of late. This talk describes some of the competing analytical pricing methodologies for the new instruments in this market such as spread and default options, and considers their usefulness in the current market. These instruments present a particular challenge to risk aggregation and management practices: some solutions will be suggested.

1. Introduction - Different VAR methods

2. Comparing Credit and Market Risk

3. Using VAR to estimate Credit Risk

4. Aggregating Market and Credit Risk

5. Difficulties in applying VAR to Credit Risk

6. Data Problems

7. Modelling Issues

8. First Chicago Example

**Date:** Michaelmas Term 1997, Friday, 24 October 1997 **Topic:** Model Risk
**Speaker:** Dr William Shaw, Nomura International, London
**Title:** Investigating and fixing the mathematical pathology of derivative modelling algorithms with computer algebra

Derivative pricing problems frequently involve mathematical issues of some subtlety, capable of exhibiting diverse forms of pathological behaviour. The algorithm verification project carried out at Nomura used the computer algebra system Mathematica to test a variety of algorithms against exact solutions, and revealed some interesting issues. This talk will focus on two of particular interest.

The first is the interaction of non-smooth initial (payoff) conditions with some of the more popular implicit finite difference schemes. The pathological behaviour of Crank-Nicolson and related schemes in this respect is exposed in detail, and it is shown how the problems are remedied by schemes such as the 3-time-level Douglas scheme. Such schemes also allow very rapid computation due to their combination of strong stability and high-order truncation error, and can be used within PSOR or LP solution methods.

The second issue is how to build binomial and trinomial models that avoid problems with negative probability or negative asset prices. This is based on a new approach using solution-symmetry constraints to construct the tree, and using symbolic algebra to solve the complicated rational equations that result.
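The negative-probability problem mentioned above is easy to exhibit in the standard Cox-Ross-Rubinstein binomial tree (a textbook construction, not the speaker's symmetry-based one): the risk-neutral up-probability leaves the interval (0, 1) whenever the drift term outruns the volatility term over a step.

```python
import math

def crr_prob(r, sigma, dt):
    # Risk-neutral up-probability in a Cox-Ross-Rubinstein binomial tree
    # with up factor u = exp(sigma * sqrt(dt)) and d = 1/u.
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    return (math.exp(r * dt) - d) / (u - d)
```

With sigma = 0.05, r = 0.10 and a one-year step the "probability" exceeds 1 because exp(r dt) > u, i.e. the tree cannot even reach the risk-neutral mean; shrinking the step to 0.01 restores a valid probability. Constructions like the symmetry-constrained trees of the talk are designed to rule such cases out by design.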

Finally - and as a brief link to the second talk - a new algorithm for evaluating SRCEV options both quickly and exactly will be presented.

We show examples of constant elasticity of variance models, and also of general stochastic volatility models, under which the put-call parity relation fails, so that they appear to admit arbitrage opportunities. We dig into the theory of stochastic processes to resolve the supposed paradox and give a financial interpretation of this phenomenon.

Neural networks offer a powerful method for tackling many inference problems. This power comes from combining function approximation capabilities with statistical methods for parameter estimation and with dynamical systems for modelling time variations. It is also true that the hype associated with some of the terminology in this area leads to many "blind" applications of neural networks to difficult problems.

This talk is in two parts. First, I will give an overview of some applications of neural networks in financial forecasting, focusing on sequential methods centred around the extended Kalman filtering framework. In the second part, I will talk about using neural networks in an option pricing task. Sequential methods used in this framework will be extended to derive smooth estimates of implied volatilities from the Black-Scholes type approach.

The talk will report on findings about the statistical properties of high frequency market prices. The database comprises over 200 million market maker quotes and includes more than 20,000 observations per day for major market instruments, such as the USD/DEM exchange rate. Scaling properties, intra-day seasonal patterns, and dependencies of short- and long-term volatility are introduced.

The talk then introduces the theory of heterogeneous markets to explain the properties observed in the data. It is shown how the theory accounts for the predictability of volatility and of directional market moves. The talk closes with an overview of the possible applications of the theory and an outlook on future developments.

The paper develops a modification of the Sharpe Ratio and describes some of its applications. The usual Sharpe Ratio is a mean-variance based measure, and can therefore provide unsatisfactory rankings under non-normal probability distributions. The modified measure developed here is defined so that it is equal to the usual Sharpe Ratio when distributions are Normal, and continues to provide rankings which are consistent with stochastic dominance criteria under general distributions.

This has obvious advantages as a tool for performance measurement. The paper also shows how the new measure provides an appropriate framework for deriving valuation bounds for derivatives in incomplete markets. Finally, it provides some further characterizations of these bounds and discusses their possible role as measures of risk (i.e. related to Value at Risk concepts).
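For reference, the usual mean-variance Sharpe Ratio that the paper modifies can be computed as below; the modified, stochastic-dominance-consistent measure itself is not reproduced here, only the Normal-distribution baseline to which it reduces.

```python
import math

def sharpe_ratio(returns, rf=0.0):
    # Usual Sharpe Ratio: mean excess return over its sample standard
    # deviation (n-1 denominator). Under non-normal distributions this
    # ranking can be unsatisfactory, which is the paper's starting point.
    excess = [x - rf for x in returns]
    mean = sum(excess) / len(excess)
    var = sum((x - mean) ** 2 for x in excess) / (len(excess) - 1)
    return mean / math.sqrt(var)
```

For the toy return series 10%, 20%, 0%, 10% with zero risk-free rate, the ratio is 0.1 / 0.0816 ≈ 1.22.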

One of the main problems with Value-at-Risk analysis of trading portfolios is the accurate estimation of the probability of extreme events. It is well known, for example, that "market crashes" occur much more often than predicted by standard Gaussian models, such as used by RiskMetrics. We have developed a practical multivariate Jump-GARCH model that more accurately captures extreme events. We present applications of this technology in the risk analysis of a hypothetical FX-options portfolio.

A common assumption in finance is that many variables, including share prices, total returns, etc., follow a random walk or (logarithmic) Brownian motion. Empirical evidence over short periods does not contradict this.

However, empirical evidence over longer periods, and also consideration of the way in which agents in the market actually assess the values of items such as share prices, suggests that items such as interest rates, share dividend yields or price/earnings ratios, annual rates of price inflation and wage inflation, and exchange rates relative to consumer price indices, are all better modelled by autoregressive time series. Note that first order AR models are equivalent to Ornstein-Uhlenbeck processes in continuous time. This is equivalent to suggesting that items such as share dividends, earnings per share and share prices are cointegrated, but with a specific cointegration vector.

These autoregressive models are of use in the financial management of long term financial institutions such as life assurance and general insurance companies, pension schemes, etc.

Total integrated risk management provides a unified approach for linking all major corporate strategic decisions. A set of representative scenarios depicts the range and temporal movements of the uncertainties via a characteristic set of economic factors - interest rates, inflation, and currencies. The integrated system defines several performance measures within an optimization context. The goal is to maximize shareholder surplus over time subject to a set of constraints on risk and other factors. Two real-world examples from the insurance industry illustrate the concepts. Barriers to successful implementation are discussed.

The price of most interest-rate option products depends both on the correlation and the instantaneous volatility of the underlying forward rates.

The talk shows that:

1) The possible shapes of the instantaneous correlation functions obtainable with a large class of low-dimensionality models are rather limited, and independent of the exact specification of the model.

2) The terminal de-correlation between rates (which directly enters the pricing of exotic options) is a function of both instantaneous correlation and time-dependent volatility.

3) An infinity of possible combinations of instantaneous volatilities and correlations can give rise to a given (finite) set of option prices.

4) Any of the possible choices for the instantaneous volatility functions uniquely determines the possible evolution of the term structure of volatilities.

5) The option market is not consistent with a deterministic, strictly time-homogeneous term structure of volatilities.

6) Combining 4) and 5), the infinity of possible solutions alluded to in 3) can be resolved.

Empirical implementations of HJM term structure models are well known to be unstable in the sense that volatility factors computed at different moments in time will differ more than may be explained by observational noise. This paper shows how the differences may be attributed to the existence of low dimensional dynamics in an underlying term structure model. It is shown how in principle the observed behaviour of volatility factors may be obtained, and how the relationship between factors at different moments in time may be determined within this framework. It is demonstrated that the conclusion of Duffie and Kan, that an affine term structure model requires an affine structure in the processes followed by the state variables, may be generalised, at least locally. An empirical analysis of sterling money market data is presented in which non-trivial assumptions are made concerning the topology of the underlying state space.

Exchange seats are capital assets that confer access to the trading floor. On the New York Stock Exchange (NYSE), seats are bought and sold in a public auction market. As such, their prices reflect expectations about future activity and returns for the stock market as a whole. For this reason, the process by which seat prices are determined provides valuable information about the beliefs of the participants who have the most intimate contact with the trading process. This paper examines the behaviour of NYSE seat prices using (1) annual data on seat prices that span the entire history of trading of NYSE seats from 1869 to the present, and (2) the complete intra-daily record of trades, bids and offers for the seat market for the 1973-1994 period. This combined "macro" and "micro" characterization of the seat market provides new insights into the behaviour of seat prices, and also yields measures of the degree of divergence of opinion about future market activity, which is linked to heterogeneity in beliefs. We also find evidence that seat transactions have permanent price impacts.

The effects of delayed trade publication on price efficiency, dealer profitability and the welfare distribution amongst traders have been the subject of a controversial debate. In this paper, we examine the effects of delayed publication in a monopolistic dealer market. Our analysis suggests that a monopolist specialist, who is not obliged to disclose her past trades, can benefit from non-disclosure by injecting volatility into her quotes. When the monopolist follows this strategy, large traders benefit, expected trading volume rises, small traders lose and price efficiency decreases.

This paper discusses the allocation of capital over time with several risky assets. The maximization of the expected logarithm of period-by-period wealth, referred to as the Kelly criterion, is a very desirable investment strategy. It has many attractive properties, such as maximizing the asymptotic rate of growth of the investor's wealth and minimizing the time to reach specific goals (asymptotically). However, this very risky strategy utilizes very large wagers on favourable investments. Hence, to increase security, with a resultant decrease in expected return, fractional Kelly strategies are used that blend cash with the Kelly wager. This blend is usually done in an ad hoc manner in applications to gambling games such as blackjack, horse racing and lotteries, and to commodity and index futures trading. In this paper, we provide a method to choose these fractions optimally so that specific goals are achieved with high probability. The stochastic optimization model uses a disjunctive form for the probabilistic constraints, which identifies an outer problem of choosing an optimal set of scenarios, and an inner (conditional) problem of finding the optimal investment decisions for a given scenario set. The multiperiod inner problem is composed of a sequence of conditional one-period problems. The theory is illustrated for the dynamic allocation of wealth in stocks, bonds and cash equivalents.
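The full-Kelly fraction for the simplest case, a repeated bet with win probability p and odds b, can be checked numerically against the log-growth criterion; this toy example is illustrative only and is not the paper's multi-asset scenario model.

```python
import math

def kelly_fraction(p, b):
    # Full-Kelly bet fraction for a win-probability-p, odds-b gamble:
    # the maximiser of the expected log-growth below.
    return (b * p - (1.0 - p)) / b

def growth_rate(f, p, b):
    # Expected log-growth per bet when wagering a fraction f of wealth
    return p * math.log(1.0 + b * f) + (1.0 - p) * math.log(1.0 - f)
```

With p = 0.6 and even odds (b = 1) the formula gives f* = 0.2, and a grid search over growth_rate confirms the maximum there. A half-Kelly bettor (f = 0.1, the rest in cash) accepts a lower growth rate in exchange for much smaller wealth swings, which is exactly the security/growth trade-off the paper makes precise.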

A critical overview of quantitative methods for portfolio optimization. An extended discussion of multifactor models based on the Arbitrage Pricing Theory, and why they are elegant and appropriate for this task. Projection of exogenous economic variables into an APT space. An example of an enhanced indexing strategy using APT pairs to implement stock picks as a risk-controlled shadow portfolio.

The structure of financial regulation in the UK is undergoing radical reform. Reasons why this is desirable, and the dangers involved, are discussed, along with key issues for debate, including the scope of regulation and appropriate standards. The implications of global interdependence for domestic regulators are also addressed: do we need a financial UN force?

It is easier to establish codes of conduct, and standards of behaviour, in the field of financial regulation than to enforce them. Society has tended to use five generic kinds of sanction: 1) Disclosure, 2) Limitation of Function, 3) Fines, 4) Prison, 5) Infliction of Pain. We ask which forms of sanction are suitable for application to wrong-doing in the financial field, and discuss some of the problems of their use.

This paper uses contingent claims techniques and differential games to study the optimal design of a firm's securities when a manager exerts effort to control the proportional drift rate of the firm's cash flow. When contracts can be written contingent on the firm's profits but not on managerial effort, the optimal security for outside investors to hold strikes a balance between providing the manager with appropriate incentives and giving him an excessively generous stake in firm profits. If the manager can impose his salary demand on outside investors, ex post efficiency is restored.

In the asset management industry, the concept of guaranteed return products is becoming more and more important. As at June 1996, the market value of mutual funds with a guaranteed minimum level of return, listed on European exchanges, was well beyond 120 billion Deutschmark. The value of non-listed assets managed in a guaranteed structure can only be guessed. Whereas there is abundant literature on the trade-off between expected return and standard deviation of return, little has been published on the trade-off between expected return and the level of guaranteed return. We analyse the problem of determining an investment portfolio consisting of a stock index, European-style exchange-listed options on that index that expire at the investment horizon, and cash, such that:

- the expected return at the investment horizon is maximised, subject to

- the realised portfolio return at the horizon being no worse than the guaranteed return, independent of the value of the index, and

- the probability that the portfolio return is positive being sufficiently large.
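The guarantee constraint above can be seen in a protective-put miniature (a hand-rolled sketch with invented numbers, not the paper's optimisation): holding one expiring put per index unit, struck at the floor level, bounds the horizon value from below whatever the index does.

```python
import math

def horizon_value(index_T, n_index, n_puts, strike, cash, r=0.04, T=1.0):
    """Portfolio value at the horizon: index holding, expiring European
    puts on the index, and cash grown at the riskless rate (all assumed)."""
    put_payoff = max(strike - index_T, 0.0)
    return n_index * index_T + n_puts * put_payoff + cash * math.exp(r * T)

# One put per index unit guarantees a floor of n_index * strike plus cash growth,
# independent of the terminal index level.
floor = 1.0 * 90.0 + 10.0 * math.exp(0.04)
worst = min(horizon_value(s, 1.0, 1.0, 90.0, 10.0) for s in range(0, 301))
```

The optimisation in the talk trades off the level of this floor against expected return; here the floor is simply exhibited.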

The paper studies the problem of maximizing the expected utility of terminal wealth in the framework of a general incomplete semimartingale model of a financial market. We show that the necessary and sufficient condition on a utility function for the validity of several key assertions of the theory to hold true is the requirement that the asymptotic elasticity of the utility function is strictly less than one.

This talk describes the results of a recently completed study which compares the performance of alternative asset allocation models fitted to recent US financial market data. The basic Markowitz, fixed-mix, optimal control and dynamic stochastic programming models and solution techniques are introduced and numerical results displayed. Experiments to validate the relative performance of the models and their biases will be discussed.

Persistence in international banking transaction errors is one example of operational risk, which requires adequate capital. Periodic small errors are systematic in ordinary banking transactions and are costly to correct in terms of managerial time and effort. Persistent transaction errors may indicate potentially catastrophic operational risk problems.

A selection of transaction errors in a large international bank over several months is viewed in terms of both traditional and non-normal probability distributions. Then the performance persistence of banking divisions is calculated in a "contingency table". The extent and statistical significance of performance persistence is computed using tests appropriate for large samples, and also for small samples. Odds ratios, chi-square tests and Yates's continuity corrections are provided. Results are heavily dependent on the assumed probability distributions. Finally, the winners and losers among banking divisions are evaluated, and some performance incentives suggested.

Financial institutions have developed sophisticated practices and methodologies for managing the market and credit risks to which they are exposed. Many are now also focussing their attention on improving the way in which other risks - commonly called operational risks - are managed. This presentation examines the concept of operational risk management and highlights the leading practices being used within the industry.

A specific leveraged swap contract is analysed in detail and used to evaluate the usefulness of VaR.

Monetary policy under the single currency: can Mr Duisenberg beat Mr Greenspan at whatever game it is they have to play, or is the report of the dollar's impending demise premature, and Gresham's law still fundamentally alive? 50 years of German monetary experience: those who do not take the trouble to understand the peculiarities of the past may never get to enjoy its comforts.

1. In economic terms, the theoretical and empirical underpinnings of EMU are anachronistic.

2. In practical terms, EMU represents a giant, hypocritical inconsistency.

3. EMU must be anti-capitalist in its effect.

4. It must run counter to free capital movements and the Single Market.

5. It is so badly structurally flawed in its present architecture that it will have to be radically restructured, but this will happen only after a violent financial crisis in euroland.

In this paper we investigate three different techniques for the estimation of a time-varying beta: a bivariate GARCH model, the Schwert and Seguin approach, and the Kalman filter method. We apply these approaches to a set of monthly Morgan Stanley country index data over the period 1970 to 1995 and compare their relative performance. In-sample forecast tests of the performance of each of these methods for generating conditional beta suggest that the GARCH-based estimates of risk are optimal. Using the empirical cumulative distribution of these GARCH generated conditional betas, the G-7 region was found to exhibit greater risk in comparison to non G-7 countries and the European region was less risky compared to the South East Asia region.
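Of the three estimators compared, the Kalman filter is the easiest to sketch. A minimal scalar filter with a random-walk beta (the synthetic data, noise variances and drift below are illustrative assumptions, not the paper's Morgan Stanley data set):

```python
import random

def kalman_beta(market, asset, q=1e-5, r=1e-4):
    """Time-varying beta via a scalar Kalman filter.
    State:       beta_t = beta_{t-1} + w_t,        Var(w_t) = q
    Observation: asset_t = beta_t * market_t + v_t, Var(v_t) = r"""
    beta, P = 0.0, 1.0                      # diffuse initial state
    path = []
    for m, y in zip(market, asset):
        P += q                              # predict: random-walk state
        K = P * m / (m * m * P + r)         # Kalman gain
        beta += K * (y - beta * m)          # correct with forecast error
        P *= 1 - K * m
        path.append(beta)
    return path

# Synthetic check: true beta drifting from 0.5 to 1.5 over 500 periods.
rng = random.Random(1)
true_beta = [0.5 + i / 500.0 for i in range(500)]
market = [rng.gauss(0.0, 0.05) for _ in range(500)]
asset = [b * m + rng.gauss(0.0, 0.01) for b, m in zip(true_beta, market)]
betas = kalman_beta(market, asset)
```

The filtered path tracks the drifting beta with a lag governed by the ratio q/r; the GARCH and Schwert-Seguin alternatives in the paper parameterize the time variation differently.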

We introduce a general class of interest rate models in which the value of pure discount bonds can be expressed as a functional of some (low-dimensional) Markov process. At the abstract level this class includes all current models of practical importance. These models are arbitrage-free and can be efficiently implemented. We give examples both where the functional form is chosen and where it is implied from market option prices. What results is a very efficient model consistent with relevant market prices.

Two recent papers by Baxter (1997) and Jin and Glasserman (1997) consider the relationship between several techniques commonly used to model the term structure of interest rates. In this talk we review this work and examine more closely the connections and differences between the (seven) different modelling frameworks.

We investigate the numerical solution of American financial option pricing problems, using a novel formulation of the valuation problem as a linear programme (LP). By exploiting the structure of the constraint matrices derived from standard Black-Scholes "vanilla" problems we obtain a fast and accurate revised simplex method which performs at most a linear number of pivots in the temporal discretization. When empirically compared with projected successive overrelaxation (PSOR) or a commercial LP solver the new method is faster for all the vanilla problems tested. Utilising this method we value discretely-sampled Asian and lookback American options and show that path-dependent PDE problems can be solved in 'desktop' solution times. We conclude that LP solution techniques which are robust to parameter changes can be tuned to provide fast efficient valuation methods for finite-difference approximations to many vanilla and exotic option valuation problems.
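The LP method itself needs a simplex implementation, but the benchmark vanilla problem is easy to state. A plain Cox-Ross-Rubinstein binomial tree (a standard reference method, shown here for orientation only — not the paper's LP or PSOR solvers) prices the American put that these formulations target:

```python
import math

def crr_put(S0, K, r, sigma, T, steps, american=True):
    """Cox-Ross-Rubinstein binomial valuation of a vanilla put."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up-probability
    disc = math.exp(-r * dt)
    value = [max(K - S0 * u ** j * d ** (steps - j), 0.0)
             for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):      # backward induction
        for j in range(n + 1):
            cont = disc * (p * value[j + 1] + (1 - p) * value[j])
            if american:                    # early-exercise (free-boundary) test
                cont = max(cont, K - S0 * u ** j * d ** (n - j))
            value[j] = cont
    return value[0]

american = crr_put(100, 100, 0.05, 0.2, 1.0, 200)
european = crr_put(100, 100, 0.05, 0.2, 1.0, 200, american=False)
```

The early-exercise premium (the gap between the two values) is exactly the complementarity feature that turns the PDE into a linear complementarity problem, and hence into the LP studied in the paper.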

The Willow Tree mathematical model values financial instruments in all markets faster and more accurately than any conventional binomial or trinomial tree. Tests have shown that its unique marginal-density-shaped structure allows an 800% improvement in calculation speed in one dimension and up to a 2000% improvement for two-factor instruments when compared with conventional binomial and trinomial trees. This significantly speeds up the process of working with:

American/Bermudan style equity/commodity/foreign exchange exotic derivative deals.

Hull & White and Black Karasinski type trees.

Large portfolios of options, especially when marking to market or running risk-management processes.

The algorithm significantly speeds up two-factor exotic models in all markets. This leads to an enormous improvement in speed for two-factor interest rate models (Hull & White or Black-Karasinski). Individual long-dated American-style swaptions or interest rate-equity exchange options could greatly benefit from this technique.

The typical neural network viewpoint is based on the theorem that a three-layer neural network can model every bounded function. The flip side of the coin is that the result of the modelling depends totally on the database used for the model fitting. However, applications in economics and finance often suffer from relatively small and noisy databases. The question arises whether we can specify a more appropriate class of functions which contains more information about the task we have to solve, i.e. identifying a dynamical economic system. We have developed an eleven-cluster feedforward neural network architecture which is especially designed to model dynamical systems and which handles the different problems of data modelling explicitly in different parts of the network.

We propose a neural network architecture which implements a portfolio management system similar to the Black-Litterman approach which distributes funds across various assets while simultaneously obeying application specific allocation constraints.

Both the forecast of the future returns of the relevant assets and their transformation to investment decisions are realized as neural networks. Penalized optimization over time assures that the allocations comply with investors' constraints and that risk exposure of the portfolio can be controlled.

We demonstrate the profitability of our approach by constructing internationally diversified portfolios across 21 different financial markets.

A model has been developed to simulate the complete range of investments available in global markets with the intended application of risk analysis and strategic planning. This model simulates a range of economic variables which form the basis for asset class returns. Economic variables include price inflation, complete government interest rate curves, equity price/earnings ratios, equity earnings growth rates, gross domestic product levels, currency exchange rates, and other interest rate curves. The model simulates these variables for multiple currencies in a symmetric fashion. The intended application of such a model, the structure of the model, the method of calibration of the model, and the integration of such a model into a complete risk management system will be described.

There are two schools of thought regarding the construction of simulation models for asset-liability work. The first school appeals to neo-classical financial economics to construct price frameworks based on absence of arbitrage, or on general equilibrium. This is the route which has traditionally been followed for pricing derivatives. However, these models can become unwieldy for large-scale multi-asset problems. Furthermore, arbitrage considerations can appear to impose strong constraints on model structures, which makes it difficult to achieve acceptable calibration to real-world behaviour. The second school of thought argues that neo-classical theory is inadequate to explain many observed features of markets. Instead, more emphasis is placed on direct time series modelling of the historical data. However, the rejection of economic theory comes at a high price - time series models only produce future distributions, and these frequently suggest opportunities for arbitrage. In the presence of arbitrage, there is no accepted framework for valuing general cash flows. As a result, although management strategies can be mapped to the distributions of various quantities, it is difficult to reach a definitive ruling on which strategy is most valuable. This talk provides practical and theoretical techniques to assist the development and calibration of large-scale neo-classical models. The use of gauges enables the economic, statistical and practical constraints faced by model developers to be simultaneously incorporated, without having to trade off between different constraints. The formulae behind some such models in practical use will be revealed. Use of such models in the future is likely to result in convergence between asset-liability models and contingent claim valuation techniques.

Financial markets, such as the global foreign exchange (FX) market, often exhibit trending and trend-reversing behaviour. During such behaviour, the market level oscillates with changes in market consensus. Continued oscillations of this type result in the formation of patterns, such as the channel and the head & shoulders, which are used by technical analysts as trade entry signals. A sample space of these patterns has been constructed from a set of US Dollar/British Pound Spot FX tick data from 1989-97 using pattern recognition algorithms, and the profitability of trading using such patterns has been estimated. A number of attributes of the resulting collection of patterns have been subjected to statistical analysis with the aim of classifying formations that can be traded profitably using a number of simple trading rules. Results indicate that such analysis can be used to enhance the profitability of this area of technical trading.

Most financial houses have access to high-frequency data, which typically gives the time, price and amount of every trade (or quote) in a particular asset. Such detailed information should be more revealing than a single price per day, but it will be hard to extract the additional value if one tries to use a model which supposes that the observed prices are a diffusion process! In this talk, we present a class of models for such data which treat the data as intrinsically discrete, and we show how easily-updated estimation procedures can recover parameter values from a range of simulated examples.

We present a new family of yield curve models, which is significant in two respects. Firstly, it enables us to construct multi-factor "market models" in a simple but extremely flexible manner, while the values of "vanilla" and path-dependent interest-rate derivatives can be calculated easily, whether or not their payoffs depend solely on the particular market rates being modelled directly. "Market models" carries here the general sense that the modelling focuses upon particular observable market interest rates; rather than the narrower sense of requiring lognormal simple rates. In many currencies, Black [1976] implied volatilities now exhibit "skews" with respect to strike; thus fitting market option prices closely requires rich alternatives to lognormality. Our models provide this through tremendous flexibility in the dependence of conditional covariances of term structure movements upon both rate levels and maturity. Secondly, our models allow direct incorporation of GARCH dynamics into the whole term structure, within an arbitrage-free multi-factor model. This provides an alternative to existing models, in which heteroskedasticity (other than dependence on rate levels) enters only through stochastic volatility or GARCH dynamics of a single interest rate. Valuation remains easy under GARCH.

We build a no-arbitrage model of the term structure, using two stochastic factors, the short-term interest rate and the premium of the forward rate over the short-term interest rate. The model can be regarded as an extension to two factors of the lognormal interest rate model of Black-Karasinski. It allows for mean reversion in the short rate and in the forward premium. The method is computationally efficient for several reasons. First, interest rates are defined on a bankers' discount basis, as linear functions of zero-coupon bond prices, enabling the use of the no-arbitrage condition to compute bond prices without resorting to iterative methods. Second, the multivariate-binomial methodology of Ho-Stapleton-Subrahmanyam is extended so that a multiperiod tree of rates with the no-arbitrage property can be constructed using analytical methods. The method uses a recombining two-dimensional binomial lattice of interest rates that minimizes the number of states and term structures over time. Third, the problem of computing a large number of term structures is simplified by using a limited number of 'bucket rates' in each term structure scenario. In addition to these computational advantages, a key feature of the model is that it is consistent with the observed term structure of volatilities implied by the prices of interest rate caps and floors. We illustrate the use of the model by pricing American-style and Bermudan-style options on bonds. Option prices for realistic examples using forty time periods are shown to be computable in seconds.

A very interesting approach in the literature to the problem of hedging under transactions costs is that of Leland (1985). He claims that even in the presence of transactions costs a call option on a stock price S, described by geometric Brownian motion, can be perfectly hedged using Black-Scholes delta hedging with a modified volatility. Recently Kabanov and Safarian (1997) disproved this claim, giving an explicit (up to a double integral) expression for the limiting hedging error as the number of revision intervals tends to infinity. It appears to be strictly negative and depends on the path of the stock price only via the stock price at expiry ST. We will show that the limiting hedging error, considered as a function of ST, exhibits a removable discontinuity at the exercise price K. Furthermore, we provide a quantitative result describing the evolution of the discontinuity, which shows that its precursors can be well observed, even for revision intervals of reasonable length.
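Leland's prescription is concrete enough to state in a few lines (the cost level and revision interval below are illustrative assumptions): inflate the Black-Scholes volatility by a factor depending on the proportional transaction cost k and the revision interval, then delta-hedge as usual.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def leland_vol(sigma, k, dt):
    """Leland (1985) modified volatility for proportional cost k and
    revision interval dt (short-option hedging: volatility is inflated)."""
    return sigma * math.sqrt(1.0 + math.sqrt(2.0 / math.pi) * k / (sigma * math.sqrt(dt)))

sigma_hat = leland_vol(0.2, 0.002, 1.0 / 52.0)      # weekly rehedging, 0.2% cost
cost_adjusted = bs_call(100, 100, 0.05, 1.0, sigma_hat)
frictionless = bs_call(100, 100, 0.05, 1.0, 0.2)
```

Kabanov and Safarian's result concerns the residual error of exactly this scheme in the limit of vanishing revision intervals; the code only exhibits the volatility adjustment, not the limiting error.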

This talk is concerned with relaxing the Black-Scholes assumption of constant volatility of the underlying security in option pricing. It presents a critical survey of techniques for the inverse problem of option pricing including specifying processes for local volatility implied from market data and practical constraints on their values.

This paper provides an integrated theoretical framework to guide a firm's market risk management decisions. This framework is based on two key principles: the use of a Sharpe rule to assess prospective changes in the firm's risk-return profile, and the maintenance of a constant probability of default, which determines the firm's leverage. The rules suggested here are not restricted to normal return distributions, and can accommodate a variety of non-normal distributions as well. The approach suggested here also highlights the importance of the twin concepts of excess-return VaR and net-return VaR for financial risk management.

Value at Risk (VaR) has become a very important risk management tool and is widely used in a regulatory and managerial context. Several technical innovations have been developed in the academic literature to enhance VaR. The talk on Friday will review these developments in the literature, and assess their strengths and limitations for practical market risk measurement and management. Issues that will be addressed include alternate distributional assumptions, extremal value theory, converting one-day VaR numbers to N-day measures, backtesting techniques, and robust capital allocation rules, among others.
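Two of the items on this list — converting one-day VaR to N-day measures, and the role of distributional assumptions — can be pinned down in a few lines. A historical-simulation VaR with square-root-of-time scaling (the return series and confidence level are invented for illustration; the scaling rule is only exact under i.i.d. zero-mean normal returns, which is precisely one of the caveats such talks address):

```python
import random

def historical_var(returns, level=0.99):
    """One-day VaR as the empirical `level`-quantile of the loss distribution."""
    losses = sorted(-r for r in returns)
    idx = min(int(level * len(losses)), len(losses) - 1)
    return losses[idx]

def scale_var(var_1d, horizon_days):
    """Square-root-of-time rule - exact only for i.i.d. normal returns."""
    return var_1d * horizon_days ** 0.5

rng = random.Random(7)
returns = [rng.gauss(0.0, 0.01) for _ in range(2000)]   # simulated daily returns
var_1d = historical_var(returns)                         # ~2.3% for 1% daily vol
var_10d = scale_var(var_1d, 10)
```

Fat-tailed or autocorrelated returns break both steps, which is where extreme value theory and the other refinements in the talk come in.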

This paper provides an approach which combines features of structural and reduced form models for pricing defaultable bonds when interest rates are stochastic. Default occurs when the value of the assets of the firm hits a stochastic boundary of default or at the first jump time of a point process. An analytical solution for the price of risky debt is provided. Finally, the case of multiple firms and the issue of default correlations are considered.

When it is far out of the money, the value of a convertible bond reduces to the value of an ordinary bond, which must reflect the credit spread of the issuer. How to include this factor has been the source of some controversy. An ad hoc procedure is suggested in John Hull's book, and another approach is given by Tsiveriotis and Fernandez (Journal of Fixed Income, September 1998). Here we give a model which specifically includes the possibility of default as a way of explaining the credit spread, and investigate the effect of credit spread volatility on the convertible bond's value.

Currency overlay is a growing area in international finance, and an exciting one in which to pursue research, consisting as it does of microscopic modules of currency movements. Bank One Currency Advisors is a leading player in the field. Jessica James, Head of Research for the company, reveals (nearly) all.

This talk will describe recent and ongoing research undertaken by the CFR with a view to emulating a successful FX technical trader with an intelligent computer system. Results will be presented which show that such a real time system can trade profitably out-of-sample and needs to be adaptive. In particular, risk adjusted performance measures and the computational learning technologies employed presently and in the future will be discussed.

The purpose of this talk is to discuss the role of style in building global stock selection models for use in asset management and to propose a procedure, based on Bayesian variable selection procedures, to identify the key styles that drive returns and to monitor changes in return model specification. We find that whilst style may be useful in forecasting returns, it does not seem to add explanation over and above that provided by country and sector in terms of in-sample explanatory power.

There is good evidence that markets have internal dynamics of their own -- prices change even when there is no apparent news. Based on a set of simple axioms it is possible to derive a unique functional form for nonequilibrium price formation. The result is much more tractable analytically than the standard temporary equilibrium models. Profits and losses of individual agents can be decomposed into the aggregate of the pairwise two point correlations of their strategies. This leads to a model for capital allocation that is equivalent to classic predator-prey models from biology. Simulations of market evolution lead to chaotic equilibria that give insight into how speculators drive the irregular dynamics of prices.

PDF version of the talk available.

The business of designing, pricing, hedging and managing complex financial contracts is rapidly consolidating and changing. It is no longer possible to organise an even moderately sophisticated activity around (even very clever) models implemented on, say, a spreadsheet.

The maturing financial industry needs tools that combine both

- Flexibility (try and use new models on a whole book, simulate a new trading strategy, build new kinds of contracts,...)

- Security (well specified contracts, correctly implemented models, well understood risk management including operational risk supervision, correct and cost effective back-office contract execution,...).

Existing tools tend to be ad hoc, and deal only with a fixed range of financial instruments and models. We will argue for both the necessity and feasibility of a domain specific language for describing financial contracts, trading strategies and evaluation models. This formalism enables us to:

- Describe a rich (indeed infinite) variety of contracts precisely.

- Reason about the relationships between contracts (e.g. C1 dominates C2).

- Compute the value of a contract, under a variety of evaluation models, in a modular, or "compositional", fashion.

Our two presentations draw together experience from a large financial institution, and experience in the design and implementation of computer programming languages. We will show that an approach drawn from programming language semantics can reduce or solve many of the problems typically encountered every day in an industrial "trading room". Numerous practical examples from pricing, risk management and back-office activity will be given. We will not assume any background in programming language theory, however!
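To make the flavour of such a contract language concrete, here is a toy rendering of a few combinators in Python (the names, primitives and flat discount curve are our illustrative assumptions — the talk's actual formalism is richer): contracts are built compositionally from primitives, and valuation is a structural recursion over the combinators.

```python
import math
from dataclasses import dataclass

class Contract: ...

@dataclass
class One(Contract):        # receive one unit of a currency immediately
    currency: str

@dataclass
class Scale(Contract):      # multiply every payment by a constant factor
    factor: float
    underlying: Contract

@dataclass
class And(Contract):        # hold both contracts
    left: Contract
    right: Contract

@dataclass
class Get(Contract):        # acquire the underlying at a fixed future time
    time: float
    underlying: Contract

def zcb(notional, maturity, ccy="GBP"):
    """A zero-coupon bond, defined from the primitives above."""
    return Get(maturity, Scale(notional, One(ccy)))

def value(c, r=0.05):
    """Compositional valuation under a toy flat continuously compounded curve."""
    if isinstance(c, One):
        return 1.0
    if isinstance(c, Scale):
        return c.factor * value(c.underlying, r)
    if isinstance(c, And):
        return value(c.left, r) + value(c.right, r)
    if isinstance(c, Get):
        return math.exp(-r * c.time) * value(c.underlying, r)
    raise TypeError(f"unknown contract {c!r}")
```

Because valuation is defined case-by-case on the combinators, swapping in a richer evaluation model changes only `value`, not the contract descriptions — the modularity the talk argues for.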

PDF version of the talk available.

The International Monetary Fund was set up at Bretton Woods in 1944 in a context of war and with the memories of hyperinflation, depression, high unemployment and fluctuating exchange rates still fresh. The institution has arguably served the international community well over the years (though it has never been exempt from criticism and controversy) and has demonstrated a tremendous ability to adjust itself to new economic circumstances. Despite the abandonment of the par value regime in the 1970s, the importance of the IMF has remained undiminished. The IMF played a leading role in the sovereign debt restructuring of the LDC countries in the 1980s, in the transition to a market economy of formerly communist countries in the early 1990s, and in the resolution of financial crises in Mexico and Asia in the mid to late 1990s. As we enter the twenty-first century it is time to rethink the institution. Should the Articles of Agreement - inspired by the proposals of John Maynard Keynes and Harry Dexter White in the 1940s - be revised to reflect the needs of the new international financial system? Should the IMF's mandate be broadened to adopt a development role, or should it be restricted to the management of financial crises? Should the IMF adopt a formal international lender of last resort role? Should conditionality be relaxed? Should surveillance extend beyond macro-economic policies? In particular, should the IMF's surveillance function extend to micro-prudential financial supervision and regulation? These are some of the questions that I discuss in my paper.

Background on operational risk and its increasing importance in financial services:

- Problems with defining operational risk

- Organisation and management of responsibilities for addressing operational risk issues

- Tools available for the monitoring of operational risk

- Challenges and benefits associated with quantification of operational risk.

PDF version of the talk available.

Most financial institutions define operational risk in terms of complementary space - namely everything not covered by exposure to credit and market risk. We study operational risks with the aim of integrating widely accepted methods, such as Value at Risk and credit modelling with capital allocation for unexpected catastrophic losses. Our risk measures are based on thresholds derived from statistical analysis of profits and losses of a financial institution and its business units. Since sufficient data are seldom available for accurate estimation of extreme values, we have developed Bayesian simulation techniques for fitting heavy-tail distributions in the case of small sample sizes. These techniques are applied to rare but potentially disastrous events for which little data is available.
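For intuition about the tail-fitting step, a maximum-likelihood Pareto fit on simulated losses is easy to write down (this is the plain frequentist estimator on invented data — the talk's contribution is a Bayesian treatment of exactly the small-sample regime where this estimator becomes unreliable):

```python
import math
import random

def pareto_mle(losses, xm):
    """MLE of the Pareto tail index alpha for losses exceeding threshold xm."""
    exceedances = [x for x in losses if x > xm]
    return len(exceedances) / sum(math.log(x / xm) for x in exceedances)

# Simulate Pareto(alpha = 2.5, xm = 1) losses by inverse-transform sampling.
rng = random.Random(0)
alpha_true, xm = 2.5, 1.0
losses = [xm * (1.0 - rng.random()) ** (-1.0 / alpha_true) for _ in range(5000)]
alpha_hat = pareto_mle(losses, xm)
```

With 5,000 observations the estimate is tight; with the handfuls of observed catastrophic losses typical of operational risk it is not, which motivates the Bayesian simulation techniques described above.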

PDF version of the talk available.

It seems common sense that the optimal portfolio designed to hedge an exotic derivative contract should contain freely traded call options as well as the underlying stock and the numeraire. However, standard stochastic volatility models are not complete, and so do not in general give hedging information.

We point out that there are quite reasonable models for the "volatility of implied volatility", and that such models are complete provided they satisfy a consistency condition. Experimental data does not contradict, and in some respects confirms, this approach. We hope that in the long term it will lead to effective approaches to protecting portfolios from volatility risk.

PDF version of the paper available.

In any modelling process the calculation of an observable effect is a mapping from the space of parameters associated with the theory to the space of observable parameters. The form of the mapping may range from an explicit formula through to an intensive numerical calculation. The inference of theoretical parameters from observations represents an inversion of such a mapping and it is necessary to be careful to establish when the inversion represents a process that is both well-defined and stable.

The inverse function theorem is a critical element of the inversion process when the mapping is non-linear.

In option pricing an inversion of common interest is the computation of implied volatility from market price data. This talk will explore the consequences of the failure and near-failure of the inverse function theorem as applied to volatility for some simple options of interest. I will argue that except in very limited circumstances, the implied volatility may well be meaningless.
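The mechanics of this inversion, and the way it degrades, can be seen directly (parameters invented for illustration): the Black-Scholes call price is strictly increasing in volatility, so bisection recovers the implied volatility wherever vega is healthy; but for deep out-of-the-money, short-dated options vega collapses towards zero, and the inversion becomes violently ill-conditioned — the near-failure of the inverse function theorem described above.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d1 - sigma * math.sqrt(T))

def vega(S, K, r, T, sigma):
    """dPrice/dsigma - the derivative the inverse function theorem needs nonzero."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)

def implied_vol(price, S, K, r, T, lo=1e-4, hi=5.0):
    """Bisection inversion; well-posed only where vega is not tiny."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# At the money the round trip is stable...
iv = implied_vol(bs_call(100, 100, 0.05, 1.0, 0.2), 100, 100, 0.05, 1.0)
# ...but deep out of the money and short-dated, vega is essentially zero,
# so vast ranges of sigma map to indistinguishable prices:
tiny_vega = vega(100, 300, 0.05, 0.1, 0.2)
```

When vega is of the order of machine precision, any noise in the quoted price translates into enormous uncertainty in the recovered volatility — the sense in which implied volatility "may well be meaningless" in those regimes.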

The role of the European Central Bank in the regulation of European financial markets. In particular, the allocation of authority between the ECB, EC institutions and national supervisory authorities in the regulation of financial conglomerates and banks that operate in the EU. Can home country control work? What is the role of the ECB as a lender of last resort, and what are the implications for systemic stability in European financial markets? What vacuum does the ECB try to fill in international financial markets, given the limited competencies of national regulators?

We revisit the mean-variance (MV) model of Markowitz and the construction of the risk-return efficient frontier. We examine the effects of applying buy-in thresholds, cardinality constraints and transaction roundlot restrictions to the portfolio selection problem. Such discrete constraints are of practical importance but make the efficient frontier discontinuous. The resulting quadratic mixed-integer (QMIP) problems are NP-hard and therefore computing the entire efficient frontier is computationally challenging. We propose an efficient approach for computing this frontier and provide some insight into its discontinuous structure. Computational results are reported for a set of benchmark test problems.
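For reference, the continuous (unconstrained) frontier underlying the QMIP has a closed form; it is precisely the discrete constraints that destroy it. A sketch of the classical solution (the three-asset inputs are invented; buy-in thresholds, cardinality and roundlots are deliberately absent):

```python
import numpy as np

def frontier_weights(mu, cov, target):
    """Closed-form Markowitz weights: minimise w'Cw subject to
    w'mu = target and sum(w) = 1 (no discrete constraints)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b * b
    lam = (a * target - b) / d          # Lagrange multipliers of the two
    gam = (c - b * target) / d          # equality constraints
    return inv @ (lam * mu + gam * ones)

mu = np.array([0.08, 0.10, 0.14])       # toy expected returns
cov = np.array([[0.040, 0.006, 0.000],  # toy covariance matrix
                [0.006, 0.090, 0.010],
                [0.000, 0.010, 0.160]])
w = frontier_weights(mu, cov, target=0.11)
```

Adding a cardinality constraint ("hold at most two of the three assets") already turns this smooth parametric family into a union of sub-frontiers, which is the source of the discontinuities the paper studies.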

PDF version of the talk available.

Banks have adapted modern portfolio theory to their own needs in some very particular ways; bringing their risk measures back to the investment management community requires significant adaptation. This talk discusses the differences between the risk measurement goals, and actual measures used, for banks, pension plans and asset managers; compares value at risk to the "classic" risk measures used in the investment industry; and explores the shortcomings of value at risk for investors, describing how these shortcomings can be overcome.

PDF version of the talk available.