
Project

The Future of Property Forecasting

Unit(s) of assessment: Architecture, Built Environment and Planning

School: School of Architecture, Design and the Built Environment

Overview

Real estate professionals have long been involved in developing implicit forecasts of market values. Until the 1980s this was largely based on intuition; since the 1990s market collapse, there has been greater emphasis on quantitative methods and formal modelling techniques.

The rise of quantification has led to some convergence in views: forecasters tend to use similar models, the same datasets and a standard set of statistical procedures. This means, of course, that most forecasts will be subject to similar sources of systematic bias. The failings of this approach, and its pervasiveness within the industry, have attracted considerable critical comment in the recent past.

Addressing the Challenge

Most property forecasts are generated by combining econometric predictions with a more subjective market overlay process. Errors can enter the forecasting processes used in real estate in many ways, for example:

  • the data used are inaccurate
  • the limited variables included do not cover all of the key drivers of the market
  • the statistical methods used to estimate relationships are not sufficiently sophisticated to deal with the complexity of the market
  • the assumptions made about future trends in key real estate and economic drivers are erroneous.

The purpose of this report is to explore current forecasting practice and to consider how forecasts might be improved. We have sought to achieve this aim through:

  • a review of current practice
  • an investigation of the performance potential of ‘standard’ (based on current practice) and advanced econometric models
  • an investigation of the potential use of ‘alternative’ (non-econometric) behaviourally oriented techniques including methods such as scenario planning and neural networks that draw on attitudinal and other survey data as key inputs.

Making a Difference

Finally, we use these two elements of the project to reflect on how forecasting practice might be strengthened.

We applied two different types of econometric model: autoregressive integrated moving average (ARIMA) and error correction mechanism (ECM) approaches. These were used to demonstrate the sensitivity of forecasts to changes in model structure, estimation method, the data used and the variables measured. We also undertook a more qualitative, judgement-based experiment (a scenario forecasting exercise) that invited forecasters to estimate future outcomes under different circumstances.
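To make the two model families concrete, the sketch below fits a univariate ARIMA model and a simple two-step (Engle-Granger style) ECM to the same series and produces short-horizon forecasts from each. This is a minimal illustration, not the project's own code: the synthetic rent and GDP series, the lag orders and the forecast horizon are all assumptions made purely for exposition.

```python
# Minimal sketch: compare an ARIMA forecast with a two-step ECM fit on the
# same (synthetic) rent/GDP data. All series, lag orders and horizons are
# illustrative assumptions, not the report's specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 120  # ten years of monthly observations (assumed)

# Synthetic data: log GDP as a random walk with drift; log rents cointegrated with GDP.
gdp = np.cumsum(0.002 + 0.01 * rng.standard_normal(n))
rent = 0.8 * gdp + 0.05 * rng.standard_normal(n)
df = pd.DataFrame({"rent": rent, "gdp": gdp})

# (1) Univariate ARIMA(1,1,1) on rents, forecasting 12 periods ahead.
arima_fit = ARIMA(df["rent"], order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=12)

# (2) Two-step ECM: long-run level regression, then a short-run equation in
#     differences that includes the lagged long-run residual (the error
#     correction term).
long_run = sm.OLS(df["rent"], sm.add_constant(df["gdp"])).fit()
ecm_term = long_run.resid.shift(1).rename("ecm_resid")
d_rent = df["rent"].diff()
d_gdp = df["gdp"].diff().rename("d_gdp")
ecm_X = sm.add_constant(pd.concat([d_gdp, ecm_term], axis=1)).dropna()
ecm_fit = sm.OLS(d_rent.loc[ecm_X.index], ecm_X).fit()

print(arima_forecast.tail(3))   # ARIMA point forecasts
print(ecm_fit.params)           # short-run and error-correction coefficients
```

Because the two specifications use different information (the ARIMA model only the history of rents, the ECM also the long-run relationship with GDP), their forecasts diverge even on identical data, which is the sensitivity to model structure the project set out to demonstrate.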

The scenario exercise served to illustrate the way in which a market overlay process introduces differences in views about macroeconomic and market-specific prospects, including investment flows. The exercise highlighted the potential variation in the scale of overlay and demonstrated the difficulties associated with trying to avoid further distortions being introduced by the ways in which individual views enter the process.

The analysis shows, perhaps unsurprisingly given the similarities in inputs and model structures, that most of the variation in forecasts is derived from differences in the overlay process.

We produced a range of forecasts for the next three years, derived from both the econometric techniques and the scenario exercise. From this we suggest some potential improvements that could be adopted to increase forecast accuracy. These include econometric methods that can better identify structural breaks and, on the qualitative side, a clearer appreciation of the different drivers of the market overlay process, together with a more systematic basis for capturing its influence on the final forecast. Our scenario exercise provided an exemplar of how this might be achieved.
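As a hedged illustration of what "identifying structural breaks" can mean in practice, the sketch below runs a simple Chow test: it compares a pooled regression against separate regressions either side of a candidate break date. The variables, break point and sample size are hypothetical and chosen only to show the mechanics; the report does not prescribe this particular test.

```python
# Minimal sketch of a Chow test for a structural break at a known date.
# All data and the break location are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n, break_at = 100, 60

# Synthetic series whose slope changes at the break point.
x = rng.standard_normal(n)
beta = np.where(np.arange(n) < break_at, 0.5, 1.5)
y = beta * x + 0.3 * rng.standard_normal(n)
X = sm.add_constant(x)

def rss(y_, X_):
    """Residual sum of squares from an OLS fit."""
    return sm.OLS(y_, X_).fit().ssr

k = X.shape[1]  # parameters per regime (constant and slope)
rss_pooled = rss(y, X)
rss_split = rss(y[:break_at], X[:break_at]) + rss(y[break_at:], X[break_at:])

# Chow F-statistic: does allowing a break at `break_at` reduce the RSS
# by more than chance would suggest?
f_stat = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
p_value = stats.f.sf(f_stat, k, n - 2 * k)
print(f"Chow F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value indicates that a single set of coefficients does not describe the whole sample, which is exactly the situation in which a forecasting model estimated on pre-break data will mislead.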

Collaboration

Department of Town and Regional Planning, University of Sheffield.

Funded By: The Investment Property Forum (IPF)