Third Bridge Views: Quant Funds Are Data Hungry: Where Will They Get the Numbers From?

In this edition of “Views from Our Executive Team,” co-founder and managing director Joshua Maxey follows up on his article on the growth of hedge fund quant strategies with a look at why human insight is needed to get the most out of data-intensive investment strategies.

“Rubbish in, rubbish out” is an old adage in equity research. Whether an analyst or fund manager’s financial modelling is basic or intricate, the forecasts and conclusions that come out at the end can only be as good as the data or assumptions that are input at the beginning.

Analysts have always battled to find, and verify, the right inputs for their financial models so that they can reasonably predict the future performance of a company or asset. Even the best analysts, and models, frequently get things very wrong because they are fed irrelevant data or bad assumptions.

The need for relevant inputs and accurate data has been fundamental to the work of equity analysts and fund managers since the days of modelling future financial performance on an Excel spreadsheet. The requirement for quality data is becoming even more central to their work as they move into quantitative, or systematic, investment strategies that augment or replace human judgement with data, algorithms and machine learning to manage funds.

Data Hungry

Quantitative investment is becoming increasingly important in the fund management industry. By the end of 2016, funds run by systematic strategies made up close to 20% of total hedge fund assets. This figure excludes the growing number of hedge funds using ‘quantamental’ techniques, which add quant analysis and big data inputs to support traditional fund management teams in their search for investment ideas.

These quant and ‘quantamental’ funds, which have benefited from the significant growth in cloud storage capacity and Artificial Intelligence (AI) programming to crunch through enormous datasets, invariably boast proprietary models and different methodologies to analyse inputs. Most of these models are ‘black box’ or secretive, as individual fund managers try to maintain their edge over the competition.

What they all have in common is a need for data. Just as with an analyst tinkering with an Excel spreadsheet model, the “rubbish in, rubbish out” principle still applies. The data needs to be relevant and accurate for the machines to reach the right conclusions – and there needs to be lots of it.

Trawling Through the Data

There are numerous examples of how fund managers are applying big data analysis – from AI programmes that trawl through collated credit card statements to predict quarterly Netflix subscriber numbers, to software that analyses satellite imagery, counting the cranes in Guangzhou to forecast Chinese housing supply.
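
To make the credit card example concrete, here is a minimal sketch, in Python, of how such a forecast might work in principle: a simple linear fit of reported quarterly subscriber additions against an aggregated count of new-subscription transactions. The figures, and the one-variable model, are invented for illustration and do not describe any real data panel or fund methodology.

```python
import numpy as np

# Hypothetical sketch of the credit card approach: regress reported quarterly
# subscriber additions on an aggregated count of new-subscription transactions.
# All figures are invented; this is not real Netflix or card-panel data.
card_signups = np.array([1.8, 2.1, 2.6, 2.4, 3.0, 3.3])   # millions, from a card panel
reported_adds = np.array([1.9, 2.2, 2.8, 2.5, 3.1, 3.5])  # millions, from past quarterly reports

# Fit a simple linear relationship on past quarters, then project the next one.
slope, intercept = np.polyfit(card_signups, reported_adds, deg=1)
next_quarter_panel = 3.6
forecast = slope * next_quarter_panel + intercept
print(f"Forecast subscriber additions: {forecast:.2f}m")
```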

The volume of available data is enormous. According to US quantitative fund manager Two Sigma, the world produces a billion gigabytes of data per hour. And much of this is new – IBM estimates that 90% of the data in existence today is less than two years old.

Nowadays, cheap cloud storage and clever programming can comfortably handle the mass of data, and these resources are becoming prevalent in the fund management industry. The real trick is determining which data is relevant and helpful for fund managers seeking to reach the right financial conclusions. Essentially, there is little point counting the Guangzhou cranes if the satellite cannot see that a third of them are, in fact, idle.
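
The crane example can be put in similarly concrete terms. The short sketch below assumes a hypothetical table of satellite observations with an activity flag; the point is simply that the filtered count, not the headline count, is the number worth feeding into a model.

```python
import pandas as pd

# Hypothetical crane observations derived from satellite imagery; the site IDs
# and activity flags are invented to illustrate the point, not real data.
cranes = pd.DataFrame({
    "site_id": ["GZ-001", "GZ-001", "GZ-002", "GZ-003", "GZ-003", "GZ-003"],
    "is_active": [True, False, True, True, False, False],  # e.g. inferred from movement between satellite passes
})

# A naive headline count treats every visible crane as evidence of construction activity.
naive_count = len(cranes)

# Filtering on the activity flag keeps only cranes that are actually working,
# which is the figure that says something about real housing supply.
active_count = int(cranes["is_active"].sum())

print(f"All cranes: {naive_count}, active cranes: {active_count}")
```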

Too Many Words

So where will quality data come from? Fund managers have traditionally relied upon investment banks and brokers for the research and information needed to build their models and make investment decisions. But these organisations have large legacy research departments that mostly provide a wordy narrative. While they do build earnings, balance sheet and industry sales forecast models, which can be amalgamated and fed into a fund manager’s quant models, much of their output is in the form of text, opinion and soft variables, such as evaluations of a company’s management team.

Data analytics companies that service the fund management industry have sprung up and can provide vast data sets of real-time information across numerous industries and markets. But in its raw form, much of this data is available to all and gives no genuine information edge. Back to the Guangzhou cranes: there is no advantage in counting them if everyone can do it. 

The real challenge, therefore, is how to make the data intelligent and achieve the highest possible signal-to-noise ratio. London-based Winton Asset Management, which manages over $30bn in pure quant funds, spends much of its time “cleaning” the vast datasets it receives in order to extract value from them. Multi-asset fund manager GAM, which runs systematic funds with its Cambridge-based Cantab team of data scientists, overlays alternative data sets, such as weather patterns, on financial data to identify assets that can outperform the market.
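
In practice, that cleaning and overlay work is often quite mundane. The sketch below, which uses synthetic data and an invented temperature-anomaly rule purely for illustration, shows the general shape of it; it does not describe Winton’s or GAM’s actual process.

```python
import numpy as np
import pandas as pd

# Minimal sketch with synthetic data; the column names, the merge key and the
# signal rule are assumptions for illustration, not any fund's actual methodology.
rng = np.random.default_rng(0)
dates = pd.date_range("2017-01-01", periods=120, freq="D")

prices = pd.DataFrame({"date": dates, "close": 100 + rng.normal(0, 1, len(dates)).cumsum()})
weather = pd.DataFrame({"date": dates, "avg_temp_c": 10 + rng.normal(0, 3, len(dates))})
weather.loc[rng.choice(len(dates), 10, replace=False), "avg_temp_c"] = np.nan  # simulate gaps

# "Cleaning": drop duplicate rows and fill short gaps before the data is used.
weather = weather.drop_duplicates(subset="date").sort_values("date")
weather["avg_temp_c"] = weather["avg_temp_c"].interpolate(limit=3)

# Overlay the alternative data set on the financial one.
merged = prices.merge(weather, on="date", how="inner")

# Toy signal: flag days where the temperature deviates sharply from its 30-day
# mean, the kind of anomaly a systematic model might test against later returns.
rolling_mean = merged["avg_temp_c"].rolling(30, min_periods=5).mean()
merged["temp_anomaly"] = merged["avg_temp_c"] - rolling_mean
print(merged.loc[merged["temp_anomaly"].abs() > 5, ["date", "temp_anomaly"]].head())
```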

New generation research firms, which have emerged as challengers to the investment banks’ research model, may be in a better position to provide the intelligent, clean data that quant funds increasingly need. These firms collectively analyse a multitude of companies, industries and markets and are unencumbered by the large investment bank research structures that rely on wordy narrative.

They distribute a streamlined, data-led primary research product and are probably best placed to develop methods to turn soft variables into data. They also have the financial market experience, and the know-how, to choose which data is relevant and to help interpret the results. They have the knowledge of, and contacts with, Guangzhou’s local construction companies to contextualise the crane data and add a qualitative layer to the AI-generated data.
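
One way to “turn soft variables into data” is simply to map qualitative assessments onto a numeric scale before they are joined to a model’s other inputs. The sketch below uses invented companies, categories and weights to show the mechanics; any real scoring scheme would be considerably more nuanced.

```python
import pandas as pd

# Hypothetical illustration: the companies, topics, ratings and scores below are
# invented; the point is only the mechanics of converting ratings into numbers.
assessments = pd.DataFrame({
    "company": ["A", "A", "B", "B", "B"],
    "topic": ["management_quality", "pricing_power", "management_quality", "pricing_power", "supply_risk"],
    "rating": ["strong", "neutral", "weak", "strong", "weak"],
})

rating_to_score = {"weak": -1, "neutral": 0, "strong": 1}
assessments["score"] = assessments["rating"].map(rating_to_score)

# One numeric feature per company and topic, ready to join onto a quant model's inputs.
features = assessments.pivot_table(index="company", columns="topic", values="score", aggfunc="mean")
print(features)
```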

Blending Human and Machine

As quant and quantamental investment strategies evolve, the real value for the fund management industry will come from devising methods to sift through the vast available data and blend it with the soft variables that are an everyday part of investment decision-making.

While data is increasingly ubiquitous and available to all, fund managers and their research service providers will need to focus on which data serves their analytical needs and enables them to gain an edge over the competition. What matters is how they frame their data requests, verify the quality of the data and quantify the relevance of individual data sets to their investment processes. In this, traditional human investment expertise is likely to remain crucial. The data analysis can only be as effective as the human know-how that sets its parameters.
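
As a rough illustration of what “quantifying the relevance” of a data set might mean in practice, the sketch below rank-correlates a candidate signal with subsequent returns, the familiar information coefficient. The data are synthetic and the threshold arbitrary; it simply shows the kind of first-pass test a fund might run before admitting a new data set into its process.

```python
import numpy as np
import pandas as pd

# Synthetic sketch of a first-pass relevance test: rank-correlate a candidate
# signal with next-period returns (the information coefficient). The data and
# the 0.05 threshold are invented for illustration.
rng = np.random.default_rng(1)
n = 250
signal = pd.Series(rng.normal(size=n))                   # candidate alternative-data signal
returns = 0.1 * signal + pd.Series(rng.normal(size=n))   # synthetic next-period returns

ic = signal.corr(returns, method="spearman")             # information coefficient
print(f"Information coefficient: {ic:.3f}")
print("Worth a closer look" if abs(ic) > 0.05 else "Probably noise")
```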

Both investment banks and new generation research firms have analysts and market professionals with years of experience in evaluating market trends and the instincts to spot mis-priced assets. These very human qualitative skills are likely to be decisive in enabling fund managers to work out what data they really need to look at and how to interpret it once it comes out of the other end of the machine.