Models – How do computers play chess?

Chess computers have been good enough to beat me my whole life. It took until 1997 for them to beat a reigning World Champion, when Deep Blue beat Kasparov. They can now comfortably beat all the best players in the world. But the development of Go computers has been far slower: it was only this year that AlphaGo defeated Lee Sedol, a 9-dan professional. What is the difference?

The most common reason given for why Go is harder is that it is more complicated. In chess there is a choice of 20 first moves; in Go the choice is 361. So in Go the number of permutations, and as a result the number of possible games, is far higher than in chess. Since computers play these games by simulating permutations, it makes intuitive sense that this is easier in chess than in Go.

This argument is logical but highly misleading. The problem is that BOTH games are insanely complex and unsolvable by brute force. Imagine I am trying to move a pair of large rocks. One weighs 100 tonnes. The other weighs 10,000 tonnes. Is it sensible to say that the second is 100 times harder to move? Or simply that both are immovable? Degrees of impossibility are not a very useful concept.

There is a more important difference between the two games. In chess we can build a simple model that acts as an excellent first approximation to evaluate who is winning. Just count the material, using a simple scoring of queen = 9 pawns, rook = 5 pawns etc., to come up with a single total for each side. The one with the higher number is winning. This is how beginners think of the game: the aim is to grab material. Thus, it is easy to code a simple model to get the computer started. Once this first-order approximation is worked out, second-order models can be added, such as pawn structure, space advantage or use of open files. In Go there is no such simple evaluation metric, and how they managed to program a computer to win is a fascinating topic and likely a separate post on AI.
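The material-count model really is as simple as it sounds. Here is a minimal sketch of it; this is an illustrative toy, not an actual engine evaluation, and the list-of-pieces board representation is an assumption made purely for brevity.

```python
# First-order chess evaluation: count material in pawn equivalents.
# Board representation (assumed for illustration): a list of piece
# letters, upper case = White, lower case = Black.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(pieces):
    """Return White's material minus Black's, in pawn equivalents."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# White has a rook where Black has a knight ("the exchange"): +2 pawns.
print(material_balance(["K", "Q", "R", "P", "k", "q", "n", "p"]))  # 2
```

A real engine layers the second-order terms (pawn structure, open files, king safety) on top of exactly this kind of single number.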

A good first-order approximation often gets you a decent way to a solution. If you don’t have one, you may struggle to find any solution in a feasible amount of time, as the early versions of Go computers found.

This has an interesting link to the way I approach financial markets and economics. I think the most important use of time is thinking about appropriate first-order approximations that aid a general understanding of what is really going on. But this is often obscured by the influences around us, such as the news or overly complex analysis.


Reminiscences of a Stock Operator

I first read this book when I was 17, but it took me many years of trading and painful experiences to realise that the character who had the most to teach me was Partridge. He is the older, experienced trader who, whenever presented with a stock tip by an excited young trader, would always reply:

“Well, this is a bull market, you know!” as though he were giving you a priceless talisman wrapped inside a million-dollar accident-insurance policy. And of course, I did not get his meaning.


Useful trading models

In economics and finance, it is the development and understanding of first-approximation models that is the most useful and the most important. This is primarily the method I am using for the models of asset market pricing described in other posts. Far too much effort and time is spent on far more “complex” analysis and models, which often focus on second- or third-order drivers by assuming away the first-order ones. The “news” constantly blaring out on cable TV is at best a focus on factors causing minute differences in asset prices. At worst, it is just distracting white noise. Precise directions for the last 100m of your trip are not much use if you are not sure which town you are going to. It is far better to be approximately right than to be precisely wrong.

Trade Ideas

A common type of trade idea proposed to me will be in this form:

  1. There is a recent development or upcoming event which matters for the Australian dollar (substitute in any other market) e.g. a piece of economic data
  2. We should buy/sell it

What is rarely done, however, is to consider how important this driver is in context. Commonly the idea is logical but essentially rests on the assumption that the rest of the current market price is correct. This approach fits well with many people’s education, in which assumptions of efficient markets are often embedded without them realising. The reason these trade ideas often fail is that the new information will dominate market movements only if the more important drivers of the currency are already correctly priced. Instead of assuming the market is fairly priced, I would prefer to question whether this first-order approximation is appropriate before moving on.

Australian Dollar Example

To use an Australian dollar example, the value of the currency doubled between 2001 and 2008.

It did not rise like this because of a succession of incremental pieces of random news which happened to cumulate in a massive movement. It happened because the currency was, by first-order approximation, far too cheap. A useful first-order approximation model for currencies is Purchasing Power Parity (indices are freely available and calculated by the OECD). The chart below shows that the PPP value of the AUD was very steady at around 0.70 US dollars. In 2001 the currency was very cheap, and when it approached parity it was very expensive. Capturing these kinds of moves is where I spend my time and historically where I have made my biggest profits.
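The PPP check itself reduces to measuring how far the spot exchange rate sits from the PPP fair value. A minimal sketch follows; the rates used are illustrative round numbers in the spirit of the AUD example above, not actual OECD data.

```python
# First-order currency check: deviation of spot from PPP fair value.
# Rates below are illustrative, not actual OECD PPP estimates.

def ppp_deviation(spot, ppp_fair_value):
    """Percentage deviation of the spot rate from PPP fair value.
    Negative = currency looks cheap; positive = looks expensive."""
    return (spot / ppp_fair_value - 1) * 100

# AUD/USD at 0.50 against a PPP value of ~0.70: deeply cheap (~-29%).
print(round(ppp_deviation(0.50, 0.70), 1))
# AUD/USD at parity against the same PPP value: expensive (~+43%).
print(round(ppp_deviation(1.00, 0.70), 1))
```

The point is not the precision of the number but the sign and rough magnitude: a 30–40% gap from the first-order anchor is the kind of dislocation worth spending time on.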

Conclusion

I have learned to focus on the bigger picture and look for large market movements. In my experience, these are most likely to happen when the market price is a long way from a good first-approximation model. I therefore put time into building these first-approximation models across asset classes, as I have briefly described so far for fixed income, with posts on currencies and equities to follow. Just as in chess, a good understanding of a first-approximation model can get you a long way. Focusing on very new information, or on complex models which are actually third-order features, while neglecting the first-order drivers, only leads to confusion and major mistakes.

How to reduce your Risk Part III

Trick question (click here for the question, and here for the answers)
There is no right answer because risk cannot be minimised.
It can only be transformed from one type into another.


What did people choose?

Option A was the most common answer. For those who trade in financial markets, this may be surprising.

If I reframed the question and asked:

  • Please calculate the DV01 of Options A and B
  • Please calculate the VAR of Options A and B
  • Please tell me which of A or B has greater risk

You would quickly work out that B has zero DV01 and zero VAR. Hence, by the definition of risk used on trading floors, A has the higher risk. Unsurprisingly, when I ask this question to a room of traders at investment banks, the overwhelming answer is B, because that is the context in which they think about “risk”.
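For readers outside trading floors, DV01 is simply the dollar price change of a position for a one-basis-point move in yields, and it can be computed by bumping the yield and repricing. The sketch below uses an illustrative fixed-rate bond, not the actual Options A and B from the question.

```python
# DV01 (dollar value of a basis point) by bump-and-reprice.
# The bond terms here are illustrative, not the question's Options A/B.

def bond_price(face, coupon_rate, yield_rate, years):
    """Price a bond with annual coupons by discounting its cashflows."""
    coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                  for t in range(1, years + 1))
    return coupons + face / (1 + yield_rate) ** years

def dv01(face, coupon_rate, yield_rate, years, bump=0.0001):
    """Dollar loss from a one-basis-point rise in yield."""
    return (bond_price(face, coupon_rate, yield_rate, years)
            - bond_price(face, coupon_rate, yield_rate + bump, years))

# A $1m 10-year par bond loses roughly $850 per basis point...
print(round(dv01(1_000_000, 0.03, 0.03, 10), 2))
# ...while cash rolled overnight has essentially zero DV01, which is
# why a trading floor calls the long-term asset the "risky" one.
```

The same position that screens as riskless on this metric can be the riskier choice against a long-dated liability, which is exactly the point of the sections that follow.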

If I ask the question of people who work in property or private equity, then I am more likely to get the answer A, as certainty of cashflow is critical, especially when thinking about assets and liabilities. In the accrual accounting world of regular banking, they think about Earnings at Risk (EAR), and Option A is the way to reduce that risk.

The answer given likely relates to your personal circumstances and the exact framing of the question. If I had the time, running a series of experiments with slightly different wording, rates or quantities would, I think, give interesting results.

But for now, the practical lesson is important: people do not instinctively understand risk at all well. We are presented with questionnaires from investment advisors which ask for our risk preferences with no definition of risk. The portfolios they typically recommend suggest that bonds are low risk and equities high risk.

My approach

I think the best way to think about this question is in terms of a balance sheet. Whether choice A or B “reduces” your risk depends on the extent to which it matches the tenor of your liabilities. If your liability is short term, then Option B is the sensible answer. Investment banks have no corresponding long-term liability apart from capital, and they typically hold wafer-thin amounts of capital against mark-to-market assets, so they naturally recognise A as a risk. For someone who is keenly aware of what they see as fixed longer-term liabilities, such as paying school fees or retirement expenses, the choice of a long-term asset, i.e. Option A, is far more natural.

Risk matters

Whenever risk gets mentioned, I very rarely observe a discussion of this nature. Often only one side of the balance sheet is examined, and the vastly important implicit assumptions on the liability side are not considered. I am an advocate of multiple forms of risk measurement, including VAR, but only if each is used in the correct context. Many of the worst financial disasters have occurred by taking a risk or accounting concept that was appropriate in one context and transplanting it to another. AIG and Enron are the biggest examples that spring to mind.

Framework for valuing fixed income – Long end

I do a very different analysis of the long end of the yield curve compared to the front end (Framework for valuing fixed income – Front end). Mathematically, you could take the same approach and bootstrap the curve from a complete set of forecasts of short-term rates for the next 30 years. But this seems a bit silly and raises the question of how you would get those forecasts anyway.
To simplify the analysis, we have to work out what the long-term “equilibrium” rate will be, and ignore for now how we get there, or use the analysis from the front end to build a path.

Simple Hypothesis: Long-Term rates = Nominal GDP

An approach that appeals to me is to look for a link between long-term interest rates and long-term nominal GDP growth. I think of it as a “Wicksellian” natural rate to which the market will tend to revert: if interest rates are consistently far away from the growth rate of nominal GDP, then there would be a persistent drag or stimulus to growth which would not be sustainable. You can get to a similar idea from several different economic frameworks.

If we look at the data, the hypothesis looks reasonable. Below is the 10-year average of nominal GDP growth alongside the 10y10y interest rate for the US. The 10y10y rate is the market-implied 10-year interest rate starting 10 years from now, which can be backed out from today’s yield curve.
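The 10y10y calculation comes from a simple no-arbitrage argument: compounding for 20 years at the 20-year yield must give the same result as compounding for 10 years at the 10-year yield and then for another 10 at the implied forward rate. A sketch, using illustrative zero-coupon yields rather than actual market levels:

```python
# Implied forward rate between two points on the (zero-coupon) curve.
# No-arbitrage: (1+y20)^20 == (1+y10)^10 * (1+f)^10, solve for f.
# Yields below are illustrative, not live market data.

def forward_rate(y_short, y_long, t_short, t_long):
    """Annualised forward rate implied between t_short and t_long."""
    growth = (1 + y_long) ** t_long / (1 + y_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1

# 10y yield at 2.0% and 20y yield at 2.5% imply a 10y10y of about 3.0%.
print(round(forward_rate(0.020, 0.025, 10, 20) * 100, 2))
```

Using the forward rather than the spot 10-year yield strips out the near-term cyclical path, which is what makes it a cleaner comparison against long-run nominal GDP growth.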

Before the early 2000s, interest rates were consistently a little higher than nominal GDP growth. Academics were happy with this and explained it in terms of a premium which investors would demand to own bonds. They were then confused in the early 2000s by the “conundrum” that long-term yields dipped, explaining it either by Chinese ownership of Treasuries or a global “savings glut” which was forcing down yields.

Outlook for Nominal GDP

Current yields do not look very remarkable to me, but they are only correct if you think that nominal GDP growth will remain as low as it has been for the past decade. The most prominent argument that we should expect this to continue comes from Larry Summers and his promotion of the idea of “Secular Stagnation” – http://larrysummers.com/2016/02/17/the-age-of-secular-stagnation/

I find these arguments a little hard to engage with, as we must recognise how utterly useless long-term forecasts of anything generally are. I should admit that I am not a big fan of anything which looks to me like a restatement of the savings glut theory, but I do not want to engage here in an academic debate. As a more practical question, I think the burden of proof is on ideas such as Secular Stagnation and the “New Normal” to show that the world will need permanently far lower rates than it has had in the past. Arguing that nominal GDP growth will be lower, due to slower population growth, demographics and potentially lower productivity, is easy. Explaining why it should be 3% lower is not so easy.

My view is that this economic cycle does not require new theories to explain it. A financial crisis results in a very deep recession and leaves scars which mean the recovery is slower than many expect. These hangovers from the financial crisis are what Yellen refers to as “headwinds” which are slowing down the economy. Risk aversion among consumers and businesses after such a bad recession is only to be expected and the impairment of the credit channel after such a disruption is also understandable. But there is no reason to think that these headwinds are permanent. They can abate and we can return to a world similar to the one before, both in terms of the level of nominal GDP and also the relationship between interest rates and growth. The financial crisis has been traumatic, especially for countries like the US and the UK, that have not seen one like this recently. However, the history of financial crises is that they are worse than people think, but they are not permanent.

Are we renormalizing?

Unemployment fell slowly but is now down to 4.5%. Wages have been sluggish but are now picking up.

If I draw the first chart again, but this time use a 5-year rather than a 10-year moving average, then perhaps I can argue that the market is reacting too slowly. Nominal GDP growth has been rising recently and, with rising wages and inflation, can easily be expected to continue doing so. If that is true, then market rates are too low.

Why are long term rates still so low?

The idea that long-term rates are too low is hardly new. After all, this was the whole point of QE! The central banks buy huge amounts of long-term debt to drive bond prices up and yields down. This helps to stimulate the economy and boost other asset classes, which look relatively cheap compared to bonds, and so drives reallocation flows.

As I mentioned in this post (https://appliedmacro.com/2017/05/01/government-debt-framework-uk-follow-up/), we are living in a new era of financial repression. Therefore, I really do not need any grand theory from the supply side of the economy to explain low rates. I just look at the huge boost in demand for bonds from the central banks.

Is there a catalyst for change?

  1. One potential catalyst would be from the front end. If the Fed hikes rates faster than the market expects, then this can cause a shock to ripple down the whole curve. We saw an extreme version of this in 1994.
  2. If wages start to accelerate then the Fed, economists and market participants would have to radically reassess their assumptions about the inflation outlook and the appropriate level of rates. If you are very confident this cannot happen, you have more faith in our understanding of this type of macro variable than I have.
  3. Even without any fundamental driver we may see a repricing simply from a change in the supply and demand dynamics of the bond market.

QE buying has been high for the past few years but it is finally slowing down. This may be the catalyst for a repricing of bonds.

Conclusion

A simple and yet historically useful framework for considering long-term rates is to compare them with nominal GDP growth. In recent years, we have seen a major downshift in long-term expectations for both nominal GDP and the level of rates relative to nominal GDP. While many of the arguments justifying this change as permanent have some merit, I think it is more temporary than current market pricing implies. This means that I do not think bond markets are cheap. In fact, I think they are wildly expensive.