## Models – How do computers play chess?

Chess computers have been good enough to beat me my whole life. It took until 1997 for them to beat a reigning World Champion, when Deep Blue defeated Kasparov. They can now comfortably beat all the best players in the world. But the development of Go computers has been far slower: it was only this year that AlphaGo defeated Lee Sedol, a 9-dan professional. What is the difference?

The most common reason given for why Go is harder is that it is more complicated. In chess there is a choice of 20 first moves; in Go the choice is 361. So in Go the number of permutations and, as a result, the number of possible games is far higher than in chess. Since computers play these games by simulating permutations, it makes intuitive sense that this is easier in chess than in Go.
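To put rough numbers on this, here is a small sketch that treats the first-move counts above (20 for chess, 361 for Go) as a crude constant branching factor and counts move sequences to a fixed depth. This is an illustration, not a real model of either game: actual branching factors vary by position, and the 40-ply depth is an arbitrary choice.

```python
# Crude illustration of game-tree explosion, using the first-move
# counts from the text as constant branching factors (an assumption;
# real branching factors vary as the game progresses).

def sequences(branching: int, depth: int) -> int:
    """Number of distinct move sequences of the given depth."""
    return branching ** depth

DEPTH = 40  # arbitrary illustrative depth in plies

chess = sequences(20, DEPTH)
go = sequences(361, DEPTH)

# Express each as an order of magnitude.
print(f"chess: ~10^{len(str(chess)) - 1}")  # ~10^52
print(f"go:    ~10^{len(str(go)) - 1}")     # ~10^102
```

Both numbers dwarf anything a computer could ever enumerate, which is exactly the point made below: the gap between them matters far less than the fact that both are out of reach of brute force.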

This argument is logical but highly misleading. The problem is that BOTH games are insanely complex and unsolvable by brute force. Imagine I am trying to move a pair of large rocks. One weighs 100 tonnes. The other weighs 10,000 tonnes. Is it sensible to say that the second is 100 times harder to move? Or simply that both are unmovable? Degrees of impossibility are not a very useful concept.

There is a more important difference between the two games. In chess we can build a simple model that acts as an excellent first approximation for evaluating who is winning. Just count the material and use a simple scoring of queen = 9 pawns, rook = 5 pawns, etc. to come up with a single total for each side. The side with the higher number is winning. This is how beginners think of the game: the aim is to grab material. Thus it is easy to code a simple model to get the computer started. Once this first-order approximation is worked out, second-order models can be added, such as pawn structure, space advantage or use of open files. In Go there is no such simple evaluation metric; how they managed to program a computer to win is a fascinating topic, and likely a separate post on AI.
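The material-counting idea above is simple enough to sketch in a few lines. This is a minimal toy, assuming a made-up position encoding (piece-count dictionaries per side); the piece weights are the conventional beginner values the text mentions, and a real engine would layer the second-order terms on top of this.

```python
# First-order chess evaluation: count material with the standard
# beginner weights (queen = 9 pawns, rook = 5, etc.).
# The position encoding below is a made-up example, not a real format.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # kings excluded

def material_score(position: dict) -> int:
    """Positive means White is ahead, negative means Black; zero is level."""
    white = sum(PIECE_VALUES[p] * n for p, n in position["white"].items())
    black = sum(PIECE_VALUES[p] * n for p, n in position["black"].items())
    return white - black

# Example: White has traded a knight (3) for one of Black's rooks (5).
position = {
    "white": {"P": 8, "N": 1, "B": 2, "R": 2, "Q": 1},
    "black": {"P": 8, "N": 2, "B": 2, "R": 1, "Q": 1},
}
print(material_score(position))  # +2: White is ahead on material
```

A few lines like this gave early chess programs a usable objective function to search against; the absence of any comparably cheap score for a Go position is the gap the rest of the post turns on.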

A good first-order approximation often gets you a decent way to a solution. Without one, you may struggle to find a solution in any finite amount of time, as the early versions of Go computers discovered.

This has an interesting link to the way I approach financial markets and economics. I think the most important thing is to spend time on appropriate first-order approximations that aid a general understanding of what is really going on. But the influences around us, such as the news or overly complex analysis, often obscure this.

## Reminiscences of a Stock Operator

I first read this book when I was 17, but it took me many years of trading and painful experiences to realise that the character who had the most to teach me was Partridge. He is the older, experienced trader who, whenever presented with a stock tip by an excited young trader, would always reply:

> “Well, this is a bull market, you know!” as though he were giving you a priceless talisman wrapped inside a million-dollar accident-insurance policy. And of course, I did not get his meaning.

In economics and finance, it is the development and understanding of first-approximation models that is most useful and most important. This is primarily the method I use for the models of asset-market pricing described in other posts. Far too much effort and time is spent on more “complex” analysis and models, which often focus on second- or third-order drivers by assuming away the first-order ones. The “news” constantly blaring out on cable TV is at best a focus on factors causing minute differences in asset prices. At worst, it is just distracting white noise. Precise directions for the last 100m of your trip are not much use if you are not sure which town you are going to. It is far better to be approximately right than to be precisely wrong.

A common type of trade idea proposed to me will be in this form:

1. There is a recent development or upcoming event which matters for the Australian dollar (substitute in any other market) e.g. a piece of economic data