Since the beginning of the computer age, people have been comparing the capabilities of “electronic brains” to their own.
One of the ways to approach this matter is to use an abstraction of a game, specifically one that has a certain set of rules and victory conditions, and see whether a computer player will be able to win against a human player. To put it another way, can a computer out-think a human opponent?
Playing games like Chess enables us to test and train AI, so that one day it can not only solve problems faster than we can, but also tackle questions we have yet to answer at all.
That said, it’s been a gradual journey, with more and more advanced AIs being developed for more complicated scenarios and games. In this article, we are going to follow the trail of the competition between us and computers, as it relates to playing games of various complexity and properties. In this first part, we will look at some of the oldest challenges, such as Backgammon, Chess, or Go, which have been played for centuries, as well as younger inventions like Scrabble and even a modern television show such as Jeopardy!
1950s – Putting Games and AI Together
Artificial Intelligence, the branch of science crucial to this endeavour, emerged in the fifties and, around this same time, the first simple programs were written to play Noughts and Crosses, Draughts and Chess. The implications of this race between man and machine, however, go vastly beyond the games and entertainment industry.
But every good story must start somewhere and our trail starts in 1950.
Noughts and Crosses – Baby Steps
Noughts and Crosses, also known as Tic Tac Toe, is a simple, classic pen and paper game, where a player wins by creating 3 marks in a horizontal, vertical or diagonal row on a 3 by 3 grid. Because the game is so constrained, it’s easy to show that perfect play on both sides always leads to a draw.
Noughts and Crosses was also the first game to take electronic form. In 1950, a computer called Bertie the Brain played it against visitors at the Canadian National Exhibition. It did not win every game, but it was a capable opponent.
That changed two years later, when OXO, a program developed for the EDSAC (Electronic Delay Storage Automatic Calculator), arrived. It was able to play perfectly across all 26,830 possible game sequences (excluding board rotations and reflections). The program was a classic example of the minimax algorithm, evaluating a tree of possible moves. The problem space is small enough even for a seventy-year-old computer that accepted digits from 1 to 9 via a rotary telephone dial as input, and displayed the results on a 35 by 16 pixel CRT display.
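The minimax idea behind a perfect Noughts and Crosses player can be sketched in a few lines. This is an illustrative implementation, not OXO's original EDSAC code: it recursively explores every continuation and assumes both players play perfectly.

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position from X's point of view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board is full with no winner: a draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player                    # try the move...
            scores.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = ' '                       # ...then undo it
    # X picks the best outcome for X, O picks the worst for X
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board is a draw:
print(minimax([' '] * 9, 'X'))  # → 0
```

Searching the full tree from an empty board visits every one of those game sequences, which is exactly why such a small problem space was tractable even in 1952.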
As a fun fact, the game of Noughts and Crosses was famously featured in the 1983 movie WarGames, where it served as an analogy to global thermonuclear war, teaching an artificial intelligence that some games cannot be won.
Backgammon – The First Triumph
Backgammon is one of the oldest known board games, dating back some five millennia to ancient Mesopotamia. Players roll dice and move 15 pieces each across a board of 24 triangles; whoever gets their pieces off the board first wins.
This game is vital to our story, as it produced the first case in history of a computer program beating a reigning human champion in a recognised intellectual activity. In 1979, Hans Berliner’s program BKG 9.8 played a match against world champion Luigi Villa and won 7 games to 1. The program relied on carefully hand-crafted evaluation functions built according to Berliner’s original SNAC (Smoothness, Nonlinearity, and Application Coefficients) principles.
Draughts – A Strategy Solved
Draughts, also known as Checkers, is often called Chess’ younger brother. It is played on the same 8 by 8 board, but with 12 identical pieces per side (although other variants exist as well). The goal is to capture all the opponent’s pieces by jumping over them in diagonal moves across the board.
The first program to play Draughts was written by Christopher Strachey and ran on the Ferranti Mark 1 computer in 1951, making it one of the first attempts in our story to challenge a human player with a machine opponent. Arthur Samuel’s famous self-learning Checkers program followed soon after on the IBM 701 and, by the early sixties, it was able to defeat respectable amateur players.
However, we had to wait until 1994 to see a human champion fall, when the program Chinook claimed the world title from Marion Tinsley, who withdrew from their match for health reasons after six drawn games (in their 1992 encounter, Tinsley had beaten Chinook 4 to 2, with 33 draws). In 2007, the team behind Chinook announced that they had solved the game of Draughts, proving that perfect play on both sides always results in a draw.
Chess – Diverse Movement and Massive Parallelism
The Game of Kings, as Chess is also known, is one of the most prominent ways to prove superiority over one’s opponent, as we remember from the Cold War era, when Soviet Chess players were an important part of communist propaganda.
This game is more complicated than Draughts, as we have many more options for moving our pieces at any moment of the game, which makes it more problematic for traditional AI methods based on searching and scoring a tree of possible moves many turns ahead.
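The standard refinement of plain minimax used by classical chess engines is alpha-beta pruning: branches that provably cannot change the result are skipped, which lets the same hardware search much deeper. The sketch below is generic; `evaluate` and `children` are placeholders standing in for a real engine's evaluation function and move generator.

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Minimax search with alpha-beta pruning over an abstract game tree."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # the opponent will never allow this line: prune it
        return value
    else:
        value = float('inf')
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: inner nodes are lists, leaves are scores.
tree = [[3, 5], [6, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
print(alphabeta(tree, 2, float('-inf'), float('inf'), True,
                evaluate, children))  # → 6
```

Deep Blue ran a heavily parallelised version of exactly this kind of search across its custom chess hardware.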
Nevertheless, in a famous 1997 match, IBM’s Deep Blue machine beat the reigning world champion, Garry Kasparov, using this method, with a close score of 3.5 to 2.5. Deep Blue took advantage of massively parallelised, custom-built VLSI (Very Large Scale Integration – sometimes it’s just better as an acronym) chips to conduct its computations: it had 30 nodes, each containing a general-purpose CPU, together orchestrating 480 such chips. Many journalists back then announced this as the end of the man versus machine race, with a silicon winner.
This was, however, vastly premature. There were still many more challenges to come…
Scrabble – Imperfect Information and Malicious Moves
Scrabble is over 80 years old, yet it’s still quite new when compared to the games we just explored. In this game, players build a crossword on a 15 by 15 board, trying to score a maximum number of points, which vary per letter and location.
As opposed to the other games we’ve considered so far, Scrabble introduces an element of imperfect information: we don’t know which letters our opponent holds, so a machine has to reason about data that is hidden from it.
We had to wait until 2007 for the first AI victory, when the computer program Quackle beat human champion David Boys. The program follows a brute-force approach similar to what we have seen before, building and evaluating a tree of possible moves. What is interesting about a computer’s Scrabble strategy, though, is that instead of simply playing the highest-scoring word available, it may estimate its opponent’s rack and, based on probability, choose a move with less immediate benefit for itself but one that blocks the opponent’s best options, shifting the odds of winning in the computer’s favour more efficiently.
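One standard way to reason under imperfect information is to sample possible opponent racks from the tiles we haven't seen, and judge each candidate move by its expected outcome rather than its raw score. The sketch below is a toy illustration of that idea, not Quackle's actual code; `reply_score` is a hypothetical placeholder for a real move generator estimating the opponent's best reply.

```python
import random

def expected_value(move, unseen_tiles, reply_score,
                   samples=1000, rack_size=7, seed=0):
    """Average (our score minus the opponent's best reply) over sampled racks.

    `move` is a (word, score) pair.  `reply_score(move, rack)` is a stand-in
    for a real engine's estimate of the strongest reply available to an
    opponent holding `rack`.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        rack = rng.sample(unseen_tiles, rack_size)  # one plausible hidden hand
        total += move[1] - reply_score(move, rack)
    return total / samples
```

A move that scores a few points less but consistently leaves the sampled racks with weak replies can come out ahead on this metric, which is exactly the "malicious" blocking behaviour described above.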
Jeopardy! – Natural Language Understanding
Jeopardy! is a television show where contestants are presented with general-knowledge clues and must respond in the form of a question. To win the game, a player has to combine reasoning with knowledge in a seemingly unbounded problem space.
In a 2011 exhibition match, IBM’s Watson machine defeated two of the show’s highest-earning professional contestants – Ken Jennings and Brad Rutter. Watson is a complex computer system that utilises Natural Language Processing, Information Retrieval, Knowledge Representation, Automated Reasoning, and Machine Learning to answer arbitrary questions.
When confronted with a clue, Watson decomposes it into chunks and runs them in parallel through over 100 different natural language processing algorithms that formulate hypotheses based on its knowledge databases. It then performs soft filtering and scores these hypotheses against further evidence. All these outputs are then merged and ranked against each other, using a model trained with Machine Learning. As sources of information, Watson uses custom-organised encyclopaedias, dictionaries, various articles, books, and online sources. The original hardware underneath consisted of 90 servers built on eight-core POWER7 processors.
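The final merge-and-rank step can be sketched very simply. This is a drastically simplified illustration of the idea, not IBM's code: several independent scorers each rate every candidate answer, and a learned weight per scorer combines the evidence into one ranking.

```python
def rank_hypotheses(candidates, scorers, weights):
    """Rank candidate answers by a weighted sum of independent evidence scores.

    `candidates` is a list of answer strings; each function in `scorers` maps
    a candidate to a score in [0, 1]; `weights` holds one learned importance
    value per scorer (in a real system these come from Machine Learning).
    """
    merged = []
    for c in candidates:
        score = sum(w * s(c) for w, s in zip(weights, scorers))
        merged.append((c, score))
    return sorted(merged, key=lambda pair: pair[1], reverse=True)

# Hypothetical scorers: one checks a knowledge base, one measures type fit.
scorers = [lambda c: 1.0 if c == 'Chicago' else 0.2,
           lambda c: 0.7]
print(rank_hypotheses(['Toronto', 'Chicago'], scorers, [0.8, 0.2])[0][0])
# → Chicago
```

The key design point is that no single algorithm needs to be right on its own; the trained weights let strong evidence from any scorer dominate weak evidence from the others.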
Go – Complexity Explosion and Deep Learning
Go was invented in China over 2,500 years ago and is still played in its original form. Players place stones on a 19 by 19 board, aiming to surround more territory than their opponent.
Yet, despite its simple rules, Go is much more difficult for computers than Chess. In Chess, we have an average of 37 possible moves at a time while, in Go, it’s over 150. It’s also more difficult to score a move’s value, as opposed to counting the material advantage gained upon capturing an opponent’s piece, as is the case in Chess.
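The gap in branching factor compounds exponentially with search depth, which is why brute-force tree search stalls in Go. A quick back-of-the-envelope calculation using the rough averages above:

```python
# Approximate average branching factors quoted above.
chess_branching, go_branching = 37, 150

for depth in (2, 4, 6):
    print(depth, chess_branching ** depth, go_branching ** depth)

# Just 6 plies ahead, Go's game tree is already ~4,400 times larger:
print(go_branching ** 6 // chess_branching ** 6)  # → 4439
```

Every extra ply multiplies that gap again, so hardware scaling alone was never going to close it.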
After Kasparov’s defeat, many thought we would have to wait a hundred years to achieve the processing power required to challenge a human champion in Go, yet things turned out differently! In 2016, AlphaGo, created by the DeepMind team, won a famous match against 9-dan professional player Lee Sedol, finishing 4 to 1, and just a year later it defeated the world’s top-ranked player, Ke Jie.
AlphaGo uses a much more sophisticated method than anything we discussed earlier. While it also constructs and searches a tree to find the optimal move (via Monte Carlo Tree Search), that search is guided by deep neural networks trained on a vast set of historic matches played between human opponents.
This Deep Learning technique attempts to mimic how our brains process information. The network has many layers, each processing the input at a different level of abstraction. Earlier versions of AlphaGo relied on Graphics Processing Units, whose many parallel cores efficiently perform the matrix operations at the heart of neural networks. Later versions went a step further, running on Tensor Processing Units – hardware strictly optimised for Machine Learning calculations.
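The "policy network" idea can be sketched in miniature: a network takes a board encoding and outputs a probability for every legal move, and the tree search spends its time on the most promising ones. AlphaGo's real networks are deep convolutional models with trained weights; this toy version is a single dense layer with random (untrained) weights, purely to show the shape of the computation.

```python
import math
import random

random.seed(0)
BOARD_CELLS = 19 * 19  # one input and one output per intersection

# Hypothetical random weights; in AlphaGo these were learned from millions
# of human positions and, later, from self-play.
weights = [[random.gauss(0, 0.01) for _ in range(BOARD_CELLS)]
           for _ in range(BOARD_CELLS)]

def policy(board):
    """Map a board (list of 361 floats) to a probability per move (softmax)."""
    logits = [sum(w * x for w, x in zip(row, board)) for row in weights]
    m = max(logits)                       # subtract max for numeric stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = policy([0.0] * BOARD_CELLS)
print(abs(sum(probs) - 1.0) < 1e-9)  # → True: a valid probability distribution
```

GPUs and TPUs earn their keep on exactly the matrix product inside `policy`, performed here one dot product at a time but in practice as one massive parallel operation.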
Summary – What Have We Learned So Far?
This is not the end of our journey – in fact, there’s so much to cover that I’ve broken it down into two parts. Even this early on, though, it’s clear we’ve come a long way. When options are limited, with set moves and a restricted, predefined number of outcomes, technology solved such problems long ago – often it’s just a matter of scaling processing power up with the size of the game.
However, we’ve now started to see problems that involve hidden information, or rely on ‘understanding’ your opponent. This is where new approaches have emerged, truly showcasing the potential of AI, Machine Learning and Deep Learning solutions over traditional tree-based decision making.
We have only two years left to cover in our story but, believe me, this is where things start to get really interesting. In part two, we will talk about challenges with problem spaces orders of magnitude beyond the game of Go, including some that only a short while ago were considered too intractable to solve in the near future. Most importantly, we will also discuss how those achievements translate into a plethora of other fields beyond games, so make sure you don’t miss the second part of this article.
In the meantime, you can check out Tomasz Zielinski’s blog post on our own attempts in this field. We wrote an AI bot to play Azul – a simple yet very popular abstract family game!
Sources

- How Quackle plays Scrabble
- AIs are better gamers than us, but that’s OK
- Top 5 Computer vs. Human Game Matchups
- Backgammon Computer Program Beats World Champion
- A Computer Program Wins Its First Scrabble Tournament
- The Priesthood at play: computer games in the 1950s
- Computer Wins on ‘Jeopardy!’: Trivial, It’s Not
- Meet Bertie the Brain, the world’s first arcade game, built in Toronto