Do Androids dream of Electric Spreadsheets? A Beginner’s Guide to AI in Treasury – Part I

| 24-02-2020 | treasuryXL | Cashforce |

Do Androids Dream of Electric Sheep? by Philip K. Dick, adapted into the movie Blade Runner in 1982 (yes, it’s that old), ponders whether technology can replace humanity in every aspect of life. Whether advanced technology could ever truly dream is a philosophical conundrum I’ll leave to the brightest among us. And while the Hollywood doomsday scenario of robots rising against their human creators is far from happening, the reality is that computers have already surpassed human beings at certain forms of strategic thinking. This rise of artificial intelligence has the potential to disrupt any industry, including treasury, but often leaves you wondering whether the hype lives up to reality.


[Spoiler alert] In the dystopian world of Blade Runner, the protagonist Deckard, a bounty hunter or “blade runner”, hunts outlawed androids or “replicants”, feeling no remorse because they are machines. An interpretation supported by the iconic unicorn dream sequence is that his memories were artificially implanted, implying he may be a replicant himself. Is this where artificial intelligence will eventually take us?

Man vs. Machine – A Boardgame Evolution 

In 1997, IBM’s computer Deep Blue beat Garry Kasparov, the reigning world champion, in a chess match. Deep Blue could draw on thousands of high-level chess games stored in its system. When evaluating a move, it would compare the outcomes of different scenarios and, through sheer number-crunching, pick the move that led to the best position on the board. This milestone was heralded as a boon for technology and widely viewed as disruptive for many industries.
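
That brute-force idea is easy to sketch. Below is a minimal, hypothetical Python illustration of minimax search, the principle classic chess engines build on. Deep Blue’s real search was enormously more sophisticated, and the game interface used here (legal_moves, apply, is_over, score) is invented for the example:

```python
# Minimal sketch of minimax search: the "number-crunching" principle behind
# classic chess engines. The `game` object is a hypothetical interface; a real
# engine adds alpha-beta pruning, tuned evaluation functions and deep search.

def minimax(state, depth, maximizing, game):
    """Return the best achievable score from `state`, looking `depth` plies ahead."""
    if depth == 0 or game.is_over(state):
        return game.score(state)          # static evaluation of the position
    scores = (
        minimax(game.apply(state, move), depth - 1, not maximizing, game)
        for move in game.legal_moves(state)
    )
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, game):
    """Pick the move whose resulting position scores best for the side to move."""
    return max(
        game.legal_moves(state),
        key=lambda m: minimax(game.apply(state, m), depth - 1, False, game),
    )
```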

Go, an abstract strategy board game invented in China, has simpler rules than chess but far more possible moves at each point in the game. To give you an idea: the number of possible board configurations is larger than the number of atoms in the observable universe. Looking far ahead in the game, or considering all possible moves and counter-moves, is therefore practically impossible. In 2016, distinguished Go player Lee Sedol took on the task of beating the next high-tech invention, AlphaGo. Created by the sharp minds at Google’s DeepMind, its intelligence is based on its ability to learn from millions of Go positions and moves from previous games. Once again, machine triumphed over its human counterpart when it came to strategic thinking.
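
That claim is easy to verify with back-of-the-envelope arithmetic: each of the 361 points on a 19×19 board can be empty, black or white, giving an upper bound of 3^361 configurations, against roughly 10^80 atoms in the observable universe. A quick check:

```python
# Back-of-the-envelope comparison of Go's state space with the number of atoms
# in the observable universe (~10^80). Note 3**361 is only an upper bound; the
# number of *legal* positions is about 2.1e170, still astronomically large.
board_points = 19 * 19                     # 361 intersections on a Go board
configurations = 3 ** board_points         # each point: empty, black or white
atoms_in_universe = 10 ** 80

print(f"3^361 has {len(str(configurations))} digits")   # 173 digits
print(configurations > atoms_in_universe)                # True
```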

AlphaZero, released in 2017, is a version of the same program that takes things a step further. It can play chess, Go and other games, and is given nothing more than the rules. By playing millions of games against itself, without any prior knowledge of openings, tactics or strategy, it mastered these games on its own. How much time passed between the moment AlphaZero was launched and the moment it reached a superhuman level at Go? Less than 24 hours. Even more baffling: although humans have been playing Go for some 2,500 years, it came up with brand-new strategies that had never been seen before. While this is ‘only’ fun and games, it sheds new light on technological concepts that at first seemed like far-out fiction.
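
AlphaZero’s real machinery, deep neural networks guided by Monte Carlo tree search, won’t fit in a blog post, but the core idea of mastering a game from nothing but its rules through self-play can be sketched with a toy. The hypothetical snippet below uses simple tabular learning on the game of Nim (take 1–3 stones from a pile; whoever takes the last stone wins). It illustrates the self-play principle only; it is not DeepMind’s method:

```python
import random
from collections import defaultdict

# Toy illustration of learning purely by self-play, in the spirit of (but far
# simpler than) AlphaZero. The agent knows only the rules of Nim and learns
# move values from the outcomes of games played against itself.

Q = defaultdict(float)                  # Q[(pile, move)] = learned move value
ALPHA, EPSILON, GAMES = 0.5, 0.1, 50_000

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, explore=True):
    if explore and random.random() < EPSILON:    # occasionally try a random move
        return random.choice(moves(pile))
    return max(moves(pile), key=lambda m: Q[(pile, m)])

for _ in range(GAMES):
    pile, history = random.randint(4, 21), []
    while pile > 0:                              # play one game against itself
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # Whoever made the last move won; propagate the result backwards,
    # flipping the sign at each ply for the alternating players.
    reward = 1.0
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward

# After training, greedy play rediscovers the optimal strategy for Nim:
# always leave the opponent a multiple of four stones.
print([choose(p, explore=False) for p in (5, 6, 7)])   # expected: [1, 2, 3]
```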

Artificial intelligence systems can dazzle us with their game-playing skills, and lately it seems as if every week brings another breakthrough with mind-blowing results. It is almost unthinkable that the finance sector, or any sector for that matter, would remain untouched by the rise of AI. Nonetheless, amid the current hype, many of the underlying concepts and terms are used so carelessly that they become hollow and deprived of meaning. You have probably heard of “machine learning” and “deep learning”, sometimes used interchangeably with artificial intelligence, so that the difference between these concepts becomes blurry. To understand the distinction, and why AI will disrupt current technologies, we have to understand where it comes from.

Let there be l(A)ight – A Brief History

Simply put, AI involves machines that can perform tasks we associate with human intelligence. It is a very broad definition, ranging from simple solutions such as automated bank tellers to powerful and complex applications such as androids, which inspired the movie Blade Runner.

Surprisingly, the script for AI was written in a time when James Dean was rocking the screen and Elvis was celebrating his first “Blue Christmas”. While the statistical framework goes back to the writings of the French mathematician Adrien-Marie Legendre from 1805, most AI models are based on technology from the 1950s.
1950: the world-famous Turing test is devised by Alan Turing (who will soon be commemorated on the new £50 note). The test asks whether artificial intelligence can appear indistinguishable from a human in thought and behaviour.
1951: the first artificial neural network, SNARC (Stochastic Neural Analog Reinforcement Computer), is built by Marvin Minsky and Dean Edmonds. It attempted to replicate the network of nerve cells in a brain and imitated the behaviour of a rat searching for food in a maze. This was largely an academic enterprise.

SNARC computer

Similarly, 1952 saw the birth of the first computer learning program, written by Arthur Samuel. The program played checkers and improved the more it played. Machine learning, a subset of AI, is the ability to learn without being explicitly programmed what to “think”. It enables computers to learn from their own mistakes and adapt to a changing environment. Machine learning encompasses related technologies such as deep learning and artificial neural networks. Nowadays this technology can, among other things, use data and statistical analysis to predict possible future scenarios, such as in cash flow forecasting.
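
To make that concrete in a treasury setting, here is a minimal, hypothetical sketch: fit a model to historical monthly cash flows and let it extrapolate the trend. The data is synthetic, and real cash flow forecasting engines use far richer inputs (open invoices, payment behaviour, seasonality). Fittingly, the estimator below uses ordinary least squares, the very method Legendre published in 1805:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical, minimal machine-learning illustration: learn from past monthly
# net cash flows and extrapolate. The numbers are synthetic; real forecasting
# models use far richer features than a simple time trend.

rng = np.random.default_rng(42)
months = np.arange(36).reshape(-1, 1)                          # 3 years of history
cash_flow = 100 + 2.5 * months.ravel() + rng.normal(0, 8, 36)  # trend + noise

model = LinearRegression().fit(months, cash_flow)   # "learn" from the history

future = np.arange(36, 42).reshape(-1, 1)           # the next six months
forecast = model.predict(future)
print(np.round(forecast, 1))                        # extrapolated cash flows
```
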
The Dartmouth Summer Research Project, a 1956 summer workshop, is widely considered the starting point of artificial intelligence as a scientific field. This uprising of technology generated great excitement about the potential of automation in finance and treasury; it was believed it would help accountants and bankers speed up their work. But if wishes were horses, beggars would ride, and in this case androids would be riding along with them. A waning interest in the field and many failed projects left artificial intelligence stranded in what is called the ‘AI winter’.

Infographic: Blade Runner and artificial intelligence

Luckily, humans are not one-trick ponies, so our story doesn’t end here. After a period of economic and technological proliferation in the 1980s, expert systems found their way into the world of finance. These are computer systems designed to solve complex problems, capable of decision-making at the level of a human expert. But when push came to shove, the technology wasn’t mature enough and didn’t meet clients’ demands.

In 1991, the World Wide Web was opened to the public. It was the start of the data revolution: by 2005 it had reached one billion people, and today more than half of the world’s population is connected to the internet.
Coming back to our first example, it was in this period (1997) that Deep Blue challenged the capacity of the human brain and proved that a machine could outmatch a human being at strategic play.
Today, AI demands so much computing power that companies increasingly turn to the cloud, renting hardware through Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. The groundwork was laid around 2006, when players such as Amazon Web Services (AWS) opened up their cloud environments, broadening the capacity for AI even further.

That same year, Geoffrey Hinton popularised the term “deep learning”, accelerating the use of AI applications in the real world. This brought the world one step closer to bridging the fuzzy gap between humans and androids.
2015: AlphaGo is introduced to the world. Two years later, in 2017, its successor AlphaZero sees the light of day.
2019: the first picture of a black hole, at the heart of the galaxy M87 in the constellation Virgo, is rendered with the help of artificial intelligence, opening the door to new knowledge about the universe. AI has taken us a giant leap forward, but we are far from the finish line. Over 90% of the universe consists of dark matter and dark energy that still leave us in confusion. Similarly, a comparable share of “dark data”, a fundamental building block for understanding a company’s future, goes untapped.
