AI Redefined: Unveiling the Latest Advancements and Transformative Power

Today we kick off a series of blog posts by different authors on Artificial Intelligence, focusing on the latest advancements in AI. Over the next few weeks, we aim to provide a snapshot of the remarkable progress made in the last few years, showcasing how the technology is reshaping industries, solving complex problems, and driving innovation across the board. In the upcoming articles, we will delve deeper into specific areas within the realm of AI and its interactions with society; we will explore topics such as Governance, Open Source, Explainability, and Management, unravelling their significance and shedding light on the critical aspects that shape today’s AI landscape.

Of course, we can offer little more than a snapshot of the immediate present. The field is moving at a breakneck pace, and researchers and developers are, as we write, solving problems that seemed impossible only a month ago. This first blog post aims to give a high-level overview of where we were, where we are, and why it matters.

So, without further ado, let's dive in and explore the fascinating world of AI.

When I started my studies of the Mind and AI, a few things seemed abundantly clear to everyone: we won’t see a machine play Go at the highest level, we won’t be able to predict the shape of a protein in our lifetimes, and we won’t be able to talk to a machine and be uncertain whether it has a mind of its own.

The reason for all of these is that they represent a class of problems that aren’t computationally tractable by traditional means: the number of possible 3D structures arising from different combinations of amino acids is governed by forces that change with every single addition; the best next move in Go has to be found in a space of possible game positions larger than the number of atoms in the observable universe; and the next word in a sentence depends on everything that has been said before. And a general understanding of language is one of our world’s most complex constructs. This complexity makes it practically impossible to solve these problems by simply enumerating all available choices and selecting the best one; there are simply too many.

And then, in March 2016, an admittedly small circle of people got excited about something that most of humanity largely ignored. Some declared we had, metaphorically speaking, just landed on the moon, while others simply sat in awe, thinking about the possibilities of the technology that had made the unthinkable possible. What had happened was that Lee Sedol, a grandmaster of the ancient game Go long ranked among the very best players in the world, had played a computer program, and for the first time in history, the program had beaten a player of his calibre.

While not the sole factor behind the victory, part of what enabled the machine, called AlphaGo, to win is a class of algorithms called neural networks. They are loosely based on a simplified picture of how neuroscience thought the brain worked a few decades ago. Combine huge numbers of these artificial neurons with large amounts of data, in a technique called Deep Learning, add a bit of elbow grease, and a remarkable range of problems once considered intractable start to yield.
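To make the idea concrete, here is a minimal sketch of such a network: a single hidden layer of artificial neurons, each summing weighted inputs and passing the result on. This is purely illustrative; the weights here are random and untrained, and systems like AlphaGo use vastly deeper networks trained on enormous datasets.

```python
import numpy as np

def relu(x):
    """A simple activation function: a neuron 'fires' only for positive input."""
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    """One forward pass: input -> hidden layer of neurons -> output."""
    hidden = relu(x @ w1 + b1)   # each hidden neuron sums its weighted inputs
    return hidden @ w2 + b2      # the output neuron combines the hidden layer

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))     # weights: 3 inputs -> 4 hidden neurons
b1 = np.zeros(4)
w2 = rng.normal(size=(4, 1))     # weights: 4 hidden neurons -> 1 output
b2 = np.zeros(1)

x = np.array([0.5, -1.0, 2.0])   # a toy input with 3 features
y = forward(x, w1, b1, w2, b2)   # a single (untrained, meaningless) prediction
```

Deep Learning, in essence, is stacking many such layers and adjusting the weights automatically from data until the outputs become useful.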

The next problem to fall to Deep Learning was, by the way, protein folding. AlphaFold, an algorithm inspired by AlphaGo, is today used by researchers all over the world to speed up scientific discovery by orders of magnitude. Where, a few years ago, you would have had to go through lengthy – and costly! – processes that involve crystallising a protein and then shooting it with X-rays over and over again, today you can go to the AlphaFold Protein Structure Database and find an often good-enough approximation of the structure you need to solve your problem.

And then, of course, language. The big one that most people have heard of by now. In 2019, OpenAI released a program called “Generative Pre-trained Transformer 2” (GPT-2), which – to everyone’s astonishment and some people’s horror – could form coherent sentences on any topic it was asked about. There was an acute awareness that this might change the world, but the model was too small and limited in scope to be considered a threat back then. This changed with the next version, GPT-3, released in 2020: a model far more capable of producing longer and better-written texts, ushering in the age of Large Language Models (you might see this abbreviated as “LLM”). In addition, the field of prompt engineering was born essentially overnight, as people realised that knowing how to ask a language model for a particular output is a valuable skill in its own right, one that can be applied to problems that previously took months to solve.
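A common prompt-engineering pattern is “few-shot” prompting: steering the model by placing examples of the desired behaviour directly inside the prompt. The sketch below only shows the prompt-building step; the actual model call is omitted, since APIs differ by provider, and the task, labels, and helper name are all illustrative assumptions.

```python
# Few-shot prompting sketch: the examples teach the model the task format.
examples = [
    ("The delivery was late again.", "negative"),
    ("Great service, will order again!", "positive"),
]

def build_prompt(examples, new_text):
    """Assemble a few-shot sentiment-classification prompt (illustrative)."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # End with the unlabeled case; the model is expected to continue the pattern.
    lines.append(f"Review: {new_text}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_prompt(examples, "The product broke after one day.")
print(prompt)
```

The same idea scales from toy classification to complex multi-step tasks: much of prompt engineering is finding the examples, instructions, and structure that reliably elicit the behaviour you need.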

And then, of course, ChatGPT was released in late 2022 and rose to become one of the most popular websites in the world. Notably, little had changed in the underlying technology since GPT-3. There was a new way of training the network – essentially, a way to show it what we expect of it – and a user-friendly front end that made it easy to interact with the AI.

The real technical leap came with GPT-4, which you can interact with via a ChatGPT Plus subscription or, more likely if you are using language models in business, through Microsoft Azure. GPT-4 is reported to pass the US bar exam and to show a particular ability to reason about its own reasoning, which enables it to handle even more complex scenarios. The future will tell what GPT-like models can achieve, whether via smart prompting or through yet-unexplored approaches. And, of course, other players in the field and the open-source community are catching up quickly.

This concludes our introduction to AI; we have discussed how we got to where we are. In the upcoming posts, we will see how this incredible technology can generate value, how it can help your company, and what we need to watch out for, especially regarding how to make sure such a powerful tool isn’t abused to the detriment of society. It’s a wild ride. Join us at SKAD!

Thomas Rost studied Cognitive Science and Artificial Intelligence across Europe. For many years he has advised companies harvesting the benefits of Data Science, Machine Learning and AI, seeing the rise of Deep Learning first-hand and helping his customers derive value from emerging technologies, generate insights from data and support decisions with evidence.

He is a Senior Manager Consultant at SKAD. If you have questions, reach out to him at t.rost@sk-advisory.com.