19 Mar 2024 · 4 min read

Artificial intelligence: building bits into brains

By Matthew Rodger

Recent AI advances are remarkable, but can progress be sustained?


Science is plagued by technologies that are, as the saying goes, “just 20 years away, and always will be”. Fusion energy, materials that superconduct at room temperature, and artificial general intelligence (AGI) have often been described this way. Humanity’s brightest minds have thrown themselves at these problems, yet breakthroughs remain elusive.

However, the launch of ChatGPT raised the prospect that one of these dragons might be slain within our lifetime. The program, now in its fourth iteration, is one of a family of AI systems designed to tackle problems ranging from human language to protein folding and the discovery of stable materials.

The early evidence indicates that this new generation of AI systems could have profound implications for the global economy and, by extension, for portfolio allocation.

We have written previously on the impact of AI, but in this series we examine what sets the technology apart, its short-term effects on labour markets, and its long-run impact on growth.

Big things have small beginnings

Research into AI is as old as the computer itself. Since the work of Alan Turing and John von Neumann, humans have wondered whether calculations made by computers could be arranged to produce something we might recognise as intelligence. AI research produced sporadic waves of enthusiasm in the 1950s and 1980s, but these ebbed as it became clear the programs could not learn as humans could, nor could they be commercialised profitably.

[Chart: AI test scores relative to human performance]

What makes the current wave of AI excitement different? Firstly, rather than being directly programmed, this generation of AI is trained on large volumes of data using vast computing power.

Secondly, the results are scalable. Using greater data and computing power generally improves the model’s skill in ways that are predictable in aggregate.

Lastly, these models have expanded into a broader range of domains than ever before, and at a faster pace. As shown in the chart above, it took roughly 15 years for AI to match human ability in speech recognition. In reading and language understanding, it took just under two years.
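The second point, on predictability, can be made more concrete. Empirical "scaling law" studies report that a model's training loss tends to fall as a smooth power law in the number of parameters and the volume of training data. As a stylised sketch (our illustrative notation, not a formula from this post):

L(N, D) ≈ E + A / N^α + B / D^β

where N is the number of model parameters, D the quantity of training data, and E, A, B, α and β are constants fitted to experimental training runs. Scaling up N and D moves the loss along a broadly predictable curve, which is what allows developers to budget data and computing power for larger models in advance.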

AI evangelists foresee these programs steadily expanding their mastery across various domains as ever more computing power and data are thrown into model training. With continued investment, many expect that AGI, capable of solving any technical problem a human can, will be with us before 2050.

Penalties for thinking differently

Yet there is room for doubt amid the hype. Sceptics argue that greater ability in narrowly defined tasks is not the same as greater intelligence. While generative AI might be able to merge Lord of the Rings and Byzantine art, it still cannot solve basic logic problems. Sceptics also point to so-called brittleness in model outputs when the models are confronted with a departure from the status quo. While humans can also react irrationally to black swan events, they may be less prone to a total breakdown in reasoning.

Simply adding more computing power might solve these problems, as it has solved others, but without a more fundamental capacity for understanding, or for reasoning amid uncertainty, talk of generalisation appears premature.

Another concern is AI safety. Industry leaders have signed an open letter declaring that the risk of extinction from AI is comparable to that posed by global pandemics and nuclear war. The problem of AI alignment (ensuring humans and AI act with the same goals) remains unsolved. As these models become more capable, their potential to cause harm, whether by pursuing bad objectives or by assisting humans with harmful motives, will also broaden.

More questions than answers

For all the conflicting perspectives of AI enthusiasts and sceptics, neither side can say with certainty where the technology will end up. The relationship between data, computing power and AI skill may break down, or it may never lead to a generalisation of these models' abilities. Moreover, with much of the technical know-how hidden within privately held startups, the state of frontier AI research is opaque.

AI could impact the economy in a manner akin to other technologies such as the PC and electricity, but it is important to remain humble when assessing the technology's trajectory and its effects. Risks are firmly to the upside: should the enthusiasts be proved right, AI would rank among the most transformative technologies developed by humankind, with profound impacts on portfolios and society at large.

Whatever your stance, the rapid progress in a field so closely related to knowledge itself is something to be watched closely.

This is the first in a series of blog posts covering recent breakthroughs in generative AI; future instalments include a review of the impacts of the technology on labour markets and a discussion on its long-term effects on growth.

Matthew Rodger

Assistant Economist

Matthew is an economist covering emerging markets. He uses countries’ historical experience, alongside fresh economic data and quantitative methods, to recognise new investment opportunities. Prior to joining LGIM, Matthew graduated with an MSc in Economics from the London School of Economics and worked in various economic research roles. When not studying EM economies, he enjoys reading, hillwalking and skiing.
