Where tech aligns

Anton Troynikov: Artificial Intelligence Will Cause Pandemonium, Then Normalization and Optimism

Within five years of human-level AI being created, there will be initial pandemonium, followed by normalization. I am generally optimistic about humanity’s future, but foundational technological progress has always come with upheaval.

The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model (LLM) technology. These machines can perform a wide range of tasks once thought to be only solvable by humans: write stories, create art from text descriptions, and solve complex tasks and problems they were not trained to handle.

We posed the following two-part question to six AI experts, including James Poulos, roon, Max Anton Brewer, Robin Hanson, and Niklas Blanchard. —Eds.

1. What year do you predict, with 50 percent confidence, that a machine will have artificial general intelligence (AGI) – that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?

2. What changes will this bring about in society within five years of occurring?

Anton Troynikov:

AGI will be here by 2032. Then will come pandemonium – but be optimistic.

2032. My timeline is short, though perhaps not as short as some others', because I am increasingly of the opinion that the human intellect is not especially complex relative to other physical systems.

In robotics, there is an observation referred to as Moravec’s paradox. At the dawn of AI research in the 1950s, it was thought that cognitive tasks which are generally difficult for humans – playing chess, proving mathematical theorems, and the like – would also be difficult for machines. Sensorimotor tasks which are easy for humans, like perceiving the world in three dimensions and navigating through it, were thought to also be easy for machines. Famously, the general problem of computer vision (a field in which I’ve spent a large fraction of my career so far) was supposed to be solved in the summer of 1966.

These assumptions turned out to be fatally flawed, and the failure to create machines that could successfully interact with the physical world was one of the causes of the first AI winter, when research and funding for AI projects cooled off.

Hans Moravec, for whom the paradox is named, suggested that the reason for this is the relatively recent development, in evolutionary terms, of the human prefrontal cortex, which handles abstract reasoning. In contrast, the structures responsible for sensorimotor functions, which we share with most other higher vertebrates, have existed for hundreds of millions of years and are, therefore, very highly developed.

This also explains why we hadn’t (and to a large extent, still have not) managed to replicate evolved sensorimotor performance by reasoning about it; human intellect is too immature to reason about the function of the sensorimotor system itself.

Machine learning, however, represents a way to apprehend the world without relying on human intellect. Like evolution, machine learning is a purely empirical process: a general-purpose class of methods for ingesting data, finding patterns, and making predictions based on those patterns. It does not make deductions, nor does it rely on abstractions. In fact, the field of AI interpretability exists because the way in which AI actually functions is alien to the human intellect.
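The empirical character of machine learning can be made concrete with a toy example (my own illustration, not anything from Troynikov's work): a learner that recovers a hidden numerical rule purely by reducing prediction error on data, never by deducing the rule itself.

```python
# Illustrative sketch: machine learning as a purely empirical process.
# We fit a line to noisy samples of a hidden rule by gradient descent.
# The learner never deduces the rule; it only adjusts parameters to
# reduce prediction error on the data it has ingested.

import random

random.seed(0)
# Data generated by a hidden rule (y = 2x + 1) the learner never sees.
data = [(x, 2 * x + 1 + random.gauss(0, 0.01))
        for x in [i / 10 for i in range(50)]]

w, b = 0.0, 0.0   # parameters, initialized with no prior knowledge
lr = 0.05         # learning rate

for _ in range(2000):
    # Average gradient of squared error; nudge parameters downhill.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 1), round(b, 1))  # recovers roughly 2.0 and 1.0 from data alone
```

The fitted parameters approximate the hidden rule, yet at no point does the procedure reason about what the rule is; it only follows the error gradient, which is the point of the analogy to evolution.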

Given sufficient data and enough computational power, AI is capable of determining ever more complex patterns and making ever more complex predictions. The ways in which it does so will necessarily become increasingly alien as it outstrips our own capacity to find and understand those patterns. A concrete demonstration of this principle is the success with which AI has been able to model language. Linguists failed to produce a successful framework for automatic translation over the entire history of the discipline. AI cracked the problem as soon as enough data and computing were available, using extremely general methods.
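The idea that language can be modeled by prediction alone, without linguistic theory, is easiest to see in miniature. The sketch below (my own toy illustration; real LLMs are vastly more sophisticated) is a bigram model: it counts which word follows which in a corpus, then predicts by frequency, with no grammar anywhere in the code.

```python
# Illustrative sketch: language modeled purely by prediction from data.
# A bigram model counts which word follows which, then predicts the
# most frequent continuation -- no grammar, no theory, just patterns.

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# For each word, count every word observed immediately after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the data."""
    return follows[word].most_common(1)[0][0]

print(predict("sat"))  # -> "on", learned from co-occurrence, not rules
```

Scale the corpus up by many orders of magnitude, replace counting with a neural network predicting the next token, and the same purely statistical recipe starts producing fluent text, which is the phenomenon the paragraph above describes.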

Language is an expression of reason. An emulation of reason itself – through the prediction of what a human would reason, by a mechanism alien to that reason – cannot be far behind. We’ll get there not because AI will have become particularly powerful, but because the human intellect is, in the grand scheme of things, rather weak.

Within five years of human-level AI being created, there will be initial pandemonium, followed by normalization. I am generally optimistic about humanity’s future, but foundational technological progress has always come with upheaval. Yes, we got the printing press, but we got the Thirty Years’ War along with it.

I don’t presume to know what shape the upheavals will take, but they are likely to be foundational, as societies must reorient around the capability to produce, at will, machine intelligences as good as the average human. But we’ll figure it out.

Anton Troynikov has spent the last seven years working in AI and robotics as a researcher and engineer. His company, Chroma, makes AI better by increasing its interpretability.
