
The Brief History of Artificial Intelligence: The World Has Changed Rapidly. What Could Be Next?

The AI systems we have just considered are the result of decades of steady advances in AI technology.

The large chart below puts this story into perspective over the past eight decades. It is based on a dataset produced by Jaime Sevilla and colleagues.7

Each small circle in this chart represents an AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows how much computation was used to train that particular AI system.

The training computation is measured in floating point operations, or FLOPs for short. A FLOP is equivalent to an addition, subtraction, multiplication or division of two decimal numbers.
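To make the unit concrete, here is a minimal sketch of how FLOPs add up in even a tiny computation; the function name and the example size are illustrative and are not taken from the article.

```python
# Minimal sketch: counting the FLOPs in a dot product of two vectors.
# The function name and the example size are illustrative, not from the article.

def dot_product_flops(n: int) -> int:
    """A dot product of two length-n vectors takes n multiplications
    and n - 1 additions, i.e. 2n - 1 floating point operations."""
    return 2 * n - 1

print(dot_product_flops(1_000))  # 1999 FLOPs for a single 1,000-element dot product
```

Training a modern AI system repeats operations like this an astronomically large number of times, which is why training computation is counted in petaFLOPs and beyond.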

All AI systems that rely on machine learning must be trained, and in such systems, training computation is one of the three fundamental factors that drive the system’s capabilities. The other two factors are the algorithms and the input data used for training. The visualization shows that as training computation increases, AI systems have become increasingly powerful.

The timeline goes back to the 1940s, the beginnings of electronic computers. The first AI system shown is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. At the other end of the timeline are AI systems like DALL-E and PaLM, whose ability to produce photorealistic images and to interpret and generate language we have just seen. They are among the AI systems that have used the largest amount of training computation to date.

Training computation is plotted on a logarithmic scale, so each step from one grid line to the next represents a 100-fold increase. This long-run view shows a continuous rise. During the first six decades, training computation grew in line with Moore’s Law, doubling approximately every 20 months. Since about 2010, this exponential growth has accelerated even further, with training computation doubling in just about 6 months. That is an extraordinarily fast growth rate.8
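To see what these doubling times imply, here is a small back-of-the-envelope calculation, a sketch that uses only the two doubling times quoted above:

```python
# Back-of-the-envelope: growth in training computation over a decade
# under the two doubling times mentioned above (20 months vs. 6 months).

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative growth after `years`, given a doubling time in months."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(f"20-month doubling over 10 years: {growth_factor(10, 20):,.0f}x")  # about 64x
print(f"6-month doubling over 10 years:  {growth_factor(10, 6):,.0f}x")   # about 1,000,000x
```

A 20-month doubling time multiplies training computation about 64-fold in a decade; a 6-month doubling time multiplies it roughly a million-fold.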

Fast doubling times compound into enormous increases. PaLM’s training computation was 2.5 billion petaFLOPs, more than 5 million times that of AlexNet, the AI system with the largest training computation just 10 years earlier.9
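As a quick sanity check of that comparison, here is a sketch that uses only the figures quoted in this paragraph; the AlexNet value is implied by the stated ratio rather than given directly.

```python
# Sanity check of the PaLM vs. AlexNet comparison, using the article's own figures.
# 2.5 billion petaFLOPs = 2.5e9 * 1e15 FLOPs.

palm_training_flops = 2.5e9 * 1e15   # PaLM: 2.5e24 FLOPs
stated_ratio = 5e6                   # "more than 5 million times"

implied_alexnet_flops = palm_training_flops / stated_ratio
print(f"PaLM:    {palm_training_flops:.1e} FLOPs")
print(f"AlexNet: about {implied_alexnet_flops:.1e} FLOPs (roughly 500 petaFLOPs)")
```

This lines up with the doubling times above: about 20 doublings over a decade already multiply training computation by around a million.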

The expansion was already exponential and has accelerated substantially over the last decade. What can we learn from this historical development for the future of AI?
