Some in the AI industry have proposed concepts similar to Moore's Law to describe the rapid growth of AI capabilities.
Although there is no universally accepted law or principle akin to Moore's Law for AI, people often refer to trends that describe the doubling of model sizes or capabilities over a specific time frame.
For instance, OpenAI has previously described a trend where the amount of computing power used to train the largest AI models has been doubling roughly every 3.5 months since 2012.
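Just to put that doubling rate in perspective, here's a quick back-of-the-envelope sketch (assuming the ~3.5-month figure quoted above; the exact number OpenAI reported may differ slightly):

```python
# Sketch: what a ~3.5-month doubling period implies for yearly growth.
# The 3.5-month figure is taken from the discussion above, not verified here.
doubling_months = 3.5
months_per_year = 12

# Number of doublings per year, then the implied multiplier.
doublings_per_year = months_per_year / doubling_months
yearly_factor = 2 ** doublings_per_year

print(f"Implied yearly growth in training compute: ~{yearly_factor:.0f}x")
# → Implied yearly growth in training compute: ~11x
```

So a 3.5-month doubling period works out to roughly an order of magnitude more training compute per year, which is far steeper than the classic Moore's Law doubling every ~2 years.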
The first work on AI dates back to around 1945, and what we're seeing now is arguably the third AI renaissance. The problem with AI until now was that it showed great potential but kept running into issues we didn't have the technology to solve, which stalled the field for decades at a time.
CPU development was very linear; AI development was not. There were decades with very little AI research, and there were decades of explosive development.
I'd say that when playing chess was the premier achievement of AI, the field was as good as dead. Playing chess proves very little, as it's basically a task that can be solved by throwing computation at it. Investment in research had almost completely dried up for a couple of decades.
AI development was almost completely dead, but calling it the AI winter is fine too. ;)
AI made very little progress for roughly 40 years from the 70's onward; basically just some basic pattern recognition, like OCR in the 80's.
Up until recently, AI development was extremely underwhelming, especially compared to what we hoped for back in the 80's.
Although results are pretty impressive, autonomous cars are still a hard nut to crack.
Most impressive, IMO, are the recent LLMs (Large Language Models), but these results are very recent compared to the many decades of research that have gone into developing better AI.
Honestly, an AI beating a human at chess is not that impressive as AI research, IMO, since it's an extremely narrow task you can basically just throw computational power at. Still, for many years that was the most impressive AI achievement.