The Mayans could calculate the positions of the Moon and the timing of eclipses. They knew.
But they didn’t understand why. They didn’t know about the underlying principles and mechanics that governed these celestial events.
Nobel laureate Richard Feynman explained the difference between knowing and understanding in this five-minute video:
He argued that knowing is being able to do calculations that agree with experiments. Understanding is being able to explain the underlying phenomena.
As Feynman describes, the Mayans knew the positions of the Moon and could predict eclipses, but they didn’t understand why their calculations were correct. That understanding did not come until Newton and others explained gravity and its effect on orbiting bodies. This lack of understanding led the Mayans to attribute these events to gods rather than to physical laws.
Today, we’re seeing a similar clash of knowing without understanding in AI. Artificial intelligence does calculations to predict home prices, supply-chain demand, and infection outbreaks, or to match applicants with jobs. It does so with incredible proficiency that often surpasses human capabilities. But it is not sentient, even if we sometimes like to believe so.
Today’s LLMs and measuring their intelligence
Today’s AI, including advanced LLMs like ChatGPT or Claude, demonstrates a form of “knowing” without true understanding. These systems perform calculations to predict outcomes, match patterns, and generate human-like text with remarkable proficiency, often surpassing human capabilities in specific tasks. However, they lack sentience and deeper comprehension.
Similar to the ancient Mayans’ astronomical predictions, AI’s “knowledge” is only as good as the data and algorithms it was trained on. It lacks genuine understanding of the underlying principles and causal relationships governing the phenomena it analyses.
For AI to truly advance, it must develop understanding, not just pattern recognition. This is the goal of AGI – Artificial General Intelligence. However, many scientists believe that current LLMs, despite their impressive capabilities, are unlikely to lead directly to AGI, for several reasons:
- Lack of causal reasoning: LLMs excel at pattern recognition but struggle to understand cause-and-effect relationships.
- Limited adaptability: While LLMs can generate coherent responses across various topics, they struggle to adapt to entirely new scenarios or tasks outside their training data.
- No true learning: LLMs don’t learn or update their knowledge through interactions. Each response is generated based on their initial training.
- Absence of common sense: LLMs often lack basic common sense understanding that humans take for granted.
- Evaluation limitations: LLM responses are typically evaluated based on coherence, relevance, grammatical correctness, and factual accuracy against human judgment. This doesn’t necessarily measure true understanding or intelligence.
François Chollet highlights most of these limitations with his Abstraction and Reasoning Corpus (ARC). ARC challenges algorithms to solve unfamiliar tasks based on only a few demonstrations. While humans can solve about 80% of ARC tasks, current AI algorithms manage only up to 31%.
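To make the benchmark concrete, here is a toy sketch of the ARC setup in Python: each task gives a few input/output grid pairs as demonstrations, and the solver must infer the hidden rule and apply it to a new test input. The task layout loosely mirrors ARC’s published JSON format, but the tiny list of candidate transformations and the brute-force solver below are illustrative assumptions, not Chollet’s method or a real ARC entry.

```python
# Toy ARC-style solver: try a handful of candidate grid transformations
# and keep the first one consistent with every demonstration pair.
# The candidate list is a hypothetical stand-in for real program search.

def flip_h(grid):
    """Mirror a grid left-to-right."""
    return [row[::-1] for row in grid]

def flip_v(grid):
    """Mirror a grid top-to-bottom."""
    return grid[::-1]

def transpose(grid):
    """Swap rows and columns."""
    return [list(col) for col in zip(*grid)]

CANDIDATES = [
    ("identity", lambda g: g),
    ("flip_h", flip_h),
    ("flip_v", flip_v),
    ("transpose", transpose),
]

def solve(task):
    """Return (rule name, predicted test output) for the first candidate
    that reproduces all demonstration pairs, or (None, None)."""
    for name, fn in CANDIDATES:
        if all(fn(p["input"]) == p["output"] for p in task["train"]):
            return name, fn(task["test"]["input"])
    return None, None

# A toy task: the hidden rule (never stated) is "mirror left-to-right".
task = {
    "train": [
        {"input": [[1, 2], [3, 4]], "output": [[2, 1], [4, 3]]},
        {"input": [[5, 0, 6]], "output": [[6, 0, 5]]},
    ],
    "test": {"input": [[7, 8, 9]]},
}
```

Here `solve(task)` infers `flip_h` from the two demonstrations and predicts `[[9, 8, 7]]`. The point of ARC is that real tasks cannot be cracked by a fixed menu of transformations like this one: each task demands a newly abstracted rule, which is exactly what pattern-matching systems find hard.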
Just as humanity has progressed from mere observation to a deeper scientific understanding of the natural world, we must design AI systems with a similar process in mind.
We need collaboration between data scientists, psychologists, engineers, and others who understand the application and “breathe” it, whether they are doctors in hospitals or human-resources staff in hiring.
Only then can AI move beyond mere calculation and prediction toward genuine insight, innovation, and groundbreaking discoveries.
An equally scary and exciting thought, if you ask me.