Why AI replaces apprentices!
- JMS

- Jan 10
- 1 min read
The mechanics of AI are still somewhat unclear, even to experts. However, we do know one important thing: it learns.
Have you ever gotten shocked while fiddling with electronics? I have, and that experience taught me never to do it again. We humans learn from experience.
AI models learn from data. An AI model with limited data is like a toddler; one with extensive data is like an experienced grandfather.
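A toy sketch of that idea (my own illustration, not any real AI framework): estimating a coin's bias from observed flips. The "toddler" sees a handful of flips; the "grandfather" sees many, and his estimate lands much closer to the truth.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def estimate_bias(n_samples: int, true_bias: float = 0.7) -> float:
    """Estimate a coin's bias from n flips -- a stand-in for learning from data."""
    flips = [1 if random.random() < true_bias else 0 for _ in range(n_samples)]
    return sum(flips) / n_samples

small = estimate_bias(10)       # the "toddler": little experience
large = estimate_bias(100_000)  # the "grandfather": lots of experience

print(f"10 flips:      estimate = {small:.3f}")
print(f"100,000 flips: estimate = {large:.3f}")
```

With ten flips the estimate can easily be off by 0.1 or more; with a hundred thousand, it typically sits within a fraction of a percent of the true bias. The same statistical logic is why rare driving accidents, which appear only a handful of times in any dataset, are so hard for a model to learn.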
The data paradox
Which is harder: driving a car or writing code? Most would say coding. Yet in AI development, the opposite seems true.
Large Language Models (LLMs) are relatively new. Before ChatGPT, few associated AI with chatbots – more likely the Terminator. The LLM era began around 2013-2014 with neural networks like word2vec. Autonomous driving, on the other hand, began in the 1980s. In 1987, Ernst Dickmanns’ team had a Mercedes-Benz van drive itself at 96 km/h on a German highway using computer vision.
Despite this massive head start, autonomous vehicles still lag behind LLMs. While ChatGPT performs reliably across countless scenarios, AI drivers remain inconsistent.
But why? Companies like Tesla and Waymo have invested billions. Yet if a new company wanted to enter the space – even with brilliant engineers and unlimited funding – they’d still need thousands of hours of diverse driving data. Some accident types are so rare they’re nearly impossible to train for.