Tiny Teachers, Big Impact: What Babies Can Teach AI
The Surprising Lessons AI Can Learn from Drooling Bundles of Joy
So is AI going to be amazing and solve all your problems? Or is it the biggest threat to humanity, one that will take your job and wipe you out? Or is neither true, and could drooling babies hold the real answer to helping you understand AI? Which do you feel is more likely?
Quite the choice, isn't it?
I'll be straight with you - I'm fed up with the media bullshit and clickbait-driven AI news reporting that is too often based on fantasies designed for clicks, with little to no research or evidence behind it.
The media do an appalling job of giving you a balanced and accurate understanding of AI, just when you need it most. Clickbait maybe isn't the best way to help you really understand AI, would you agree?
It also doesn't help when attention-seeking scientists like Max Tegmark from MIT decide to go full-tilt batshit crazy and claim that 'AI is like a cancer which can kill all of humanity' with absolutely no evidence to support his delusional claims.
Tegmark often uses his scientific credibility in physics to mislead the public and make false claims about a different discipline he has no expertise in: AI. It's a bit like your plumber claiming to give you trusted, expert professional financial advice.
Scientists should know better than journalists about the importance of evidence-based reasoning, but apparently standards have fallen as dramatically at MIT as they have in the media. But I digress...
But what if AI is neither amazing nor our total extinction? What then? And what if AI is a bit amazing, but also a bit stupid? Not just one, but a bit of both? And perhaps more crucially, what do the research and evidence suggest?
I imagine nuance, complexity & subtlety are probably no good for media clickbait or deranged MIT physics professors, but could they be closer to the truth?
Some recent research has highlighted not only how many things babies can do that even the most powerful AI today cannot do, but also how babies might be some of the best teachers to help AI learn to be more like us and become more intelligent.
So could babies have more to teach AI than Professor Mad-Max Tegmark? It seems like an intriguing possibility... why don’t you join me & let’s investigate…
What Babies Can Teach AI
People are amazed by what AI can do these days, and I include myself in this. Despite being an AI professional, I'm really impressed by the abilities of modern AI, which even now could replace various jobs and tasks in the workforce.
But were you also aware that despite this, the best AI still can't do things babies can do very easily? Probably not. This is a fact that doesn't quite fit with the popular AI media narratives of god-like or killer-robot AI.
A recent article by Melissa Heikkilä highlighted some research looking into this, and into what babies might be able to teach AI. As she says in her opening paragraph:
Human babies are fascinating creatures. Despite being completely dependent on their parents for a long time, they can do some amazing stuff. Babies have an innate understanding of the physics of our world and can learn new concepts and languages quickly, even with limited information. Even the most powerful AI systems we have today lack those abilities. Language models that power systems like ChatGPT, for example, are great at predicting the next word in a sentence but don’t have anything even close to the common sense of a toddler.
Just let that sink in: ChatGPT doesn't have anything even close to the common sense of a toddler. It's true. Maybe the killer robots won't destroy us all tomorrow after all, or ever. Sorry to disappoint you, Mad-Max, Elon.
Melissa also highlights the work of researchers at New York University who have looked at this more closely and considered: what if we could help AI learn as a baby learns? Could this help AI develop more of the common sense and the understanding a toddler has?
The short answer based on this research seems to be yes, babies can help AI understand the real world a lot. The researchers took the following approach:
Researchers at New York University wanted to see what such models could do when they were trained on a much smaller data set: the sights and sounds experienced by a single child learning to talk. To their surprise, their AI learned a lot—thanks to a curious baby called Sam.
This involved strapping a camera to the baby's head to record all the sights and sounds the baby would experience, as demonstrated here by our great AI teacher Sam.
The research was conducted over 18 months, collecting huge amounts of visual and audio data. As Melissa describes:
The researchers strapped a camera on Sam’s head, and he wore it off and on for one and a half years, from the time he was six months old until a little after his second birthday. The material he collected allowed the researchers to teach a neural network to match words to the objects they represent.
Example footage from baby Sam's head camera (courtesy of Sam's dad & MIT Technology Review)
Are you impressed to see how hard baby Sam (and the cats) are working to contribute to research that helps improve our understanding of AI? I’m very impressed!
The research paper concluded:
Our model acquires many word-referent mappings present in the child’s everyday experience, enables zero-shot generalisation to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child’s input.
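The "joint representation and associative learning" the paper describes can be illustrated with a deliberately tiny sketch. This is purely my own toy example, not the researchers' actual model (which trains a neural network on Sam's real video footage): here, each word vector is simply pulled toward the visual features it co-occurs with, and naming an object means picking the best-matching word. All the names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Pretend visual features for three referents the child sees.
visual = {name: rng.normal(size=DIM) for name in ["ball", "cat", "cup"]}

# Word embeddings start random; learning aligns them with co-occurring visuals.
words = {name: rng.normal(size=DIM) for name in visual}

def train(pairs, lr=0.1, epochs=200):
    """Associative update: pull each word vector toward the visual
    features it co-occurred with in the child's experience."""
    for word, img in pairs:
        for _ in range(epochs):
            words[word] += lr * (visual[img] - words[word])

# The "data set": moments where a word was heard while an object was in view.
train([("ball", "ball"), ("cat", "cat"), ("cup", "cup")])

def name_object(img_features):
    """Pick the word whose embedding best matches the visual input."""
    scores = {w: float(img_features @ v) /
                 (np.linalg.norm(img_features) * np.linalg.norm(v))
              for w, v in words.items()}
    return max(scores, key=scores.get)

print(name_object(visual["cat"]))  # → "cat"
```

After enough co-occurrences, the word and visual representations align, which is the core intuition behind learning word meanings from one child's everyday experience.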
You can read more about this research in this article and in this research paper.
These are impressive and promising results, given how difficult it has been for AI researchers to develop in AI the same abilities that come easily and naturally to humans.
But what does this mean for AI now and in the future?
Is AI Really Intelligent?
Let's leave aside all the hype about AI for a second, and I'd like you to think about something more basic: what do you think intelligence really is? Does the answer seem obvious to you?
Fair enough.
I'd like to share with you a conversation I had recently with one of my future podcast guests, Rita. She's a philosopher, and she has a lot of fascinating things to say about this topic and more. Also, don't be fooled - she's not only a philosopher, she's a technical person too, having done web development coding and more. She also has a very good idea of how modern AI works under the hood. Anyway, Rita asked me this question:
Pranath, do you think current AI is really intelligent?
This was my answer to Rita:
That's a great question Rita, and it depends on what you mean by intelligence. People speak about AGI, or Artificial General Intelligence. But some, including me, say that AGI is a misleading idea. Why? Because what we really mean by AGI is human intelligence; there is no 'general intelligence'. This helps us better answer whether current AI is intelligent, because I'd then say no. Why? Because modern AI like ChatGPT works by learning how to generate language and text from human books and texts. It learns how to string words together that sound like a human, and answers questions by simply predicting the next word in a sequence of words. But that's not the same thing as understanding the world like we do, acting with common sense like a baby can, or knowing what being in the world feels like, as we do.
Rita knows this too, which is why she asked me (to test me, I think - she is a philosopher after all, and they tend to ask good questions), and she is also skeptical about the idea that current AI is really intelligent.
Given that modern AI (LLMs) works by generating text, predicting the next likely word in a sequence, is that really what we mean by intelligence? Does that sound like real intelligence to you?
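To make "predicting the next word" concrete, here is a deliberately crude sketch of the idea. Real LLMs use huge neural networks trained on vast corpora; this toy bigram counter, which I've made up purely for illustration, only captures the bare notion of choosing the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny "corpus" standing in for the books and texts an LLM learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice)
```

The model produces fluent-looking continuations without any notion of what a cat or a mat actually is - which is precisely the gap between statistical prediction and understanding being discussed here.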
I've written more about how modern AI (also known as Large Language Models, or LLMs) works under the hood, in plain English, in Originality on Trial: AI's Challenge to Creative Ownership.
Someone else who questions whether modern AI (based on LLMs) is really intelligent is Yann LeCun, one of the original pioneers of AI with over 20 years in the field, and Meta's head of AI. On what intelligence is, he has this to say:
Every intelligence is specialised, including human intelligence.
Intelligence is a collection of skills and the ability to acquire new ones quickly. It cannot be measured with a scalar quantity.
No intelligence can be even close to general, which is why the phrase “Artificial General Intelligence” makes no sense. There is no question that machines will eventually equal and surpass human intelligence in all domains. But even those systems will not have “general” intelligence, for any reasonable definition of the word general.
Yann also has this to say about whether he thinks modern AI based on LLMs is intelligent:
These are four essential characteristics of human intelligence: reasoning, planning, persistent memory, and understanding the physical world — also animal intelligence, for that matter — that current AI systems can’t do...We’re easily fooled into thinking they are intelligent because of their fluency with language, but really, their understanding of reality is very superficial... They’re useful, there’s no question about that. But on the path towards human-level intelligence, an LLM is an off-ramp, a distraction, a dead end...Most of human knowledge is not language so those systems can never reach human-level intelligence — unless you change the architecture
For balance, it should also be said that not everyone researching AI agrees with Yann. For example, Sam Altman, head of OpenAI, often speaks about AGI, and other researchers remain hopeful that LLM-based AI might reach human-level intelligence. You can decide for yourself which seems more likely.
However, the New York University research summarised here has shown that a very different approach - teaching AI from a baby's experiences - can help AI learn the common-sense understanding that LLM-based AI has so far failed to achieve.
So what if Yann is right? What might this mean for you?
What If AI Isn't Intelligent?
If Yann is right, and I believe he is, then LLM-based AI systems right now are not really intelligent in the way humans are. So what does that mean for you?
Firstly, I think it means you can debunk the crazy media stories claiming super-intelligent AI will save us or destroy us very soon, as people like Mad-Max Tegmark would have you believe. AI is a useful tool, yes; it can do some impressive things, and yes, it is a very powerful tool.
And yes, there are some harmful ways to use it which we should be careful to avoid. But right now, a baby or a dog has more real intelligence than our current LLM-based AI systems.
However, that is not to say that in future we won't develop AI systems, as Yann suggests, that can learn more as humans do and share more of the common-sense intelligence that we have.
Yann also describes his ideas for 'objective-driven AI systems', very different from LLMs, that might achieve real intelligence:
Objective-driven AI systems are built to fulfil specific goals set by humans. Rather than being raised on a diet of pure text, they learn about the physical world through sensors and training on video data. The result is a “world model” that shows the impact of actions. All the potential changes are then updated in the system’s memory. What would be the difference, for instance, if a chair is pushed to the left or the right of a room? By learning through experience, the end states begin to become predictable. As a result, machines can plan the steps needed to complete various tasks.
These new types of AI could learn about the world in the same way the New York University research used baby Sam’s footage to teach their AI about the world.
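The planning loop Yann describes - a world model predicting the effect of each action, with the system planning steps toward a goal - can be sketched in miniature. This is my own illustrative toy using his chair example; the states, actions, and model are all invented assumptions, nowhere near a real learned world model.

```python
# World model: predicts the effect of an action on the state
# (here, the chair's position along one axis of the room).
def world_model(position, action):
    """Predict where the chair ends up after pushing it."""
    return position + {"push_left": -1, "push_right": +1}[action]

def plan(start, goal):
    """Objective-driven planning: simulate actions with the world
    model until the predicted state matches the objective."""
    steps, position = [], start
    while position != goal:
        action = "push_right" if goal > position else "push_left"
        position = world_model(position, action)
        steps.append(action)
    return steps

print(plan(0, 3))  # three pushes to the right reach the goal
```

The key point is that the system never acts blindly: it predicts end states first ("what if the chair is pushed left or right?") and chooses actions whose predicted outcomes serve the objective.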
However, for now this is still mostly theoretical, especially given that most current AI research is still focused on LLM-based AI. Yann, while optimistic long term, cautions about how long it will take to get AI to this level:
Eventually, machines will surpass human intelligence… it’s gonna take a while though...It’s not just around the corner — and it’s certainly not next year like our friend Elon has said.
If Yann and the others in AI who share his view, including me, are right, this could have many other benefits. It suggests we will have more time: more time to think about how AI will integrate into our work and lives, to think carefully about good regulations, and to decide what things we want AI to do or not do.
Do we want a society where AI could take over more work and make more decisions? How would that society work if most people didn’t have jobs?
Having more time to think about these issues, because AI will take longer to reach genuine human-level intelligence, will help you and humanity adjust and adapt better to the changes AI will bring.
Babies still have a lot to teach AI. As long as that's true, I think you should take that as a good sign that god-like AI is not coming to get you next week, or any time soon.
But what’s your perspective? Do you agree, or do you see it very differently? I’d love to know what you think, whatever that is - let me know in the comments.
Thank you for mentioning me in your piece @Pranath! My feeling is that the use of the term AI is a really successful marketing ploy, since we have so many creative works, say of fiction, that exercise our fantasies and fears about AI. By doing this, those putting LLMs in front of the public are pushing those fear and fantasy buttons, and it's very, very effective. But I question whether LLMs are AI, or could ever be AI in this sense. LLMs process data algorithmically. Unless one believes that there is a leap that occurs at a certain scale of data that leads to 'intelligence' of some kind (which some do believe), LLMs don't fit the bill, and never will. This is because the consciousness at the basis of intelligence is not about data processing. I'd say it is more about "care" in the phenomenological, existential sense - it's Martin Heidegger's term, but others have noted similar things. Finally, take a dispassionate, objective look at the products that they are building, the problems they think that they are solving, rather than what they are saying about LLMs (or the hype), and perhaps a different picture emerges for you, as it has for me. Let's keep talking about this, I know we have an upcoming session to do a podcast and I am gearing up. Will be a good conversation, frens.