Why Banning AI Is the Worst Thing You Can Do for Your Safety
What K-Pop can teach you about good AI regulation
Is giving young children access to nail guns a good idea?
Unless you have a very dark sense of humour, you know it’s not a good idea.
Your ability to imagine potential dangers, to feel worry and fear, and to take action to protect yourself from what you fear is essential to your survival.
But here’s a different question for you…
Do you always make your best decisions based on the things you fear?
For kids and nail guns, fear leads you to a good choice.
But should you avoid going to the doctor because you fear what they will tell you about your health? Probably not a good choice.
So you know that sometimes fear leads us to good decisions, and other times fear can lead us to bad decisions.
So how do you know when?
Perhaps you are worried about AI, and you feel it’s important we have the strongest regulation so people are protected. Or maybe you’re not sure.
But do your fears about AI lead to a good decision about AI regulation?
France & EU AI hopes crash into their fears
You know that for many issues, some people will have great hopes about something, while others will fear it.
Immigration is a good example.
I’m sure you’d agree that AI is another great example of this.
A recent article in Politico highlighted a huge conflict in France over this, describing how France’s AI hopes collide with the French love of regulating tech.
France’s AI supporters
In one corner, many in France are trying to encourage and support AI development in the country to help it become a leader in the world.
These supporters include French President Emmanuel Macron, many French companies including leading AI startup Mistral AI, and AI pioneer and ‘godfather of AI’ Yann LeCun from Meta.
The French government agreed with them and joined forces with Germany and Italy to oppose the regulations on AI that the European Parliament suggested.
As French Digital Minister Jean-Noel Barrot said about the proposed AI regulations:
I worry that in the recent past few weeks, the EU Parliament … has taken a very sort of strong stance on AI regulation, using in some sense this AI act as a way to try and solve too many problems at once
He went on to say:
What we want is a regulation that offers both protection for users … and establishes trust, but that is also very flexible enough to allow for the development in the next few weeks, and next few months in France and Europe
France’s AI skeptics
In the other corner are many EU regulatory bodies such as the European Parliament and the European Commission, in particular Frenchman and EU Internal Market Commissioner Thierry Breton, as well as many other players in French civil society.
As Breton said recently about Mistral AI:
Mistral is lobbying — that’s normal…But we are not fooled: it is defending its business, not the general interest
Many of these EU organisations have been hostile to Big Tech companies for many years, believing they usually put profit before the public good and must be tightly regulated.
The French cultural sector has also been very worried that the pro-AI approach led by Macron and others will sacrifice intellectual property rights in the interest of encouraging more AI development.
The president of the French Authors Society SACD said:
This is the first time that France, where copyright was invented, has not defended intellectual property
Nobody is seriously saying there should be no regulation of AI.
The issue is how much, what, and where.
The copyright issue
The issue of copyright perhaps best illustrates what some see as too much or too little regulation for AI.
AI needs data to train on. For models like ChatGPT, that data is text: books and articles, and lots of them, millions in fact.
AI companies say they need to train their models on copyrighted material like books; otherwise, they can’t make AI as valuable, useful, and powerful as ChatGPT.
They also say that copyright is not infringed anyway, because AI only uses copyrighted works to learn about language; the model does not memorise that content or reproduce it in its outputs.
Many AI opponents disagree, saying it is an infringement of copyright and that AI should be prevented from using copyrighted materials by the strong regulation proposed by the EU and others.
I’ve written more extensively about AI and copyright in my article Originality on Trial: AI’s Challenge to Creative Ownership if you’re interested in knowing more about this issue.
Can you imagine what it feels like to be running an EU institution?
One of your most important duties is to protect people and keep them safe, and one way you can do this is by passing regulations.
When regulators can’t keep up
Remember some of the recent technological revolutions, the internet and social media, which happened over the last 20-odd years.
These things happened fast, faster than any technology we have had before. The automobile, by comparison, took around 300 years to develop.
Because they happened so fast, regulators struggled to understand them well enough to pass good regulations.
People had those 300 years to slowly get used to the idea of cars before mass adoption, plenty of time to think about and develop appropriate regulations.
For the internet and social media, there were perhaps 10 years each, and let’s be honest, how tech-savvy is your average politician? Not very.
So basically, big tech companies were able to develop fast, making decisions that would affect millions, with little regulation because regulators could not keep up.
While big tech companies and their platforms have had many obvious benefits, many downsides to unregulated big tech have been well documented.
As a Brookings Institution report, Big Tech Threats: Making Sense of the Backlash against Online Platforms, said:
How online platforms are currently engineered have come under fire for exacerbating polarization, radicalizing users, and rewarding engagement with disinformation and extremist content
The regulator’s revenge
Authorities had little control over the last tech revolutions because of the speed of development and their own inability to keep up, and they know it.
You know it.
Now you have AI, which makes huge advances not in decades, or even years, but months and weeks, faster than anything before.
You have probably been amazed at the impressive abilities of AI like ChatGPT.
Authorities fear losing control of how AI will be applied to society, and it’s fair to say the potential to transform society with AI is huge.
Given how most authorities failed to regulate past Big Tech revolutions, you can understand why they are determined not to make the same mistake again, and why the safest thing to do this time feels like more regulation, not less.
What would the consequences be for AI?
Let’s use the copyright issue as an example.
So let’s say you pass the EU law as proposed and ban the use of copyrighted materials for training AI.
But what happens the day after you ban it?
What happens the day after you ban AI at home
Yann LeCun, Meta’s chief AI scientist, explains very clearly what would happen if you banned the use of copyrighted material for training AI in your country:
I’ll tell you right now, the AI industry stops. It can’t work without it
I’ve worked as a Data Scientist and AI Consultant for many years and can tell you he is right.
Why? Essentially, it’s about the data you use to train AI.
More data is better. But also, more diversity in your data is better.
What Yann is correctly saying is that without copyrighted data, you won’t be able to produce an AI as powerful as ChatGPT, because a model needs not only a lot of text data but also the full diversity of that data to learn well.
OK, so maybe you say: I don’t care, it is wrong to use copyrighted materials to develop AI.
Even if this stops AI development or slows it down, you don’t care; we need to protect people, for example against copyright infringement.
You need to ban it.
You know how big tech has gotten out of control before and caused much social harm, we cannot afford to make the same mistake again.
This helps keep people safe.
But does it? What else happens the day after you successfully ban AI?
What happens the day after you ban AI — everywhere else
The thing is, other countries exist.
Other countries have different laws, and some will have laws that allow AI to develop more freely, for example by using copyrighted materials.
We live in a globalised connected world that crosses the borders of countries.
Just because something is banned in one country doesn’t stop it from being developed anywhere else, and it can be very difficult to prevent people from your country from accessing things in another country that might be banned at home.
As you know, the internet exists.
So basically, heavy regulation or a ban might stop AI development at home, but it will not stop AI from developing elsewhere, or from being accessed by people in your own country.
What K-Pop can teach you about AI regulation
If you restrict the development of something yourself, you are not a leader in it, so you don’t have much influence on what happens in it beyond your borders in other countries.
As an example, let’s say North Koreans would make the best K-pop bands, better even than South Korean ones.
But we will never know, because North Korea is not a leader in K-pop, it bans it, even though many of its citizens can easily get around this ban and watch South Korean K-Pop anyway.
Because of this, North Korea not only does not influence how K-Pop develops worldwide, but it also misses out on the huge economic and cultural benefits being a leader in something brings.
Maybe the EU has good ideas about how AI should be regulated, but who cares when the EU is not an AI leader and has no AI industry anyway because of all its regulations?
As well as missing out on shaping how AI is regulated worldwide by not being a leader or having an industry, the EU and France would be missing out on the huge economic benefits AI is likely to bring to countries that develop it.
If you can imagine the economic benefits North Korea is missing out on by banning K-Pop development, can you imagine the huge economic benefits the EU is missing out on by banning AI development?
So not only does banning AI not make you safe, but because it’s going to happen somewhere else in a globalised world it will impact you anyway.
By banning it, you stop yourself from getting the economic benefits of AI, as well as having little say or influence over what happens with it.
Using our minds not our fears to decide on AI regulation
Coming back to how we started, I hope you can see that the kind of strong regulation that bans AI development paradoxically makes us less safe, and less able to control and understand it.
This is where you need to be ruled not by your fears, but by the consequences of acting on your fears, which I hope you might agree are not good here.
You need to use your mind, not your fears when deciding on AI regulation.
Are there AI risks as well as benefits? Sure, absolutely.
But how can you understand and mitigate those risks if you have banned AI development and you don’t have an AI industry?
So that’s why Macron, Yann LeCun, and others are right.
Yes, you need some AI regulation, but not so much that you destroy your ability to develop it, understand it, and be a leader in it so you can better influence and control it.
Otherwise, France and the EU might end up making the same mistakes they made with the internet and social media.
Why is there no EU big tech? Why is there no EU Google or Meta?
An article in Foreign Policy magazine explains why:
There’s a chicken-and-egg dynamic here—American companies are the ones targeted by regulations because they are the firms that have succeeded and grown large enough to matter. The lack of significant European tech giants, then, is also one of the reasons the EU’s regulations are as harsh as they are in the first place
Will the EU fail to learn from the mistakes of the past, over-regulate AI, and so fail to control it and keep people safe once again?
Or is North Korea more likely to embrace K-pop and become a world leader in it?
Time will tell...
Like to keep informed & updated? Subscribe for my free weekly email newsletter ‘The FuturAI’ on the latest news & developments, helping you understand how AI is impacting society and humanity now & in the future.
But what’s your perspective? Do you agree? Do you think we need more flexible AI regulation that allows its development? Or do you have a very different perspective?
I’d love to know what you think, whatever that is, let me know in the comments and let’s continue this important discussion about AI regulation and safety.