The Uncomfortable Truth About AI Creativity: What Creatives Refuse to See
What Creatives Get Wrong About Generative AI's Rise
Can AI be creative? What does that even mean? And if AI can be creative, how does it compare to human creativity, and could we even measure that?
These questions, arising from the ongoing development of generative AI, become harder to ignore as AI grows more capable.
There's much to unpack here. For example, arts vs. science: how do different disciplines look at the world? For the arts, the subjective and personal, what we feel, is often most important. For the sciences, it is usually more about what works, what can be measured, and what is objective.
These disciplines and perspectives often remain segregated, and when they do combine, it's often a clunky compromise rather than a genuine synthesis; the differing worldviews of the arts and sciences usually remain unchanged within each discipline.
Yet this is a fundamental and catastrophic error. I have trained (and hold degrees) in both the sciences and the arts, and I feel this segregation of worldviews, promoted by mainstream academia, has been unhelpful, arbitrary, and counterproductive for improving our knowledge of the world generally.
Here we will explore how applying this segregated arts-vs-science worldview to AI and creativity has led to considerable oversights and mistakes in how many creatives fundamentally understand AI and creativity.
AI Beats Humans on Creativity Tests
The MIT Technology Review reported on a recent study showing that AI achieved higher average scores than humans on a test commonly used to assess human creativity.
The article & study were careful to distinguish between passing the tests and whether AI is being creative in a way that we understand:
The findings do not necessarily indicate that AIs are developing an ability to do something uniquely human. It could just be that AIs can pass creativity tests, not that they’re creative in the way we understand.
The test involves trying to come up with different solutions to a problem, specifically:
Researchers started by asking three AI chatbots—OpenAI’s ChatGPT and GPT-4 as well as Copy.Ai, which is built on GPT-3—to come up with as many uses for a rope, a box, a pencil, and a candle as possible within just 30 seconds.
While the results showed that on average AI performs better than humans on this task, the study also notes that the best humans do better than the average AI as well. The authors were also careful to specify the limits of their conclusions in this study:
While the purpose of the study was not to prove that AI systems are capable of replacing humans in creative roles, it raises philosophical questions about the characteristics that are unique to humans, says Simone Grassini, an associate professor of psychology at the University of Bergen, Norway, who co-led the research. “We’ve shown that in the past few years, technology has taken a very big leap forward when we talk about imitating human behaviour,” he says. “These models are continuously evolving.”
Another academic, Ryan Burnell of the Alan Turing Institute, also cautioned about the conclusions of this study:
The chatbots that were tested are “black boxes,” meaning that we don’t know exactly what data they were trained on, or how they generate their responses, he says. “What’s very plausibly happening here is that a model wasn’t coming up with new creative ideas—it was just drawing on things it’s seen in its training data, which could include this exact Alternate Uses Task,” he explains. “In that case, we’re not measuring creativity. We’re measuring the model’s past knowledge of this kind of task.”
Burnell’s comment, while a common response, does raise questions about his own assumptions. What's the difference between an AI drawing on its 'training data' and a human drawing on their 'training data' to be creative?
Apparently, according to Burnell, it's creative if a human does it but not if an AI does? Doesn't that feel a bit human-biased, perhaps? A bit anthropocentric?
Where's the evidence to support Burnell's assumptions? That evidence might be hard to come by, given how little we understand about how the human brain works and how creativity arises in it, let alone how AI works. As a recent paper highlighted:
The neuroscientific study of creativity is stuck and lost. Having perseverated on a paradigm that is theoretically incoherent, there is little we know for sure about the brain mechanisms of creativity.
Burnell conveniently ignores the fact that human brains are also still essentially 'black boxes' we don't fully understand.
Leaving this aside, the study results do seem quite intriguing and suggest we should take the possibility of AI creativity much more seriously.
However, we should also be careful to note, as the study suggests, that passing human creativity tests does not imply AI is doing the same thing as humans internally, or performing creative tasks in a way that is similar or understandable to humans.
Creatives Respond to AI Beating Humans at Creativity Tests
Of course, creatives are going to vary in how they think about and respond to anything, including how they think about AI and creativity, and how they would respond to a study like this.
However, I do find there are some common assumptions and frames of reference many creatives often use to understand AI.
With that caveat, I'd like to use as an example the response of Jasper Kense of the UX Collective to this study in his article ‘Being creative in an age of genAI’.
Jasper makes some great points in his article, acknowledging the real impact that AI is having on the creative industry right now, and the understandable worry some creatives might feel about this and potential job displacement.
He's also rightly cautious about what the implications may be, as he says:
While the nascent stages of generative AI’s influence on the global industry are becoming apparent, the true extent of its impact is still unfolding.
He also makes a great point against naively assuming AI is some magic wand that can solve every problem, when it can't actually do everything a human can, at least not yet:
Sometimes it is too easy to say: “Let’s use AI!” — Just like your manager might. We need to find the right tools at the right time.
When Creatives Confuse Feelings for Reality
However, he then goes on to make several other points that seem more problematic, yet are sadly common among many creatives.
For example, he makes the following assumptions about AI in the technical vs creative industries:
While it might be good to find patterns for certain industries, like data science, for creative industries it is but a tool. While it might be helpful for a designer to find 100s of variations of a toothbrush design, having one creative designer will give you the most unique design. A designer can put the generative capability to their advantage, leveraging visualization tools like Dalle2 and Midjourney. They can explore what the world has to offer. But to truly be creative, one would need to mostly explore their ideas and experience.
Firstly, it is somewhat hilarious that he seems to believe AI is more than a tool for some other industries, but 'just a tool' for creatives. Newsflash, Jasper: it's a tool for everyone.
Secondly, his belief that 'having one creative designer will give you the most unique design' raises many questions.
Who decides what the most 'unique design' is? It's one thing to use one's subjectivity to create works of art; it's quite another to use one's subjective beliefs to make grand statements about all creativity.
The way ‘unique design’ is used here reminds me of a recent great article on naive realism and ‘self-evident truths’: claims stated with total confidence that, on examination, appear to lack much evidence or basis. This is perhaps one area where having some objectivity, and dare I say a little evidence, can be helpful. The lack of it is perhaps revealing.
Does Human Creativity & Uniqueness Even Exist?
The idea of human uniqueness and creativity may be more a myth than a reality.
Mark Twain said about originality:
For substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them
Penelope Alfrey in her MIT article ‘Petrarch’s Apes: Originality, Plagiarism and Copyright Principles within Visual Culture’ also said:
The problem is that plagiarism permeates everyday life. It is not only accepted, it is encouraged and integral to the creative life. You depend on it, for you learn through copying others and you use it to reinforce social bonds. Thus it is the basis of any art or design training. Yet paradoxically notions of originality are also entrenched in the creative process, as an ideal, no matter how unrealistic.
I've written much more about the dubious basis of creativity and creativity ownership and how it relates to AI in ‘Originality on Trial: AI’s Challenge to Creative Ownership’.
However, what is the point of the obsession with human uniqueness in creativity, if ultimately creatives are mostly there to produce works and other creative outputs as a product, and if AI can produce these products of sufficient quality that people are happy to pay for them?
Sneering that AI isn't being 'truly creative' seems, in this light, a little futile, puerile, and a bit academic at best. Yet hidden within Jasper's subjectivity are some (perhaps accidental?) observations that do hold some weight of evidence.
For example, current AI tools cannot fully replicate or replace what creatives do. He rightly points out that tools like Dalle2 and Midjourney are limited in what they can do, so a human is still required to produce the higher-quality results needed, for example, in professional design work.
This is something objectively true.
But for how long? That's the thing about AI: it's not a linear technology like most of our previous technologies, it's an exponential technology.
Linear vs Exponential Technologies
Most creatives, and most people, are used to linear technologies, which get gradually, incrementally better over time. Your iPhone gets a bit thinner, faster, and brighter with each iteration. Adobe Illustrator or Photoshop gains new editing tools or compositional features. These are linear, incremental technologies, as most technologies are.
Linear technologies improve at a roughly constant rate.
AI is an Exponential Technology. Each iteration is not merely an incremental improvement; the rate of improvement itself is increasing.
For example, the jump from GPT-3 to GPT-4 brought far superior reasoning abilities and multimodal understanding (GPT-4 can understand images as well as text); these were huge improvements, not incremental ones.
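To make the distinction concrete, here is a minimal Python sketch contrasting a fixed per-iteration increment with a compounding one. The function names, starting value, step, and growth rate are all made-up illustrative assumptions, not measurements of any real technology:

```python
# Illustrative sketch: linear vs. exponential improvement.
# All numbers below (starting value, step, growth rate) are invented
# for illustration; they do not measure any real technology.

def linear_capability(start: float, step: float, iterations: int) -> float:
    """Linear technology: each iteration adds a fixed increment."""
    return start + step * iterations

def exponential_capability(start: float, rate: float, iterations: int) -> float:
    """Exponential technology: each iteration multiplies capability by a
    constant factor, so the absolute gain per iteration keeps growing."""
    return start * (rate ** iterations)

for n in (1, 5, 10):
    lin = linear_capability(100, 10, n)        # +10 per iteration
    exp = exponential_capability(100, 1.5, n)  # x1.5 per iteration
    print(f"iteration {n:2d}: linear = {lin:7.1f}, exponential = {exp:7.1f}")
```

The point of the sketch: early on the two curves look similar, which is why exponential change is so easy to mistake for incremental change; after enough iterations the compounding curve dwarfs the linear one.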
Unfortunately, Jasper and many other creatives fail to understand the exponential nature of AI improvement, and this shows up in many ways in the way they see the world.
For example, assuming that 'having one creative designer will give you the most unique design'. Yes, perhaps today - but how can you assume this will be true next month or next year, or assume this will be true forever as Jasper and many creatives do - when AI is improving at an exponential rate?
That is the weakness of relying on feelings, as many creatives do, to the exclusion of more objective reasoning about what you are trying to understand, especially when trying to predict the future, as he defiantly attempts:
The last year has made clear that we, as a design community, should not feel feared by the developments. We should rather feel empowered by the new shortcuts in our workflow. But creativity will always stay with humans.
Blind assertions of human superiority, with little understanding of what an exponential technology like AI is, and presuming to be able to know the future, are sadly not uncommon sentiments among many creatives as he continues:
As the design community navigates this evolving landscape, it is crucial to recognise that while AI can mimic certain aspects of creativity, it cannot replace the authentic, groundbreaking ideas that stem from the rich tapestry of human experience. The future is not a dichotomy between humans and AI but a collaboration that propels creativity.
Yes, Jasper, that may well be the limits of current AI, but nobody gets to define the limits of exponential technology like AI for the future and all of eternity, least of all those who don't even really understand it.
Scientific vs Artistic Ways of Dealing with Uncertainty
What's also quite bizarre is that Jasper seems to forget his more cautious, more realistic opening statement later on:
While the nascent stages of generative AI’s influence on the global industry are becoming apparent, the true extent of its impact is still unfolding.
Indeed, quite right. The truth is, Jasper, we don't know the future, let alone what AI will be able to do.
However, holding onto uncertainty can be difficult; that's the uncomfortable truth most creatives are refusing to accept about AI.
Scientists have their own way of creating certainty out of uncertainty: theories, laws, and so on, while remaining open to new evidence. This is how scientists attempt to create knowledge.
The tendency of creatives to believe their feelings and subjective experiences are the best source of knowledge when dealing with uncertainty may ultimately be their undoing when trying to predict the future of an exponential technology like AI.
What feels right, or what feels reassuring to us, is not likely to be a good predictor of what is likely to happen in the future with AI and creativity.
While humans have been a tool-using species for thousands of years, the current exponential rate of change we have with AI is unprecedented in human history. This is hard, but not impossible, to understand.
Yet it's also important to acknowledge these changes could feel like a threat to creatives. But what exactly does AI threaten?
Tristan Wolff recently wrote an insightful article about this topic, ‘Artificial Intelligence & The Misconception Of Creativity’, in which he said:
AI threatens creative egos. People who identify so strongly with their supposed creativity that they feel personally threatened by a machine intelligence that appears to deliver creative results. In this sense, the threat is an illusion. It is an ego trip.
Creatives who continue to assume that AI will improve only incrementally, that continue to refuse to see that AI has no predefined limits on what it might become, are not preparing themselves for a future they cannot control or predict.
Creatives that assume they are inherently superior to AI just because they are human, and that they always will be, seem to be asking for trouble that is probably well on the way to them.
Hubris doesn't tend to end well.
But what’s your perspective? Do you agree? Do you feel artists are right to be angry? Or do you have a very different perspective?
I’d love to know what you think whatever that is, let me know in the comments and let’s continue this important discussion about AI and creativity.