On the weekend I attended one of the regular Trampoline events in Melbourne. The event is an ‘un-conference’, where no talks, discussions or workshops are planned in advance; instead, attendees get up and offer to speak on this topic or lead a conversation on that one, and the day’s content is filled in on a big board. If the idea sounds intriguing, it is, and the event works much better than I’d have ever guessed, with a lot of really fascinating discussion and thought tumbling out of the day.
In one of the last sessions of the day, one presenter played the short film Humans Need Not Apply by CGP Grey.
Humans Need Not Apply is a short film discussing how advances in Artificial Intelligence (AI) and robotics may render huge volumes of work currently done by humans unnecessary for human hands. One point the film makes is that even if creativity is uniquely human magic (it’s not), an economy based on creativity cannot function: creatives succeed, by definition, by being popular, and not everyone can be popular.
But I wanted to discuss the other point – can human creativity itself be automated? Might AIs take over the role of being creative as well as everything else? This requires breaking the topic down into a couple of sub-points.
First off, where do creative ideas come from? One well-established bit of knowledge around creativity is that people who spend more time ‘leaning against a problem’ – as John Cleese has put it – are more likely to come up with the more creative solutions and ideas. Creativity is a function of time invested in a problem, challenge or project – but why should this be?
Creativity at its simplest is the bringing together of concepts or ideas that are not usually placed next to each other. A creative metaphor takes two distinct things that share a trait we don’t usually consider in tandem, and places them side by side. More simply still, creativity can involve visual juxtaposition (the first person to draw a green horse was being creative, because green horses don’t exist – or didn’t before that first drawing) or word juxtaposition. Being creative with word juxtaposition is relatively easy. “purple marmot thaumaturge” (with the quotes) returns no results in Google. I’m the first person in human history, evidently, to imagine a purple marmot who is also a worker of magic and miracles. However, this creative invention probably won’t sit at the emotional core of a great work of literature any time soon. Ditto for “skydiving cowgirl nurse”, “repressed dinosaur genius”, and “giant Chinese salamander who fixes spaceship engines”.
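This mechanical kind of creativity-by-juxtaposition is trivially easy to automate. A minimal sketch (the word pools are my own invented examples, echoing the ones above, not drawn from any real corpus or algorithm):

```python
import random

# Illustrative word pools -- arbitrary choices for the sketch
adjectives = ["purple", "repressed", "skydiving", "giant", "clockwork"]
nouns = ["marmot", "dinosaur", "cowgirl", "salamander", "librarian"]
roles = ["thaumaturge", "genius", "nurse", "mechanic", "cartographer"]

def juxtapose(rng=random):
    """Return a three-word combination that has probably never
    been written down before."""
    return f"{rng.choice(adjectives)} {rng.choice(nouns)} {rng.choice(roles)}"

for _ in range(3):
    print(juxtapose())
```

A machine running this can churn out ‘novel’ combinations far faster than any human – which is exactly why the interesting part of creativity isn’t the generation step, but the judging that follows.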
These examples illustrate the next step in the creative process, which also feeds into why time is important. Time is needed because quite often the first ‘creative’ ideas we have are not the best ideas we might have. We have to be able to identify what is good and what is not so good, and discard the not so good. On this point, I’m unsure how well any of the non-aware ‘domain knowledge’ classes of AI would fare. A domain knowledge AI, incidentally, is an AI that lacks real intelligence and self-awareness but is able to process a massive amount of information. As I write this article, domain knowledge AIs are already writing newspaper articles (you’ve probably read a short newspaper article written by an AI without knowing it), they are replacing para-legals and lawyers in big law firms, and they are poised to inveigle themselves into medical diagnosis. IBM’s impressive Watson AI, which can understand spoken English questions, is a domain knowledge AI – and incidentally has recently been reduced to the size of a pizza box, making it substantially more portable and affordable, all things considered.
Whether an AI will easily be able to look over a massive amount of creative churn and decide which ideas are the gems among the rocks… well, this I’m unsure of. Although given enough information about human preferences for story tropes and shapes, maybe? It seems a difficult thing to judge one way or the other.
The final step then is weaving all this together in a readable way. Natural language is something AIs are already managing quite well, so the next question would be: can an AI work out the difference between a story that is creative yet accessible and a story that is too creative for its own good? That is, could an AI find the fine balance between surprise and familiarity? This question almost needs to settle on whether an AI could be trained to enjoy fictional stories itself… the AI would need to be able to judge whether a story was enjoyable in a human way.
That also feels like a moot point at the moment. Until AI sophistication tops out, it’s going to be very difficult to know where it might top out… that said, it certainly doesn’t seem outside the realm of possibility that an AI could be trained using learning algorithms to prefer or ‘enjoy’ one type of fiction over another at the most basic level.
And of course, now that I think about it, a story about training AIs to enjoy great works of human literature might itself be an interesting story. Something to mull over… one plus side being, such a tale might even have market appeal for our new computer overlords once they are fully installed.