In the first part of this two-part blog series, we explored the convergence of art and artificial intelligence, where pioneering creators have found unique and exciting ways to use AI to produce works of art that would not otherwise have been possible. A lot of the magic of AI-generated art stems from the very nature of how it’s made: algorithms can absorb and process terabytes upon terabytes of data, using deep learning and other techniques to create fascinating, often surreal artworks. Artists like Refik Anadol have used AI to create gorgeous light projections, 3D art, and data sculptures that fill entire rooms, cover entire buildings, and feel alive.
That was the previous article, and if you haven’t read it, you should definitely check it out. All of those were examples of a human artist using AI to create art that they had envisioned or were directly involved in making. But as AI-generated art becomes more mainstream, we need to start asking ourselves not just how we can use algorithms to make art, but how algorithms can create art themselves.
The very notion seems more rooted in science fiction than reality, like an abstruse Arthur C. Clarke novel about an AI inhabiting the mind of an artist. But the crazy thing is, it’s already started happening. Computer scientists around the world have been devising algorithms that can process images of paintings and human art and generate their own interpretations of them. In 2018, an artwork created by one such algorithm sold for an astounding $432,500.
If one thing’s for sure, it’s this: ‘CG’ is taking on a whole new meaning today.
AI as the Artist
One of the biggest advances for AI-generated art came in 2014, in the form of a research paper by computer scientist Ian Goodfellow. He and his colleagues introduced a new class of algorithms called Generative Adversarial Networks (GANs), which pit two models against each other to produce data. The ‘Adversarial’ part refers to the way these two models interact: the first one — the Generator — produces candidate images from random input, aiming to imitate the styles and tones of a training set (in this case, portraits from the past 500 years). The second one — the Discriminator — analyses each candidate and judges whether it could plausibly belong among the original training images. Each model improves by trying to outdo the other: the Generator learns to produce more convincing images, and the Discriminator learns to spot fakes more reliably.
Now, this doesn’t seem like a big step up from the way many artists have already been using AI so far — it’s still heavily reliant on human intervention to feed the input images and curate the output images afterwards. But it differs in one critical aspect: the algorithm itself determines which images fit with the theme provided. In that regard, GANs exercise a fair bit more autonomy than algorithms that just generate images randomly.
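To make that Generator-versus-Discriminator tug-of-war concrete, here’s a deliberately tiny, hypothetical sketch in Python. Instead of images, the “art” here is a single number: the real data cluster around 5.0, the Generator starts out producing numbers near 0, and the two models nudge each other with simple gradient steps until the Generator’s output drifts toward the real data. Everything here (the one-parameter models, the learning rate, the toy data) is made up for illustration; a real GAN uses deep neural networks on both sides.

```python
import math
import random

random.seed(0)
REAL_MEAN = 5.0  # the "portraits" the GAN trains on, reduced to one number

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator: D(x) = sigmoid(w*x + b), its estimate that x is "real"
w, b = 0.1, 0.0
# Generator: produces mu + noise; mu is its only learnable parameter
mu = 0.0
lr = 0.05

for step in range(2000):
    real = REAL_MEAN + random.gauss(0, 0.1)
    fake = mu + random.gauss(0, 0.1)

    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    # gradients of -log(dr) - log(1 - df) with respect to w and b
    gw = -(1 - dr) * real + df * fake
    gb = -(1 - dr) + df
    w -= lr * gw
    b -= lr * gb

    # --- Generator update: push D(fake) toward 1 (fool the Discriminator) ---
    df = sigmoid(w * fake + b)
    # gradient of -log(df) with respect to mu (fake = mu + noise)
    gmu = -(1 - df) * w
    mu -= lr * gmu

# After training, mu has drifted from 0 toward REAL_MEAN
```

The adversarial dynamic is visible in the two updates: the Discriminator’s gradient rewards separating real from fake, while the Generator’s gradient only ever pushes its output toward whatever the Discriminator currently labels “real.”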
The Portrait of Edmond Belamy, the aforementioned AI-generated painting that sold for $432,500 at Christie’s in New York, is a product of a GAN. The creators of the painting trained the algorithm on a set of 15,000 portraits from various periods spanning the 14th to the 19th centuries. But the painting—which features a man wearing a black overcoat and a white shirt—looks nothing like any painting from any of those centuries. It’s a distorted image, lacking any distinctly human features that stand out. In that sense, it looks more like a contemporary abstract piece than an 18th century-style painting.
One of the painting’s creators, Hugo Caselles-Dupré, admits that the AI still does not reliably recognise human faces and features. Although it’s the Discriminator’s job to filter out images that don’t bear resemblance to their reference data, he says, “for now it’s more easily fooled than a human eye.”
Is AI capable of creativity?
GANs put an interesting twist on the standard single-input, single-output model of algorithm by introducing a second component, the Discriminator, which curates the output to align it better with our desired result. But researchers at the Art and Artificial Intelligence Lab at Rutgers University, headed by Ahmed Elgammal, are working on what they call a CAN: a Creative Adversarial Network.
It uses the same two-sided nature of the GAN model, but with an extra element of artistic curation. “On one end,” Elgammal explains, “it tries to learn the aesthetics of existing works of art. On the other, it will be penalized if, when creating a work of its own, it too closely emulates an established style.”
The idea behind this was to eliminate any works that could be considered (however unintentionally) copies or unoriginal. To rein this in, however, the AICAN (Artificial Intelligence Creative Adversarial Network) is carefully tuned to produce art that isn’t too far removed from what it’s been fed. This ensures that its creations don’t stray so far as to seem unacceptable or weird, but aren’t so similar that they could be considered ripoffs.
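One common way to express Elgammal’s “penalized for emulating an established style” idea is a style-ambiguity term: a classifier guesses which established style a generated work belongs to, and the generator is rewarded when that guess is maximally uncertain, i.e. when the classifier’s style distribution is close to uniform. Here’s a minimal Python sketch of such a term; the four styles and the probability vectors are invented for illustration, and a real CAN would combine this with the standard adversarial loss.

```python
import math

def style_ambiguity_loss(style_probs):
    """Cross-entropy between the style classifier's output and the
    uniform distribution: low when the work fits no single style,
    high when it is confidently pinned to one established style."""
    k = len(style_probs)
    return -sum((1.0 / k) * math.log(p) for p in style_probs)

# A work the classifier confidently pins to one style...
confident = [0.97, 0.01, 0.01, 0.01]
# ...versus a work that is stylistically ambiguous.
ambiguous = [0.25, 0.25, 0.25, 0.25]

# The CAN's generator is rewarded for ambiguity: lower loss is better here.
print(style_ambiguity_loss(ambiguous) < style_ambiguity_loss(confident))  # True
```

Because the generator still faces the ordinary Discriminator as well, it can’t minimise this term by producing arbitrary noise: it has to stay recognisable as art while dodging every known style label, which is exactly the “novel but not too novel” balance described above.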
They introduced the algorithm to over 80,000 images of the most important or archetypal works in the Western art world in the last five centuries, without a focus on any one style or genre. And with just a click of the button, these were the images it produced.
It’s hard to tell that they were made by an AI, right? They could just as well be the work of a modern or contemporary abstract artist. To test this, Elgammal and his team created a set of images consisting of AICAN’s works mixed with paintings by actual human artists and showed them to people, asking them which ones they thought were generated by a machine. What they discovered was that people really couldn’t make out the difference. They thought that images produced by AICAN were created by a human artist 75% of the time.
This really does call into question the notion that art is the exclusive domain of living, breathing people such as ourselves. If people look at two paintings and can’t tell which was made by a person and which by an algorithm, is there even a justifiable difference between machine-made art and human-made art? And more importantly, if we accept that one cannot effectively be distinguished from the other, what does that say about creativity?
It’s long been believed that art is born of a creative mind: a mind that can manipulate reality to its own rules and fashion something that didn’t exist in the world. Why is this any different? Can we actually say that machines can possess creativity? And if they can, then what else do they possess that, by all counts, only we humans seem to have?
It brings to mind the Turing Test, a test of whether a machine can exhibit intelligent or sentient behaviour that’s indistinguishable from that of a human. In this test, devised in 1950 by the British mathematician Alan Turing, an evaluator engages in text-only conversations with two separate subjects, one a human and the other a machine. The evaluator knows that one of the subjects is human and the other isn’t (but not which is which), and must determine from those conversations which of them is the machine. If the evaluator can’t reliably tell, the machine has passed the Turing Test. Basically, it’s a way to answer the question, “Can machines think?”
Now, I’m certainly not going to go so far as to suggest that AICAN or any other GAN-based algorithm possesses any sort of real consciousness or sentience. These algorithms can’t even reliably recreate a human face, which is how you get weird, distorted creations like The Portrait of Edmond Belamy. AI-generated art is getting somewhere, but it’s not there yet.
The Future of Machine-Generated Art
For now, art generated by GANs and CANs is still very much human-driven. “Just because machines can almost autonomously produce art,” Elgammal says, “it doesn’t mean they will replace artists. It simply means that artists will have an additional creative tool at their disposal, one they could even collaborate with.”
The most important thing that sets human and AI-generated art apart, at least as it seems right now, is the fact that an algorithm lacks intent. All it’s doing is learning patterns from reference images and generating artworks according to how it’s been programmed. There’s no real intention to create a work of art there, just the need to follow instructions. The intent—the need—to create something beautiful and unique is what art is all about. It wouldn’t exist if humans didn’t feel some inexplicable need to express themselves in anything more than simple prosaic language.
AI-generated art is still in its infancy, and although it seems to have passed the Turing Test of abstract art, it still has a long way to go to prove, at least from a philosophical or scientific standpoint, that it’s anything more than a computer program doing what it’s programmed to do. When that happens, though, we might be in for a whole new era of art, one that’s not entirely dictated by the human palette.