By Aubrey Sanders
Imagine a world in which feathered fish swim through prismatic seas, and where dog-headed humans walk among temples of light. Imagine that every surface glows, as if from within, with an intrinsic luminescence: cyan, emerald, opaline. In this world, each building’s buttresses and colonnades recede into impossibly faraway horizons, as if two mirrors had been turned to face each other and reflect infinity. Imagine that nested in the shadows of every fractal nook and crevice, a watchful eye peers back at you.
This mesmerizing and unsettling place is our own world as envisioned by DeepDream, Google’s artificially intelligent image recognition software.
On June 17, software engineers Alexander Mordvintsev and Mike Tyka, with software engineering intern Christopher Olah, sent shockwaves across the internet when they released the first images of the AI’s psychedelic dreams. The name of their project? “Inceptionism.”
The pictures were produced by an Artificial Neural Network (ANN) trained to identify specific features such as animals or buildings in photographs. To teach the ANN to classify images, the team showed it millions of examples and adjusted its parameters until they received the responses they desired.
Each ANN consists of 10-30 layers of artificial neurons that pass information from one layer to the next. An image enters the network’s first layer, which detects the most basic features of the picture–its edges and outlines–and passes its assessment on to the next layer. Each successive layer assembles those features into something more abstract, until the final layer produces an output that represents everything the network could infer about the image it was shown.
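The layered hand-off described above can be sketched in a few lines. This is a toy illustration, not Google’s code: the layer sizes and random weights are invented for demonstration, and the point is simply that each layer transforms its input and feeds the result to the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(x, 0.0)

# Hypothetical layer sizes: a flattened 8x8 "image" passes through
# two hidden layers down to scores for 5 made-up categories.
layer_sizes = [64, 32, 16, 5]
weights = [rng.normal(0, 0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image_vector):
    activation = image_vector
    for w in weights[:-1]:
        activation = relu(activation @ w)  # each layer feeds the next
    return activation @ weights[-1]        # final layer: one score per category

scores = forward(rng.normal(size=64))
print(scores.shape)  # → (5,)
```

Real image-recognition networks use convolutional layers and learned weights, but the top-to-bottom flow of information is the same.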
When the team tested the ANN to determine how well it could understand the images fed to it, they were surprised to discover that it could actually create images, too.
“[We] train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation),” the Google team wrote on their research blog. “But how do you check that the network has correctly learned the right features?”
Rather than feed the network an image of a starfish and ask for it to be regurgitated, for example, the team provided the network with a blank canvas of static noise and asked that the ANN produce its interpretation of a starfish on that canvas.
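The underlying trick is gradient ascent on the input: starting from random noise, the pixels are repeatedly nudged in whichever direction raises the network’s score for the target class. The sketch below is a deliberately minimal stand-in, assuming a single linear scoring function (so the gradient with respect to the image is just the weight vector); the “starfish” template is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
template = rng.normal(size=64)       # stands in for learned "starfish" features
image = rng.normal(size=64) * 0.01   # blank canvas of faint static

def score(img):
    # How strongly the toy "network" sees a starfish in the image
    return float(template @ img)

before = score(image)
for _ in range(100):
    image += 0.1 * template          # gradient ascent on the input pixels
after = score(image)

print(after > before)  # → True: the noise has drifted toward the pattern
```

In the real system the gradient is computed by backpropagation through all the layers, which is why the generated images inherit the network’s learned visual vocabulary.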
The resulting image offers a glimpse into the inner workings of the AI’s deep learning process. In some cases, the network effectively demonstrates its grasp of the fundamental qualities of an object.
In other instances, the ANN fails to differentiate between the features that are essential to an object and those that aren’t. When asked to show what it thought dumbbells looked like, the ANN succeeded in visualizing weights, but only within the grip of a disembodied arm. By testing the network in this way, the team was able to determine that it had never seen a photo of a dumbbell without a weightlifter’s arm holding it.
Here’s where it gets interesting. The engineers took the machine’s creative process one step further by feeding the images it produced back into its input layer, creating a feedback loop. If on its first pass the ANN thought it recognized the shape of a dog in a picture of white noise, then on its second, third, and fourth passes it emphasized that dog with greater assertion, until finally the dog’s image emerged from nowhere.
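That feedback loop can be sketched as well. In this toy version (again, invented for illustration, not the DeepDream code), two hypothetical patterns compete in an image of faint static; on each pass, whichever pattern the “network” responds to most strongly is emphasized, so the early winner snowballs.

```python
import numpy as np

rng = np.random.default_rng(2)
patterns = rng.normal(size=(2, 64))   # e.g. made-up "dog" vs "pagoda" features
image = rng.normal(size=64) * 0.01    # static noise

for _ in range(20):
    responses = patterns @ image                    # how strongly each is "seen"
    winner = int(np.argmax(np.abs(responses)))      # the pattern seen most
    # Emphasize the winning pattern, then feed the image back in
    image += 0.1 * np.sign(responses[winner]) * patterns[winner]

final = np.abs(patterns @ image)
print(final[winner] > final[1 - winner])  # → True: the first impression dominates
```

This runaway amplification is why DeepDream’s final images are saturated with whatever the network was faintly inclined to see at the start.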
So, in an image that appears to our eyes to be nothing but empty static, the neural network can see an entire hallucinatory fantasia of towering pagodas and animal hybrids.
The human experience of dreaming reveals to us the uncharted cosmos of fears, wishes, and loose associations that lie sealed beneath the delicate film of our consciousness. Upon waking, our senses muddle the visions that, seconds before, surrounded us with a strange and quasi-divine coherence.
To say that Google’s AI can dream is, perhaps, a misnomer. It seems unlikely that anybody would actually regard DeepDream’s visualizations as evidence for the machinations of a dormant robotic subconsciousness. After all, the images do not emerge from spontaneous thought–they emerge as the result of human input. At the root, they are biased amalgamations of everything Google’s software engineers have ever shown to the neural network.
But there is an eerie resemblance between the ANN’s “dream” process and our own: memory forms the basis for both. If we consider our unconscious minds the blank canvas, or white noise, can’t it also be said of us that we project biased distortions of our own experiences onto that space?
Engineers have made great strides in deep machine learning with the development of speech recognition and image classification software. As researchers continue to improve upon previous models, neural networks become sharper tools, increasingly effective at making sense of the world around them. Yet, despite impressive advances in high-dimensional machine thought, the capacity to dream remains a privilege reserved for the living. At least for now.