Oh! Hello, C-3PO! - Algorithm Week

By Jess Goulart

Photo courtesy of Saad Faruque.

What if you could jump into the backseat of your car with your morning newspaper and cup of coffee while it sped you autonomously to work? A self-driving machine devoid of human error and distraction means fewer automobile-related deaths… and more time for the funnies.

Now let’s say a computer existed that could analyze cancer research with the same combination of logic and intuition as a human, only better, and not subject to annoying biological limitations like sleeping, eating, or forgetting. Boom! Cured.

These tantalizing applications are at the root of the ongoing quest for Artificial Intelligence, along with our species’ trademark curiosity and natural thirst to create smarter, better computers. Because, let’s face it, who doesn’t want their very own C-3PO?

Though the history of AI is replete with failures, a recent revamp of an old technique called Deep Learning (DL) is facilitating intriguing advancements and sparking the interest of tech giants like Google, which seems more or less intent on ruling the world via robots.

DL is one technique within Machine Learning (ML), which is itself a sub-field of AI. ML is broad and has a wide variety of applications, some of which you already interact with on a daily basis. For example, when you search for something on Google, smart little computer robots (called spiders) crawl and index a massive number of pages so that the most pertinent results can be filtered to the front. Your photo collection on Google Plus uses ML to identify specific objects when you search for them, and the speech recognizer on Android phones uses ML to identify your voice. Apple phones use divine intervention.

Just kidding. They use smart little computer robots too.

Within ML, Deep Learning is a specific technique that can be used to teach computers even fancier, more complex behaviors, like recognizing what a cat looks like without being explicitly programmed to. Humans use previously stored information to accurately predict (recognize) that the furry, small, four-legged, two-eyed, purring thing they are petting is a “cat,” as opposed to a “helicopter.” DL algorithms, or sets of algorithms, work in the same way.

“The reason why we make the connection with the brain is that the deep learning systems are very often so-called neural networks, which is the computer equivalent of what the human brain uses. Memories are stored in the strength of the connection between neurons, and those are the things being modified when you run a program,” says Yann LeCun, an expert in the field and Silver Professor of Computer Science and Neural Science at the Courant Institute at NYU.

There are several layers of these neurons (hence the term Deep Learning), each responsible for recognizing specific aspects of the information being processed and then passing its output up to the next layer. To train a machine for visual object recognition, you feed the algorithm millions of images of the category you want it to learn until it can accurately predict which objects fall into that category.
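
For the curious, here is a minimal sketch in Python (using NumPy) of how stacked layers pass information upward. The layer sizes, random weights, and the “cat vs. helicopter” reading of the output are invented for illustration; a real vision system learns its weights from those millions of images.

    import numpy as np

    def layer(x, weights, bias):
        # One layer: weighted connections (the "strengths" LeCun describes),
        # then a nonlinearity so stacked layers can model more than lines.
        return np.maximum(0, weights @ x + bias)  # ReLU activation

    rng = np.random.default_rng(0)
    x = rng.random(64)  # stand-in for the pixel features of one image

    # Three stacked layers -- the "depth" in Deep Learning.
    w1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
    w2, b2 = rng.standard_normal((16, 32)), np.zeros(16)
    w3, b3 = rng.standard_normal((2, 16)), np.zeros(2)

    h1 = layer(x, w1, b1)   # lowest layer: simple patterns (edges, textures)
    h2 = layer(h1, w2, b2)  # middle layer: parts (ears, whiskers)
    scores = w3 @ h2 + b3   # top layer: scores for "cat" vs. "helicopter"
    print(scores)

Training would nudge those weights, image after image, until the “cat” score reliably wins for cats.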

The technique itself has been around for 20 years or so, but only recently has it been revisited by the AI community, partly because the world lacked computers fast enough to process the incredible amount of data it takes to train a system with these algorithms. Now, new machines have been built out of the chips (GPUs) found in the graphics cards that kids buy to play games on, and they’re powerful enough for DL.

So don’t let anyone ever tell you video games are useless.

LeCun cites the results of a 2012 machine-vision competition, the ImageNet challenge, as a jumping-off point for DL’s popularity. Geoffrey Hinton from the University of Toronto, whom WIRED magazine refers to as the “godfather of neural networks,” and his team used DL to win by a landslide. They’re all now employed by Google.

This technique was further developed by a team at Stanford, who paired DL algorithms with a new depth-sensing camera called the Kinect to add the element of depth to object recognition. In other words, the computer was then able to recognize 3D objects, an essential advancement toward a robot being able to autonomously maneuver itself through complex spaces, like when it’s cleaning your house.

DL has also facilitated impressive advancements in natural language processing, another subcategory of ML. Richard Socher is a senior PhD student at Stanford who is working on teaching machines to recognize nuances of human language, like context and hyperbole.

It used to be that a machine determined whether a sentence was positive or negative based on a simple (and often inaccurate) formula. The computer would take a sentence, count the negative words and the positive words, and base its assessment on whichever tally was larger. If there were three negative words and one positive word, the machine judged the sentence to be negative.
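
A toy version of that counting approach in Python (the word lists here are invented for illustration) shows how blunt the method is:

    # Naive sentiment scoring: tally positive and negative words,
    # then report whichever count wins. No context, no sarcasm.
    POSITIVE = {"good", "great", "wonderful", "happy", "fun"}
    NEGATIVE = {"bad", "sad", "tragic", "awful", "poorly"}

    def naive_sentiment(sentence):
        words = sentence.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    # "sad" counts as negative, no matter what the reviewer meant.
    print(naive_sentiment("that movie was tragically sad"))  # -> negative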

The problem is the complexity of human speech. When someone says “that movie was tragically sad,” they could in fact mean that tragically sad was exactly what they wanted out of the film, making the sentence positive. Or they could be sarcastic, using “tragically sad” as slang for “very poorly made,” making the sentence a negative judgment of the film rather than an expression of negative emotion.

A machine scoring these sentences word by word would fail to recognize the difference in “tone,” so to speak. If you’ll recall, Lieutenant Commander Data of Star Trek always struggled to understand human humor. The complexities of it escaped him.

Enter DL. Socher and his team just released the latest version of their demo, in which a computer using DL algorithms more accurately determines whether a sentence is very negative, negative, positive, or very positive. The demo is available online.
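
Socher’s published approach builds a sentence’s meaning recursively, combining word vectors pairwise up a parse tree so that a phrase like “tragically sad” is scored as a unit rather than as two separate votes. Here is a deliberately toy sketch of that idea in Python; the dimensions, weights, and hand-built tree are invented, and the network is untrained, so the printed label is arbitrary until real training data adjusts the weights.

    import numpy as np

    DIM = 8  # toy word-vector size; real models use far more dimensions
    rng = np.random.default_rng(1)
    W = rng.standard_normal((DIM, 2 * DIM)) * 0.1  # composition weights
    W_cls = rng.standard_normal((4, DIM)) * 0.1    # classifier weights
    LABELS = ["very negative", "negative", "positive", "very positive"]

    def vec(word):
        # Deterministic toy word vector; a real model learns these.
        return np.random.default_rng(sum(map(ord, word))).standard_normal(DIM)

    def compose(left, right):
        # Merge two phrase vectors into one parent-phrase vector.
        return np.tanh(W @ np.concatenate([left, right]))

    # Hand-built parse tree: ((that movie) (was (tragically sad)))
    sentence = compose(compose(vec("that"), vec("movie")),
                       compose(vec("was"),
                               compose(vec("tragically"), vec("sad"))))
    print(LABELS[int(np.argmax(W_cls @ sentence))])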

“One of the biggest challenges facing the DL community is training complexity,” says Dileep George, a leading AI scientist who co-founded the AI research start-up Numenta along with Jeff Hawkins. “Usually multi-level networks require much more data to train compared to single-level networks. Most of the work in Deep Learning has been about figuring out how to train multi-level networks efficiently.”

Another challenge is the fact that consciousness and intelligence are hard to define and involve countless components. George points out that if an algorithm were to match human thought processes to the point where it could learn independently, that would be consciousness in a certain sense.

“We only know about what it feels like to be conscious as a human. But extrapolating from that feeling, I do think that dogs are conscious and monkeys are conscious. I also think that a child is conscious. A 3-year-old child is more conscious than a 6-month-old child, so there can be different levels of consciousness.”

This holds true for “intelligence” too. There are various types of intelligence, such as logical and emotional. To have true AI, a robot would need both, plus a whole mess of other key human features.

“If you build a robot now, it will look to you a lot like a fly bumping into a window over and over. How do you teach it advanced planning or decision making?” says LeCun. “DL can be used to solve only perception problems.”

Yet the popularity of DL seems promising. Andrew Ng, Director of the Stanford AI Lab, who was recently commissioned to work for Google, tells WIRED that “we clearly don’t have the right algorithms yet. It’s going to take decades… but I think there’s hope.” The sentiment is echoed by LeCun, George, and Socher.

Assuming Google lets us all live, BTR would like to take this opportunity to reserve the first release of the self-flying, jet-packed robot with built-in temperature control and cronut maker.
