Could Capsule Networks Bring Us Closer to the End of the World?
When people think of artificial intelligence, most imagine humanoid robots. Movies like A.I., I, Robot, and The Terminator depict a future in which robots are cognitive beings like ourselves.
Unfortunately, the artificially intelligent robots in these movies have a lust for world domination.
As of right now, the worst that could happen would be that computers take all of our jobs.
Needless to say, AI is becoming increasingly prevalent in our society. In fact, almost all of us possess a device that uses artificial intelligence in the form of neural nets or “deep learning.”
Neural nets have been around since as early as 1957. However, the theory behind artificial intelligence couldn’t become a reality until computers were powerful enough to handle it.
With increased computing power over the last decade, artificial intelligence has made its way into almost all of our devices – from voice and face recognition in our phones to parking assistance in our cars.
As impressive as this kind of technology may be, deep learning has some serious limitations. Just as fast as it became a part of our lives, it may shortly be replaced by an even more sophisticated and impressive form of artificial intelligence: capsule networks.
Artificial intelligence theory begins with the brain. The goal is to enable computers to problem-solve and think on their own, and the best way to do that is to create a network that simulates the way our brains function.
Just like our brains, neural nets are made up of tiny, densely connected neurons. These artificial neurons are fed information and, over time, begin to identify patterns in data. From this, neural nets are able to “think” and predict patterns without explicitly being told to do so.
For example, if you were to feed the network lots of pictures of dogs and cats, eventually it would be able to differentiate between the two animals. Even if you were to show it a picture of a dog or cat the network had never seen before, the neural net would still be able to determine what the picture contains on its own.
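The learning loop described above can be sketched with a single artificial neuron. This is a minimal toy, not a real image classifier: the two input features (say, “ear pointiness” and “snout length”) and the data points are invented purely for illustration.

```python
import random

# A toy "neural net": one artificial neuron (a perceptron) that learns
# to separate two classes from labeled examples by nudging its weights
# whenever it guesses wrong. Features and data are hypothetical.
random.seed(0)

# (features, label) pairs, where label 1 = "dog" and 0 = "cat"
data = [
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.85], 1),   # "dogs"
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.15, 0.25], 0),  # "cats"
]

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
lr = 0.1  # learning rate: how far each mistake moves the weights

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Training: repeatedly correct the neuron's mistakes
for _ in range(50):
    for x, label in data:
        error = label - predict(x)
        for i in range(2):
            weights[i] += lr * error * x[i]
        bias += lr * error

# A point the net was never trained on is still classified correctly
print(predict([0.85, 0.75]))  # prints 1 ("dog")
```

Real networks stack thousands of these neurons in layers and need far richer inputs, but the principle is the same: patterns emerge from examples, not from explicit rules.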
This has allowed cars to identify if you’re about to rear-end someone and Snapchat to put an animated dog filter on your face when you take a picture.
It is one thing to be able to identify objects based on differences and another to be able to think and reason like humans. In order for neural nets to be effective, they require a ridiculous number of data samples. But even if a neural net were given millions of samples, like product descriptions for various items, it would still not be able to read an unknown description and properly identify a product it does not already know.

Basically, deep learning falls short at anything that requires logical reasoning or long-term planning. Simply put, we are not able to go beyond anything more sophisticated than basic sorting right now. So, your irrational fear of robots taking over can be put to rest.
However, a major breakthrough has been made in AI and deep learning techniques. Geoff Hinton, often called the “godfather” of deep learning, has come up with a way to reduce the number of inputs needed for neural nets to accurately learn and predict information. He calls it “capsule networks.”
Instead of individual artificial neurons, collections of neurons are grouped into small functioning pods, or capsules. These capsules are trained to identify specific characteristics of the object being identified, using far fewer data inputs than traditional neural nets. Compared with traditional neural network techniques, these capsules halved both the error rate and the time it took to recognize handwritten digits.
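One concrete detail of the capsule idea is easy to show. Unlike a single neuron, which outputs one number, a capsule outputs a vector: its direction encodes properties of a feature (such as its pose), and its length encodes the probability that the feature is present. The “squash” function from Hinton and his collaborators’ 2017 capsules paper shrinks the vector’s length into the range [0, 1) without changing its direction. A minimal sketch, assuming a nonzero input vector:

```python
import math

def squash(v):
    """Scale vector v so its length lands in [0, 1) but its direction
    is unchanged: long vectors -> length near 1 (feature likely present),
    short vectors -> length near 0. Assumes v is not the zero vector."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq)
    return [scale * x / norm for x in v]

out = squash([3.0, 4.0])  # a capsule's raw output vector (made up)
length = math.sqrt(sum(x * x for x in out))
print(length)  # ≈ 0.96: the network is confident this feature is present
```

A full capsule network also routes these vectors between layers by agreement, which is where the data efficiency comes from, but the vector-in, vector-out design is the core departure from ordinary neurons.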
The medical, finance and auto industries could use this breakthrough to make it easier to sort through big data, predict customer behavior, improve self-driving cars and perform precise robotic surgeries.
AI still isn’t capable of reasoning and rationalizing the way a human brain can, but these strides show the true potential of deep learning. Who knows, maybe in the next decade computers will be able to think like you and me. Until then, you’ll just have to be content with park assist.