
AI Startup ‘Vicarious’ Excites Silicon Valley Elite – But Is It All Hype?


By Loren March

Artificial intelligence startup Vicarious has been getting a lot of attention lately, and it is not entirely clear why. Plenty of Silicon Valley bigwigs have been opening their personal pocketbooks and dishing out big bucks in support of the company’s research. Its website flaunts the recent influx of funding from such notables as Amazon CEO Jeff Bezos, Yahoo co-founder Jerry Yang, Skype co-founder Janus Friis, Facebook founder Mark Zuckerberg and… Ashton Kutcher.

It’s not really known where all this money is going. AI has lately become a highly secretive, fiercely guarded area of technological development, but the public debate about its much-anticipated arrival and use in the real world has been anything but hushed.

Vicarious has been a bit of a dark horse on the tech scene. While there’s been plenty of buzz about the company, especially since its computers cracked “CAPTCHA” last fall, it has managed to remain an elusive and mysterious player. The founders don’t give out their address for fear of corporate espionage, and even a visit to their website will leave you confused about what the company actually does. Yet all this playing hard to get still has investors lining up.

Vicarious’ main project has been the construction of a neural network capable of replicating the part of the human brain that controls vision, body movement and language. Co-founder Scott Phoenix has said the company is trying to “build a computer that thinks like a person, except it doesn’t have to eat or sleep.”

Vicarious’ focus so far has been on visual object recognition: first with photos, then with videos, before expanding to other aspects of human intelligence and learning. Co-founder Dileep George, previously the lead researcher at Numenta, has emphasized the analysis of perceptual data processing in the company’s work. The plan is to eventually create a machine that can learn to “think” through a series of efficient, unsupervised algorithms.

Naturally, this has people pretty freaked out. For years, the possibility of AI becoming a part of real life has drawn knee-jerk Hollywood references. On top of fears about human jobs being lost to robots, people are genuinely concerned that it won’t be long before we find ourselves in a situation not unlike the one presented in The Matrix.

Tesla Motors and PayPal co-founder Elon Musk, also an investor, expressed concerns about AI in a recent CNBC interview. “I like to just keep an eye on what’s going on with artificial intelligence,” Musk said. “I think there is potentially a dangerous outcome there. There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad.”

Stephen Hawking put in his two cents, essentially confirming our fear that we should be afraid. His recent comments in The Independent led to a media frenzy, sparking such headlines as Huffington Post’s “Stephen Hawking is Terrified of Artificial Intelligence” and MSNBC’s brilliant “Artificial Intelligence Could End Mankind!” Hawking’s comments were significantly less apocalyptic, amounting to a sensible warning: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. The long-term impact of AI depends on whether it can be controlled at all.”

This question of “control” has brought a lot of robot rights activists out of the woodwork. They argue that trying to “control” thinking beings would be cruel and amount to a form of slavery, and that we should instead let robots be free to live their lives to their fullest potential. (Yes, these activists exist.)
