Ben Horowitz worries about who will debug artificial intelligence
The venture capitalist sees no signs AI will enslave the human race
SHOTARO TANI, Nikkei staff writer
TOKYO -- The danger of artificial intelligence lurks not in the possibility of it becoming smarter than humans and controlling them, but rather in our inability to debug it, warns Ben Horowitz, co-founder of the prominent venture capital firm Andreessen Horowitz.
Speaking at the New Economy Summit in Tokyo on Thursday, Horowitz dismissed fears of AI taking over from humans as "speculative," saying there has yet to be any AI technology that shows consciousness.
The idea of AI potentially becoming master of the human race has been propagated by some prominent technologists, notably the charismatic entrepreneur Elon Musk. According to the Wall Street Journal, Musk has even created a company called Neuralink, which aims to connect human brains to computers as a way to enhance the brain and allow us to compete with hyperintelligent AI.
"I think when you look at the technology today, nobody has shown AI software that has any intent or will," Horowitz said, citing the internet as an example of a new technology that at first stoked widespread fear but later became essential to everyday life.
"One of the things very early on that we all thought might happen [as the internet spread] was there will be another level of security risks and cyberterrorism and cyberwarfare. And guess what? It is happening. But we still take the internet."
Horowitz, however, warned of what he sees as a greater danger: If something goes wrong with AI -- software that, in effect, programs itself -- humans will not be able to understand what went wrong.
"To me," he said, "the bigger risk than AI becoming self-aware and self-conscious and taking on the humans is that ... if something goes wrong with AI, we [won't] know why." AI, he explained, develops through a process called deep learning, which requires computers to process vast amounts of data.
Horowitz was ready with a parable of sorts.
"In the very early days of AI," he said, "there was a checkers program and the guy who was programming it made a mistake. [As a result] it was optimized to lose, not to win. It was really hard to go back to figure out that the reason why it kept losing was because that was its goal.
"That is a real danger."
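The bug Horowitz describes can be made concrete with a toy sketch. The names and values below are hypothetical, not from the article: a greedy game-playing agent picks whichever move scores highest, and a single stray minus sign in its evaluation function silently inverts the goal, so the program "works" perfectly while playing to lose.

```python
def pick_move(moves, evaluate):
    """Greedy agent: choose the move with the highest evaluation."""
    return max(moves, key=evaluate)

# Each candidate move's true value to the player (higher = better).
moves = {"capture": 5, "advance": 2, "retreat": -3}

# Intended objective: prefer high-value moves.
correct = pick_move(moves, lambda m: moves[m])   # -> "capture"

# Buggy objective: a stray minus sign inverts the goal, so the agent
# systematically picks the worst move -- and nothing crashes or errors.
buggy = pick_move(moves, lambda m: -moves[m])    # -> "retreat"

print(correct, buggy)
```

Because the buggy version raises no error and still returns a legal move every turn, the only symptom is persistent losing, which is why, as in the checkers story, the cause is hard to trace back to the objective itself.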