The Companies Pioneering Artificial Intelligence and Whether or Not We Should Be, Well, Terrified
Efforts to regulate A.I. might be leading us towards a ‘Sci-Fi movie ending’ we’re all too familiar with.
Elon Musk recently issued an ominous warning about Artificial Intelligence (A.I.), made all the more ominous by the fact that it comes from a person who, we'd assume, has his finger on the pulse of the industry.
“I tried to convince people to slow down, slow down A.I., to regulate A.I. This was futile. I tried for years. Nobody listened.” — Elon Musk
If one of the smartest and most traditionally successful people on this planet is scared of something, should the rest of us be concerned?
Mr. Musk has been warning us about the potential dangers of an unbridled pursuit of artificial intelligence for some time now. Earlier this year, he told an interviewer at South by Southwest that he saw A.I. as more dangerous than nuclear weapons.
In this article, I capitalize 'Artificial Intelligence' when discussing the practice or industry, and use lower-case 'artificial intelligence' to describe the products of the practice, the A.I. itself.
OpenAI: the company that wants to save us from ourselves
Since that interview aired, there have been several steps toward improving our general understanding of A.I., the majority of them made by a company called OpenAI.
OpenAI LP is a for-profit corporation founded by Musk and Sam Altman (former Y Combinator president). Their mission is to "ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity." They plan on doing so by attempting to "directly build safe and beneficial AGI", but have stated they would also consider their mission successful if their work helped other companies or individuals achieve the same outcome.
While the mission statement focuses on "ensuring A.I. benefits all of humanity", the ominous undertones of Elon Musk's warnings about an eventual artificial intelligence rebellion make it difficult not to read between the lines.
OpenAI was launched in 2015 with a $1 billion endowment pledged to its non-profit parent company, OpenAI Inc. The endowment was funded by Musk himself, as well as several other high-profile investors including Peter Thiel (PayPal co-founder) and Reid Hoffman (LinkedIn co-founder). Since then, they've sought out the best researchers in the field of Artificial Intelligence, and grown from a team of nine researchers to a team of over a hundred.
Musk quietly left the OpenAI team in 2018, citing his desire to focus on Tesla as the reason. This did little to slow down the remaining co-founder, Sam Altman. Earlier this year, he announced that Microsoft would be investing $1 billion into OpenAI.
In Artificial Intelligence, who’s watching the Watchmen?
While it’s comforting to know there is an organization working on creating a regulated environment for Artificial Intelligence to operate within, the fund allocation for their recent investment suggests that OpenAI may have other priorities in mind. The New York Times described this “outrageously lofty goal” in a recent article: “He and his team of researchers hope to build artificial general intelligence, or A.G.I., a machine that can do anything the human brain can do.”
It’s understandable that creating ethical standards for artificial intelligence might first require creating a genuine artificial intelligence. Yet one can’t help but wonder if the drive to create ethical standards for A.I. will end up funding the same projects we should be regulating. Is OpenAI the hero we need to usher in the next wave of A.I. advancement, or the vehicle that creates the science-fiction-esque A.I. villain we’ve all learned to fear?
The state of Artificial Intelligence technology today
While it’s no Bond villain, OpenAI has already created a system that’s giving some humans (and human-controlled video-game characters) grief. A recently developed OpenAI system is capable of beating some of the world’s best video-game players at a game called DOTA 2.
This is of note not only because DOTA 2 is an extremely intricate game involving complex decisions and collaboration across a team, where everything from the character you choose to the direction you turn when the game starts affects the outcome, but also because it’s the largest eSport by prize pool, with the most recent DOTA 2 tournament giving out over $30 million in prizes.
Not only is this system able to perform a specific skill better than any human, but this specific skill can also be financially quantified.
I’m not usually one to get on my soapbox and start preaching about the end of the world. But reading the charter from OpenAI almost feels like watching the opening credits of a dystopian movie. With over $1 billion in funding and some of the best A.I. researchers in the world on their team, it’s hard to imagine anyone but OpenAI making the first leap towards creating a form of consciousness — but if that’s the case, who’s going to regulate the regulators?