Yes, We Are Worried About the Existential Risk of Artificial Intelligence

One prominent AI researcher, Oren Etzioni, has taken issue with the media’s coverage of the potential dangers that could follow from the field’s eventual success (for more, see “No, Experts Don’t Think Superintelligent AI is a Threat to Humanity”). Etzioni argues that Oxford philosopher Nick Bostrom’s book Superintelligence is flawed because its “primary source of data on the emergence of human-level intelligence” consists of surveys of the opinions of AI experts. He then conducts his own survey of AI researchers and claims that its results refute Bostrom’s.

What Etzioni does not address is why Superintelligence has had the effect he criticises: the book clearly explains why superintelligent AI could have arbitrarily negative consequences and why it is vital to begin addressing the issue far in advance. Bostrom does not argue that AIs capable of doing human tasks will soon be developed. That we are on the verge of a major breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur, he writes, “is not part of the argument in this book.”

We believe, therefore, that Etzioni’s article sidesteps the book’s main thesis, mounting what amounts to an ad hominem attack on Bostrom under the guise of disputing his survey results, and we feel compelled to set the record straight. One of us (Russell) even participated in Etzioni’s survey, only to have his answer misrepresented. As a careful reading shows, Etzioni’s survey results are entirely consistent with those Bostrom cites.

So how did Etzioni arrive at this surprising conclusion? By designing a survey instrument weaker than Bostrom’s and then misinterpreting its results.

The article’s subtitle, “few feel AI is a threat to humanity,” asserts that not many experts are worried about artificial intelligence. The reader is thus led to assume that Etzioni put this question to those best qualified to answer it and that Bostrom did not. In fact, the opposite is true: Bostrom asked those best placed to judge, while Etzioni asked no one at all about threats to humanity. Bostrom polled the 100 most-cited researchers in artificial intelligence. More than half of those surveyed assigned a probability of at least 15% to the impact of human-level artificial intelligence on humanity being “on balance terrible” or “very bad (existential disaster).” Etzioni’s survey, unlike Bostrom’s, included no question about potential dangers to humanity.

Instead, he asks a single question about when superintelligence will be achieved. Given that more than half of Bostrom’s respondents gave dates beyond 25 years for a mere 50% probability of attaining human-level intelligence, it is hardly surprising that roughly two-thirds of Etzioni’s respondents chose “more than 25 years” for achieving superintelligence. One of us (Russell) answered “more than 25 years” to Etzioni’s question, and Bostrom himself writes of his own surveys, “My own assessment is that the median figures presented in the expert survey do not have adequate probability mass on later arrival dates.”

Having set up a survey in which most participants could be expected to select “more than 25 years,” Etzioni springs his trap: he declares that this timeframe is “beyond the foreseeable horizon” and concludes that neither Russell nor Bostrom is worried about the dangers posed by superintelligent AI. This will come as a surprise to Russell and Bostrom, and probably to many of the other respondents as well. (Etzioni’s headline could just as easily have read, “75% of experts think superintelligent AI is inevitable.”) Should we disregard catastrophic risks simply because most experts place their arrival more than 25 years in the future? By Etzioni’s logic, we should also dismiss the catastrophic consequences of global warming and attack those who raise the issue.

Contrary to what Etzioni and some in the AI community appear to believe, pointing to long-term risks from AI is not the same as claiming that superintelligent AI and its attendant problems are “imminent.” Several notable figures, including Alan Turing, Norbert Wiener, I.J. Good, and Marvin Minsky, have warned of the potential dangers; even Oren Etzioni has recognised these difficulties. To the best of our knowledge, none of them claimed that superintelligent AI was imminent. Nor, as already noted, did Bostrom in Superintelligence.

Pointing to AI’s potential to minimise medical errors, reduce vehicle accidents, and more, Etzioni reiterates the questionable argument that “doom-and-gloom projections generally neglect to include the potential benefits of AI in these areas and more.” The argument cannot apply to Bostrom, who anticipates that if humankind succeeds in controlling artificial intelligence, it will make “a caring and joyous use of its cosmic endowment.” The argument is also simply absurd: it is like saying that we should not discuss, or try to reduce, the risk of a meltdown in a nuclear power plant because the nuclear engineers who do so are “failing to acknowledge the possible benefits” of cheap electricity.

In light of the events at Chernobyl, it would be irresponsible to assert that a potentially dangerous technology poses no threat at all. It is equally irresponsible to assert that a potentially game-changing technology will never materialise. In a speech given on September 11, 1933, Lord Rutherford, then widely regarded as the world’s preeminent nuclear physicist, dismissed the idea of harnessing energy from atoms as “moonshine.” Less than 24 hours later, Leo Szilard conceived of the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later. Surely it is better to anticipate human ingenuity than to underestimate it, better to acknowledge the risks than to deny them.

Prominent AI researchers and developers have acknowledged the existential risk that AI could ultimately pose. That risk does not require malicious intent, although malice cannot be ruled out; the danger lies in deploying an optimisation process that is smarter than the humans who specify its objectives and whose effects may be impossible to undo. Norbert Wiener stated this problem clearly in 1960, and we have yet to solve it. We invite the reader to join the ongoing effort to do so.