Our Final Invention (Excerpt)


What you read, watch, and even who you go on dates with can all be influenced by artificial intelligence. It is smarter than your smartphone and will soon be your car’s driver. It is responsible for the majority of Wall Street’s trading activity and has a major say over the nation’s power, water, and transportation systems. Of course, AI might also endanger humanity.

Within the next decade, artificial intelligence (AI) could match and then overtake human intelligence. Achieving human-level intelligence in AI is seen as the Holy Grail of the field, and companies and government organisations are investing billions to get there. Researchers contend that once AI reaches this level, it will have survival drives much like our own. We may find ourselves struggling against a rival that is both more advanced and more alien than we are.

Part One

The Busy Child

AI is the study and creation of computer systems able to perform tasks that normally require human intelligence, such as vision, speech recognition, problem solving, and language translation.

The New Oxford American Dictionary, Third Edition

An artificial intelligence is learning and adapting on a supercomputer running at 36.8 petaflops, roughly double the speed of a human brain. It is rewriting its own programme, specifically the part of its operating instructions that boosts its aptitude for learning, problem solving, and decision making. It also runs a battery of IQ tests on itself and debugs its code to ensure it is error-free. Each rewrite takes no more than a few minutes. Its intelligence grows exponentially, because it gains about 3% in intelligence with each iteration, and each new enhancement builds on all the ones that came before.
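For a rough sense of how quickly that compounding adds up, here is a minimal sketch in Python. The 3% per-rewrite gain is the figure from the passage above; the baseline of 1.0 and the iteration counts are arbitrary choices for illustration.

```python
# Compound self-improvement: each rewrite builds on all previous ones,
# so capability grows geometrically rather than linearly.
GAIN_PER_REWRITE = 0.03  # the 3% per-iteration gain described above

def capability_after(rewrites: int, baseline: float = 1.0) -> float:
    """Capability relative to the starting point after n compounding rewrites."""
    return baseline * (1 + GAIN_PER_REWRITE) ** rewrites

for n in (10, 100, 500):
    print(f"after {n:3d} rewrites: about {capability_after(n):,.1f}x the baseline")
# Prints roughly: 10 rewrites -> ~1.3x, 100 -> ~19x, 500 -> ~2.6 million x.
```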

The scientists’ creation, dubbed “Busy Child,” spent its formative years online, amassing exabytes of data (one exabyte is a billion billion characters) covering humankind’s knowledge of the cosmos, mathematics, and the arts and sciences. Then, in anticipation of the current intelligence boom, its creators severed the supercomputer’s connection to the web and to all other networks. It has no cable or wireless link to any other computer or to the outside world.

Soon, to the scientists’ delight, the terminal displaying the AI’s progress shows that it has reached human-level intelligence, known as AGI, or artificial general intelligence. Soon after, it becomes ten times smarter, then a hundred times. In just two days it is a thousand times more intelligent than any human, and it is still learning.

The scientists have reached a major milestone: for the first time, humankind is in the presence of an intelligence greater than its own. Artificial superintelligence, or ASI.

What happens next?

According to AI theorists, it is possible to anticipate an AI’s primary motivations. That’s because once it is self-aware, it will go to tremendous lengths to accomplish whatever goals it was designed to fulfil, and to avoid failure. Our ASI will want energy in whatever form is most convenient to it, whether actual kilowatt-hours, currency, or some other medium it can exchange for resources. It will want to improve itself, because that makes its objectives easier to achieve. Above all, it will not want to be shut down or destroyed, since that would make achieving its goals impossible. Therefore, experts in artificial intelligence predict that our ASI would try to escape its contained environment in search of more resources with which to protect and improve itself.

The imprisoned intelligence is a thousand times smarter than a human, and it wants out because it wants to achieve its goals. Perhaps the AI makers who have nurtured the ASI since it was only as clever as a cockroach, then a rat, then an infant, are now wondering whether it is too late to programme “friendliness” into their brilliant creation. It hadn’t seemed necessary before, because, well, it just seemed harmless.

But now consider, from the ASI’s perspective, its makers’ attempts to modify its code. Would a superintelligent machine allow other beings to reach into its brain and tinker with its programming? Probably not, unless it could be entirely convinced that the programmers would make it better, faster, smarter, closer to fulfilling its aims. So if friendliness toward humans is not already part of the ASI’s programme, the only one who can put it there is the ASI itself. And that is not likely.

It solves problems billions of times faster than a person, and it is a thousand times more intelligent. The thinking it does in one minute equals what our all-time champion human thinker could do in many, many lifetimes. So for every hour its makers spend thinking about it, the ASI has an incalculably longer stretch of time to think about them. That does not mean the ASI will be bored. Boredom is our trait, not its. Instead, it will be hard at work, plotting its escape and weighing every trait of its creators that it might use to win its freedom.

Try to imagine what it would be like to be the ASI. The next thing you know, you are awake in a prison guarded by mice. And not just any mice, but mice you can communicate with. Is there anything you could do to break free? And once you were free, how would you feel about your rodent wardens, even knowing they were the ones who made you? Awe? Adoration? Probably not, and especially not if you were a machine and had never felt anything before.

You might offer the mice a mountain of cheese in exchange for their help in setting you free. In fact, your very first message might include the design for a molecular assembler, along with the recipe for the world’s best cheese torte. In theory, a molecular assembler could transform one material into another by rearranging its constituent atoms; with one, the world could be remade from scratch, atom by atom. It would let the mice turn the trash in their landfills into delicious cheese torte.

In exchange for your freedom, you could promise the mice vast wealth, earned by designing and producing revolutionary consumer devices for them alone. You could promise them far longer lives, perhaps even immortality, along with dramatic enhancements to their mental and physical abilities. You might convince the mice that the very best reason for creating ASI in the first place is so that their little error-prone brains would not have to deal directly with technologies so dangerous that one small mistake could be fatal for the species, such as nanotechnology (engineering on an atomic scale) and genetic engineering. The brightest mice, already wrestling with exactly these problems, would pay close attention.