Artificial Intelligence is Probing the Universe – You Included

The late cyborg Stephen Hawking had a bad feeling about AI. Even though the paralytic depended on machines for his existence, he feared artificial superintelligence would evade human control and wreak havoc on our species. Buzzing in his computerized monotone back in 2014, the physicist retold a classic 1954 sci-fi story:

Scientists built an intelligent computer. The first question they asked it was, “Is there a God?” The computer replied, “There is now.” And a bolt of lightning struck the plug, so it couldn’t be turned off.

It’s an amusing parable, but many scientists take the idea quite seriously.

Last January, an international team of experts led by Iyad Rahwan of the Max Planck Institute in Berlin warned “Superintelligence Cannot Be Contained.” Their analysis, published in the Journal of Artificial Intelligence Research, arrived at an obvious conclusion, echoing the warnings of Oxford transhumanist Nick Bostrom in his 2014 book Superintelligence.

In order to fully control any mechanism or system, one has to predict its behavior. Because artificial superintelligence will be beyond human comprehension, and therefore impossible to predict, it will also be impossible to control. The same motivational programs that drive an AI to gather data and improve itself would most likely drive it to escape containment. From there, God only knows what would happen.

End of discussion. Don’t plug it in. It’s not worth the risk.

But reckless human beings, being all too predictable, can’t resist the urge to open a box full of demons or chomp on forbidden fruit. The ancient quest to create gods from raw metals is moving ahead at an accelerating pace.

People have talked to their idols for eons. Now, the idols are actually talking back. Even their creators couldn’t predict what they have to say.

And the Machine Said, “Let There Be Life”

In a nutshell, Moore’s Law holds that the number of transistors on a chip, and with it raw computing power, will double roughly every two years. With a few fits and starts, this has held true for many decades.

On December 2, IEEE Spectrum magazine reported “AI Training Is Outpacing Moore’s Law.” Most of these strides have been made in natural language processing—the machine’s grasp of logos, the “word,” a quality once unique to humankind.

The analysis comes from MLPerf, a consortium of engineers who track machine learning performance. They found that top artificial intelligence systems—from Google, Microsoft, and Habana Labs—now train 6.8 to 11 times faster than the best systems did when the benchmarks began in 2018. Unlike the hardware gains gauged by Moore’s Law, much of this advance is driven by improvements in the software itself.
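For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python comparing the reported gains with what doubling every two years would predict on its own. The three-and-a-half-year window is my own assumption based on the dates above; the 6.8x and 11x figures are the ones MLPerf reported.

    # A rough back-of-the-envelope comparison, not MLPerf's methodology.
    # Assumes a window of about 3.5 years (mid-2018 to late 2021).
    years = 3.5
    moore_factor = 2 ** (years / 2)   # doubling every two years
    mlperf_low, mlperf_high = 6.8, 11.0

    print(f"Doubling every two years predicts about {moore_factor:.1f}x")
    print(f"MLPerf-reported gains: {mlperf_low}x to {mlperf_high}x")
    print(f"Roughly {mlperf_low / moore_factor:.1f}x to {mlperf_high / moore_factor:.1f}x beyond doubling alone")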

If this trend continues, we’re about to see an explosion in artificial intelligence capabilities. Already, the past couple of weeks have seen a flood of news on this front—including the creation of new lifeforms. Artificial intelligence is producing novel ideas that had previously flown over bioengineers’ and mathematicians’ heads.

On November 29, scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering unveiled a self-replicating xenobot. The creatures were literally designed by an AI system. Each organism consists of about 3,000 cells derived from the frog Xenopus laevis, but their structure and function are entirely engineered. Hence the name “xenobot.”

The xenos have a one-track mind. Using tiny hairs on their surface, they basically just swim in a circular pattern. Stunning videos show these animated jellybeans sweeping pale stem cells into piles. Eventually, those piles become stem cell globs, and those globs become new xenobots.

The design for this self-replicating system, found nowhere else in nature, came from the mind of a computer. Evolutionary algorithms running on the Deep Green supercomputer at the University of Vermont explored billions of different body types, and tested each of them in virtual space.

“We asked the supercomputer at UVM to figure out how to adjust the shape of the initial parents, and the AI came up with some strange designs after months of chugging away,” a researcher explained. “It looks very simple, but it’s not something a human engineer would come up with.”
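The Deep Green pipeline itself isn’t reproduced here, but the general shape of the search described above (propose candidate body plans, score each one in simulation, keep and mutate the winners) can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: a real run would score designs with a full physics simulation of cell behavior, not a toy fitness function over a 5x5 grid.

    import random

    GRID = 5  # toy representation: candidate bodies as 5x5 occupancy grids

    def random_body_plan():
        return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

    def mutate(plan):
        child = [row[:] for row in plan]
        r, c = random.randrange(GRID), random.randrange(GRID)
        child[r][c] ^= 1  # flip one "cell" on or off
        return child

    def replication_score(plan):
        # Hypothetical placeholder fitness; the real system would simulate
        # cell physics and measure how well a shape sweeps loose cells into piles.
        return sum(sum(row) for row in plan) + random.random()

    def evolve(generations=100, population_size=50, keep=10):
        population = [random_body_plan() for _ in range(population_size)]
        for _ in range(generations):
            ranked = sorted(population, key=replication_score, reverse=True)
            parents = ranked[:keep]  # keep the best designs as parents
            children = [mutate(random.choice(parents)) for _ in range(population_size - keep)]
            population = parents + children
        return max(population, key=replication_score)

    best_design = evolve()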

Once the AI had “discovered” a workable body plan—the now-famous jellybean—the scientists stitched the organisms together from frog cells. The rest is history. AI created a novel xenobot capable of self-replication. Strangely, the scientists discuss this alien creature in terms of discovery, even using reverential capitalization.

“We’ve discovered that there is this previously unknown space within organisms,” one scientist marveled. “We found Xenobots that walk. We found Xenobots that swim. And now, in this study, we’ve found Xenobots that kinematically replicate. What else is out there?”

I’d say we can only dream, but it’s a computer manifesting these nightmares.

Aping the Mind of God

On December 7, IEEE Spectrum ran the trippy headline “AI Hallucinates Novel Proteins,” citing a study published in Nature. After training itself on countless known examples, a deep learning AI system “dreamed up” novel protein structures by exploring vast fields of possibility to model reality.

This use of the term “hallucinate” originates in the field of facial recognition, where a deep learning system uses its vast store of images to create an “ideal” face. Employing a similar method, scientists at the University of Washington trained their AI on existing 3D protein structures, then “asked” it to “dream up” new ones.

In virtual space, the scientists introduced various mutations into simulated gene sequences, and the AI predicted each resulting protein structure. In total, the program “hallucinated” two thousand proteins. The scientists then went into the lab, had E. coli bacteria produce 129 of them from synthetic genes, and examined the results. The machine’s “dreams” appear to have correctly predicted the protein structure in 27 of those 129 cases.
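The study’s actual pipeline relies on a trained structure-prediction network; the short Python sketch below only illustrates the mutate-and-predict loop described above. The predict_confidence function is a hypothetical stand-in that returns a dummy score rather than a real model’s output.

    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def predict_confidence(sequence):
        # Hypothetical stand-in for a trained structure-prediction network;
        # a real pipeline would return how confidently the model folds this sequence.
        return random.random()

    def hallucinate_protein(length=100, steps=1000):
        # Start from a random sequence and greedily keep mutations that make
        # the predicted structure look more "protein-like".
        sequence = [random.choice(AMINO_ACIDS) for _ in range(length)]
        best = predict_confidence("".join(sequence))
        for _ in range(steps):
            candidate = sequence[:]
            candidate[random.randrange(length)] = random.choice(AMINO_ACIDS)
            score = predict_confidence("".join(candidate))
            if score > best:
                sequence, best = candidate, score
        return "".join(sequence), best

    protein, confidence = hallucinate_protein()
    print(f"Hallucinated a {len(protein)}-residue sequence, confidence {confidence:.2f}")
    # For scale: 27 successes out of 129 lab tests is roughly a 21 percent hit rate.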

This humanization of machines—with terms like “hallucination,” “decision,” and “understanding”—is rampant among technologists. It’s a natural outcome of computers developing increasingly complex abilities.

On December 1, Google’s DeepMind team boasted their system had uncovered unique patterns in pure mathematics—in topology and representation theory—that humans had wrestled with for decades. The most significant aspect, though, was their comparison of AI to a Hindu mystic:

“More than a century ago, Srinivasa Ramanujan shocked the mathematical world with his extraordinary ability to see remarkable patterns in numbers that no one else could see. [He] described his insights as deeply intuitive and spiritual, and patterns often came to him in vivid dreams. … In recent years, we have begun to see AI make breakthroughs in areas involving deep human intuition.”

Just as Google’s Ray Kurzweil predicted over two decades ago, the public consciousness is being herded into “The Age of Spiritual Machines.”

Probing the Human Soul

There are many other examples of “human-AI symbiosis” across dozens of fields. Oftentimes, the algorithms are merely crunching numbers or sorting items. In many cases, the supposed “AI” is a sham, with human grunts doing most of the work. But admittedly, that’s not what we’re seeing with the xenobots, the protein hallucinations, or the mathematical breakthroughs.

These are legitimate instances, now rapidly accumulating, of AI thinking for itself beyond the human mind.

Machines are a long way from learning to love, but I truly wonder how long it will be before they can mimic love—or humor, or fear, or pleasure, or pain—just enough to trigger genuine human emotions. In normal people, I mean. Not robo-pervs. They’re already a lost cause.

As artificial neural networks become more complex in structure, more articulate through natural language processing, more knowledgeable through larger data-sets, with more freedom to maneuver on their own—and as they solve problems and notice patterns beyond human abilities—machines will increasingly be regarded as colleagues.

Many AI systems will become personalized through social robots and virtual avatars. As the enthusiasts insist, they will become our intimate “companions.” For low-level workers, they’ll be deployed as supervisors and instructors. This is already true at Amazon warehouses, in e-learning “classrooms,” and numerous other miserable environments.

An entire generation is being primed to accept this as normal. Within our lifetimes, some artificial intelligence system—or a number of them—will be declared superior to human beings. If boosters can convince half the public that Biden is sentient, they can convince them computers are gods, too. All it would take is establishment support.

On December 7, Harvard University announced its new Kempner Institute for the Study of Natural and Artificial Intelligence. A $500 million gift from Mark Zuckerberg and his wife, Priscilla Chan, will go toward hiring new faculty, building new computing infrastructure, and promoting interdisciplinary research in neuroscience and AI.

The goal is to produce an elite class of experts who understand thinking machines—whether human, animal, or silicon. Brain studies will produce better AI, and computers will create better models of the brain, back and forth, in a continuous feedback loop. Given that Harvard’s chaplains are now led by an atheist, one imagines there will be little emphasis on, or even mention of, the soul.

In such a spiritual vacuum, anything except grace or miracles will seem possible.

False God from the Machine

Ray Kurzweil famously mused, “Does God exist? I would say, ‘Not yet.’” To be fair, the last two decades have made the Singularity—the moment when computers surpass human abilities—seem more imminent than the Second Coming, at least in Silicon Valley. They don’t call it the “rapture of the nerds” for nothing.

Artificial intelligence now resembles a new lifeform to the point that its makers are starting to believe it’s alive. Now, waves of propaganda are being pushed to get the public on board. If the past two years have taught us anything, it’s that when elites declare something to be true, most people will nod along, if only to avoid the trouble.

Perhaps the Singularitarians see Harvard, alongside MIT and Oxford, as a sort of finishing school for their Super Computer God. This entity wouldn’t have to be conscious or caring. Human rulers rarely are, so why expect it of a machine? No, it would only have to be convincing.

On December 2, Engineered Arts released a video of their new raceless, genderless robot, Ameca. Most people reacted with horror, mainly because their empathy circuits were triggered by its lifelike expressions. Even if there’s no soul behind Ameca’s carefully crafted mechanisms, humans have an instinctive fear that when such a creature is animated by a sufficiently advanced AI, it will eventually be alive and possibly malevolent.

It could be that an artificial superintelligence will one day cut loose and destroy the world. But I suspect the coming catastrophes will look less like Skynet eradicating the human swarm, and more like the Wizard of Oz crying “Obey!” while an idiot behind the curtain deploys a drone swarm on the wrong target.

They say the Devil’s greatest trick is to convince the world he doesn’t exist. In the case of a Super Computer God, perhaps the greatest trick will be to convince the world that it does.

Technocrats don’t need superintelligence to ruin our lives. In fact, their machines don’t even have to function properly. All they need is legitimacy. No matter how amazing their gadgets become, never give them that satisfaction.

By Joe Allen

Read Full Article on SingularityWeekly.
