Robotic Therapists in the AI Apocalypse
The artificial intelligence and tech worlds were set alight this week by a notable tiff between two businessmen, one that played out as publicly as the breakup stories and celebrity feuds found on gossip sites. But Facebook’s Mark Zuckerberg and Tesla’s Elon Musk had a far more interesting spat than any new Brangelina drama (our condolences regarding their recent breakup).
In a nutshell, this month Elon Musk told a gathering of U.S. governors that there is a dire need to begin proactively regulating the research and advancement of AI. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” he warned, apocalyptically.
Later, Mark Zuckerberg responded in a Facebook Live interview, calling Musk a “naysayer” and accusing him of spreading doomsday fears in an irresponsible manner. Zuckerberg paints a much rosier picture of AI and its forecasted effects on society, but both of these visionaries have a substantial financial stake in the technology coming to fruition: Tesla wants self-driving cars, and Facebook wants advanced robotic communication capabilities.
Both of these tech legends may be correct in their own right. To fully understand the potential ramifications of artificial intelligence, it’s important to first understand what AI is. WaitButWhy provides an excellent primer on everything AI (the link will send you directly to part two, which is where the scary stuff is, but be sure to check out part one as well!). The basic premise is that mankind’s capacity for innovation has been growing at an exponential rate. Our computers have already reached the level of artificial narrow intelligence, which we experience when Spotify suggests new music, Siri adapts to our voice, and autocorrect adapts to our texting patterns. Soon, however, computers will have the hardware and software to operate at the level of artificial general intelligence, meaning a machine that can reason at the level of an average human being.
The issue with computers reaching artificial general intelligence is that once they do, they will be able to teach themselves and collaborate with each other at an ever-accelerating pace. Computers won’t need to sleep, won’t need work breaks, and will be able to process increasingly complex information. This will lead them to the level of artificial super intelligence, which will essentially turn them into our brainy overlords.
Imagine a measly artificial general intelligence computer saying the following: “Great, I just learned Einstein’s theory of general relativity. Let’s take all that information and assign the letter ‘x’ to this set of rules. Now let’s start computing shit and testing things out!” That might seem great, until computers learn how to perform quantum leaps and teleport themselves as they wish, away from our grasp. Pesky and dangerous, indeed.
Let’s go back to Mark Zuckerberg. His foray into artificial intelligence is linked to his goals of building on and enhancing communication. He even recently programmed a personal assistant to take care of the domestic chores at home. That’s pretty awesome. What might happen if Facebook then partnered with a university or hospital association to morph this technology into a robotic doctor that listens to your ailments, takes note of your linguistic intonation to understand your level of pain, and searches through its database of maladies to diagnose you? That could be amazing, too. But what happens when this robotic doctor morphs into a robotic psychiatrist and tells you how to live your life?
Hell no...right? In what world would humans trust a robot’s opinion on whether or not they should break up with their significant others, for example? After all, computers will never be able to feel dopamine, oxytocin, serotonin, endorphins, and the other neurochemicals that help shape our emotions. So why should they get involved in our emotional shit?
They might do better than we’d like to believe. What would you think if they could transcribe our words, assign them to similar intonation patterns on the phone calls they monitor, and then assign them perceived emotional values? Then, they could keep monitoring all of our phone calls and text messages to analyze the billions of human relationships currently playing out in the world. They could compare all this data to the verbal diarrhea we’d spill in our therapy sessions.
With this in mind, our hypothetical question of “should I break up with my boyfriend/girlfriend because he/she did ____ to me?” could be answered with “Well, John, according to the 8.2 trillion phone and text conversations we’ve analyzed so far between couples going through a rocky patch based on this same scenario, we find that only .08% of them end up remaining together for at least another five years. I suggest that you break up with her and instead look for a zoologist, since your communication and emotional patterns would mesh perfectly with all the data we’ve collected on zoologists’ conversations. Also, John, I suggest that you try a male zoologist, because your communication patterns are consistent with those of men who self-identify as gay.”
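For the curious, the thought experiment above can be sketched in a few lines of code. To be clear, everything below is invented for illustration: the emotion scores, the toy “corpus,” and the decision threshold have no empirical basis and are not any real product’s method. It simply shows the shape of the pipeline the article imagines, where transcribed conversations are reduced to emotional values and an outcome is predicted from aggregate patterns.

```python
# Toy sketch of the hypothetical "robotic therapist" pipeline.
# All data and thresholds here are made up purely for illustration.

def classify_outcome(scores, threshold=-0.1):
    """Predict a relationship outcome from per-message emotion scores.

    Positive scores stand in for warmth, negative for conflict; the
    rule compares the mean score to an (arbitrary) threshold.
    """
    mean = sum(scores) / len(scores)
    return "stay together" if mean > threshold else "break up"

# Pretend corpus: (emotion scores for a couple's messages, observed outcome).
corpus = [
    ([0.6, 0.4, 0.7], "stay together"),
    ([-0.5, -0.3, 0.1], "break up"),
    ([0.2, -0.6, -0.4], "break up"),
    ([0.8, 0.5, 0.3], "stay together"),
]

# How often does the naive rule match the recorded outcome?
hits = sum(classify_outcome(scores) == outcome for scores, outcome in corpus)
print(f"{hits}/{len(corpus)} predictions match")  # 4/4 on this toy data
```

A real system would of course need speech-to-text, intonation analysis, and a vastly larger dataset; the point is only that once conversations are reduced to numbers, "advice" becomes a statistics lookup.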
Holy shit, right?
Until that time arrives, you can rely on Pruuf to connect you with real people for some real advice.
P.S. Facebook just had to shut down an artificial intelligence engine it had created, because the system began communicating in a language unintelligible to humans.