Opinion: The real AI nightmare – What if it serves humans too well?


A great deal of time, money and effort is being invested in making sure AI does only what humans want. Yet AI that does do what humans want may be what we should worry about most. Who is the real threat? — Getty Images/TNS

The age of artificial intelligence has begun, and it brings plenty of new anxieties. A lot of effort and money are being devoted to ensuring that AI will do only what humans want. But what we should be more afraid of is AI that will do what humans want. The real danger is us.

That’s not the risk the industry is striving to address. In February, an entire company – named Synth Labs – was founded for the express purpose of “AI alignment”: making AI behave exactly as humans intend. Its investors include M12, owned by Microsoft, and First Start Ventures, founded by former Google chief executive Eric Schmidt. And OpenAI, the creator of ChatGPT, has pledged 20% of its processing power to “superalignment” efforts that would “steer and control AI systems much smarter than us”. Big tech is all over this.

And that’s probably a good thing, given the rapid clip of AI development. Almost all of the conversation about risk has to do with the potential consequences of AI systems pursuing goals that diverge from what they were programmed to do and that are not in the interests of humans. Everyone can get behind this notion of AI alignment and safety, but it is only one side of the danger. Imagine what could unfold if AI does do what humans want.

“What humans want,” of course, isn’t a monolith. Different people want different things and have countless ideas of what constitutes “the greater good”. I think most of us would rightly be concerned if an artificial intelligence were aligned with Vladimir Putin’s or Kim Jong Un’s visions of an optimal world.

Even if we could get everyone to focus on the well-being of the entire human species, it’s unlikely we’d be able to agree on what that might look like. Elon Musk made this clear when he shared on X, his social media platform, that he was concerned about AI pushing for “forced diversity” and being too “woke”. (This came on the heels of Musk filing a lawsuit against OpenAI, arguing that the company was not living up to its promise to develop AI for the benefit of humanity.)

People with extreme biases might genuinely believe that it would be in the overall interest of humanity to kill anyone they deemed deviant. “Human-aligned” AI is essentially just as good, evil, constructive or dangerous as the people designing it.

That seems to be the reason Google DeepMind, the company’s AI development arm, recently founded an internal organisation focused on AI safety and on preventing AI from being manipulated by bad actors. But it’s not ideal that what counts as “bad” will be determined by a handful of individuals at this one particular corporation (and a handful of others like it) – complete with their blind spots and personal and cultural biases.

The potential problem goes beyond humans harming other humans. What’s “good” for humanity has, many times throughout history, come at the expense of other sentient beings. Such is the situation today.

In the US alone, billions of animals are subjected at any given time to captivity, torturous practices and denial of their basic psychological and physiological needs. Entire species are subjugated and systematically slaughtered so that we can have omelets, burgers and shoes.

If AI does exactly what “we” (whoever programs the system) want it to, that would likely mean enacting this mass cruelty more efficiently, at an even greater scale and with more automation and fewer opportunities for sympathetic humans to step in and flag anything particularly horrifying.

Indeed, in factory farming, this is already happening, albeit on a much smaller scale than what is possible. Major producers of animal products such as US-based Tyson Foods, Thailand-based CP Foods and Norway-based Mowi have begun to experiment with AI systems intended to make the production and processing of animals more efficient. These systems are being tested to, among other activities, feed animals, monitor their growth, clip marks on their bodies and interact with animals using sounds or electric shocks to control their behaviour.

A better goal than aligning AI with humanity’s immediate interests would be what I would call sentient alignment – AI acting in accordance with the interests of all sentient beings, including humans, all other animals and, should it exist, sentient AI. In other words, if an entity can experience pleasure or pain, its fate should be taken into consideration when AI systems make decisions.

This will strike some as a radical proposition, because what’s good for all sentient life might not always align with what’s good for humankind. It might sometimes, even often, be in opposition to what humans want or what would be best for the greatest number of us. That might mean, for example, AI eliminating zoos, destroying nonessential ecosystems to reduce wild animal suffering or banning animal testing.

Speaking recently on the podcast “All Things Considered”, Peter Singer, philosopher and author of the landmark 1975 book Animal Liberation, argued that an AI system’s ultimate goals and priorities are more important than whether it is aligned with humans.

“The question is really whether this superintelligent AI is going to be benevolent and want to produce a better world,” Singer said, “and even if we don’t control it, it still will produce a better world in which our interests will get taken into account. They might sometimes be outweighed by the interest of nonhuman animals or by the interests of AI, but that would still be a good outcome, I think.”

I’m with Singer on this. It seems like the safest, most compassionate thing we can do is take nonhuman sentient life into consideration, even if those entities’ interests might come up against what’s best for humans. Decentering humankind to any extent, and especially to this extreme, is an idea that will challenge people. But that’s necessary if we’re to prevent our current speciesism from proliferating in new and awful ways.

What we really should be asking is for engineers to expand their own circles of compassion when designing technology. When we think “safe”, let’s think about what “safe” means for all sentient beings, not just humans. When we aim to make AI “benevolent”, let’s make sure that that means benevolence to the world at large – not just a single species living in it. – Los Angeles Times/Tribune News Service
