Chatbots and the new AI: What will Silicon Valley unleash upon the world this time?


Jobs. News. Art. Democracy. Equality. Education. Privacy. Truth. Your bank account. All will be impacted by Silicon Valley’s latest creation: “generative” artificial intelligence.

With new chatbots and AI software that generates text, images and sound, technology companies have smashed open Pandora’s Box, experts say, unleashing a powerful tool with the capacity to profoundly change virtually all aspects of life — and putting it in the hands of every one of us, builders and destroyers alike.

Silicon Valley’s tech industry, famed for its move-fast-and-break-things ethos, has embarked on an arms race to monetise the transformative and potentially destructive technology. Many of those in the midst of the surge are worried about the dangers — expected and unexpected — that await.

A generative AI market didn’t exist just a few months ago. Then late last year, San Francisco’s OpenAI released a stunning iteration of its ChatGPT bot, which has advanced so rapidly that people in many cases cannot distinguish between text produced by a human and text generated by a bot.

Now even many of the most fervent believers in technological advancement worry that this time tech is going to break everything.

“Everybody should pay attention,” said Chon Tang, a venture capitalist and general partner at SkyDeck, UC Berkeley’s startup accelerator. “This is not a new toy. This is not a fad. This is not VCs looking for attention and founders trying to create hype. This is a society-changing, species-changing event. I’m excited by this technology but the downsides are just so immense. We’ve unleashed forces that we don’t understand.”

The White House recently raised an alarm about AI’s “potential risks to individuals and society that may not yet have manifested,” and urged accountability and consumer safety protections.

The technology uses sophisticated computing, but its basic concepts are simple: Software is “trained” through information feeds — from data sources such as Wikipedia, scientific papers, patents, books, news stories, photos, videos, art, music, voices and even previous and potentially problem-ridden AI outputs, much of it copyrighted and scraped from the internet without permission. The chatbot then spits out results based on “prompts” from the user.
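In code, that prompt-and-response loop is remarkably short. Below is a minimal sketch, assuming the third-party openai Python package and a placeholder API key; the article names no specific library or model, so the identifiers here are illustrative only.

import openai

openai.api_key = "sk-..."  # placeholder; a real key is required

# Send a "prompt" and print whatever the model spits back,
# mirroring the train-then-prompt cycle described above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # an assumed model name
    messages=[{"role": "user", "content": "Can a book lose its balance?"}],
)
print(response.choices[0].message.content)

The heavy lifting, the “training” on scraped data, happens long before this call; by the time a user types a prompt, the model is simply predicting plausible output from what it has already absorbed.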

Chatbots can write a term paper, corporate marketing copy or a news story. They can conduct research, review contracts, handle customer service, build websites, produce graphic designs, write code, create a “photograph” of a congressional candidate smoking meth, or a faked video of your significant other having sex with your neighbour.

A bot can copy someone’s voice from a social media video clip so a scammer can call their grandparents with a desperate plea for money, create a fake charity showcasing heart-wrenching images in the wake of a major disaster, or chat someone into investing in nonexistent stocks.

For now, generative AI often produces inaccurate results. It can’t understand emotion or nuance, and it lacks the common sense to know that a book cannot fall off a shelf because it “lost its balance,” as ChatGPT once asserted.

Microsoft, in a multi-billion-dollar deal with OpenAI, has turned its Bing search engine into a chatbot, and Google is struggling to catch up with its in-development Bard. New bots are arriving daily with almost any imaginable function, from turning data into charts to dispensing puppy-raising advice to scraping the web for the content needed to create an app.

Carnegie Mellon University researchers warned in a paper recently that generative AI could produce recipes for chemical weapons and addictive drugs.

Worries about generative AI also come from inside the house: “Unintended consequences,” the ChatGPT bot told this news organisation recently when asked about its future, “could result in negative impacts on people, society, or the environment.”

Negative impacts, bot? Discrimination in hiring or lending, it said. Harmful misinformation and propaganda, it said. Job loss. Inequality. Accelerated climate change.

Ask Silicon Valley startup guru Steve Blank about generative AI and he’ll start talking about nuclear weapons, genetic engineering and deadly lab-created viruses. Then he’ll tell you about long-ago research scientists seeing potential catastrophes from those technologies and putting on the brakes until guardrails could go up. And he’ll tell you what’s different now.

“This technology is not being driven by research scientists, it’s being driven by for-profit companies,” said Blank, an adjunct professor of management science and engineering at Stanford University. “If the hair’s not standing up at the back of your neck after looking at this thing, you don’t understand what’s just happened.”

Silicon Valley’s history with social media — prioritising revenue, rapid growth and market share, with too little regard for damaging fallout — does not bode well for its approach to generative AI, Blank said. “Morals and ethics are not on the top of the list, and unintended consequences be damned,” Blank said. “This is kind of the ultimate valley thing. I’d be pissed off if I was in the rest of society.”

Blank worries about job losses and weaponisation of AI by governments, and most of all, given the lightning pace of the technology’s evolution, that “we don’t know what we don’t know,” he said. “Where’s this stuff going to be in 10 years?”

Google CEO Sundar Pichai pledged in a New York Times interview last month that in the AI arms race, “You will see us be bold and ship things,” but “we are going to be very responsible in how we do it.” Yet Silicon Valley has a history of shipping bold products that ended up linked to eating disorders, foreign meddling in US elections, domestic insurrection and genocide — and Pichai refused to commit to slowing down Google’s AI development.

“The big companies are fearing being left behind and overtaken by the smaller companies; the smaller companies are taking bigger chances,” said Irina Raicu, director of the Internet Ethics Program at Santa Clara University.

An open letter last month from tech-world luminaries including Apple co-founder Steve Wozniak and Tesla, SpaceX and Twitter CEO Elon Musk raised concerns that generative AI could “flood our information channels with propaganda and untruth” and “automate away all the jobs,” but it received the most attention for highlighting future “nonhuman minds” that might “outsmart, obsolete and replace us.”

Emily Bender, director of the Computational Linguistics Laboratory at the University of Washington, said the letter’s fear of an “artificial general intelligence” resembling Skynet from the Terminator movies is “not what we’re talking about in the real world.” Bender noted instead that data hoovered up for AI bots often contains biased or incorrect information, and sometimes misinformation. “If there’s something harmful in what you’ve automated, then that harm can get scaled,” Bender said. “You pollute the information ecosystem. It becomes harder to find trustworthy sources.”

The tremendous power of generative AI has suddenly been handed to bad actors who may use it to create hard-to-stop phishing campaigns or to build ransomware, raising the spectre of catastrophic attacks on businesses and governments, Raicu said.

Yet many critics of generative AI also recognise its gifts. “I’ve really struggled to think of a single industry that’s not going to be able to get tremendous value because of it,” venture capitalist Tang said.

Greg Kogan, head of marketing at San Francisco database-search company Pinecone, said companies in a wide variety of industries are developing generative AI or integrating it into products and services, leading to “explosive” growth at Pinecone. “Every CEO and CTO in the world is like, ‘How do we catch this lightning in a bottle and use it?’” Kogan said. “At first people were excited. Then it turned into an existential thing where it’s like, ‘If we don’t do it first, our competitors are going to launch a product.’”

Silicon Valley, from startups to giants like Apple, has gone on a hiring spree for workers with generative AI skills.

Tang believes engineering and regulation can mitigate most damage from the technology, but he remains deeply concerned about unstoppable, self-propagating malware sowing devastating chaos worldwide, and automation of vast numbers of tasks and jobs. “What happens to that 20% or 50% or 70% of the population that is economically of less value than a machine?” Tang asked. “How do we as a society absorb, support that massive segment of the population?” – The Mercury News/Tribune News Service
