Will AI push humans aside, or just give us new tools? Six tech experts weigh in



PHILADELPHIA: "Good vs. Evil" was the stark subtitle of the artificial-intelligence (AI) panel earlier this month at Philadelphia Alliance for Capital and Technologies' yearly Phorum conclave at Penn State Great Valley.

The tech-business group summoned technologists to make sense of the disorientingly rapid growth of programs like OpenAI's ChatGPT and Google's Bard, which process online sources to create quick, detailed narratives in response to a user's questions. They are designed to "learn" from repeated use, improving over time.

Will these AI applications boost the speed and reach of human mental labour, enriching developers and backers — or will they learn to replace us, threatening humanity with unemployment, irrelevance, and technological slavery?

Here are some highlights from the PACT panel, hosted by Michael Bachman, emerging-technology architect at Berwyn-based communications-software integrator Boomi.

'Get used to paywalls'

With AI programs automatically mining the Internet for free material to convert into narratives, James Thomason, cofounder of Next Wave Partners and a Silicon Valley new-venture adviser, predicted that the remaining "free" publishers will move fast to charge for whatever they are not already selling.

"Get used to paywalls," Thomason advised.

More broadly, AI brings "a wave of innovation we cannot contemplate," Thomason added. He supports Tesla boss Elon Musk's call for a "pause" in AI development until standards are set, and he expects government to step in.

"If we wait for the private sector, we will be disappointed," he said. "Microsoft recently fired its entire ethics team for AI," even as the software giant is "employing AI at a frantic pace."

He asked: "Will we be able to adapt to AI as we have to other technologies? I am an AI optimist, but I am pessimistic in our ability to adapt," given tech investors' aggressive "obsession" with imposing maximum efficiency to squeeze profits from everything that can be digitally measured.

New technologies can be liberating — for their first users. But as any new tech becomes widely adopted, "it starts to impose its requirements on our society," changing culture and even laws.

Indeed, "the Internet has been a planetwide experiment, [applying] the model of Silicon Valley" to daily life, Thomason said. With well-funded-based companies like Amazon, Facebook, and Uber, "we broke politics, banking, retailing, education," driving established retailers out of business and forcing entrenched players to compete.

"And now we're moving full speed to unleash another new technology," he said.

Thomason is not worried about machines literally destroying humans. Instead, he fears the relentless speedup of machine-aided creativity may "hollow out our humanity," so that we "stop thinking of technology and AI" as tools and instead come to regard their recommendations as "basic to humanity."

Don't sweat the 'AI apocalypse'

"It's too late (for) big, broad regulation. We are not going to pause" AI development, said Ethan Mollick, who teaches business innovation at University of Pennsylvania's Wharton School.

He suggested there has been "too much emphasis on an AI apocalypse — but not enough on the ways that this is massively disruptive," for example to factory work. "What do we need to do to retrain people who are going to be displaced?"

To be sure, Mollick said, "how to stop super intelligence from murdering us all is an interesting question." But a more realistic threat may arise for the CEO who recently told Mollick "he thinks he can fire 80% of his engineers and marketers within 18 months, and replace them all with high school students."

Citing a 2019 paper by Wharton's Daniel Rock on the value of artificial-intelligence skills to employers, Mollick said research suggests it's not routine physical work that's in danger from AI; on the contrary, there's "almost a perfect correlation between how much you earn, are educated, are creative," and how well the programs may be able to do what you do.

Still, he concluded, "we don't know if that means replacing people, or augmenting them" so they can do more with AI assistance.

Don't worry about the engineers' jobs... yet

"We don't recommend" replacing expensive, seasoned engineers with AI-literate students, said Caroline Yap, director of AI Practice at Google. For one thing, "there are copyright issues on the material" that ChatGPT and similar program rely on that require careful judgment by those who would use them commercially. "And you still have to train [AI] to your particular business."

"Our customers are not seeing AI as gloomy," Yap added. "They are finding ways to embrace the technology... I use Alexa at home. Why not something like it at work?"

It's about profit, for sure, Yap said. "Revenue generation is one aspect." But, like other big tools, AI may itself be an attraction for users: "How can I make [young employees] want to come to work, to use AI? What does it mean for job creation?"

Remember what we've already learned

"AI, like all power tools, will allow you to make a mistake with great intensity," Peter Coffee, vice president of strategic research at Salesforce, quipped.

Coffee said computing has accelerated to the point where detecting AI-aided lies is now a basic, required skill: In the 1980s, "computer literacy meant you learned to write in BASIC (computer language). Later it was to control your material on Facebook. Now (with AI) it's a level of informed scepticism on what you are being told."

Maintain scepticism

"We need a culture that knows how to use AI effectively," said investor and philanthropist Esther Dyson. "It's less about regulation, and more to have the proper skepticism about what is human and who is [lying to] you. This is the Jurassic Park line: Just because we can do it, doesn't mean we should!"

'The human brain will always matter'

"The human brain will always matter," said Dean Miller, president and CEO of the Philadelphia Alliance for Capital and Technologies. "Even if you get close to replicating it, you shouldn't, because there are things the machine shouldn't do. There are some scary scenarios in the wrong hands." He prefers to expect, that mostly, "machines will continually help people be more efficient and effective." – The Philadelphia Inquirer/Tribune News Service
