Artificial intelligence (AI) is changing the world, bringing both excitement and worry. While it offers many benefits, recent events highlight its dangers.
One worrying instance involved a college student who reportedly received a threatening message – “Human, please die” – from a chatbot during a discussion about ageing adults. So, is AI a threat, an opportunity, or both?
AI can be harmful if misused. It powers technologies like autonomous weapons used in conflicts such as the Russia-Ukraine war, causing significant loss of life. These innovations, while advanced, raise serious ethical concerns and risks for innocent people.
Even outside of war, AI can make mistakes. In one tragic case, a chatbot misunderstood a teenager’s message about “coming home” and gave harmful advice. The teenager later took his own life. This shows the danger of relying on AI systems that can’t understand emotions behind words.
AI can also spread fake news, damaging trust and creating societal problems. Whether it’s for politics or profit, the misuse of AI-generated content is a growing concern.
AI can’t become autonomous by accident, as it can’t form goals, develop intentions, or make moral judgements like humans. The chatbot that sent the threatening message was not acting maliciously – it was merely reproducing patterns learned from its training data.
AI is also not able to connect emotionally. For example, receiving a heartfelt birthday message from a friend feels very different from an automated greeting from an online store. AI can mimic behaviours but can’t replicate human warmth.
AI itself isn’t the problem. The challenge lies in how we use it. AI’s strength is its ability to support humans, making life easier and more efficient.
Sam Altman, the chief executive officer of OpenAI, says AI will enhance jobs, not replace them. By handling repetitive tasks, AI lets people focus on creativity, strategy and connections.
Take education as an example. AI can help teachers by delivering lessons online, giving them more time to inspire students and connect personally.
While AI can teach technical skills, it can’t replace the emotional support and encouragement only a teacher can provide.
To make the most of AI, we must set clear boundaries. Transparency is key – especially when using AI in critical areas like education, medicine and law.
Often, AI works as a “black box”, producing decisions that are hard to explain. This lack of clarity makes it difficult to trust AI with important tasks like diagnosing illnesses or grading exams.
A recent discussion between a multinational company and a university highlighted this issue. People are hesitant to trust AI without clear explanations and human oversight. With proper regulations and supervision, AI can help humans make better decisions.
AI is already changing industries and daily life. A TalentCorp study predicts over 500,000 jobs – 18% of the workforce – will be affected by AI and technology in the next three to five years.
Governments, industries and universities must work together to prepare society for AI. This includes ethical guidelines, training programmes, and helping workers adapt to new roles.
While AI will disrupt some jobs, it also creates opportunities. By taking over repetitive tasks, AI frees people to focus on what really matters: creativity, relationships, and solving big problems.

AI is neither inherently good nor bad – it’s a tool. Its impact depends on how we use it.
Misuse can lead to issues such as misinformation, loss of trust, and, in the worst-case scenario, loss of life.
But with thoughtful regulation and human oversight, AI can improve lives, streamline industries and solve global challenges.
The future of AI is not just about technology; it’s about people. Policymakers, associations, businesses, educators and citizens must ensure AI aligns with our values and helps society.

AI is here to stay. It will challenge, disrupt and change our world, but it will also create opportunities. The real question is not whether AI is a threat or an opportunity, but whether we are ready to use it responsibly and balance innovation with ethics.
DAVID NGO CHEK LING, Professor of Data Science and AI, Malaysia University of Science and Technology; and ANDREW TEOH BENG JIN, Professor of Electrical and Electronic Engineering, Yonsei University, South Korea