Singapore embraces AI to solve everyday problems


— Photo by Jisun Han on Unsplash

SINGAPORE: Booking a badminton court at one of Singapore’s 100-odd community centres can be a workout in itself, with residents forced to type in times and venues repeatedly on a website until they find a free slot. Thanks to AI (artificial intelligence), it could soon be easier.

The People’s Association, which runs the community centres, worked with a government tech agency to build a chatbot powered by generative artificial intelligence to help residents find free courts in the city-state’s four official languages.

The booking chatbot, which could be rolled out shortly, is among more than 100 generative AI solutions spurred by the AI Trailblazers project, launched last year to apply the technology to everyday problems.

The project, backed by Singapore government agencies and Google, has also led to the development of tools to scan job applicants’ CVs, develop customised teaching curriculums, and generate transcripts of customer service calls.

It is part of the South-East Asian nation’s AI strategy, which is light on regulation and keen on “AI for all”, said Josephine Teo, minister for communications and information.

“Regulations are certainly part of good governance, but in AI, we have to make sure there is good infrastructure to support the activities,” she said at a briefing last month at Google’s Singapore office where some of the new tools were demonstrated.

“Another very important aspect is building capabilities ... (and) making sure that people not only have access to the tools, but are provided with opportunities to grow the skills that will enable them to use these tools well,” Teo said.

With an explosion in the use of generative AI globally, governments are racing to curb its harms – from election disinformation to deepfakes – without throttling innovation or the potential economic benefits.

In Singapore, the focus is on AI adoption in the public sector and industry, and building an enabling environment of research, skills and collaboration, said Denise Wong, an assistant chief executive at Infocomm Media Development Authority (IMDA), which oversees the country’s digital strategy.

“We are not looking at regulation – we see a trusted ecosystem as critical for the public to use AI confidently,” she told the Thomson Reuters Foundation.

“So we need an ecosystem where companies are comfortable, that allows for innovation and to deploy in a way that is safe and responsible, which in turn brings trust,” she said.

Responsible AI

With its stable business environment, Singapore consistently ranks near the top of the Global Innovation Index, climbing to fifth place last year on the strength of its institutions, human capital and infrastructure.

On AI, Singapore was an early adopter, releasing its first national AI strategy in 2019 with the aim of individuals, businesses, and communities using AI “with confidence, discernment, and trust”.

It began testing generative AI tools in its courts last year and uses them in schools and government agencies, and in December it released its second national strategy, with the mission “AI for the public good, for Singapore and the world”.

Also last year, Singapore set up the AI Verify Foundation to develop testing tools for responsible use, and a generative AI sandbox for trialling products. IMDA and technology companies IBM, Microsoft, Google and Salesforce are among its primary members.

The testing toolkit, available on the code-sharing platform GitHub, has drawn the interest of dozens of local and global companies, Wong said.

“It provides users the means to test on parameters they care about, like gender representation or cultural representation, and nudges them toward the desired outcome,” she added.

In tests by tech firm Huawei, the toolkit highlighted racial bias in the data, while tests by UBS bank prompted reminders that certain attributes in the data could affect the model’s fairness, according to IMDA.

“We want to enable everyone to use AI responsibly. But governments cannot do this on their own,” Wong said.

Goldilocks model

Worldwide, there are more than 1,600 AI policies and strategies from 169 countries, according to the Organisation for Economic Co-operation and Development (OECD).

The United States has opted for a market-based model with minimal regulation, while Europe has embraced a rights-based approach, and China has prioritised sovereignty and security, said Simon Chesterman, a senior director at AI Singapore, the lead government programme.

Singapore has taken a different path.

“For small jurisdictions like Singapore, the challenge is how to avoid under-regulating – meaning you expose your citizens to risk – or over-regulating, meaning you might drive innovation elsewhere and miss out on the opportunities,” he said.

“In addition to this Goldilocks idea of regulation, there is a real willingness to partner with industry ... because industry standards and choices will always be the first line of defence against problems associated with AI,” he said.

“It also increases the chances that Singapore can reap the benefits of the new knowledge economy.”

The 10-member Association of Southeast Asian Nations’ guide to AI governance and ethics, released this month, recommends principles of transparency, fairness and equity, accountability and integrity, and “human-centricity”.

Yet member countries including Singapore, Cambodia and Myanmar have been criticised for using AI to enhance surveillance, including with facial recognition and crowd analytics systems, and patrol robots.

A second edition of the AI Trailblazers project will be launched in Singapore this year and will help up to 150 more organisations build generative AI solutions for everyday challenges, Teo said.

While these collaborations between the government, industry and academia can accelerate technological progress, there are risks, warned Ausma Bernot, a researcher at Griffith University in Australia.

“There is the possibility of becoming overly reliant on these corporations in the medium- to long-term,” she said.

“The challenge is striking a balance between cooperation and maintaining sovereign control over critical AI infrastructure.”

At the Trailblazers event, a short film on the People’s Association’s booking chatbot created a buzz of excitement.

There were more than 140,000 badminton court bookings in 2022, so a tool that makes the process easier is welcome, said Weng Wanyi, director of the National AI Office.

“It will save time and effort,” she said. “At the end of the day, it’s about solving real problems with technology.” – Thomson Reuters Foundation
