PETALING JAYA: Companies should put in place effective policies on the use of the artificial intelligence-powered tool ChatGPT in the workplace as its use increases among employees, say human resources and IT experts.
These policies should cover issues such as confidentiality, regulation and quality, they said.
Describing ChatGPT as the latest disruptive technology after the Internet and the smartphone, Universiti Sains Malaysia cybersecurity researcher Assoc Prof Dr Selvakumar Manickam said it could be a powerful tool for companies for effective communications, marketing, and planning.
“As such, policies on effective use of ChatGPT should be encouraged in companies. It helps in data gathering, analysing and providing decision support results.
“Of course, these policies should also cover issues such as confidentiality, regulation and quality,” he said.
However, ChatGPT could be leveraged by cybercriminals to carry out new forms of phishing attacks, using it to create emails or messages that could bypass email security scanners, Selvakumar added.
This is because ChatGPT can accelerate the learning process for aspiring hackers, he said.
“From its usage in companies, employees may inadvertently feed data and information into ChatGPT, which is then incorporated as part of ChatGPT’s knowledge corpus, potentially exposing a company’s sensitive data to other users,” he said.
Legal practitioner Chia Swee Yik said there was always a need for some sort of IT or Internet policy that served as a control.
“This may be done by amending an existing policy, introducing a new one or making a statement to notify employees about it.
“I think employees are already using ChatGPT to aid their work, especially those in content production or generation duties.
“So, this certainly comes with risks to employers such as confidentiality, accuracy of information, and copyright, just to name a few,” said Chia, adding that controls via such policies should be put in place, clearly setting out what the expectations around its use would be.
In terms of reprimanding employees who misuse ChatGPT, he said that it would depend on the policy, but disciplinary action should be consistent with employers’ disciplinary policy and commensurate with the severity of offences.
Malaysian Employers Federation (MEF) president Datuk Dr Syed Hussain Syed Husman said stakeholders had only just begun to hear about ChatGPT and what it could do.
As such, he said it was still too early to draw up any guidelines or policies until they understood it in more detail.
“Yes, if it’s going to be mainstream, then like all things, policies must be put in place for governance – it is the right thing to do.
“Like all new technology or system or communication language, we have to see its advantages and limit its negative implications,” he said, adding that at present, stakeholders had not brought up the issue.
The guidelines for ChatGPT would depend on the kind of work or industry it was being used for, said Associated Chinese Chambers of Commerce and Industry of Malaysia (ACCCIM) treasurer-general Datuk Koong Lin Loong.
This was because the nature of the work could be technical or carry specific requirements, and as these variables differ, policies should be tailored accordingly, he added.
“For example, a lot of people use Google, but there is no specific guide to it. Same with social media. But only when things happen can we formulate what can or cannot be done.
“You can have a pen knife, which is used to open letters and boxes, but if it is misused, the knife can be dangerous. So how we use it is important,” said Koong.
ChatGPT belongs to the generative pre-trained transformer (GPT) family of large language models developed by OpenAI, a US-based artificial intelligence company.