THERE is no denying that artificial intelligence (AI) is taking the modern world by storm.
Following the launch of the popular AI chatbot ChatGPT in late 2022, AI has dominated headlines and conversations in 2023, leading Collins Dictionary lexicographers to name it the word of the year.
Fears of the effect AI could have on society and humanity at large prompted an open letter, signed by thousands of public figures and headed by tech mogul Elon Musk and Apple cofounder Steve Wozniak, calling for an immediate pause on giant AI experiments like ChatGPT until proper guidelines are established.
Of course, the rise of AI has not gone unnoticed by the Malaysian government either, with Science, Technology and Innovation Minister Chang Lih Kang saying his ministry is looking into the possibility of regulating AI use in the country.
However, before formulating any laws to govern AI, Chang’s ministry is looking into drafting an AI code of ethics.
“The plan to come up with AI regulations is the end goal, and it starts with the establishment of AI governance and a code of ethics, which are expected to be ready by next year,” Chang says.
Given that AI is still a burgeoning field, there remains much to learn and understand about its capabilities and potential risks. So developing effective regulations for this nascent technology presents a unique challenge for all stakeholders.
Chang has already identified an obstacle on the road to AI regulation: the possibility that such regulations and monitoring will slow down technology innovation.
“This is because an innovation process that is governed by strict regulations will discourage innovators or businesses from taking risks,” he says.
But an AI code of ethics could prove crucial as an immediate foundation for stakeholders, says International Islamic University Malaysia computer science and AI expert Prof Emeritus Datuk Tengku Mohd Tengku Sembok.
“The professional computing community and the users of any digital systems should welcome the initiative taken by the Mosti minister to come up with an AI code of ethics and AI laws,” he says, referring to the Science, Technology and Innovation Minister.
“The code of AI ethics will serve as a good immediate foundation for organisations to practise and prepare for the enactment of AI laws in the future.
“The code of ethics is not legally mandatory and is intended to set ethical standards, but the law will be legally mandatory,” he explains.
Tengku Mohd points out that enacting an AI law will take time, as it must first go through the legislative process in Parliament. Thus, it is prudent for the government to craft a code of ethics before going straight to enacting a law.
However, Tengku Mohd wants to go one step further to strengthen the governance of digital systems and proposes the establishment of a Board of Computer Professionals.
“I am of the opinion that the government should have the code of ethics, followed by law, for digital systems in general, with or without the use of AI technology.”
He points out that any engineering project must be endorsed by engineering consultants under the purview of the Board of Engineers because such projects involve safety; similarly, because digital systems are encroaching on the safety and security of every user in all daily activities, they should be subject to the same type of oversight.
For now, the immediate concern is the upcoming AI code of ethics, and Tengku Mohd has some suggestions on what the code should strive to achieve.
Firstly, he says the code of ethics should provide a guideline for individuals and organisations on how to design, develop, deploy, and use AI technology.
This must be done in a way that is trustworthy and prioritises human dignity, equality, preservation of the environment, respect for cultural diversity, and data responsibility.
A standard operating procedure must be developed, and certified AI experts must be recruited to certify and secure AI applications in any system, he adds.
Tengku Mohd also says the code must address the principles recommended by the United Nations, which are: to do no harm; defined purpose, necessity and proportionality; safety and security; fairness and nondiscrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and, lastly, inclusion and participation.
The government is well aware of the developments of AI regulations worldwide, says Chang, pointing to the European Union’s AI Act which was recently provisionally agreed upon.
Aside from the EU’s AI Act, many other countries have also started the process of developing ethical guidelines or laws related to AI, says Tengku Mohd.
In the United States, for example, there may not be federal AI regulations yet, but individual states like California have implemented laws such as the California Consumer Privacy Act, giving consumers the right to opt out of any business’s AI or automation programmes. The United Kingdom is exploring AI ethics, while bodies such as the Information Commissioner’s Office guide responsible AI use in the country.
In China, authorities have already released guidelines for the ethical use of AI, and the government is actively working on AI-related regulations. In neighbouring Singapore, there is already an AI governance framework that helps organisations validate the performance of their AI systems through standardised tests against 11 principles.