Code Of Conduct: Ethics take centre stage in rapid AI development

November 13, 2023
The Edge
5 mins read

This article first appeared in Digital Edge, The Edge Malaysia Weekly on November 13, 2023 - November 19, 2023

The growing integration of artificial intelligence (AI), particularly its expanding role in smart devices, is sparking concerns about the ethical and safety standards practised by businesses that employ AI.

While AI systems are already deployed across many domains of daily life, including employment, transport, education, health and even the justice system, regulatory policy has yet to catch up, both to ensure AI is developed safely and to make its transformative opportunities available in an inclusive manner.

Digital Edge spoke with businesses across multiple industries that have already adopted AI-related frameworks to ensure their customers’ and users’ privacy and safety are protected.

Reducing non-value-added activities

The Find N3 offers instant access to features needed and unleashes PC-level productivity in the palm of one’s hand (Photo by Oppo)

Jabil, a global manufacturing solutions provider, adopted AI in ways that allow it to capture the benefits that the technology brings across the enterprise.

Its senior director of IT, Vivian Sun, says Jabil uses mature AI technologies to reduce costs, optimise processes and improve quality, focusing on developing reusable AI models instead of point solutions.

“We use AI computer vision to replace previous manual inspection work so our operators can spend time performing value-added tasks in other areas of the production line. In addition, we have rolled out AI capabilities in detecting the use of personal protective equipment (PPE) and safety attire, such as gloves, helmets and glasses. Non-compliance cases are quickly flagged, so we always ensure worker safety in our facilities.

(Photo by Jabil)

“Reducing non-value-added activities or waste has always been a goal at Jabil; by using AI, we are expediting the journey. AI is used throughout the product life cycle, from product design to mass production. With integrated and connected datasets, we understand more comprehensively what works and what does not. So, we can build more efficient solutions to improve our productivity and production methods.

“We have applied AI use cases where we can precisely determine how many chemical composites we use to get to the optimal colour for the product. So, we make sure chemical use is kept at a minimum during production while meeting quality and throughput goals,” says Sun.
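The PPE-detection use case Sun describes can be illustrated with a minimal sketch. Jabil has not published its implementation; the snippet below simply assumes an upstream computer-vision model emits the set of PPE items detected on each worker, and shows the flagging step that follows.

```python
# Illustrative sketch only: Jabil has not published its system.
# An upstream vision model is assumed to output the PPE items it
# detected on a worker in a frame; this step flags what is missing.

REQUIRED_PPE = {"gloves", "helmet", "glasses"}  # assumed policy

def flag_non_compliance(detected_items: set) -> set:
    """Return the required PPE items missing from a detection result."""
    return REQUIRED_PPE - detected_items

# A worker detected wearing only gloves and a helmet would be
# flagged for the missing glasses.
missing = flag_non_compliance({"gloves", "helmet"})
```

In practice the hard part is the detection model itself; once detections exist, the compliance check reduces to this kind of set difference.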

Jabil’s use of AI is governed by its data and AI policy, which applies to employees, customers and external partners interacting with or handling data and AI technologies on Jabil’s behalf. The policy provides specific guardrails on how to handle sensitive information and covers areas such as data privacy protection and governance.

“In partnership with the operations, business and cybersecurity teams, specific data classification rules and guidelines have been established. Jabil’s Data and AI council — which consists of representatives from all functions, including legal and compliance — helps ensure our data and AI-based solutions are responsible and ethical,” says Sun.

Anonymising data

In general, the telecommunications industry deals with an incredible amount of data, from network performance monitoring, customer services, fraud detection and cybersecurity to routing one’s YouTube traffic through the least congested links every time one clicks on a video of interest.

(Photo by Oppo)

For solutions providers such as Telekom Malaysia, AI and machine learning make it possible to carry out a multitude of tasks at an accelerated pace and with a high level of efficiency. This in turn reduces the cost to serve and improves customer experience.

Telekom Malaysia Research and Development (TM R&D) CEO Dr Sharlene Thiagarajah says telcos use AI to predict equipment failures, assist with capacity planning and improve quality of service.

“Instead of reacting to issues after they occur, our internally built AI monitors network health and can predict failures in the customer’s network, the location of the fault and its estimated impact.

“With this knowledge, coupled with a scheduling and planning system, engineers can match the issue with the correct network team based on their availability, workload and issue severity. This can help telcos significantly increase customer satisfaction.”
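The matching Sharlene describes — pairing a predicted fault with a team based on availability, workload and severity — can be sketched as a simple dispatch rule. The `Team` fields and the "most severe issue first, least-loaded available team" policy below are assumptions for illustration, not TM's actual scheduler.

```python
# A minimal sketch of the issue-to-team matching idea; field names and
# the scoring rule are invented for illustration, not TM's system.
from dataclasses import dataclass

@dataclass
class Team:
    name: str
    available: bool
    open_tickets: int  # current workload

def dispatch(issues: list, teams: list) -> dict:
    """Assign predicted faults, most severe first, to the least-loaded
    available team."""
    assignments = {}
    for issue in sorted(issues, key=lambda i: i["severity"], reverse=True):
        candidates = [t for t in teams if t.available]
        if not candidates:
            break  # nothing can be assigned right now
        team = min(candidates, key=lambda t: t.open_tickets)
        assignments[issue["id"]] = team.name
        team.open_tickets += 1  # account for the newly assigned job
    return assignments
```

A real scheduler would also weigh travel distance, skill sets and SLA clocks, but the core idea is the same greedy matching.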

Over the years, TM R&D has developed competencies in big data and analytics, reaching the stage where its products can be built with predictive analytics. AI, machine learning and generative AI have since become an intrinsic part of TM R&D’s product design and development language.

“Beyond predictive models, TM R&D also uses AI to analyse network data, customer feedback and preferences, identify patterns and generate insights that can inform product design decisions. It can also be used to automate repetitive tasks such as network monitoring and complaint ticket creation,” says Sharlene.

For TM R&D, private and confidential information is not part of the AI training data sources, even though its AI systems often process large amounts of data, including personal and sensitive information, she continues.

“Keeping private and confidential data open would attract data-scraping bots, and the harvested data could then become part of open large language models. This is entirely avoided by storing data in a private cloud accessible internally and subject to various audit controls,” Sharlene explains.

To handle private information, TM R&D implements Privacy by Design principles that anticipate and prevent privacy-invasive events before they occur. This includes anonymising data, minimising data collection and applying data protection measures throughout the development and deployment of AI systems.
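The anonymisation and data-minimisation steps mentioned above can be illustrated with a toy record cleaner. The field names and the salted-hash pseudonymisation are assumptions for the example, not TM R&D's actual pipeline.

```python
# A toy illustration of anonymisation and data minimisation; field
# names and the salted-hash scheme are assumptions, not TM R&D's pipeline.
import hashlib

DROP_FIELDS = {"name", "ic_number", "phone"}  # direct identifiers: never kept
PSEUDONYMISE_FIELDS = {"account_id"}          # kept, but replaced by a hash

def anonymise(record: dict, salt: str) -> dict:
    out = {}
    for key, value in record.items():
        if key in DROP_FIELDS:
            continue  # data minimisation: drop what the task does not need
        if key in PSEUDONYMISE_FIELDS:
            # One-way salted hash so records can still be linked
            # without exposing the original identifier.
            value = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        out[key] = value
    return out
```

Note that hashing alone is not full anonymisation — re-identification from quasi-identifiers is a separate risk — which is why Privacy by Design pairs it with minimising what is collected in the first place.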

(Photo by TM R&D)

“The AI systems adopted by TM R&D are trained to interact only with data that is relevant to the specific task they are designed to perform to prevent the AI from generating responses that are outside of its area of expertise or that could be harmful or offensive,” Sharlene says.

“The AI systems are also designed with information guardrails in place to prevent the AI from generating responses that are inconsistent with TM R&D’s values or that could be harmful or offensive. In addition, the AI systems use only secured websites and documents as inputs to prevent the AI from being poisoned by malicious actors who may try to insert malicious scripts into the data.”

In addition to these measures, TM R&D has a team of in-house subject matter experts who are responsible for monitoring the AI systems and ensuring that they are performing as expected. If any harmful or offensive responses are detected, the AI systems can be updated or disabled to prevent the problem from recurring.

Foundation of future tech

AI, especially generative AI such as ChatGPT, has recently gained prominence and become an accessible tool for the masses, though it was once considered a far-future technology reserved for a high-tech, utopian society.

That is not the case now, as AI-embedded apps, widgets and software are available at one’s convenience.

Blade Liu, associate vice-president and president of AI and Data Engineering System at Oppo, tells Digital Edge that, as a smartphone manufacturer, Oppo recognised this trend early on.

“In China, Oppo has been using its self-developed intelligent assistant, Xiaobu (Breeno). Currently, Oppo’s self-trained large language model, AndesGPT, is integrated into Breeno to provide users with personalised AI experiences, and in the future, it will be further deployed in ColorOS and various other scenarios, making devices smarter and more useful,” he says.

“Oppo’s AI combines cloud and device-side technologies to create a user-specific content library, offering intelligent services in relevant contexts. For example, Oppo’s AI learns and adapts to users’ habits, delivering tailored services to meet individual needs based on the device being used.”

How do Oppo AI features learn about its users?

“There are three ways that our AI features collect data that can assist our users,” says Liu. “It is primarily done through users’ daily interactions with large language models and their authorisation of personal information to Oppo, via personalised Q&A and services. We have also established collaborations with leading industry platforms to enhance professional knowledge and provide real information and services efficiently.

“It is important to empower the super AI assistant to recognise complex user intents, and intelligently handle various scenarios in work, life and creativity, especially in supporting creative content creation.”

AndesGPT, the underlying technology for Oppo’s intelligent assistant, Breeno, demonstrates excellent overall performance in the SuperCLUE benchmark, ranking among the top Chinese closed-source models. It excels in professional capabilities and knowledge domains, including encyclopaedic knowledge and computation, with a particular focus on conversational abilities.

Users might ask their AI chatbot assistants about things they want to keep private. In such cases, says Liu, Oppo places a strong emphasis on user privacy and has established dedicated organisations to ensure business compliance.

“To protect user privacy with Breeno and AndesGPT, the training data for Oppo’s large models undergoes strict anonymisation and review to exclude any user-related private data,” he says.

“In addition, the built-in privacy feature is enhanced with explicit user authorisation. For highly sensitive data, however, on-device processing is prioritised to avoid uploading to the cloud. The advantage of on-device processing is a significant reduction in privacy risks while enhancing personalisation.”

In the early days of AI chatbots, these smart assistants might suggest something unsafe or harmful, depending on the prompt or question given to them.

To address this, Liu says, Oppo employs multiple safety measures: accurate safety classification when filtering pre-training data, fine-tuning on a safety instruction set on the order of 100,000 examples, alignment with human values through reinforcement learning, and real-time sensitive word and semantic checks to ensure AI responses are safe and do not pose risks to individuals or groups.
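The last of those layers, the real-time sensitive-word check, is the simplest to illustrate. The word list and fallback message below are invented for the example; production systems like the one Liu describes pair such lexical filters with semantic classifiers rather than relying on word lists alone.

```python
# A toy illustration of a real-time sensitive-word check; the term list
# and refusal message are placeholders invented for this example.
SENSITIVE_TERMS = {"make a weapon", "self-harm"}  # placeholder list

def passes_safety_check(response: str) -> bool:
    """Reject a draft response containing any listed sensitive phrase."""
    text = response.lower()
    return not any(term in text for term in SENSITIVE_TERMS)

def guarded_reply(response: str) -> str:
    # Fall back to a refusal when the draft response trips the filter.
    if passes_safety_check(response):
        return response
    return "Sorry, I can't help with that."
```

This filter runs after generation, which is what makes it "real-time": every draft reply is screened before it reaches the user.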

“In the end, Oppo’s approach centres around meeting user needs, emphasising that every product and service deployed must be useful. AI, machine learning, and generative AI are considered tools to enhance user experience, with the primary focus on addressing the question of whether users need these technologies and whether they can improve the overall user experience,” he explains.