Generative artificial intelligence (AI) refers to AI models capable of producing new content from existing data. Large language models (LLMs) are a class of generative AI designed for natural language processing. These models use deep learning techniques to predict and generate human-like text based on the vast amounts of data they were exposed to during training. For instance, ChatGPT is an AI language model developed by OpenAI, Bard is a conversational model from Google, and, most recently, LLaMA was developed by Meta AI; all showcase the power of generative AI in producing coherent and contextually relevant text from user prompts.
The Challenges of Generative AI
Despite its popularity and usefulness, generative AI has raised concerns regarding biases, misinformation, and ethical implications. Biases present in the training data can be inadvertently perpetuated in the model's generated content. These biases could be based on gender, race, ethnicity, or other demographic factors, leading to discriminatory outputs. As generative AI models can create text, they have the potential to generate misinformation or fake news, which can be harmful when spread unknowingly.
Ensuring the accuracy and reliability of generated content becomes a crucial challenge. The ethical implications involve responsible deployment and usage of AI models, ensuring they are not used for malicious purposes, such as generating harmful content or enabling deception. Addressing these issues requires ongoing research, transparency in model development, clear guidelines for ethical use, and efforts to reduce biases in training data to build more reliable and responsible generative AI systems.
Governance AI can play a pivotal role in addressing these issues. It applies AI itself to implement the policies, guidelines, and oversight mechanisms that manage and regulate AI systems' behaviour, decision-making processes, and deployment, ensuring that AI technologies are developed and used responsibly, ethically, and transparently.
In combating biases, governance AI can implement comprehensive bias-detection mechanisms during training and fine-tuning. Monitoring the model's outputs can identify biased content, enabling developers to rectify these biases and improve model fairness. Moreover, governance AI can encourage data diversity and inclusivity during dataset curation, ensuring that training data represents different demographics and perspectives and thereby mitigating the propagation of biases in the first place.
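To make this concrete, the sketch below shows one crude form of output monitoring: tallying demographic-term frequencies across a batch of generated texts so that skews can be surfaced for review. The term lists and the audit helper are illustrative assumptions, not any particular platform's method.

```python
# A minimal sketch of output bias monitoring: count how often demographic
# terms appear across a batch of generated outputs. Real audits use far
# richer fairness metrics; the term lists here are illustrative assumptions.
from collections import Counter

DEMOGRAPHIC_TERMS = {
    "gender": ["he", "she", "they"],
    # extend with term lists relevant to the audit at hand
}

def audit_outputs(outputs):
    """Tally demographic-term frequencies so skews can be spotted and rectified."""
    counts = {group: Counter() for group in DEMOGRAPHIC_TERMS}
    for text in outputs:
        tokens = text.lower().split()
        for group, terms in DEMOGRAPHIC_TERMS.items():
            for term in terms:
                counts[group][term] += tokens.count(term)
    return counts

sample = ["The engineer said he would fix it.", "She reviewed the design carefully."]
print(audit_outputs(sample))  # e.g. {'gender': Counter({'he': 1, 'she': 1, ...})}
```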
In the battle against misinformation, governance AI can incorporate advanced fact-checking algorithms. Cross-referencing generated content against credible sources can validate the accuracy of information before disseminating it to users. Furthermore, governance AI can identify and label potentially unreliable content, providing users with transparency about the reliability of the information presented. This capability fosters a culture of trust and empowers users to make informed decisions.
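The following sketch illustrates the cross-referencing idea in miniature: each generated claim is scored against a handful of trusted snippets and labelled before it reaches users. The snippet list, overlap heuristic, and threshold are all simplifying assumptions; real fact-checkers rely on retrieval and natural language inference models.

```python
# A minimal sketch of cross-referencing generated claims against trusted
# sources. A token-overlap heuristic stands in for a real retrieval and
# verification pipeline, and TRUSTED_SNIPPETS is illustrative data.

TRUSTED_SNIPPETS = [
    "water boils at 100 degrees celsius at sea level",
    "the eiffel tower is located in paris france",
]

def support_score(claim):
    """Best token overlap between the claim and any trusted snippet."""
    claim_tokens = set(claim.lower().replace(".", "").split())
    best = 0.0
    for snippet in TRUSTED_SNIPPETS:
        overlap = len(claim_tokens & set(snippet.split()))
        best = max(best, overlap / max(len(claim_tokens), 1))
    return best

def label(claim, threshold=0.5):
    """Attach a transparency label before the claim reaches users."""
    return "supported" if support_score(claim) >= threshold else "unverified"

print(label("Water boils at 100 degrees Celsius at sea level."))  # supported
print(label("Cats are the best swimmers in the ocean."))          # unverified
```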
Regarding ethical concerns, governance AI can enforce guidelines and usage policies to prevent malicious or harmful applications of generative AI models. It can actively monitor the deployment of these models in real time, identifying and flagging potential misuse or unethical behaviour. Additionally, governance AI can engage in continuous ethical auditing, ensuring developers and users adhere to responsible AI practices and guidelines.
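A minimal sketch of such real-time enforcement follows, assuming an illustrative rule list and standard-library logging for the audit trail; deployed systems combine rule-based filters with learned classifiers and human review.

```python
# A minimal sketch of real-time usage-policy enforcement on model outputs.
# The blocked patterns and audit log are illustrative assumptions.
import logging
import re

logging.basicConfig(level=logging.INFO)

BLOCKED_PATTERNS = [
    re.compile(r"\bbuild (a|an) (weapon|explosive)\b", re.IGNORECASE),
    re.compile(r"\bsteal (an? )?identit(y|ies)\b", re.IGNORECASE),
]

def review_output(text):
    """Withhold flagged outputs and log the event for ethical auditing."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            logging.warning("Flagged output matching policy rule: %s", pattern.pattern)
            return "[withheld pending ethical review]"
    return text

print(review_output("Here is a recipe for sourdough bread."))
print(review_output("Step one: steal an identity by..."))
```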
Research Commercialisation for Governance AI
Research activities are crucial in developing governance AI to combat the issues in generative AI. These activities can contribute to explainable AI methods that provide insights into how generative AI models produce their outputs.
Transparent AI systems allow users to understand the reasoning behind generated results, enabling better identification and rectification of biases and misinformation.
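As a toy illustration of what such an explanation can look like, the sketch below applies leave-one-out attribution, scoring each input token by how much a model score drops when the token is removed. The scorer here is a made-up stand-in for any scalar model output, not a real generative model or any vendor's method.

```python
# A toy sketch of one explainability technique: leave-one-out attribution.
# Each input token is scored by how much the output changes when it is
# removed. The scorer is an illustrative assumption.

def score(text):
    """Toy scorer: fraction of tokens drawn from a 'positive' word list."""
    positive = {"good", "great", "reliable"}
    tokens = text.lower().split()
    return sum(t in positive for t in tokens) / max(len(tokens), 1)

def attribute(text):
    """Token importance = drop in score when the token is left out."""
    tokens = text.split()
    base = score(text)
    return [
        (tok, base - score(" ".join(tokens[:i] + tokens[i + 1:])))
        for i, tok in enumerate(tokens)
    ]

for tok, importance in attribute("This model gives reliable answers"):
    print(f"{tok:>10}  {importance:+.3f}")
```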
In addition, natural language processing research can improve fact-checking and verification systems: by integrating external databases and reliable sources, generative AI models can be equipped to validate information before generating a response, reducing the risk of misinformation propagation. Research can also help shape ethical guidelines and frameworks for developing and deploying generative AI models, including approaches for responsible AI governance, clear boundaries for AI applications, and AI systems designed with built-in ethical considerations.
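To illustrate the grounding step described above, here is a minimal sketch in which a response is produced only when it can be validated against an external source. The in-memory knowledge base is an illustrative stand-in for the curated databases or search APIs a real system would query.

```python
# A minimal sketch of validating against an external source before
# responding. KNOWLEDGE_BASE is an illustrative in-memory assumption.

KNOWLEDGE_BASE = {
    ("capital", "malaysia"): "Kuala Lumpur is the capital of Malaysia.",
    ("boiling", "water"): "Water boils at 100 degrees Celsius at sea level.",
}

def retrieve(query):
    """Return the first fact whose key terms all appear in the query."""
    q = query.lower()
    for key_terms, fact in KNOWLEDGE_BASE.items():
        if all(term in q for term in key_terms):
            return fact
    return None

def answer(query):
    """Only respond when the claim can be grounded in a trusted source."""
    fact = retrieve(query)
    if fact is None:
        return "No verified source found; declining to answer."
    return f"{fact} (validated against an external source)"

print(answer("What is the capital of Malaysia?"))
print(answer("Who won the race yesterday?"))
```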
Research on generative AI also requires solid regulatory compliance. Compliance with regulations governing areas such as data privacy and transparency ensures legal adherence and fosters public trust in AI-based products.
As AI continues to impact various sectors, governments and regulatory bodies are increasingly developing frameworks to address potential risks and concerns associated with AI deployment. It is essential for companies and developers working on governance AI to keep abreast of changing regulations and evolving standards to maintain ethical and responsible practices in AI development and deployment.
Additionally, engaging with regulatory bodies and stakeholders can help shape governance AI solutions to meet the wider community's needs. In healthcare and finance, sector-specific regulations are designed to protect consumers, ensure ethical practices, and maintain the integrity of the industries. As such, governance AI solutions targeting these sectors must be carefully developed and tested to meet the stringent requirements and build trust among stakeholders.
Collaboration between AI developers, regulatory bodies, and industry experts is vital to balance innovation and compliance while fostering the responsible use of AI in these critical domains.
The research on governance AI is critical for further commercialisation. Several start-ups are emerging in the field of governance AI, aiming to address the challenges and issues associated with generative AI. Fiddler Labs focuses on explainable AI solutions to bring transparency and interpretability to AI models. Their platform allows businesses to monitor and understand AI models' behaviour by providing insights into the decision-making process of generative AI models.
Another start-up, DarwinAI, specialises in AI explainability and optimisation. Their platform helps businesses develop AI models that are easier to understand and interpret, an approach that can aid in addressing biases and ensuring that generative AI models produce more accurate and trustworthy outputs. In fighting fake news, Alethea AI focuses on synthetic media detection and verification. Their platform aims to identify deepfakes and manipulated content, which can be crucial in combating misinformation generated by generative AI models. By detecting fake or misleading content, such solutions contribute to the reliability of generative AI outputs.
MRANTI as an Innovation Catalyst for Governance AI
The Malaysian Research Accelerator for Technology and Innovation (MRANTI) is an agency under the Malaysian Ministry of Science, Technology and Innovation (MOSTI) with a mandate to accelerate the commercialisation of research and development.
Launched in 2021, MRANTI is gaining prominence by bringing researchers from universities and research institutes together with industry players from many sectors, including healthcare, autonomous vehicles, agriculture, and telecommunications, to solve industrial challenges through innovation. In AI, MRANTI gathers Malaysia’s brightest minds from academia and the start-up ecosystem to produce AI solutions for process automation and data analytics.
Research commercialisation for governance AI is essential to translate cutting-edge innovations into practical solutions addressing critical AI deployment issues. As AI technologies become more pervasive, governance AI plays a crucial role in ensuring responsible and ethical use.
Commercialisation allows innovative governance AI solutions to reach a wider audience, benefiting businesses and organisations across various sectors. MRANTI plays an important role in opening doors of opportunity for Malaysian innovators and researchers to promote governance AI solutions. By bringing these solutions to market, companies can actively combat biases, misinformation, and privacy concerns, fostering trust in AI systems.
Furthermore, research commercialisation enables continuous improvement, driving further advancements in AI transparency, explainability, and regulatory compliance. MRANTI can also support innovators and researchers in engaging with regulatory bodies, industry stakeholders, and end-users, helping shape governance AI to meet specific sector requirements and making it an integral part of critical industries like healthcare and finance.
Research commercialisation empowers the AI community to build more reliable, trustworthy, and accountable AI systems, ultimately creating a safer and more responsible AI ecosystem. By growing the potential of innovators and researchers in this ecosystem, MRANTI also helps develop highly skilled human capital, future-proofing Malaysia for the next wave of the AI revolution.