AWS re:Invent 2023 — Utilising generative artificial intelligence to its fullest potential

December 1, 2023
The Edge
5 mins read

LAS VEGAS (Nov 30): Generative artificial intelligence (AI) has become the talk of the town, allowing people to harness and organise data while building and customising applications using foundation models (FMs).

Amazon Web Services (AWS) AI and data vice-president Dr Swami Sivasubramanian said building a generative AI application requires a few essentials such as access to a variety of large language models (LLMs) and FMs.

To customise FMs with data, there is a need for secure and private environments as well as the right tools to build and deploy the app.

“AWS has a long history of providing our customers with a comprehensive set of AI and machine learning (ML) data and compute stacks. We like to think of our AI offerings as a three-layer stack,” said Swami in his keynote speech at AWS re:Invent 2023 in Las Vegas, Nevada.

The lowest tier is the infrastructure for FM training and inference, while the middle layer comprises the tools to build with FMs and LLMs. Meanwhile, the top layer consists of applications that leverage LLMs.

Swami pointed out that search and personalisation experiences for users are enabled through a data type called vector embeddings.

He said vector embeddings are produced by FMs, which translate text inputs such as words, phrases and larger units of text into numerical representations.

Humans may understand text and the meaning behind it. However, machines only understand numbers, he added.

“We have to translate them into a format that is suitable for machine learning — vectors — to allow models to easily find the relationships between similar words. For instance, a cat is closer to a kitten, or a dog is closer to a puppy,” he said.

“This means your FMs can now produce more relevant responses to your customers. Vectors are ideal for supercharging your applications.”
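The cat-and-kitten example above can be sketched with cosine similarity over toy vectors. The three-dimensional embeddings below are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: close to 1.0 for similar directions, lower for dissimilar ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical three-dimensional embeddings (made up for this sketch).
embeddings = {
    "cat":    [0.90, 0.80, 0.10],
    "kitten": [0.85, 0.75, 0.15],
    "puppy":  [0.10, 0.30, 0.90],
}

# "cat" scores much higher against "kitten" than against "puppy".
print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))
print(cosine_similarity(embeddings["cat"], embeddings["puppy"]))
```

This is how a model "finds the relationship between similar words": nearby vectors mean related meanings.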

In this respect, AWS offers the Amazon Titan Text Embeddings model to enable users to translate text data into vector embeddings for a variety of use cases.
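As a rough sketch, a request to the Titan Text Embeddings model on Amazon Bedrock can be assembled as below. The model ID and the "inputText" field follow AWS's published examples for this model; actually sending the request requires AWS credentials and the boto3 `bedrock-runtime` client, shown only in comments here.

```python
import json

def titan_embedding_request(text: str) -> dict:
    """Build the keyword arguments for a Bedrock InvokeModel call to Titan Text Embeddings."""
    return {
        "modelId": "amazon.titan-embed-text-v1",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"inputText": text}),
    }

request = titan_embedding_request("A cat is closer to a kitten than to a puppy.")

# With credentials configured, this would be sent via boto3:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**request)
#   embedding = json.loads(response["body"].read())["embedding"]
print(request["modelId"])
```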

To build generative AI applications that are unique to specific businesses, data is critical, said Swami.

He mentioned that data is the differentiator between a generic generative AI application and one that truly understands a business and its customers. Swami said one way to customise FMs with data is through a technique called fine-tuning.

“You provide a labelled data set, annotated with additional context, to train the model on specific tasks. You can then adapt the model parameters to your business, extending its knowledge with lexicon and terminology that are unique to your industry and customers,” he said.

Swami cited Amazon Bedrock, which removes the heavy lifting from the fine-tuning process while leveraging unlabelled data sets to maintain the accuracy of the FM. For example, a healthcare company would be able to further pre-train the model using medical journals and research papers to make it more knowledgeable about evolving industry terminology.
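The labelled data set Swami describes is typically supplied as JSON Lines of prompt/completion pairs — the shape Bedrock's custom-model fine-tuning jobs accept. A minimal sketch, with invented medical examples standing in for a real annotated corpus:

```python
import json

# Labelled fine-tuning records: each line pairs a prompt with the desired completion.
# The "prompt"/"completion" field names follow Bedrock's fine-tuning data format;
# the medical definitions below are illustrative, not a real training set.
records = [
    {"prompt": "What does 'tachycardia' mean?",
     "completion": "A resting heart rate above 100 beats per minute."},
    {"prompt": "What does 'bradycardia' mean?",
     "completion": "A resting heart rate below 60 beats per minute."},
]

# JSON Lines: one independent JSON object per line.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

with open("train.jsonl") as f:
    lines = [json.loads(line) for line in f]
print(len(lines))
```

A file like this would be uploaded to Amazon S3 and referenced when creating the fine-tuning job.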