Generative AI: Proposed Shared Responsibility Model

Published 07/28/2023

Written by Vishwas Manral, Founder at Precize Inc & Fellow at Cloud Security Alliance.

Overview

Large Language Models (LLMs) have gained attention due to the recent burst in popularity of ChatGPT, a consumer-facing chatbot released by OpenAI that is built on Generative AI capabilities. The impact of ChatGPT on companies and enterprises has been huge, as has the impact of the open source AI ecosystem.

Cloud service providers have been leading the way: Microsoft Azure provides OpenAI APIs through its platform and Cognitive Services while integrating OpenAI models into its own products, Google has created a ChatGPT-like chatbot called Bard and offers Generative AI capabilities through its Vertex AI platform, and Amazon offers Generative AI capabilities through its SageMaker platform.

While we are in the early days of AI, one thing is clear: the cloud is the backbone of the Generative AI platform. Enterprises and companies will consume and create most of their Generative AI applications in the cloud.


Basics

Before we go further, let's get the basic terminology in place.

Artificial Intelligence (AI) is the ability of a machine to imitate intelligent human behavior. A voice assistant like “Apple Siri” or “Amazon Alexa,” which can converse in a human voice, would be considered Artificial Intelligence.

Machine Learning (ML) is an application of AI that enables systems to learn and improve from experience. It is a discipline of computer science that uses computer algorithms and analytics to build predictive models that can solve business problems.
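As a minimal sketch of such a predictive model (assuming Python with scikit-learn installed; the transaction data below is made up purely for illustration):

```python
# A toy predictive model: learn to flag "high-risk" transactions
# from two made-up features (amount, hour of day).
from sklearn.linear_model import LogisticRegression

X = [[20, 9], [15, 14], [900, 3], [850, 2], [30, 11], [700, 4]]  # features
y = [0, 0, 1, 1, 0, 1]  # labels: 1 = high risk

model = LogisticRegression().fit(X, y)  # "learn from experience" (the data)
print(model.predict([[800, 3]]))        # predict for an unseen transaction
```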

Deep Learning is a subset of machine learning that deals with algorithms inspired by the structure and function of the human brain. Deep learning algorithms can work with enormous amounts of both structured and unstructured data. A key aspect of Deep Learning is representation learning: features are learned automatically, and each layer learns to create a more abstract and composite representation of the data.

Generative Adversarial Networks (GANs) are a clever way of training a generative model by framing the problem as a supervised learning problem with two sub-models: the generator model, which is trained to generate new examples, and the discriminator model, which tries to classify examples as either real (from the domain) or generated (fake).
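A minimal sketch of this training loop, assuming Python with PyTorch (toy 1-D data, illustrative rather than a practical GAN):

```python
# Minimal GAN training loop (illustrative only): the generator learns to
# produce 1-D samples resembling N(5, 2), while the discriminator learns
# to classify samples as real or generated.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(32, 1) * 2 + 5        # "real" samples from the target domain
    fake = generator(torch.randn(32, 8))     # generated (fake) samples

    # Discriminator step: label real as 1 and fake as 0 (the supervised framing)
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labeling fakes as real
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```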

Generative AI is a category of AI models and tools designed to create new content. It uses machine learning techniques like GANs and transformer models to learn from large datasets and generate unique outputs.

[Figure: Artificial Intelligence chart]

AI Service Provider (AISP) is an entity that provides AI Services on demand to users to help them build AI applications. These entities could be cloud providers like AWS, GCP, and Azure, as well as other specialized providers.

AI Service User (AISU) is the entity that uses services provided by the AISPs to build AI applications for users. These entities are generally enterprises, SMBs, startups, or even individual developers.

Large Language Models (LLMs) are deep-learning-based language models trained on vast amounts of language data. These models can produce humanlike language output and perform sophisticated Natural Language Processing tasks.
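As a minimal sketch (assuming Python with Hugging Face transformers installed; GPT-2 is a small, older model used here purely for illustration), querying a pretrained language model looks like this:

```python
# Generate text from a small pretrained language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Cloud is the backbone of", max_new_tokens=20)
print(out[0]["generated_text"])
```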

Prompting is the mechanism users and software use to interact with Generative AI models using natural language. A prompt consists of instructions given to the Generative AI model to produce the desired output.
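A prompting sketch, assuming the 2023-era `openai` Python SDK (pre-1.0) and an API key in the OPENAI_API_KEY environment variable:

```python
# Send a natural-language prompt to a hosted Generative AI model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The instructions below are the prompt.
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain grounding in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```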

Grounding is the process of providing context to the Generative AI model to force it to answer based on that context instead of hallucinating. Grounding helps generate contextually relevant responses by giving the model access to customers' specific data.
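A minimal grounding sketch in plain Python; the policy text below is a made-up stand-in for customer-specific data:

```python
# Grounding: inject customer-specific context into the prompt and
# instruct the model to answer only from that context.
context = "ACME's retention policy: backups are kept for 35 days."  # made-up customer data
question = "How long are our backups retained?"

grounded_prompt = (
    "Answer ONLY from the context below. If the answer is not in the "
    "context, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# `grounded_prompt` is then sent to the model as in the prompting example above.
```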

Retrieval-Augmented Generation (RAG) is a form of knowledge grounding. Rather than fine-tuning the model, it retrieves relevant data at query time and mixes it into the prompt as additional context for a pretrained model.
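A minimal RAG sketch in plain Python; naive word-overlap scoring stands in for the vector search a real system would use, and the documents are made up:

```python
# RAG in three steps: retrieve the most relevant document for a query,
# augment the prompt with it, then generate with a pretrained model.
docs = [
    "Invoices are processed within 5 business days.",
    "Backups are retained for 35 days.",
    "Support is available 24/7 via the portal.",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "How long are backups retained?"
context = retrieve(query)                          # retrieval step
prompt = f"Context: {context}\nQuestion: {query}"  # augmentation step
# `prompt` is then sent to the pretrained model (generation step).
```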


Shared responsibility model

NIST SP 800-145 defined the service models for the cloud, including IaaS, PaaS, and SaaS. These models have since evolved, and many more service models have emerged; CSA has published an evolved shared responsibility model that includes Containers as a Service and Serverless.

As Generative AI applications are being built on the cloud, the shared responsibility model can be extended for Generative AI applications too.


Generative AI applications

Generative AI applications can be categorized as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).


IaaS use cases

For Generative AI IaaS apps, the AISUs select foundational models and train them with their own proprietary data. Applications are built on the trained models and then made available to users. These are mainly business-critical apps, such as BloombergGPT, Einstein GPT, and Intuit GenOS, that companies create for their customers' needs.
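As an illustration of this pattern, here is a hedged sketch of tuning a foundational model on proprietary data, assuming Python with Hugging Face transformers and datasets; gpt2 stands in for a foundational model, and corp_docs.txt is a hypothetical proprietary corpus:

```python
# IaaS pattern sketch: take a pretrained foundational model and tune it
# on the AISU's own data corpus.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token              # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")   # foundational model

data = load_dataset("text", data_files={"train": "corp_docs.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # the AISU validates data lineage and security before this step
```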



PaaS use cases

AISUs create PaaS apps using services like Azure OpenAI or the Google Vertex AI APIs. The AISPs host the trained models in the tenant's own infrastructure. To the application, using Generative AI is essentially a specialized API call. A lot of apps built by AISUs use this model.
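A sketch of this pattern, assuming the 2023-era `openai` Python SDK (pre-1.0); the resource URL, deployment name, and key variable are hypothetical placeholders:

```python
# PaaS pattern sketch: the model is hosted by the AISP (Azure OpenAI);
# the AISU's application simply makes a specialized API call.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # hypothetical resource
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]            # hypothetical key variable

response = openai.ChatCompletion.create(
    engine="my-gpt35-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Classify this ticket: 'VPN down'"}],
)
print(response["choices"][0]["message"]["content"])
```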



SaaS use cases

A very large number of existing SaaS applications now include Generative AI capabilities. Well-known enterprise applications like Microsoft 365, ServiceNow, and Salesforce now have Generative AI capabilities. The model, its training, and its grounding are not exposed to the AISUs. There are also ChatGPT-like, Generative AI-focused apps being built. Enterprises need to be able to detect these shadow applications and find ways to keep their usage secure.


Generative AI shared responsibility model

Responsibilities for security and compliance of AI applications are shared between the AI Service User (the enterprise or application owner) and the AI Service Providers. This shared responsibility model helps demarcate and provide a clear separation of duties, enabling newer AI applications to be created and deployed at a faster pace while remaining secure and compliant.

[Figure: Gen-AI Shared Responsibility Model]

IaaS responsibilities

For IaaS applications, the infrastructure, including GPUs and training or inference infrastructure, is provided by the AISPs, and the AISP is responsible for the security of the physical and virtual infrastructure. Curated foundational or pretrained AI models can be provided by the AISPs. The model can be trained and tuned by the AISUs using any data corpus, open source or proprietary. Selecting the model's lineage and relevance is the AISU's responsibility, and the data corpus used to train the model, its lineage, and its security must be validated by the AISU. The model is hosted by the AISU for inference. When the application runs, any prompt filtering and grounding (context to answer on) is provided by the AISU. All aspects of prompt controls, application security, Intellectual Property (IP), and copyrights are handled by the AISU itself.


PaaS responsibilities

For PaaS applications, the infrastructure as well as the pretrained models are provided by the AISPs. The security of the model, and of the data the model is trained on, is the AISP's responsibility. The model is hosted by the AISP itself. If a model is trained for specific tasks, the AISP can do some of the grounding, while user-specific context is enhanced and provided by the AISU. Some guardrail prompt controls can be provided by the AISP, while the AISU handles the other aspects. All aspects of application security, Intellectual Property (IP), and copyrights are handled by the AISU itself.


SaaS responsibilities

For SaaS applications, the LLM models are abstracted away from the AISU and are the AISP's responsibility. The SaaS application grounds the model using user-specific context within the SaaS application, as well as data it gets from other apps. Application security, much of the prompt filtering, and IP filtering are the responsibility of the AISUs.
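To make the demarcation concrete, the split described in the three sections above can be restated as a simple lookup (a sketch that only summarizes this blog's prose; exact boundaries will vary by provider and contract):

```python
# Who owns what, per service model, as described above.
# "shared" means the duty is split between AISP and AISU.
RESPONSIBILITIES = {
    "IaaS": {"infrastructure": "AISP", "model_selection": "AISU",
             "training_data": "AISU", "model_hosting": "AISU",
             "grounding": "AISU", "prompt_controls": "AISU",
             "app_security_ip": "AISU"},
    "PaaS": {"infrastructure": "AISP", "model_and_training_data": "AISP",
             "model_hosting": "AISP", "grounding": "shared",
             "prompt_controls": "shared", "app_security_ip": "AISU"},
    "SaaS": {"model": "AISP", "grounding": "AISP",
             "app_security_prompt_ip_filtering": "AISU"},
}
print(RESPONSIBILITIES["PaaS"]["grounding"])  # -> "shared"
```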


Conclusion

While the AI Service models will evolve, as will the shared responsibilities, this blog attempts to establish some early terminology and a framework for clearly demarcating duties between the AI Service Provider and the AI Service User, enabling them to work independently at a rapid pace.

We would love your feedback on this proposed model. Please contact me at [email protected] to share your thoughts.



Check out CSA’s AI Safety Initiative to learn more about safe AI usage and adoption.
