From the course: Secure Generative AI with Amazon Bedrock


Single-tenancy inference

- Single-tenancy is quite similar to multi-tenancy, but this time the model on the endpoint is deployed either from the base model bucket, so you have a private version of one of those models, or from a fine-tuned, customized model bucket, the one that you have built and tuned, and it deploys that instead. So when the request comes in on the left through the API endpoint, it hits the service again, IAM permits it, and it goes to the runtime inference service, which picks the right escrow account, picks the right endpoint, sends the request, picks up the response, and passes it back. And again, we store that information in the prompt history store if relevant, because it came from the console. And again, we have the same caveat on the data store and encryption. Everything is still TLS 1.2 across the board from left to right. Nothing is stored in the escrow account as part of the inference. None of the providers can get to that. Therefore, none of the data can be used to train…
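The routing described above can be sketched in a few lines. This is an illustrative sketch only, not AWS or Bedrock code: the table of escrow endpoints, the `invoke` function, and the in-memory prompt history store are all hypothetical names used to show the flow (pick the escrow account and endpoint for the caller's private model, forward the request, return the response, and record prompt history only for console-originated requests).

```python
# Hypothetical sketch of the single-tenancy routing flow (not real AWS code).
# Each model ID maps to the customer's private copy, deployed into an
# escrow account either from the base model bucket or from the customer's
# fine-tuned model bucket.
ESCROW_ENDPOINTS = {
    "base-model-private": ("escrow-acct-123", "endpoint-base"),
    "fine-tuned-custom": ("escrow-acct-123", "endpoint-custom"),
}

# Populated only when the request came from the console; nothing is ever
# persisted in the escrow account itself as part of the inference.
prompt_history_store = []

def invoke(model_id: str, prompt: str, source: str) -> str:
    """Route a request to the right escrow endpoint and return the response."""
    account, endpoint = ESCROW_ENDPOINTS[model_id]  # pick account + endpoint
    # Stand-in for sending the request over TLS and picking up the response.
    response = f"response from {endpoint} in {account}"
    if source == "console":  # store history only for console requests
        prompt_history_store.append((prompt, response))
    return response
```

The key design point the sketch mirrors is statelessness on the provider side: the routing service holds the mapping and the optional history, while the escrow account only serves the model.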
