From the course: Microsoft Security Copilot

The workflow of Microsoft Security Copilot - Microsoft Security Copilot Tutorial

- [Instructor] Let's take one step deeper and look at the workflow of Microsoft Security Copilot. Security Copilot uses its orchestrator to interact with user interfaces, including the standalone Security Copilot portal and the Microsoft security solutions with Copilot embedded, like Microsoft Defender XDR, Entra, and Intune; plugins for integrating with Microsoft security products, third-party products, or custom services; and AI services, including a large language model, or LLM, responsible AI for checking input prompts and output responses, and the Azure OpenAI Service. Here's how it works. First, a user prompt is sent to Security Copilot. Next, Security Copilot selects the right plugins to pre-process the prompt so it can retrieve specific context. For example, it can call Microsoft Defender Threat Intelligence to get information about a vulnerability based on the CVE ID in the user's prompt. This process is called grounding, and it helps the AI generate more relevant and actionable answers. Then the modified prompt is sent to the large language model. Next, the large language model generates results. Once the responsible AI check is completed, the AI response is sent back to Microsoft Security Copilot. Then Security Copilot uses the plugins for post-processing, or grounding the output. Finally, Security Copilot sends the response, plus app commands, if applicable, back to the requester. One thing I want to point out: this whole workflow is governed by the Microsoft Security trust boundary. In other words, your data are your data. They always remain within your company's boundary, and your data are not used to train the foundational AI models.
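The workflow described above can be sketched as a simple pipeline. This is an illustrative model only, not the actual Security Copilot API: every function name here (`lookup_threat_intel`, `responsible_ai_check`, `call_llm`, `security_copilot_workflow`) is a hypothetical stand-in for the corresponding stage in the lesson.

```python
# Hypothetical sketch of the Security Copilot workflow stages.
# None of these functions are real Microsoft APIs; they model the
# steps: prompt -> grounding via plugins -> LLM -> responsible AI
# check -> post-processing -> response.

def lookup_threat_intel(prompt: str) -> str:
    """Stand-in for a grounding plugin, e.g. a Microsoft Defender
    Threat Intelligence lookup keyed on a CVE ID in the prompt."""
    if "CVE-" in prompt:
        return "context: known vulnerability details for the CVE mentioned"
    return "context: none"

def responsible_ai_check(text: str) -> bool:
    """Stand-in for the responsible AI filter applied to both input
    prompts and output responses; here it just rejects empty text."""
    return bool(text.strip())

def call_llm(grounded_prompt: str) -> str:
    """Stand-in for the large language model call."""
    return f"LLM answer based on: {grounded_prompt}"

def security_copilot_workflow(user_prompt: str) -> str:
    # 1. A user prompt arrives; responsible AI checks the input.
    if not responsible_ai_check(user_prompt):
        return "prompt rejected"
    # 2. Pre-processing (grounding): plugins retrieve specific context.
    context = lookup_threat_intel(user_prompt)
    grounded_prompt = f"{user_prompt}\n{context}"
    # 3. The modified, grounded prompt is sent to the LLM.
    response = call_llm(grounded_prompt)
    # 4. Responsible AI checks the output before it is returned.
    if not responsible_ai_check(response):
        return "response blocked"
    # 5. Post-processing (output grounding) would run here, then the
    #    response goes back to the requester.
    return response

print(security_copilot_workflow("Tell me about CVE-2024-21412"))
```

The key design point this models is that grounding happens before the LLM call, so the model answers from organization-specific context rather than from its training data alone.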