From the course: Complete Guide to Microsoft Copilot for Security: Empower and Protect the Security Operations Center (SOC) by Microsoft Press

1.1 Welcome to your future with AI

- [Narrator] Cybersecurity is hard, and the reason is that the attackers have always had the advantage. From nation states to hacktivists, they work whenever they want versus nine to five, they have effectively unlimited funds, and most importantly, they only have to get it right once. The numbers bear this out, from the number of compromises to the annual data loss and dollar loss. But for the first time, artificial intelligence gives defenders an opportunity to make a shift: for the first time, the defenders can have the advantage. Getting there requires a change. You can't just say, "I'm going to tweak a tool or tweak a process." It requires a whole new approach, a new way of doing defense, and a shift in thinking about how machines can do things that humans can't. There are some tasks where machines will always win. One example is understanding data, like threat data and vulnerability data, at machine speed, and comparing it against what you have in your environment. There's no way humans can research something, log into systems, and understand their environment faster than a machine can. Other examples are incident response, threat hunting, and investigations. Even the most mature teams take time to understand things, but when it comes to taking data, looking at your environment, and finding gaps, anomalies, and other issues, machines are always going to win. And finally, there's collaborating and summarizing: speaking to other team members and breaking down the barriers between, say, a forensics investigator, an incident responder, and a non-technical C-level executive. Machines will always win there too. If I have to sit down and think about what to write, that creative effort alone costs time. Machines can do it at rapid speed.
So by taking advantage of this in your security operations center, your SOC can become much more effective, surprisingly, than the attacker landscape. Getting there, though, is going to take a transformation: thinking about your tools, your people, and your process, and changing how it all runs by adding AI across the board. That means looking at your playbooks and runbooks, whatever you call the way you run your SOC, and finding the mundane tasks and speed bumps that AI can improve. Essentially, thinking about this shift, in the near future there are going to be two types of security operations centers: those with AI and those without. Those without AI are just going to accumulate technical debt, so when they do want to start using AI, it's going to cost more time and more money, and they may have to hire new people. I have a few children, and my oldest, in high school, is going to be taking a prompt engineering course next year. What that indicates is that future IT professionals, the people you want to hire for your security operations center, are going to have skills like prompt engineering, and they're going to expect tools that let them use those skills. So without AI in your SOC, as you bring on new people, they're going to expect tools you don't have available. AI is the future, and whether you make the decision now or later, if you're going to keep up with the attack landscape, eventually you're going to have to have AI in your security operations center. Now, with that come two major commitments. When I talk to organizations that are starting to use AI, I stop them and say, I need all of you to raise your hands and commit to two things. One, you need to change how you run your security operations center. You can't just have it as a cool tool.
You can't just add AI like, "I'm going to add this chat thing," or "I'm going to enable this function." You actually need to look at your runbooks, promptbooks, and playbooks and ask: on which steps do machines win? On which steps are we going to stop doing this manual effort and make incorporating AI a critical step instead? For example, researching threats and summarizing reports are things machines do really effectively, so force the SOC to stop and use AI for those steps. That's how you get value. If not, it'll just be a tool in the background and you won't see the impact. Two, and this is a very important one, is prompt engineering. This is a new muscle, a new skill, something you're going to have to learn. And personally, I believe this is going to reach every aspect of your life beyond security. In the near future, if I'm ordering food on a website, I'll have the ability to prompt and tell the machine about my order and exactly what I want, and I'll get it at essentially machine speed based on my order history. When I want to book a flight somewhere, again, I'm going to prompt a machine to go look at all the available flights and book the one I want. So this is a new muscle that I, along with others, believe is going to be required in the near future to operate with all technologies. But there is a lessons-learned aspect: you're going to have to go through and fail at prompting. You're going to prompt this thing, get bad outcomes, and have to learn how to tweak your prompts. It's similar to an athlete who goes and shoots hoops and misses until they start to perfect that skill, and the next thing you know, they're scoring and hitting the dunk. It's going to be similar: you're going to prompt, and you're not going to like the outcomes.
And by learning and training and prompting, eventually you'll get really effective at using the AI. So expect a learning curve, and expect to fail at prompting at first. With these two commitments, you'll approach this in a way where you can get value from the AI. One of the big pieces of value is that, by improving your people, process, and technology, your SOC services are going to be much more effective, even more effective than the adversaries. Hence the idea that, for the first time, the defense will have the advantage. You're also going to strengthen your team, in the sense that you'll break down barriers and allow somebody with one skill set to communicate with and understand other areas and skill sets. For example, if you're an incident responder and you find a vulnerability in some very technical tool, you don't have to find the person who understands that tool. You can ask the AI, and the AI will inform you. You can also skill up, so people who are new, tier-one-type people, will be able to do tier-two and tier-three-type work. So skill up and skill wide, and this is going to happen because you can do things at machine speed and take advantage of the breadth machines bring, meaning you can correlate across many sources of information, whether it's NIST, MITRE, data reports, or dashboards, and use all of that in your decision process. There are many ways to see what we call the ROI in the industry, because people will say, "I like this technology, but I need a return on investment. I need to understand the impact." This technology has been out for about a year or so from a private preview perspective. Microsoft Copilot for Security started off by allowing some customers to use it, and we had an early access program that people were able to buy into.
And as of April 1st, 2024, it is now generally available. Talking to customers throughout that entire time span, we found many improvements and sources of value. Some of the immediate ones: it should be no shocker that things run faster, because you have machine speed. We're also seeing much more accurate and repeatable services, because by telling the machine to do a task, the outcome becomes far more repeatable compared with training somebody to look at a dashboard and saying, "This is how you run this." Machines are obviously more accurate. And we're finding that the outcomes, which are typically reporting, summarization, et cetera, are a lot more effective and generated a lot faster. Just being creative enough to think about the person you're going to be speaking to, technical or non-technical, whoever it may be, and crafting an email takes a lot more time than instructing an AI, "Summarize this for this person. Go." All of that is great, but it's going to be different for every organization, because every security operations center is different and every business is different. You're going to find the true ROI when you start to develop your use cases, and you're going to hear throughout this course the idea of use cases and applying AI to mature them. So in the end, if you want to understand how to outpace the adversaries, how to take AI and, for the first time, give your security operations center the advantage, you're really going to have to understand your use cases. Our hope with this course is to help you take advantage of AI, maximize it, and really improve your security operations.