From the course: AWS Administration: Security Fundamentals

AWS regions and edge locations

- [Instructor] The AWS Cloud is designed specifically to ensure that customers can host applications in the cloud that are secure, reliable, and perform well. Now, the terminology I'm using is a simple version of what's described in the AWS Well-Architected Framework and the best practices for customers deploying applications at AWS. Of course, we want our applications to be secure and we want them to be reliable. And you could think of a reliable application as, in a sense, being secure, in that it's always available for the end user when they need it. And we don't want to waste money. We want the performance of our applications to be adequate, exactly what we need at any given time. So why focus on these terms? Well, because we're talking about AWS regions and what are called edge locations, and your choices as a customer will dictate the security, the reliability, and the performance efficiency of your applications. So let's dive into this concept. The infrastructure that's offered to us as customers is deployed worldwide in what are called regions. Currently, there are over 30 regions, and this number will continue to increase. If you think of populous areas around the world, you'll probably find a region online, or being developed, where AWS offers cloud services, so you can use those resources in that area of the world. And a geographical region can be pretty big, so regions are divided into something called availability zones. This is the language of AWS, and we have to get used to this terminology, because anytime we deploy a cloud service, the questions that arise are: where do you want the cloud service, i.e. the region, and what availability zone are you going to use? And then there's another term, edge services, which sit outside of the AWS regions. So we need to know this terminology to ensure our applications are secure, reliable, and perform well. First up: regions. 
A region is independent, can operate on its own, and regions are completely separate from each other. So there are regions in Canada, there are multiple regions in the United States, and there are multiple regions in Europe. And as mentioned, a region's geographical area is so big that regions are broken down into what are called availability zones. This provides us with highly available cloud services and a way for us to design and deploy highly available applications. In this graphic, we can see a number of regions across the globe, and this graphic is probably out of date, meaning there are probably more regions available as you're watching this class. All of these regions offer many cloud services, and all of these regions are online. So if you're in Australia, you're probably happy; there are a couple of regions. If you're in North America, you can see there's quite a concentration of regions available. But there are also regions around the world wherever there's a large populous area. So how do you choose a region? Well, what people are worried about when they operate in the cloud is data: the security of their data. The data for your applications hosted in databases, logs, backups, customer graphics, you name it. We want that data to be secure, and perhaps when I choose a region, I'm looking at it from a security point of view. You'll be happy to know that data is never stored in just one location in a region. It's always stored in multiple locations. So there's inherent data security built into this cloud, with multiple copies of data created automatically. But where is the data? Data residency might be important. For example, you might say the data has to stay in Canada, or the data has to stay in Europe. And as we know, countries are starting to enforce rules saying, if you don't follow these rules, Amazon, we're going to fine you. So the residency of my data is very important as well. And what kind of business are you? 
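To make the data-residency idea concrete, here's a minimal sketch of filtering regions by where the data is allowed to live. The region codes are real AWS region names, but the country mapping here is a small illustrative subset, not a complete list.

```python
# Hypothetical sketch: pick candidate AWS regions from a data-residency rule.
# The region codes are real; the mapping below is a small illustrative subset.
REGION_COUNTRY = {
    "ca-central-1": "Canada",
    "us-east-1": "United States",
    "eu-west-1": "Ireland",
    "eu-central-1": "Germany",
    "ap-southeast-2": "Australia",
}

def regions_for_residency(allowed_countries):
    """Return regions whose data would stay in one of the allowed countries."""
    return sorted(r for r, c in REGION_COUNTRY.items() if c in allowed_countries)

# Rule: the data has to stay in Canada.
print(regions_for_residency({"Canada"}))  # ['ca-central-1']
```

In practice this decision is a compliance question, not a lookup table, but the shape of the choice is the same: residency rules narrow the set of regions before performance or cost even enter the picture.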
Do you have certain rules and regulations you must follow? Are you a government institution? Are you a bank? Does your industry have specific rules that you must follow? These decisions made by your company are going to be based on the rules and regulations and the type of industry that you're working in, but we've got lots of choices as to where we choose to reside and operate our applications at AWS. As mentioned, a region gets broken down into availability zones. Now, Amazon has many, many data centers, but what they ensure is that each availability zone contains at least one data center dedicated to customer workloads, to customer applications. And the availability zones within a region are all connected with high-speed fiber connections. We can't see these connections, but they're there, and there's really decent bandwidth when you're operating your applications at AWS. So a region doesn't have just one availability zone; it has a number of availability zones. And currently, in 2023, each region has at least three availability zones. So when you deploy your applications on virtual servers or containers, the term that Amazon uses is instances: EC2 instances, which are virtual servers that can also host Docker containers or Kubernetes workloads. And these instances are hosted and deployed on multiple subnets across multiple availability zones for your web, your application, and your database components. Now, do I have to deploy across multiple subnets in multiple availability zones? No, but then I don't have any high availability, and I don't have failover capability. So the purpose of availability zones, and multiple availability zones, is to provide security and to provide a way to design applications that operate with a certain amount of high availability and failover. The core services that we're talking about in the cloud include compute, networking, and storage. 
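The multi-AZ, multi-subnet layout described above can be sketched as a simple plan: each tier (web, application, database) gets its own subnet in every availability zone. The AZ names follow the real AWS naming pattern, but the subnet CIDR scheme here is made up for illustration.

```python
# Illustrative multi-AZ layout: one subnet per (tier, availability zone) pair,
# so every tier survives the loss of any single AZ. CIDRs are hypothetical.
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]
TIERS = ["web", "app", "db"]

def plan_subnets(azs, tiers):
    """Map every (tier, AZ) pair to its own subnet CIDR."""
    plan = {}
    for t_index, tier in enumerate(tiers):
        for az_index, az in enumerate(azs):
            plan[(tier, az)] = f"10.0.{t_index * 10 + az_index}.0/24"
    return plan

layout = plan_subnets(AZS, TIERS)
print(len(layout))  # 9 subnets: 3 tiers x 3 AZs
```

The point of the sketch is the shape, not the numbers: every tier exists in every AZ, which is what gives the application somewhere to fail over to.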
Sorry, this is all we've invented: Linux or Windows or Mac running on Ethernet networks, with a wide variety of storage depending on what you need. And these resources are pretty generic throughout all of the AWS regions and availability zones. So within a region, deploying, say, a simple application that needs a compute EC2 instance and a database, I can place that in a single availability zone. But what happens if that availability zone fails or has issues? Perhaps I should deploy my application using two availability zones or three availability zones. The odds of three separate data centers in three separate physical locations all going down at the same time? That's a pretty slim set of odds. It's really never happened at Amazon. So now we have a design situation that is providing reliability and security and performance by spreading my resources across multiple availability zones within a single region. Now, you may think, this is costing me more, isn't it? I'm needing more compute and more databases. Yes, but you have to pay for reliability. You have to pay for failover, and you have to pay for, quote unquote, the security of your application. Ultimately, multiple availability zones give me failover possibilities. Let's say we're using two availability zones. We've got our application running, it's hosted on subnets in separate availability zones, and we have a load balancer fronting the application. Now I've got decent high availability and failover ability, because if I have a problem with one availability zone, the other one's available, and the load balancing service will direct my end users to the application that's up when there are issues. If there are no issues, I have even more resources. We also want to be aware that although we have availability zones for our customer workloads, with at least one data center for those workloads, the services that are deployed at AWS are mostly regional services. 
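The load-balancer failover behavior just described can be modeled in a few lines: the balancer only routes requests to targets whose availability zone is currently healthy. This is a toy model, not the Elastic Load Balancing API; the target names and health flags are hypothetical.

```python
# Toy model of load-balancer failover across two availability zones:
# traffic only goes to targets in AZs that are reporting healthy.
def healthy_targets(targets, az_health):
    """Return the targets the load balancer may route to right now."""
    return [t for t in targets if az_health.get(t["az"], False)]

targets = [
    {"name": "web-1", "az": "us-east-1a"},
    {"name": "web-2", "az": "us-east-1b"},
]

# Both AZs up: either instance can serve traffic (extra capacity).
print([t["name"] for t in healthy_targets(
    targets, {"us-east-1a": True, "us-east-1b": True})])   # ['web-1', 'web-2']

# AZ "a" fails: end users are directed to the surviving AZ.
print([t["name"] for t in healthy_targets(
    targets, {"us-east-1a": False, "us-east-1b": True})])  # ['web-2']
```

That second call is the failover story in miniature: nothing about the application changes, the routing decision simply excludes the unhealthy zone.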
So if I look at, say, relational database services, they're available in any AWS region. DynamoDB, a NoSQL database: available in any region. Elastic File System, Elastic Load Balancing, Amazon S3 storage, S3 Glacier storage: these are generic services that are available across the entire AWS Cloud, in any region that you choose to operate in. So for the most part, the services that you want to use at AWS will be in the region that you choose. So we have regions broken into availability zones, and we also have services that are defined as being at the edge, or outside of the regions. The main service at the edge that you'll probably use is something called CloudFront. This is the CDN that is provided by AWS for caching static and dynamic content. In order to send the end user to the caching location, the DNS service, Route 53, is used to deliver the end user to the closest edge location. We also have a web application firewall running at the edge, which provides a filter. So if there's any hacking, or any bots, or any problems with accessing this data, we can define filters to protect our content. So the edge locations have these three main services to allow me to cache data closer to the end user. Let's see how this works. First up, the edge locations are all around the world; there are hundreds of edge locations. Think of an edge location as another data center whose job is caching content. So we can see the blue dots indicating edge locations, and the purple dots indicating multiple edge locations. And then we have an orange circle that indicates a regional edge cache. This is telling us that in that area of the world there are so many customers, we have even more caching. We don't set up regional edge caches ourselves; they're just a component of the edge locations. So the main service at the edge location is Amazon CloudFront, which is a CDN, a content delivery network. 
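The "deliver the end user to the closest edge location" step can be sketched very simply. Real Route 53 and CloudFront routing is based on measured latency, not a static table; the locations and distance scores below are made up purely to show the selection logic.

```python
# Simplified sketch of closest-edge selection. Real routing uses latency
# measurements; these "distance" scores from one user are hypothetical.
EDGE_LOCATIONS = {"Toronto": 5, "Frankfurt": 90, "Sydney": 160}

def closest_edge(distances):
    """Return the edge location with the lowest score for this user."""
    return min(distances, key=distances.get)

print(closest_edge(EDGE_LOCATIONS))  # Toronto
```

A user in Germany would produce a very different score table and land on Frankfurt; the mechanism is the same, only the measurements change.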
And behind the scenes, if you subscribe to the service, it will speed up the distribution of static or dynamic content to the end user. It's caching this content using edge locations, and the edge locations are closer to the end user. Therefore, it's a win-win for the application being nice and fast. So when an end user requests content and the application has subscribed and is using Amazon CloudFront, that request is automatically routed to the edge location that provides the best performance. Why is it the best performance? Because it's the closest edge location to the end user that made the request. So let's take a look at this concept. On the left, without CloudFront, this is an example of an application using an S3 bucket. Note that the S3 bucket is in a specific location in the world, and everybody has to attach directly to the S3 bucket. Once CloudFront, the CDN, is invoked and placed into the application design, notice that all the end users are going to their edge location. The edge location allows us fast entry into the Amazon cloud, using fiber to connect to that caching location, that edge location, which has pulled the data automatically from the S3 bucket. So CloudFront can provide performance. In a sense, it can also provide security, because I'm now not going across the internet directly to that bucket. I can use the edge location and traverse Amazon's private network to get my content faster. In summary, AWS regions contain the regional cloud services that we want to use: compute, storage, networking, and many management services that we'll look at throughout this class to help us manage security at AWS. Each region has at least three availability zones, and these availability zones provide high availability and failover possibilities for the applications that you're developing. And finally, edge locations, sitting outside of the AWS regions, allow us to cache application data closer to the end user's location, so everybody is happier. 
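The edge-caching behavior described above boils down to a classic cache pattern: the first request for an object misses at the edge and is pulled from the origin (the S3 bucket), and every later request is served from the edge copy. This is a minimal sketch of that flow, not the CloudFront API; the object key and bytes are illustrative.

```python
# Minimal sketch of edge caching: miss -> fetch from origin and cache;
# hit -> serve from the edge. The origin dict stands in for an S3 bucket.
origin = {"logo.png": b"<image bytes>"}
edge_cache = {}
origin_fetches = 0  # counts trips back to the origin bucket

def get_from_edge(key):
    global origin_fetches
    if key not in edge_cache:        # cache miss: go back to the origin
        origin_fetches += 1
        edge_cache[key] = origin[key]
    return edge_cache[key]           # cache hit: served from the edge

get_from_edge("logo.png")  # first request: pulled from the origin
get_from_edge("logo.png")  # second request: served from the edge cache
print(origin_fetches)  # 1
```

That single origin fetch for many user requests is exactly the win the graphic illustrates: less load on the bucket, and shorter trips for every end user after the first.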
These components provide performance, security, and reliability. Now that we know about regions and edge locations, let's take a look at some of the services and some of the security, starting with Identity and Access Management and account security.