
Lesson Description
The "Setup AWS App Runner" Lesson is part of the full, Cloud Infrastructure: Startup to Scale course featured in this preview video. Here's what you'd learn in this lesson:
Erik configures App Runner to deploy the application. Secrets from AWS Parameter store are connected to the App Runner instance, and the container from ECR is specified. Once the deployment is started, logs and metrics are available to monitor the progress.
Transcript from the "Setup AWS App Runner" Lesson
[00:00:00]
>> Erik Reinert: So now that we've done that, we can go here, we can go to App Runner, and now we're ready to actually deploy our application. So you'll see here that I have a running application, FEM FD Service Preview. Fantastic. Let's do a new one. So I'm going to click Create Service up in the top right-hand corner.
[00:00:18]
And again, this deployment was set up and specifically focused on doing one thing and then being able to leverage that thing we just did. So if we go back to our diagram super quickly, we set up a container image, we set up a database. Well, now that we've set up the container image, we can draw the line between App Runner and the container image.
[00:00:42]
Because we've already done that. That's exactly what we're going to do. Now, by going into Create service, it already is like, okay, cool, I can run container images. Which one do you wanna run? So if I go in here, I'll say Container Registry as my repository type, provider ECR.
[00:01:00]
If I click browse, I can click on the image repository that I want, in this case FEM fdservice, and then bam. You should see that latest tag appear. That means that now this application is going to deploy based off of that latest tag and that latest tag only.
[00:01:16]
So I'm going to hit continue. You'll see at the bottom. Look at that. Deployment settings. I don't even have to worry about deployment. So I could do manual. I can just tell it, hey, I'll take care of it manually, don't worry about that. But I actually don't want to do that.
[00:01:31]
What I really want to do is I want to say, tell you what, every time I push to latest, which is basically the same as merging to main, then deploy my application. So you might be like, okay, well, how does that work in the sense of deployment? Well, we just built our CI pipeline, because all I have to do is push an image to ECR and then that will go out and be deployed.
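For reference, that push-to-latest, automatic-deployment setup maps onto App Runner's API roughly like this. This is a hedged sketch using boto3; the service name, repository URI, account ID, and role ARN are placeholders rather than the exact values used in the lesson.

    # Rough sketch: an App Runner service that redeploys whenever a new image
    # is pushed to the :latest tag in ECR. All names and ARNs are placeholders.
    import boto3

    apprunner = boto3.client("apprunner", region_name="us-west-2")

    apprunner.create_service(
        ServiceName="fem-fd-service",  # placeholder name
        SourceConfiguration={
            "ImageRepository": {
                "ImageIdentifier": "123456789012.dkr.ecr.us-west-2.amazonaws.com/fem-fdservice:latest",
                "ImageRepositoryType": "ECR",
            },
            # The "Automatic" option in the console: every push to :latest redeploys.
            "AutoDeploymentsEnabled": True,
            "AuthenticationConfiguration": {
                # Role App Runner assumes to pull the image from ECR (covered next).
                "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole",
            },
        },
    )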
[00:01:57]
When you're talking about a startup phase and not having CI or anything like that, we just did our deployment. You built the image locally, you made sure it was for the right platform, and then you pushed it to ECR. Well, cool. Now you can trigger deployments that same way too.
[00:02:13]
Meaning that you've enabled the developer to immediately start being able to deploy if they want to. So, yeah, automatic is perfect here. And then it might tell you to create a new service role or use an existing service role, but the TL;DR is you want to use that App Runner ECR access role.
[00:02:32]
So if you don't have it, just click Create new service role. It'll create the exact one that I'm using. Or you can click Use existing service role, and you're going to want to use the App Runner ECR access role. So now, what we're going to do is we're going to click next, and now it's actually gonna ask us for our service name.
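If that role ever needs to be recreated by hand, it's roughly an IAM role that App Runner's build side is allowed to assume, with ECR read access attached. Here is a hedged sketch with boto3; the role name and managed policy ARN reflect what AWS ships today, so double-check them in your own account.

    # Rough sketch of the ECR access role. Names and policy ARN are assumptions.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # App Runner's build/deployment side assumes this role to pull from ECR.
            "Principal": {"Service": "build.apprunner.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="AppRunnerECRAccessRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )
    iam.attach_role_policy(
        RoleName="AppRunnerECRAccessRole",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess",
    )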
[00:02:47]
So we're gonna say FEM-FD service. We don't want to spend a lot of money, right? We don't really know how much traffic we're gonna take on or anything like that. So I would recommend starting as small as possible, right? Save the most money.
[00:03:03]
We are going to be charged for this, not paid, sorry. So let's make sure we just save as much money as we possibly can. And then the next thing: runtime environment variables. Perfect. That's exactly how we ran it locally.
[00:03:17]
So what I'm going to do now is I'm going to create four environment variables that I want to use inside of this application. The first one, the source is actually going to be, look at that, Parameter Store and SSM. Man, it's like they were so perfectly integrated. [LAUGH] So there we go.
[00:03:36]
Parameter Store will be the first one. Parameter Store will be the second one. We'll make plain text be the third one, and then we will make Parameter Store be the fourth one. And so, in each one of these, if you were curious or might have guessed it, we're going to put in the environment variable names that our application expects.
[00:03:59]
So Google client ID, Google client secret, Google redirect URL, and Postgres URL, right? So make sure you've got each one of those: Google client ID, Google client secret, Google redirect URL, and Postgres URL. And then the next thing I want you to do is I want you to open up another tab.
[00:04:26]
Sorry. Unfortunately, when you build stuff in the cloud, you have to open up a lot of tabs. But I want you to open up another tab, and then I want you to go to SSM. So again, you can type in SSM at the top and then go to Parameter Store.
[00:04:45]
Because this is the only easy way to get these: search for FEM FD service. And then I'm going to click on each one of these and I'm going to get what's called the ARN. The ARN is effectively the identity, or the identifier, in Amazon of the resource. And so that's going to tell App Runner.
[00:05:10]
Remember earlier how I said services can use other services in Amazon? Well, we're literally doing that now. We're telling App Runner, hey, this is the location of my Google client ID. So if I click on that, you'll see the ARN right here. If I copy that, I'm going to paste it into this third line here.
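If clicking through the console for each ARN gets tedious, the same values can be pulled with a few lines of boto3. The parameter paths below are guesses at naming, not the exact paths created earlier in the course.

    # Rough sketch: print the ARN of each parameter instead of copying it by hand.
    import boto3

    ssm = boto3.client("ssm", region_name="us-west-2")

    for name in [
        "/fem-fd-service/google-client-id",      # placeholder path
        "/fem-fd-service/google-client-secret",  # placeholder path
        "/fem-fd-service/postgres-url",          # placeholder path
    ]:
        param = ssm.get_parameter(Name=name)["Parameter"]
        print(name, "->", param["ARN"])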
[00:05:31]
So let me take a step back. [LAUGH] I have added parameters to SSM, meaning that users can now configure their application in a UI whenever they need. I've set up policies for the App Runner service to be able to pull those exact same credentials. So if I change them in the future, I can deploy new changes and new configuration values whenever I want.
[00:05:57]
What I'm doing now is I'm mapping the path to where it exists in Amazon with the environment variable that should be used in the application. So I'm delivering our secrets very securely at a very early stage, which, to be fair, not a lot of applications do at an early stage.
[00:06:19]
So I'm going to grab this one, I'm going to paste this one in here. Now, for the third value, you might be like, what are we using for the Google redirect URL? Well, this is what we call a chicken before the egg scenario, which is, unfortunately, we don't know the URL yet because it's going to be generated for us by the application.
[00:06:36]
What we're going to do for now is we're just going to put in the localhost one, and then we'll update it once we actually have the URL of the application that's running in App Runner. Now, the Postgres URL. Thank gosh! I wasn't sure if I set that one as encrypted or not.
[00:06:54]
And then we'll paste that in there. So again, first one should be the ARN to the Google client ID. Second one should be the ARN to the client secret. Third one should just be that localhost callback URL. And then the last one here should be the Postgres ARN as well.
[00:07:12]
So then, we scroll down a little bit. Oops, sorry. Scroll down a little bit. We should see the port. So just like a container, it's going to ask us what port it's going to listen on. So I'm going to tell it 8080. Then we're going to scroll down a little bit further, and we're going to see security.
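Before the security section, it may help to see everything filled in so far in one place. In API terms, the secret mappings, the plain-text redirect URL, and the port boil down to roughly the image configuration below; the environment variable names, ARNs, and callback path are placeholders, not the lesson's exact values.

    # Rough sketch of the runtime configuration built in the console:
    # secrets resolved from SSM Parameter Store ARNs, one plain-text variable,
    # and the port the container listens on. Everything here is a placeholder.
    image_configuration = {
        "Port": "8080",
        "RuntimeEnvironmentSecrets": {
            # env var name -> SSM parameter ARN
            "GOOGLE_CLIENT_ID": "arn:aws:ssm:us-west-2:123456789012:parameter/fem-fd-service/google-client-id",
            "GOOGLE_CLIENT_SECRET": "arn:aws:ssm:us-west-2:123456789012:parameter/fem-fd-service/google-client-secret",
            "POSTGRES_URL": "arn:aws:ssm:us-west-2:123456789012:parameter/fem-fd-service/postgres-url",
        },
        "RuntimeEnvironmentVariables": {
            # Plain text for now; swapped for the real App Runner URL later.
            "GOOGLE_REDIRECT_URL": "http://localhost:8080/auth/google/callback",
        },
    }
    # This dict is what ends up in SourceConfiguration["ImageRepository"]["ImageConfiguration"]
    # when creating or updating the service programmatically.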
[00:07:29]
In security, this is where we're actually going to tell it to use the role that we created. So if I click this, you'll see that I have two, but you should have one, which is FEMFD service. Now, if you didn't set up that role correctly or make sure that the assume role was there, that would not show in the list.
[00:07:49]
Amazon is able to tell if you've given the role permission to the service. And so if it's not there, then that means effectively the role failed setup, it wasn't created properly, and you're gonna have to go debug that role. But because it is, we should see FEMFD service in the instance role.
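The reason the role only shows up if the trust relationship is right is that this instance role has a different principal than the ECR access role: it is assumed by the running service, and it is the one that needs permission to read the SSM parameters. A hedged sketch of what such a role might look like, with placeholder names and ARNs:

    # Rough sketch of an App Runner instance role. Names, paths, and account ID
    # are placeholders; the principal is the running tasks, not the build side.
    import json
    import boto3

    iam = boto3.client("iam")

    instance_trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Note: tasks.apprunner.amazonaws.com, not build.apprunner.amazonaws.com.
            "Principal": {"Service": "tasks.apprunner.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="fem-fd-service-instance-role",
        AssumeRolePolicyDocument=json.dumps(instance_trust),
    )
    # Inline policy letting the running service read its parameters.
    # SecureString parameters also need kms:Decrypt on the key that encrypted them.
    iam.put_role_policy(
        RoleName="fem-fd-service-instance-role",
        PolicyName="read-fem-fd-service-parameters",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["ssm:GetParameters"],
                "Resource": "arn:aws:ssm:us-west-2:123456789012:parameter/fem-fd-service/*",
            }],
        }),
    )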
[00:08:10]
So we're gonna scroll down a little bit more. We're gonna click next. We're gonna leave everything else the same. Everything else could stay the same. And I'll explain why in a second. So this next part is really gonna show us what we're gonna get out of deploying this service on App Runner.
[00:08:26]
So the first thing is it's gonna show us our source, which in this case, this service that's gonna run in the cloud is gonna get its source from ECR. And anytime the latest image is updated, it's going to do automatic deployments for us, right? It's going to access ECR using the App Runner ECR access role that we told it to use, and then it's going to configure the service using the service name of FemfdService with 0.25 cores and half a gigabyte of memory on port 8080.
[00:08:57]
And then, it's going to map the environment variables to the SSM or the plain text options that we tell it, right? The next stuff is nice, and what's really nice about using something like App Runner, if we scroll down a little bit further, you'll actually see that we have auto scaling.
[00:09:15]
We have auto-scaling built into App Runner. What's really nice about this is that it actually auto-scales based off of your request volume. So basically, you have your concurrency, your minimum size, and your maximum size. The minimum size is how low you can scale down, the maximum size is how far you can scale up.
[00:09:38]
And then the concurrency is how many concurrent requests each instance will take before it scales up. So right now it will stay scaled down until we get up to 100 concurrent requests. And then once we get past 100 concurrent requests, it will scale out and keep scaling out until that value drops back under 100.
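Those three knobs map directly onto an App Runner auto scaling configuration. As a hedged sketch, a custom one could be created like this; the name and the min/max limits are illustrative rather than the defaults shown in the lesson.

    # Rough sketch: a custom auto scaling configuration. MaxConcurrency is the
    # per-instance concurrent-request threshold that triggers scale-out;
    # MinSize/MaxSize bound how far it can scale. Values are illustrative.
    import boto3

    apprunner = boto3.client("apprunner", region_name="us-west-2")

    asc = apprunner.create_auto_scaling_configuration(
        AutoScalingConfigurationName="fem-fd-service-scaling",  # placeholder
        MaxConcurrency=100,
        MinSize=1,
        MaxSize=5,
    )
    # The returned ARN can be passed as AutoScalingConfigurationArn when
    # creating or updating the service.
    print(asc["AutoScalingConfiguration"]["AutoScalingConfigurationArn"])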
[00:09:58]
So that's really nice. That means that, let's talk about a VPS for a second. If we did this with a VPS, unfortunately this would not help us. A VPS would not help us at all here, right? Because with a VPS, you would have to scale by adding more instances yourself, and you can't automatically scale something that's just a single, separate machine.
[00:10:19]
This is another reason why I went container-first: a lot of these container-first systems have auto-scaling and these kinds of features built into them. So what's nice is, because we took the container route, even though, again, Docker's a little annoying, we got auto-scaling out of the box.
[00:10:41]
In all fairness, and I try to be as fair as I can in any kind of technical decision or discussion at least, we could probably run on top of this for a while, right? Like, our biggest problems are really being solved, which is how do we configure the service, how do we deploy the service, and how does it scale or handle traffic, right?
[00:11:06]
This is all right here. And again, even though it took us a little bit longer to get here, the reality of it is, if we were moving at a more normal pace, this would probably be like an hour's worth of work, right? To be able to go from a service with absolutely nothing in it to a Dockerfile, to a database, to configuration, to policy setup, to now a completely auto-scaling service.
[00:11:31]
It has health checks built into it. Again, we have security permissions, we even have networking settings as well, where we can tell it: do we want it to be a private endpoint, do we want it to be public? We can add observability, so we can add additional metrics and logging on top of that.
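Health checks, networking, and observability are all optional blocks on that same create or update call. As one hedged example, an explicit health check configuration looks roughly like this; the thresholds are illustrative, and App Runner's defaults are similar.

    # Rough sketch of an explicit health check configuration. Values are
    # illustrative; this is passed as HealthCheckConfiguration= when creating
    # or updating the service.
    health_check_configuration = {
        "Protocol": "TCP",        # or "HTTP" with a "Path" such as "/healthz"
        "Interval": 10,           # seconds between checks
        "Timeout": 5,             # seconds before a single check counts as failed
        "HealthyThreshold": 1,    # consecutive passes before marked healthy
        "UnhealthyThreshold": 5,  # consecutive failures before marked unhealthy
    }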
[00:11:48]
But anyways, what I'm going to do now is I'm going to click the create and deploy button and bam, there we go. Underneath the hood, what it's going to do is it's going to start our deployment. If I scroll down to the bottom, you'll see that we have a deployment log here.
[00:12:06]
There's no CI/CD to this, but built in, we do have the ability to see some logs and output of what's going on. So if I was curious to see where we are with the deployment, I can keep refreshing the logs here. There you go. I see that a deployment was created.
[00:12:26]
It's getting the actual ECR artifact. It created a pipeline. Now it's working on actually deploying the service itself. Okay, cool. It pulled the ECR image. It was successful in doing that. It's just going through that deployment. We have CI/CD, or at least we have CD. We don't have CI, but we have CD, right?
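Those event and application logs also land in CloudWatch Logs, so refreshing the console isn't the only option. A hedged sketch of reading them with boto3 follows; the log group naming pattern is how App Runner groups them today, but it's worth confirming with describe_log_groups against your own service.

    # Rough sketch: list the service's log groups and read recent events.
    # The log group prefix is an assumption; confirm it in your account.
    import boto3

    logs = boto3.client("logs", region_name="us-west-2")

    groups = logs.describe_log_groups(logGroupNamePrefix="/aws/apprunner/fem-fd-service")
    for group in groups["logGroups"]:
        print(group["logGroupName"])

    # Pull the most recent events from the first matching group.
    events = logs.filter_log_events(
        logGroupName=groups["logGroups"][0]["logGroupName"],
        limit=50,
    )
    for event in events["events"]:
        print(event["message"])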
[00:12:47]
And again, I know we've been talking a lot and I've been filling your brain with all of these things, but it's only to show you how if you're just trying to get something out, you're just trying to be focused on that. You can really easily do that if you are a one man team.
[00:13:07]
You're really using the cloud here to your full advantage and to save you as much time as possible. You can see here, it pulled the image, it created the instance for me, it deployed it, it's doing health checks right now and it failed. However, what's neat is we have application logs.
[00:13:35]
You can see here I'm getting an error, a PQ authentication error. So more than likely I copied my SSM parameter wrong. But what's nice is that even this solution gives me the ability to debug and know what's going on. One thing that I did forget, though, is a little important.
[00:13:56]
I did mention that we would stay in US West 2. However, I did not tell you to click up at the top and actually change to US West 2. So on the off chance that you did not do that and you're having deployment failures or other things like that, make sure you're in the right region just to be safe.
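If you do hit an authentication error like the one above, one quick sanity check is to read the Postgres parameter back, decrypted, while explicitly pointing the client at us-west-2. The parameter path here is a placeholder.

    # Rough sketch: verify the stored connection string in the right region.
    import boto3

    ssm = boto3.client("ssm", region_name="us-west-2")

    param = ssm.get_parameter(
        Name="/fem-fd-service/postgres-url",  # placeholder path
        WithDecryption=True,                  # required to read SecureString values
    )
    # Avoid logging secrets in real code; printing here only to eyeball the value.
    print(param["Parameter"]["Value"])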
[00:14:20]
So after a little bit of labor and a little bit of annoyance, we have a service up and running in App Runner. And so you can see here, this is an example of a service that is running in App Runner. A few things that you'll note. You'll see that it says running.
[00:14:34]
So if you see the green running, that means, hey, my service is running. That's awesome. You'll see that it's on a public domain. We have the default domain here. Then if I scroll down, I can see again the server event logs. I can see all of the deployments.
[00:14:50]
So you'll see here that I've done a few deployments. So you can see all of the deployments that have gone through. This should mean that these deployments were either manually triggered by me or they were triggered by a push to latest. Then if I click on application logs, you can see that I have had no logs for a while, but if I did, I would be able to see them.
[00:15:11]
Another thing that's really, really nice is we also do have a little bit of observability. For example, I can click on metrics, and then it will load my request count. It'll show me my 200 responses. It'll show me my 400 responses, my 500 responses, my latency, my active instances, my CPU utilization, memory, concurrency, all of this.
[00:15:34]
So another thing to note about doing this yourself, as one person: you already have a lot of observability kind of out of the box, right? Like, you don't just get CPU and memory, you're actually getting request count and request latency and 500s and 400s and 200s, right?
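Those same numbers are queryable from CloudWatch under the AWS/AppRunner namespace if you'd rather graph or alert on them elsewhere. This is a hedged sketch: the metric and dimension names are a best guess at what App Runner publishes, so confirm them with list_metrics or in the CloudWatch console.

    # Rough sketch: pull recent request counts for the service from CloudWatch.
    # Metric and dimension names are assumptions; verify with list_metrics.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/AppRunner",
        MetricName="Requests",
        Dimensions=[{"Name": "ServiceName", "Value": "fem-fd-service"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])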
[00:15:53]
So this service, and one of the reasons why I like it, was really set up to kind of work out of the box. You can get started, you can actually get started, with something that works well for you.