Cloud Infrastructure: Startup to Scale

Creating a Production Environment

Erik Reinert
TheAltF4Stream
Cloud Infrastructure: Startup to Scale

Lesson Description

The "Creating a Production Environment" Lesson is part of the full, Cloud Infrastructure: Startup to Scale course featured in this preview video. Here's what you'd learn in this lesson:

Erik demonstrates how to create a production environment. The environment configuration is added to the main.tf file, and the init, plan, and apply commands are run to create the environment's infrastructure. Erik also discusses why running migrations directly inside the environment rather than in CI eliminates the need for the database to be exposed to the GitHub runner.


Transcript from the "Creating a Production Environment" Lesson

[00:00:00]
>> Erik Reinert: The next thing we want to do really quickly is create an entirely new environment. Now, to save time, I'm not going to actually build it, but I am going to show you really quickly what it looks like when you do. So say you're at the point where you're like, all right, I want prod.

[00:00:19]
Sweet, prod, awesome. Let's do it. So what we would do is go to main.tf, copy this block, and just type in prod. Or production if you prefer, whatever. I like to call it prod just because it's a little bit shorter; it makes the names shorter.
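The copy-and-rename Erik describes might look something like this in main.tf. This is a sketch: the module name, source path, and variable names are assumptions, not the course's actual code.

```hcl
# Existing staging environment module.
module "staging" {
  source      = "./modules/environment" # hypothetical module path
  environment = "staging"
}

# Copy the block and change the name/value to get prod.
module "prod" {
  source      = "./modules/environment"
  environment = "prod"
}
```

Because the whole environment lives behind one module, duplicating a handful of lines is all it takes to stamp out a second copy of the infrastructure.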

[00:00:40]
But whatever you decide to use, this is literally all you need to do to create an entirely new environment. That's it. Once I save that file and then run a terraform init, we will now see all of the production modules also get initialized.

[00:01:02]
Then if I do a terraform plan. And again, the only reason why I'm not applying it is because the distribution and everything takes like 20 minutes, so I don't wanna wait for that. Look at that, 101 resources. So that is what it looks like when you have a completely automated environment out of the box, ready to go.
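The steps Erik runs are just the standard Terraform cycle, executed from the directory containing main.tf:

```shell
terraform init   # picks up the newly added prod modules
terraform plan   # shows the ~101 resources that would be created
terraform apply  # builds the prod environment (can run in the background)
```

Nothing environment-specific is needed on the command line; the new prod module block in main.tf is what drives the extra resources.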

[00:01:24]
And so if you want to create a new environment, that's all you need to do. Just copy and paste it; everything that we talked about up until this point would be exactly the same. So again, you would go to ECR for your deployments, except this one would use the prod image.

[00:01:42]
If you wanted to look at your services, you would go to the Prod cluster, you'd click on the Prod service, you would go and update the prod environment or the prod SSM variables. It's exactly the same. The experience is exactly the same. If you wanna configure anything or do anything like that, you easily can.

[00:02:06]
What we would do is we would just add that, do a terraform apply, which I can do in the background if I want to. I could do a terraform apply in the background right now, and it'll start creating that prod environment for me. There we go, off to the races.

[00:02:20]
While that's running, though, we want to make sure that we can deploy to it. We want to make sure that we actually have the deployment parts that we were talking about earlier. So I've made it so Terraform runs, but I haven't yet made it so that we're not deploying on App Runner and Supabase anymore.

[00:02:37]
Now we want to be deploying to ECS and an RDS database instance. Now, what's kind of cool: before, when we ran migrations, we needed a public endpoint. Why? Why did we need a public endpoint? If I wanted to run migrations from the GitHub runner, I'd have to make sure the GitHub runner could communicate with the database.

[00:03:02]
In this case, the database is completely disconnected from the world. The GitHub runner is not running in our environment, so how do we solve this problem? Now, you might initially say, well, put a runner in the environment. And you can do that. You totally can do that. However, you might have noticed that we added the Goose database URL and the Goose driver to the container.

[00:03:30]
What's kind of neat about ECS is that not only can you run services with it, you can also run ad-hoc tasks. So I can take the exact same task definition, with all the configuration, and then tell it, hey, change the command you run in that container.

[00:03:49]
That's exactly why we put Goose and the migrations inside of the main deployed image: not only does the image run the service, it also runs the migrations. What's cool about this is that it's a CLI API call. So all we have to do is make a CLI API call to Amazon and say, hey, run me a container on ECS with this command, Goose, db migrate, blah blah, or up, and then just exit.

[00:04:20]
You don't have to stay there. Then that's exactly what it does. It goes out and runs the migration inside of the environment itself. You've got networking there, you've got the literal entire container of the service itself, right? And then it just runs that command inside of it, which means that it runs our migrations inside of that environment, and then we're off to the races, and we don't need to connect our CI to it at all.
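The ad-hoc task Erik describes maps to the `aws ecs run-task` command with a container command override. A sketch, assuming the setup from the lesson; the cluster, task definition, container, subnet, and security group names here are hypothetical:

```shell
# Run a one-off migration task on the prod cluster, reusing the service's
# own task definition but overriding only the container command.
aws ecs run-task \
  --cluster prod-cluster \
  --task-definition prod-app \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123]}' \
  --overrides '{"containerOverrides":[{"name":"app","command":["goose","up"]}]}'
```

Because Goose reads its connection settings (the Goose driver and database URL mentioned earlier) from the task definition's environment, the migration runs entirely inside the environment's own network, and the GitHub runner never needs a path to the database.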

[00:04:42]
And so that's the approach we're gonna take.
