Check out a free preview of the full Introduction to Backend Architectures course

The "Microservices Q&A" Lesson is part of the full, Introduction to Backend Architectures course featured in this preview video. Here's what you'd learn in this lesson:

Erik answers students' questions about whether services can share a database and how microservices should communicate, covering RESTful transactions, event-based systems, and queues, and emphasizes the need to consider latency and adapt to your specific requirements.


Transcript from the "Microservices Q&A" Lesson

[00:00:00]
>> Erik Reinert: Any questions about microservices or this, yeah?
>> Speaker 1: If you have two different databases, like this example, does it matter if they are on the same server, like, the same MySQL service?
>> Erik Reinert: Yes, yeah, you would not want them in the same service. When we talk about microservices, I should never have to go to another team and say, hey, is it okay if I do this? That's friction, right?

[00:00:28]
The goal of microservices is to alleviate friction by having logical isolation between each one of the systems. And actually, what you see here, a database per service, is probably one of the cornerstones of a microservice architecture. And what makes it so hard is that a lot of people can't even structure their data this way, right?
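To make that isolation concrete, here's a minimal Go sketch (not from the course; the package layout, driver, and environment variable are assumptions): each service opens only its own database, and anything another service needs goes through an API, never through this connection.

```go
// users/main.go (hypothetical layout): the users service opens only its
// own database. The payments service has its own schema and DSN, and
// neither service ever connects to the other's database directly.
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql" // MySQL driver, registered via side effect
)

func main() {
	// USERS_DB_DSN is an assumed env var, e.g. "users:pw@tcp(users-db:3306)/users".
	db, err := sql.Open("mysql", os.Getenv("USERS_DB_DSN"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Migrations on this schema impact only the users service; payments keeps
	// running, and callers just update their client when the API changes.
	// Anything payments needs from users goes through the users HTTP/gRPC API.
}
```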

[00:00:49]
Because at some point, you might have to look up something somewhere else, right? And so, it becomes very challenging to say, okay, well, payments only exist in payments, but there's gotta be a way for us to link a payment to a user. So, how do we do that, right?

[00:01:04]
And that might mean that there's another service, like invoices, that now joins payments and users together, or maybe you find other ways of doing it, right? But there really should be clear isolation, to where, if I need to do database migrations, stuff like that, I'm not really impacting the users that are using my service.
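A hedged sketch of that invoices idea: rather than a SQL join across two databases, a third service composes the users and payments APIs. The URLs and JSON shapes below are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Hypothetical response shapes; the real services would define these.
type User struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

type Payment struct {
	ID     string `json:"id"`
	UserID string `json:"user_id"`
	Amount int    `json:"amount_cents"`
}

// getJSON fetches and decodes a JSON response from another service's API.
func getJSON(url string, v any) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(v)
}

// buildInvoice "joins" a payment to a user by calling each service's API,
// not by querying their databases.
func buildInvoice(paymentID string) (string, error) {
	var p Payment
	if err := getJSON("http://payments/payments/"+paymentID, &p); err != nil {
		return "", err
	}
	var u User
	if err := getJSON("http://users/users/"+p.UserID, &u); err != nil {
		return "", err
	}
	return fmt.Sprintf("Invoice: %s owes $%.2f", u.Name, float64(p.Amount)/100), nil
}

func main() {
	invoice, err := buildInvoice("pay_123")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(invoice)
}
```

The trade-off is extra network hops per invoice, which is exactly the latency concern Erik gets into next.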

[00:01:30]
I'm just impacting myself, and then I can just say, hey, go update your client. Microservice architecture, and even service architecture, is really focused on, if we go all the way back to the beginning, modularity. This is modularity at its most extensive, or almost its most extensive, right?

[00:01:52]
You can think of LEGOs or whatever kind of scenario you want, but if I wanted to, I should be able to pick up users and put it somewhere else without impacting payments or anything that's happening with that.
>> Speaker 1: What's the best way to communicate between microservices?

[00:02:13]
>> Erik Reinert: That is a tough question to answer [LAUGH], mostly because it depends on your transactions and what kind of messaging you need, right? What we're looking at here is a very simple, RESTful, kinda transactional approach, where more than likely what will happen is that a user will make a request to the front end, right?

[00:02:36]
And while this user's waiting, the front end's going out to the users table, getting the data it needs, then going out to the payments table, getting what it needs. And do you remember that reference I made earlier about the guy in the video, who's like, but then we gotta go here, then go here, [INAUDIBLE]? That's the problem.

[00:02:51]
That's the problem: at some point, if you go with this approach, somebody's gonna be waiting for that process to come back and finally resolve and respond. And that's what that guy in the video is joking about, that it might take five minutes for her request to populate properly just because it's so distributed, right?
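Here's roughly what that waiting looks like in code, a minimal Go sketch with hypothetical service URLs: the caller blocks on each hop in turn, so the latencies add up.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// Each downstream call happens while the original request waits, so the
// total latency is roughly the sum of every hop.
func handleProfile(w http.ResponseWriter, r *http.Request) {
	start := time.Now()

	userResp, err := http.Get("http://users/users/42") // hop 1: users service
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	userResp.Body.Close()

	payResp, err := http.Get("http://payments/payments?user=42") // hop 2: payments service
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	payResp.Body.Close()

	// Two hops at ~150ms each is already ~300ms before anything renders.
	fmt.Fprintf(w, "assembled after %s of waiting\n", time.Since(start))
}

func main() {
	http.HandleFunc("/profile", handleProfile)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```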

[00:03:15]
And again, going back to when I was interviewing at Netflix, this was a problem that I was gonna be brought on to solve: they have such a huge distribution, how can they avoid the hops to get people into the system quicker, right? And so, what you could lean on then is event-based systems, where you say, okay, we're gonna remove the front end and we're gonna put, like, an event bus in front of it.

[00:03:40]
And then everything becomes asynchronous, to where one request pops over here, and then something else is listening for that request to be done, so it can take on its next step, and that just makes it a lot faster in the backend. So you have to be really careful with how even this scenario plays out, because you can easily add a lot of overhead, again, going back to the "UUIDs don't scale" example.
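A toy version of that event-bus pattern, with a Go channel standing in for a real bus like Kafka or NATS (the event name is made up): the publisher fires and moves on, and whatever is listening picks up the next step asynchronously.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a minimal stand-in for a message on a real bus (Kafka, NATS, etc.).
type Event struct {
	Name string
	Data string
}

func main() {
	bus := make(chan Event, 16) // the "event bus"
	var wg sync.WaitGroup

	// The payments service listens for user creation and reacts on its own time.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for ev := range bus {
			if ev.Name == "user.created" {
				fmt.Println("payments: provisioning billing for", ev.Data)
			}
		}
	}()

	// The users service publishes and immediately returns to its caller;
	// nothing upstream blocks on payments finishing.
	bus <- Event{Name: "user.created", Data: "user-42"}
	close(bus)
	wg.Wait()
}
```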

[00:04:06]
This is actually a good example of that: if you had these two services, but then you gotta go to payments, then back to users, then to payments again, and then somewhere else, hey, you might have a distributed system, but it's not really efficient, right? So it really depends. Again, queues are also really good.

[00:04:24]
A good example of that is asynchronous processes, where you click a button and it says, okay, we're taking care of your process, give us some time, go do whatever else you want, right? That's about building a user experience with your architecture, where in that scenario you're telling the user, hey, it's gonna take a second, but we're not blocking you to stay here.

[00:04:50]
You can go and do something else, and then when it's done, we'll come back, and that allows all the services to communicate with each other internally and do whatever they need. So, there are a few different approaches; the most common one, though, is this approach here, where you have services speaking directly to each other.
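That "give us some time" flow maps naturally onto a queue. A minimal Go sketch, with an in-memory channel standing in for a durable queue like SQS or RabbitMQ: accept the work, return 202 immediately, and process in the background.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

var jobs = make(chan string, 100) // stand-in for a durable queue

// worker drains the queue in the background, long after the HTTP
// response has already gone back to the user.
func worker() {
	for id := range jobs {
		fmt.Println("processing job", id) // the slow part happens here
	}
}

func main() {
	go worker()

	http.HandleFunc("/export", func(w http.ResponseWriter, r *http.Request) {
		jobs <- "export-123" // placeholder job ID; enqueue and return right away
		w.WriteHeader(http.StatusAccepted)
		fmt.Fprintln(w, "we're on it; check back later")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```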

[00:05:07]
They have designated endpoints, and again, gRPC, I think, is a really good approach to that, not only because it works in Go, but because gRPC can also generate clients for almost any language. So, if you had a front end that was Java, God, I feel sorry for you.

[00:05:27]
But if you had a front end that was Java, you could technically unify those services with gRPC, because Java can create gRPC clients. But again, if that Java service has to make five hops, you might find yourself moving to a queue-based system or an event-based system or something like that.
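For the gRPC point, a hedged sketch of the Go side: the grpc-go calls are real, but the userspb package stands in for hypothetical code generated from a users.proto, and a Java front end would generate its own client from that same file.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	// userspb "example.com/gen/users/v1" // hypothetical generated code from users.proto
)

func main() {
	// Dial the users service over plaintext; a Java front end would do the
	// same thing with its own stubs generated from the same users.proto.
	conn, err := grpc.Dial("users:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// With the hypothetical generated package, calls would look like:
	// client := userspb.NewUsersClient(conn)
	// user, err := client.GetUser(ctx, &userspb.GetUserRequest{Id: "42"})
}
```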

[00:05:47]
That's why we're really not going into that too much in these slides, just because those are a little bit more circumstantial. Most of the time, as long as you're building good isolation, good separation, this is fine. Again, whenever we get a Twitch chat message, we go out to an API for our data, we go out to an API for our AI stuff, and those are separated.

[00:06:11]
But because they're so fast and so easy to respond to, we don't really worry about it; the latency is maybe, like, 250, 300 milliseconds. And so, that's another thing you have to ask yourself: how quickly do you need these requests to come through? If you need to adapt to that, that's a requirement to me, more than something you just think of naturally.
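Treating latency as a requirement can be as simple as putting a hard budget on every downstream call. A minimal Go sketch using a context timeout (the 300ms figure just echoes Erik's Twitch example; the URL is hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Budget the whole downstream call at 300ms; if a hop can't answer in
	// time, fail fast instead of letting the user sit and wait.
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://api.internal/data", nil) // hypothetical internal API
	if err != nil {
		fmt.Println("bad request:", err)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("over budget or unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```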
