
Lesson Description
The "Spring for Kafka" lesson is part of the full Enterprise Java with Spring Boot course featured in this preview video. Here's what you'd learn in this lesson:
Josh discusses the messaging integrations with Spring. A Kafka messaging broker is started in a Docker container. The initial Spring application is scaffolded, and the ApplicationRunner is coded with the injected Kafka template. A DogAdoptionRequest payload is stubbed out, and the event listener is configured.
Transcript from the "Spring for Kafka" Lesson
[00:00:00]
>> Josh Long: Spring has a few different layers of support when you talk about messaging technologies. At the lowest level, you'll have a Spring for X project, right? So, for example, Spring for Apache Kafka, or you have Spring AMQP, which is the project that supports the open source AMQP protocol, of which RabbitMQ is by far the largest implementation.
[00:00:29]
Fun fact, RabbitMQ, the open source technology, is a project shepherded by the same team that does Spring. Well, we're two different teams technically now, but we're all under the same happy family, and it was because of Spring that they got acquired into the same company where Spring exists.
[00:00:47]
So, RabbitMQ, I love that technology, but I won't be talking about it here because, again, Kafka's, I think, the new thing. And then there's Spring for Apache Pulsar. So notice that we didn't have to say Spring for RabbitMQ, because we're talking about the spec, right? As opposed to Spring for RabbitMQ.
[00:01:05]
It's an open source protocol. Okay, so these are some projects you can use. These are low level, and I'm gonna start with that. I think that's the easiest way to get purchase in the wide and wonderful world of messaging. So let's just build a simple example to do this.
[00:01:18]
You'll need a Docker image or something to spin up Kafka. Fun fact, there's actually a Docker image. This one, I would take a photo of this. You don't need to do this right now, but just take a photo of this. This is the Docker image for Kafka that has been compiled as a GraalVM native image, right?
[00:01:37]
So remember, we've been talking about GraalVM. The Kafka team itself, remember, Kafka is written for the JVM, it's a Scala code base, but that's a JVM language, they compiled the whole thing into a GraalVM native image and they put that into a Docker image. So now you can start one of the most venerable, tried and true messaging brokers in, like, sub-second response times.
[00:01:58]
Okay, so I'm going to do this now. Let's see what it looks like. docker compose up. Well, no, not minus d, I'm just going to do docker compose up. Ready, steady, go. Whoops, what did I do? So, do we have a compose file? Yeah, we don't. So I'm going to move kafka.yaml to compose.yaml, mkdir kafka, move everything that's Kafka into the folder called kafka, and, okay, cd kafka and docker compose up.
[00:02:37]
That was slow. What happened there? I have to pull it, I guess. Ready, steady, go. That's the broker. The entire thing is now up and running and ready to handle traffic. So that part was easy. We got Kafka going. I want to build a very simple application that talks to this thing.
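The compose file he runs is, in essence, a one-service definition pointing at the GraalVM-native Kafka image. A minimal sketch, where the service name and port mapping are my own choices (`apache/kafka-native` is the image the Kafka project publishes, with single-node KRaft defaults baked in):

```yaml
# compose.yaml: a single-node Kafka broker, compiled as a GraalVM native image
services:
  kafka:
    image: 'apache/kafka-native:latest'
    ports:
      - '9092:9092'
```

With that in place, `docker compose up` starts a broker listening on `localhost:9092`, which also happens to be the default `spring.kafka.bootstrap-servers` value that Spring Boot assumes, so the application needs no connection configuration.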
[00:02:57]
So we're going to go back to the lab again, start.spring.io. I'm going to call this kafka, even though it's not, okay? And we're gonna bring in the Spring for Apache Kafka support, right? So add that. Add the GraalVM native image support, if you like.
[00:03:13]
Do we care about a web server? Nah, it's fine. Just hit Enter. Okay, open this up. Where is that? That's all this other stuff from before. Delete that. Goodbye. Here we go. And we're not going to do too much here. I'm just going to demonstrate how to send and receive.
[00:03:30]
This is all you really need to know for most use cases, and the balance of stuff you can figure out in the documentation, in the details and so on. So the code is actually fairly straightforward here. The code is actually the easiest part, so let's get that out of the way now. A little bit of sweet before the sour.
[00:03:46]
Then we need configuration, and the configuration is tedious. So, how are we going to send a message? Let's just create a Spring bean that, when the application starts up, will be invoked. We'll call this sender, and we're going to use the good old KafkaTemplate.
[00:04:03]
So you're going to see this idiom a lot in Spring. You'll see different template objects, or, increasingly today, you'll see clients, like the JdbcClient, whatever. But I'm going to inject a KafkaTemplate that has the key type, right? This is the key that Kafka uses, and then the type of the payload.
[00:04:20]
And what kind of payload am I going to send? I'll just send a dog adoption request, for example. Okay, so int dogId, String dogName, okay? And we'll put that here, and so when the application runs, it's going to run this program, it's going to inject the KafkaTemplate, and I shall use it to send a message to a topic, okay?
[00:04:45]
And the topic is dog adoption requests. Sure. To make that cleaner, since we're going to do both the producer and the consumer in the same JVM program, I'll just extract that out into a constant and refer to that constant here. Okay, so that's going to publish a message.
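Put together, the producer side looks roughly like this. This is a sketch, not the lesson's verbatim code: the topic name, class names, and sample values here are my own, but the moving parts (a `DogAdoptionRequest` record, an injected `KafkaTemplate`, an `ApplicationRunner` that fires on startup, a topic constant) are the ones described:

```java
package com.example.kafka;

import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;

@SpringBootApplication
public class KafkaApplication {

    // the topic name, extracted into a constant so producer and consumer agree
    static final String DOG_ADOPTION_REQUESTS_TOPIC = "dog-adoption-requests";

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }

    // invoked once on startup; publishes a single message to the topic
    @Bean
    ApplicationRunner sender(KafkaTemplate<String, DogAdoptionRequest> template) {
        return args -> template.send(DOG_ADOPTION_REQUESTS_TOPIC,
                new DogAdoptionRequest(1, "Prancer"));
    }
}

// the payload: an int dog id and a String dog name
record DogAdoptionRequest(int dogId, String dogName) {
}
```

The `send(topic, payload)` call is fire-and-forget here; in recent Spring for Apache Kafka versions it returns a `CompletableFuture<SendResult<K, V>>` you can hook into if you care about delivery confirmation.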
[00:05:01]
Now, how do I consume that message? Imagine I have a different JVM. You've already seen the EventListener annotation. Well, EventListener lets you listen for Spring-published events in the same JVM. If you want to listen for Kafka events, use KafkaListener, right? So, listen for Kafka, for that dog adoption event, and you have to specify a few things.
[00:05:20]
First of all, what's the destination, what's the topic, what's the address that you're going to listen to? And then, what is your group ID? A group ID in Kafka names an exclusive consumer group. So basically, if you say you're part of a group, then you won't get messages duplicated to you across the different partitions; each message goes to just one member of the group.
[00:05:44]
So I'll call this, what do I want to call this? I'll call this my group. Who cares? It's arbitrary, as long as you're consistent, right? And now we've got the message. So, System.out, got request, whatever. Okay, let's configure this now. And the configuration is a little annoying because, well, you'll see.
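The listener side is a sketch along these lines. The topic and group names are again my own, and `DogAdoptionRequest` is the record payload described above; the annotation and its `topics`/`groupId` attributes are from Spring for Apache Kafka:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// assumes a DogAdoptionRequest record like:
// record DogAdoptionRequest(int dogId, String dogName) { }
@Component
class DogAdoptionRequestListener {

    // topics: the address to listen on; groupId: an arbitrary but consistent
    // consumer-group name, so partitions are divided among group members
    // rather than every message being duplicated to every consumer
    @KafkaListener(topics = "dog-adoption-requests", groupId = "my-group")
    void onDogAdoptionRequest(DogAdoptionRequest request) {
        System.out.println("got request: " + request);
    }
}
```

For the typed payload to survive the trip, you also typically have to configure JSON (de)serialization, e.g. properties along the lines of `spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer`, the matching consumer deserializer, and its `spring.json.trusted.packages` setting; that is the tedious configuration part.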
[00:06:01]
Remember, Kafka, like RabbitMQ, is sort of client agnostic. It doesn't require you to, it doesn't require Java or whatever. It can be used from any language. This is, by the way, in stark contrast to something called JMS, the Java Message Service. If you ever encounter this, run the other way, right?
[00:06:18]
Just turn right back around, go somewhere else. It's not for you. JMS is a very, very misguided standard from the Java Community Process people from 20, 25 years ago, right? And it's meant to be an abstraction around messaging. And the core abstraction is this.
[00:06:41]
You have topics and/or queues, and you have a producer and a consumer that can send to a topic or a queue. Well, a couple of things. First, it was just a set of Java interfaces; there's no protocol implied. So if you're in one microservice and you're the consumer of the message, and I'm in the other microservice, and the central JMS broker that we're using, whatever it is, has upgraded its technology.
[00:07:10]
They can actually break their own wire protocol, requiring both producer and consumer to upgrade their client JARs, right? There's no protection against that. The whole point of messaging is to decouple systems from each other, so if you have to have so-called flag day upgrades to upgrade the client
[00:07:25]
and the server at the same time, otherwise they won't be able to speak to each other, that breaks the whole point. It's completely anathema to the idea of messaging as a means of enterprise integration. So JMS, not great, right? The other thing that wasn't great about this is that the producer and the consumer share the same address; they share the same location.
[00:07:42]
They have to talk to the same topic or queue in the broker, which means that if I want to do some routing in between, if I decide tomorrow that I want to introduce some intermediary steps between producer and consumer, I can't do that without rewriting the consumer, right?
[00:07:57]
So brokers like RabbitMQ, and now Kafka, have support for sort of transparently slotting in these extra opportunities, right? These extra waypoints.