Lesson Description
The "Adding Laminar Tracing" lesson is part of the full AI Agents Fundamentals, v2 course featured in this preview video. Here's what you'd learn in this lesson:
Scott explains setting up observability for message generation, showing how to import components, initialize functions, and enable telemetry. He highlights the importance of offline evaluations and flushing telemetry events to ensure data is sent correctly.
Transcript from the "Adding Laminar Tracing" Lesson
[00:00:00]
>> Scott Moss: OK, so the first thing we're going to do here is set up the observability on the actual message generation, and it's pretty simple. So I'm going to go to my code here. I have this run file that I already had, and we're just going to instrument it. Now, in my code I've got a lot of stuff here, like the agent loop stuff, but we don't need the agent loop stuff right now.
[00:00:32]
I think I might have put the agent loop stuff here by accident. That's actually going to be the next one. All we need is just the Laminar stuff. So the first thing we're going to do is, in our run file, import from @lmnr-ai/lmnr, and the thing we want is going to be getTracer. If you understand OpenTelemetry (OTel), a tracer is exactly what it sounds like: you want to trace something. So we're going to do that, and then we're going to import from.
[00:01:03]
No, not that, actually. I think we can just do it here: Laminar, like that. We're going to import those two things, and then all we need to do is say Laminar.initialize right here at the top. That's a function that takes a project API key, which you add to your .env file. You should have already added that to your .env if you're following along. You can call the variable whatever you want; this is just what I called it.
[00:01:31]
And you just read it off process.env. And then the next thing is just to set this up here. So we'll say experimental_telemetry, we'll set isEnabled to true, and we'll say tracer is getTracer(). That's it. We've enabled telemetry. Everything our agent does will now be traced, and we can go visualize it on the dashboard. Let's give it a try, so I'm going to put the tools here.
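Putting the steps above together, the instrumented run file looks roughly like the sketch below, based on Laminar's AI SDK integration. The model, prompt, and env var name (LMNR_PROJECT_API_KEY) are my assumptions here; use whatever names you chose in your own .env.

```typescript
// Sketch of the setup described above (assumes the Vercel AI SDK and @lmnr-ai/lmnr).
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Laminar, getTracer } from "@lmnr-ai/lmnr";

// Initialize Laminar with the project API key from your .env file.
// LMNR_PROJECT_API_KEY is just an illustrative name; call yours whatever you want.
Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });

const result = await generateText({
  model: openai("gpt-4o-mini"), // any AI SDK model works here
  prompt: "Say hello",
  // The AI SDK's hook for OpenTelemetry-based tracing: turn it on
  // and hand it Laminar's tracer, and every call gets traced.
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});

console.log("done");
```

Running this once should produce a trace you can see in the Laminar dashboard under traces, as shown in the lesson.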
[00:02:12]
I'll just do a single pass. Actually, I'm just going to log "done", because I don't really care what the output is here. So now we can run this. Make sure I do my npm install just in case. Yeah, cool. I'm just going to put this at the top just in case. Actually, let's try to run this with the chat and see what happens. So I'm going to say npm run build. Cool, and npm install -g.
[00:02:53]
Great, I'm going to say agi. Cool, this still should not chat; I think it should do nothing, essentially. Yeah, OK, it says done. Great, that's exactly what I thought it was going to do. And now if I go to my Laminar dashboard and click on traces, I should be able to see my traces. I believe this is the one that I just ran. Yep, I just ran this one. Here's the timestamp; let me make that bigger.
[00:03:30]
And it's fully traced. So I called it generate text. Here was the span that got traced. So basically this is the text that it generated, this is the output, right? Here are different attributes that the SDK added, and different events that might have been there. And then the next span was ai.generateText.doGenerate, which is something internal to the AI SDK, right? Here's our system prompt, span output, all that stuff.
[00:03:58]
So this is really great for when you get into the weeds of trying to track down what's going on in your system. Right now it's very basic, we only call one thing, so we can trace it very easily. But as this gets more complicated with multiple turns and more tool calls, this thing is going to get really crazy. And it goes hand in hand with evals, because you can enable tracing with the evals.
[00:04:21]
So we can go look at an eval, and then we can also follow the traces for that eval to see where we might want to improve things.
>> Student: On that tool, I see there's an evaluation spot in addition to the tracing. Can that tool also do the evaluations?
>> Scott Moss: Yeah, evaluations. So in this case, I believe what this is: most evaluation tools allow you to run, like I said, online or offline evaluators.
[00:04:51]
If you run an online evaluator, you have to give them the code for the evaluation you want to run so they can run it for you. That's what this is. It's like, hey, do you want us to run one of the evaluators you gave us on one of these span paths? We're not going to be doing online evaluators: one, because they only support Python online, and two, you don't really need that until you have so many people using your stuff. Until then, there's just no point.
[00:05:17]
It's better to just collect data and run offline evals later. Online evals are for people with scale, like massive scale, right? So yeah, I'm pretty sure that's what that is, because we will literally not be running evals in this dashboard. We'll use this dashboard to view the evaluations. For instance, I think I have some here that I ran. Yeah, these are older ones, so it's not finding it, but you can see I have an average score here, and then you can run them, right?
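For reference, an offline eval run with Laminar's TypeScript SDK looks roughly like the sketch below, with results showing up in the same dashboard the lesson is viewing. The dataset, the myAgent stub, and the evaluator are hypothetical placeholders, and the exact option shape may differ by SDK version.

```typescript
// Rough sketch of an offline eval using Laminar's evaluate() helper.
// Everything in the dataset and evaluator below is a made-up example.
import { evaluate } from "@lmnr-ai/lmnr";

// Hypothetical system under test; a stand-in for your real agent call.
async function myAgent(question: string): Promise<string> {
  return "4";
}

evaluate({
  data: [
    { data: { question: "What is 2 + 2?" }, target: "4" },
  ],
  // The executor runs your own code for each datapoint.
  executor: async (data) => myAgent(data.question),
  // Evaluators score the executor's output; scores appear in the dashboard.
  evaluators: {
    exactMatch: (output, target) => (output === target ? 1 : 0),
  },
});
```

This runs locally (offline), and the dashboard is only used to view the scores and the traces attached to each eval run.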
[00:05:49]
So we'll run those offline. If you were not able to see your evals show up even though you have this code, then, just like with any analytics tool, sometimes you have to flush the events. What does that mean? It just means that most analytics and telemetry tools batch the calls to the server instead of calling it every time you tell them to. So they put them in an array, right?
[00:06:19]
And then, on some interval, they'll flush that array of events and send them all at one time. So in this case it's a race condition: the process you ran from the terminal might have ended before Laminar sent those events off. So by calling flush, you're forcing Laminar like, hey, I'm done, so can you send your events to the server, please? It's very typical for analytics tools to work that way.
[00:06:49]
So if you didn't see your stuff show up, it's possible that that's the case, OK? Going forward, make sure you do an npm run build and an npm install -g, and then run agi. Use this from now on; you don't need to do the tsx stuff anymore. Just do this and have a conversation with this thing. Well, the next lesson will actually make it conversational. Right now it's just going to log a thing in my example, but in the next one we will.
So you can just use this. This cleans it up: you don't have to do the .env stuff, it just works, right? So just do that.