Lesson Description
The "Delimiters with Complex Prompts" Lesson is part of the full Practical Prompt Engineering course featured in this preview video. Here's what you'd learn in this lesson:
Sabrina demonstrates how delimiters like quotes, dashes, XML tags, and markdown create boundaries and structure in prompts. This added structure allows LLMs to understand the prompt more easily and provides structure and readability to the output. Sabrina uses Claude to demonstrate using delimiters with a complex prompt.
Transcript from the "Delimiters with Complex Prompts" Lesson
[00:00:00]
>> Sabrina Goldfarb: Let's move on to something more tried, true, and tested in the prompting space again, a less controversial topic called delimiters, and we're going to talk about delimiters and XML tags. But first, let's talk about what a delimiter is. If you don't actually know what a delimiter is, it's something you use every single day: commas and periods. You use them in your code, right? You separate items in an array with commas.
[00:00:28]
Those are delimiters. At its simplest, it's just a boundary. It's a boundary in your prompts. It's a boundary in a section of text. If you've seen markdown formatting, which I'm sure everyone here has seen at least once, whether you know it or not, if you've ever looked at a readme file, then you've seen markdown formatting, and that's a great use of delimiters. The same way that delimiters work on human eyes, right?
[00:00:56]
When I made these slides, I used bullet points. Why? To keep your attention, to make it easy to read, to separate things, to show what is the header versus the subcontent, what the main points are, right? And this does the same thing for LLMs. We've been talking all day about all this research that kind of shows that LLMs "think" a whole lot like we do, and again, I put that in quotes, but they act a whole lot like we do in terms of how their neural networks behave and pay attention to things, and this is the same for us with delimiters and XML tags.
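As a rough sketch of that idea (the prompt text here is invented for illustration, not from the lesson), markdown headers can act as delimiters that separate instructions from data inside a single prompt string:

```python
# A hypothetical prompt that uses markdown headers and bullets as delimiters,
# so the model can tell the instructions apart from the data it should act on.
prompt = """# Task
Summarize the customer feedback below in one sentence.

# Feedback
- The checkout flow was confusing.
- Shipping was faster than expected.

# Output format
A single plain-text sentence."""

# The headers give the prompt a visual hierarchy, for humans and LLMs alike.
print(prompt)
```

The same structure works for human readers skimming the prompt and for the model parsing it, which is the point being made about bullet points on slides.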
[00:01:35]
So, triple quotes, dashes, XML tags, markdown: they all work, right? And why do they work? Because if we think back to when these LLMs were trained, they were trained, again, on data. And I know I said this earlier, but what do we have a ton of data on on the internet? We have a lot of code, right? A lot of code. We have a lot of documentation, we have a lot of readmes. We have a lot of things that utilize some sort of delimiter.
[00:02:05]
Our code uses delimiters with curly braces, with brackets, with commas. Our readmes have delimiters with hashtags, pound signs, whatever you want to call them, right? With line breaks, with all these sorts of things. So it makes sense that these models would understand delimiters and XML tags really well. Something else: some models are actually trained on these tags. When it comes to the Anthropic Claude models, they have been specifically trained on XML tags, so they are exceptionally good at understanding them, right?
[00:02:43]
So, by utilizing delimiters and XML tags, we can take large chunks of information in our prompts and break it down into really easy to understand sections for the LLMs. It gives us a visual hierarchy, it gives the LLM a visual hierarchy, and it just makes it easier to tell the difference between input, output, examples, etc. OK. You should use delimiters and XML tags, and we have used them at some point today, right, with our structured output section.
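A minimal sketch of that separation of input, output, and examples, assuming illustrative tag names like `<instructions>` and `<input>` (nothing here is a fixed API):

```python
# Wrap each part of a prompt in XML-style tags so the model can
# distinguish instructions, few-shot examples, and the actual input.
def tag(name: str, body: str) -> str:
    """Wrap body in an opening and closing XML-style tag."""
    return f"<{name}>\n{body}\n</{name}>"

prompt = "\n".join([
    tag("instructions", "Classify the sentiment of the review as positive or negative."),
    tag("examples", tag("example", "Review: 'Loved it!' -> positive")),
    tag("input", "Review: 'Arrived broken and support never replied.'"),
])
print(prompt)
```

Each section starts and ends unambiguously, which is what gives the model (and you) the visual hierarchy described above.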
[00:03:16]
We use delimiters, and we're going to utilize them again, but you should use them anytime you have a complex prompt. So you'll likely use these often. Something else I want to mention for delimiters is the fact that you don't have to use traditional markdown syntax, or any traditional syntax, right? You can decide, I can decide, that my delimiter is going to be four at symbols. As long as I'm consistent, it doesn't really matter, right?
[00:03:50]
It's not going to make a huge difference if I use three dashes for a line break or if I use five hashtags for a line break, right? It doesn't really matter as long as I'm consistent in how I'm doing it. XML tags support nesting and attributes for complex data organization. So if I want to show a bunch of examples, I can tag them as example one, example two, example three, to give, again, that clarity so that I'm not getting my few-shot examples mixed up.
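The four-at-symbols idea could be sketched like this (the delimiter and the section text are made up for illustration; the point is only that the marker stays consistent throughout the prompt):

```python
# Illustrative only: any consistent marker can serve as a delimiter.
# Here four @ symbols separate the sections of a prompt.
DELIM = "@@@@"

sections = [
    "Summarize the text below.",
    "The quarterly report showed steady growth across all regions.",
    "Respond in two sentences or fewer.",
]
prompt = f"\n{DELIM}\n".join(sections)
print(prompt)
```

Whether the boundary is `@@@@`, `---`, or `#####` matters far less than using the same marker at every boundary.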
[00:04:22]
You can see very clearly that this is where example one starts and ends, clearly where example two starts and ends, and clearly where example three starts and ends. So we're just adding that structure. Something I do want to mention is to make sure to use semantic naming, so requirements, constraints, examples. Make it easy. If you are trying to create a user schema, don't just call it like X, right?
[00:04:48]
Call it user schema. Make it easy to understand both for yourself and for the LLM. Again, we're kind of treating these LLMs like they're junior engineers working with us. So the more clarity we can give, the better our output is going to be. For delimiters, we are going to go into our Claude chat. And now I told you that we are done building our Prompt Library, and we are, but we are going to use delimiters to ask our LLM to help us plan what the next steps would be if we wanted to turn our application into a live production app, right?
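Putting nesting, attributes, and semantic naming together, here is a hedged sketch; tag names like `user_schema` and the toy arithmetic examples are illustrative, following the lesson's advice rather than any required syntax:

```python
# Semantically named, nested tags keep few-shot examples from blurring
# together, and attributes (id="1", id="2") mark where each one starts/ends.
def tag(name: str, body: str, **attrs: str) -> str:
    """Wrap body in an XML-style tag, with optional attributes."""
    attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
    return f"<{name}{attr_str}>\n{body}\n</{name}>"

examples = "\n".join(
    tag("example", text, id=str(i))
    for i, text in enumerate(
        ["Input: 2+2 -> Output: 4", "Input: 3+5 -> Output: 8"], start=1
    )
)
prompt = "\n".join([
    tag("task", "Answer arithmetic questions."),
    tag("examples", examples),
    tag("user_schema", '{"question": "string", "answer": "number"}'),
])
print(prompt)
```

Naming the schema tag `user_schema` instead of `x` costs nothing and makes the prompt self-documenting for both you and the model.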
[00:05:22]
So maybe I want to turn our application into a live production app. We don't really know what we want to do. We don't know what considerations to keep in mind. And so we want to use delimiters to ask our LLM, Hey, can you format this easily for me, so I can really see how I could bring this into a full production application? So, I am going to start typing this out, and then I'm going to copy it over. Again, if you need it,
[00:05:58]
it is in the workshop materials. I need to research how existing tools handle prompt management and version control to inform architecture decisions for a Prompt Library I'm building and hoping to move to production. Please research and analyze different approaches using this structure, right? So I'm giving it structure at this point. So, I'm going to show the structure. OK, so using this structure, we can see that we are using tags to say, here's a research area, and we're nesting these tags to say within this research area, there is a topic, prompt management solutions, right?
[00:06:53]
And then within this research area, but not within the topic, we have some questions that we want to consider. So what tools currently exist for Prompt Library management? And then we're closing out our questions area and we're closing out this first research area. Then we're opening up a new research area with a topic collaboration features. We're closing out that topic tag, and then we're saying, here's some questions.
[00:07:21]
How do teams share Postman collections or Insomnia workspaces? What permission models exist in developer tools? We're closing out those questions and that research area. Then finally, we have a research area that says our topic technical implementation details. Then we have a subset of questions. What databases do similar tools use? Research from their engineering blogs, right? This way we can use this multi-modality of researching from the internet to get some information from engineering blogs.
[00:07:53]
How do they handle search at scale? What's their approach to data export and import? How do they prevent abuse and implement rate limiting? And then we're going to close out the questions and research area. Then we're using even more tags to separate our information and say for each research area, one, find concrete examples, two, identify patterns, three, highlight common failures, and four, estimate implementation complexity.
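The nested structure just walked through might be assembled programmatically like this; the tag names and questions follow the walkthrough above, but the exact prompt text lives in the workshop materials:

```python
# Build one <research_area> block: a <topic> plus its <questions>,
# nested the way the lesson's prompt nests them.
def research_area(topic: str, questions: list[str]) -> str:
    qs = "\n".join(f"- {q}" for q in questions)
    return (
        "<research_area>\n"
        f"<topic>{topic}</topic>\n"
        f"<questions>\n{qs}\n</questions>\n"
        "</research_area>"
    )

prompt = "\n\n".join([
    research_area("Prompt management solutions",
                  ["What tools currently exist for Prompt Library management?"]),
    research_area("Collaboration features",
                  ["How do teams share Postman collections or Insomnia workspaces?",
                   "What permission models exist in developer tools?"]),
    research_area("Technical implementation details",
                  ["What databases do similar tools use?",
                   "How do they handle search at scale?",
                   "What's their approach to data export and import?",
                   "How do they prevent abuse and implement rate limiting?"]),
])
print(prompt)
```

Each research area opens and closes cleanly, so the model never has to guess where one topic's questions end and the next topic begins.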
[00:08:24]
Then synthesize into a concise competitive analysis matrix. So we can see we've utilized a lot of delimiters in here, right? We're using our colons, we're using all these research areas, these topics, right? We're using these questions. Then we also have numbers, right? We're bullet pointing, we're numbering, and then we're synthesizing this into a bullet pointed concise competitive analysis matrix.
[00:08:49]
So let's see what Claude comes up with from this. These are the types of things we can utilize whether we're creating code or whether we're making architecture decisions, or whether we're learning, right? I'm just trying to show you an example of things that we can be doing, but delimiters really belong everywhere in everything that we're doing. I especially like to utilize delimiters when I am trying to make documentation for my team.
[00:09:15]
So maybe we're considering adding a new feature, and we've considered five different options or languages or frameworks, right? I can use delimiters to put this together in a really nice and easy to understand way. So we can see that Claude is conducting comprehensive research on these tools. Yeah, go ahead, questions.
>> Student: Is XML a better format than something like JSON?
>> Sabrina Goldfarb: So it's not necessarily going to matter unless you're actually talking about the Claude models, because they were specifically trained on XML tags.
[00:09:45]
They're usually considered kind of the standard for the Anthropic Claude models, but if you're talking about ChatGPT, then you might be more likely to get better results with JSON. Either is going to be fine. It kind of depends on your use case, on what you want from it, and again, on which model you're using. The newer GPTs, especially GPT-5, I believe, are very good with XML tags as well, whereas some of the older GPT models struggled more with XML tags, so you might want to use something more like markdown for them.
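One way to sketch that takeaway is to keep the prompt's sections separate from their rendering, then pick whichever style a given model handles best. The styles below are the ones discussed (XML, JSON, markdown); which model prefers which is a judgment call, not a benchmark:

```python
import json

# Render the same prompt sections as XML tags, JSON, or markdown,
# so the delimiter style can be swapped per model without rewriting content.
def format_sections(sections: dict[str, str], style: str) -> str:
    if style == "xml":
        return "\n".join(f"<{k}>\n{v}\n</{k}>" for k, v in sections.items())
    if style == "json":
        return json.dumps(sections, indent=2)
    if style == "markdown":
        return "\n\n".join(f"## {k}\n{v}" for k, v in sections.items())
    raise ValueError(f"unknown style: {style}")

sections = {"task": "Summarize the article.", "constraints": "Under 100 words."}
print(format_sections(sections, "xml"))
```

The content stays identical either way; only the boundaries change, which is why switching models rarely means rewriting the whole prompt.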
[00:10:20]
So Claude said, I'll conduct comprehensive research. We can see all of these results. So this is that multi-modality stuff that I mentioned earlier, right? With a lot of these models, we can upload images. They can even create images for us, right? There's so many things we can do outside of just writing code and traditional architecture diagrams, or we could ask them to make an architecture diagram and make it visual for us because they have these abilities.
[00:10:48]
But also, amazingly, we have this browser support ability. So when I was able to ask, hey, can you actually research it and find concrete examples from real world projects? Can we look at tech blogs to see what they're actually using? We were actually able to do that. So, we have a research report, prompt management solutions architecture, right, with an executive summary. Bottom line up front, the prompt management landscape reveals a clear pattern.
[00:11:17]
Successful tools balance sophisticated technical capabilities with non-technical user accessibility. PromptLayer, LangSmith, Agenta, and similar platforms have emerged as leaders by enabling both technical and non-technical teams to collaborate on prompt engineering. And we can see here that we have a source, right? So if you're ever curious, Claude or the GPTs will add sources. Key architectural decisions around databases, search, and permissions follow established developer tool patterns from Postman and Insomnia, right?
[00:11:49]
These were the specific examples we had given it, so these are the examples it's going to be looking for. And we can see that actually it's utilizing, Claude is utilizing delimiters right back at us, right? So we have bolded headlines, we have subheadlines. We still have a small section up front that's bolded for us to look at. We have line breaks, right? So we have, you know, one, two, three, four, we have bullet points in terms of numbering, right?
[00:12:19]
Research area one, prompt management solutions, right? So, key features, version control, collaboration, testing and evaluation, and visual interfaces. So if we were considering saying, let's take our Prompt Library, let's bring it to the next step, and let's make it something that other people can use and collaborate and do other things with, these are the things that we would want to consider, right?
[00:12:44]
We would want to add version control and collaboration tools, testing and evaluation tools, and a different, or more built-out, visual interface, right? Implementation complexity estimates: so now we can see how long it is going to take us to build this, right? Our lowest complexity would take two to four weeks. We could easily get this up and running: basic CRUD operations for prompts, simple version control.
[00:13:08]
I mean, we saw how much we could build in one day, right? What could we build in two to four weeks? So, and here is our matrix that we asked for, right? We asked at some point, can we have a matrix that's really concise, that tells us all of this information very quickly, and here's our matrix. So we're using delimiters and XML tags to make things easier for ourselves, make things easier for the models to understand, and then also giving us back outputs that are going to be the most useful for us.
[00:13:41]
If I was doing this for my company, if my company was asking me, hey, do all the research into getting this live, give me the reasons for and against, the technical implementation, and any other considerations, now I can copy and paste this, right? It's already in a markdown format that makes it easy for me to just copy and paste. OK, this ended up being a super long prompt, right, even though I asked it to be concise, but I did also ask it for a lot of research areas and information.
So we do have to keep this in mind again when we're prompting, what are we looking for? What is actually important for us, and are we only asking for those specific things?