Practical Prompt Engineering

3 hours, 43 minutes

Course Description

Generate higher-quality code from AI tools! Learn prompting techniques that work consistently across Claude, ChatGPT, Copilot, and Cursor. Transform vague project ideas into structured, actionable development plans. Stay productive by applying these future-proof prompting strategies as AI tools and models evolve.

This course and others like it are available as part of our Frontend Masters video subscription.


What They're Saying

Bri is an awesome teacher, very witty with examples to drill down concepts. I would say you would definitely come out of this course with a practical understanding of LLMs, how they work, and how to integrate them into your daily workflow, regardless of the use case.
Ayman Yaqoub

Course Details

Published: October 24, 2025

Rating: 4.6

Learn Straight from the Experts Who Shape the Modern Web

Your Path to Senior Developer and Beyond
  • 250+ In-depth courses
  • 24 Learning Paths
  • Industry Leading Experts
  • Live Interactive Workshops

Table of Contents

Introduction

Section Duration: 5 minutes
  • Introduction
    Sabrina Goldfarb begins the course by sharing the origins of her interest in the science behind prompt engineering. The prompting techniques learned throughout the course will be used to create a client application for managing, rating, and exporting prompts. The application will be built completely from prompting—no code will be written.

Core Prompting Techniques

Section Duration: 37 minutes
  • What is Prompt Engineering?
    Sabrina introduces LLMs, or Large Language Models. Prompt engineering is the science of crafting prompts that return high-quality results from LLMs, which are non-deterministic, or unpredictable. Transformer architectures are discussed, and Sabrina explains why scaling an LLM by 10x delivers 100x the capabilities.
  • Temperature & Top P
    Sabrina explains the controls available to make LLMs more or less deterministic. Temperature ranges from 0 (deterministic) to 2 (random) and controls how likely the LLM is to pick the next most probable token. Top P is an alternative to temperature that restricts sampling to the most probable tokens, cutting unlikely candidates out of consideration. (A short API sketch follows this section's lesson list.)
  • Token Limits & Context Windows
    Sabrina highlights the importance of understanding tokens and the context window. Tokens are roughly 0.75 words on average. Since LLMs do not have a "memory", the full conversation is passed with each prompt so the LLM can "remember" what the conversation is about. Long conversations run the risk of filling the context window, which can lead to hallucinations. (See the token-budgeting sketch after the list.)
  • Standard Prompt
    Sabrina introduces the "Standard Prompt," which is a direct question or instruction to the LLM. The quality of the question directly relates to the quality of the answer. Some examples of standard prompts are provided using Claude. Any AI/LLM tool can be used to follow along with these examples. (A vague-versus-specific example follows the lesson list below.)
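
The sampling controls from the Temperature & Top P lesson map directly onto parameters that most LLM APIs expose. Below is a minimal sketch using the Anthropic Python SDK; the model id and prompt are illustrative, and providers differ on allowed ranges (OpenAI accepts temperatures up to 2, Anthropic up to 1).

    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=256,
        temperature=0.0,  # 0 = near-deterministic; higher values make sampling more random
        # top_p=0.9,      # alternative knob: sample only from the smallest set of tokens
        #                 # whose cumulative probability reaches 0.9 (set one or the other)
        messages=[{"role": "user", "content": "Name one prompting technique."}],
    )
    print(response.content[0].text)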
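
The token arithmetic from the Token Limits lesson can serve as a rough budget check before sending a long conversation. This sketch uses only the course's rule of thumb (1 token is about 0.75 words); real tokenizers vary by model, and the context window size below is a placeholder.

    def estimate_tokens(text: str) -> int:
        """Rough estimate: 1 token is about 0.75 words, i.e. ~4/3 tokens per word."""
        return round(len(text.split()) / 0.75)

    # LLMs have no memory, so every prior turn is resent with each new prompt
    # and the whole conversation counts against the context window.
    conversation = [
        "User: Summarize this bug report...",
        "Assistant: The report describes an export failure...",
        "User: Now draft a fix.",
    ]
    used = sum(estimate_tokens(turn) for turn in conversation)
    CONTEXT_WINDOW = 200_000  # placeholder; actual size depends on the model
    print(f"~{used} tokens used, ~{CONTEXT_WINDOW - used} remaining")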
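
Because the quality of a standard prompt drives the quality of the answer, it helps to see a vague request next to a sharpened one. Both prompts below are illustrative and can be pasted into Claude or any other LLM tool.

    # A vague standard prompt leaves the model guessing at intent.
    vague = "Make my function better."

    # A specific standard prompt states the task, the constraints, and the input.
    specific = (
        "Review the Python function below for readability and error handling, "
        "and suggest at most three concrete improvements:\n\n"
        "def load(path): return open(path).read()"
    )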

Building Better Prompts

Section Duration: 1 hour, 37 minutes
  • Project Setup & AI Tools
    Sabrina walks through the tools and resources required to follow along with the course. All the prompts are available on the course website, which is linked below. Sabrina will be using GitHub Copilot inside VS Code. Other AI coding assistants like Claude Code, Cursor, Gemini, etc., can also be used.
  • Using a Coding Agent
    Sabrina uses a standard prompt to create a Prompt Library application. The application was successfully created, but it contained some unwanted features and functionality that didn't work, demonstrating how AI agents can overstep a request and deliver results outside the original scope.
  • Zero-Shot Prompt
    Sabrina introduces the zero-shot prompt. These prompts provide direct task requests without any examples. They work well for common tasks but rely entirely on the model's pre-training knowledge. A zero-shot prompt is used to recreate the Prompt Library application. (An illustrative zero-shot prompt follows this section's lesson list.)
  • Model Selection
    Sabrina shares some advice for selecting different models. Exploring alternative models can help improve accuracy or reduce costs in an application.
  • One-Shot Prompt
    Sabrina compares one-shot prompting to zero-shot. With one-shot prompting, one example is provided with the request. The model learns the pattern, format, and style from the example, which establishes the format for future requests. (See the one-shot sketch after the list.)
  • One-Shot Prompting an Agent
    Sabrina writes a one-shot prompt to add a rating feature to the application. The prompt instructs the agent to analyze feature requests and provide the plan's technical requirements, code considerations, etc. Once the plan is reviewed, a standard prompt is used to instruct the agent to implement the feature.
  • Few-Shot Prompt
    Sabrina introduces few-shot prompting. This technique provides two or more examples, including edge cases. Models learn nuances and variations from a diverse set of inputs and outputs. (A few-shot sketch appears after the lesson list.)
  • Generate a Few-Shot Prompt
    Sabrina asks Copilot to generate a few-shot prompt for implementing a notes feature. The examples provided ask for core requirements, implementation details, and deliverables. Once the prompt is generated, the agent is instructed to implement the feature.
  • Few-Shot Prompt Q&A
    Sabrina answers questions about generating few-shot prompts and managing the context window. She also shares a tip about typing "continue" to ask an agent to restart the output if it gets stuck.
  • Context Placement
    Sabrina discusses why the placement of context matters. Providing context at the beginning and end of the prompt is much more effective than placing it in the middle. (A helper sketch follows this list.)
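
As described in the Zero-Shot Prompt lesson, a zero-shot prompt is a direct task request with no examples, so the model leans entirely on its pre-training. The wording below is an illustrative sketch, not a prompt from the course.

    ZERO_SHOT = """Classify the sentiment of the following review as positive,
    negative, or neutral.

    Review: The prompt library lost all my saved prompts after the update."""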
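
One-shot prompting adds a single worked example so the model can infer the pattern and format. A minimal sketch, with invented wording that borrows the course's rating feature as subject matter:

    ONE_SHOT = """Convert feature requests into user stories.

    Request: Let users rate prompts from 1 to 5 stars.
    User story: As a user, I want to rate prompts on a 1-5 scale so that I can
    find my most effective prompts later.

    Request: Let users export their prompt library to a file.
    User story:"""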
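
Few-shot prompting extends the same idea to two or more examples, including edge cases, so the model picks up nuance. A sketch with invented labels and inputs:

    FEW_SHOT = """Label each prompt with a category: coding, writing, or other.

    Prompt: "Refactor this function to use async/await." -> coding
    Prompt: "Draft a polite follow-up email to a client." -> writing
    Prompt: "Plan a three-day itinerary for Lisbon." -> other
    Prompt: "Explain why this SQL query is slow." ->"""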
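
The context-placement advice (key material at the beginning and end, never buried in the middle) can be baked into a small prompt builder. This helper is hypothetical, not from the course:

    def build_prompt(instructions: str, reference_material: str) -> str:
        # Key instructions lead, long reference material sits in the middle,
        # and the task is restated at the end, where models attend most reliably.
        return (
            f"{instructions}\n\n"
            f"<context>\n{reference_material}\n</context>\n\n"
            f"Reminder of the task: {instructions}"
        )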

Advanced Prompting Techniques

Section Duration: 1 hour, 21 minutes
  • Structured Output
    Sabrina explains how to get a consistent format from LLMs by requesting structured output. Specifying examples, templates, or schemas along with the prompt helps the model understand the desired output format. A few examples are demonstrated using Claude. (A schema-in-prompt sketch follows this section's lesson list.)
  • Prompt with Structured Output
    Sabrina prompts the agent with the structured output needed to implement a metadata tracking system for the Prompt Library application. Once the plan is reviewed, the agent generates the metadata feature and the application is tested.
  • Chain-of-Thought Prompts
    Sabrina walks through a chain-of-thought (CoT) prompt example. Chain-of-thought prompting asks the model to show its reasoning step by step. This breaks complex problems down into intermediate steps and can be very effective when combined with the few-shot technique. (A CoT sketch follows the lesson list.)
  • Import/Export Feature Prompt
    Sabrina uses the chain-of-thought prompting technique to implement import/export features for the Prompt Library. After the code is generated, the features are tested. During the test, the wrong export files were selected, and Sabrina adds an additional feature to support legacy formats.
  • Future Proofing Prompts
    Sabrina shares some advice for future-proofing prompts as models evolve. Documenting how prompts are used and the models where they are successful makes it easy to test them as new models are released. Also, recognizing that smaller models may work better with different prompting techniques than larger models helps identify how prompts should be adjusted for other models.
  • Emotional Prompts
    Sabrina shares research showing that LLM performance can be enhanced by emotion. In the study, emotional prompts caused the LLM to pay more attention to the most important parts of the original prompt, leading to more accurate results. Sabrina also notes that this behavior isn't universal across models and can change as models evolve. (A sketch of the pattern follows the list.)
  • Delimiters with Complex Prompts
    Sabrina demonstrates how delimiters like quotes, dashes, XML tags, and markdown create boundaries and structure in prompts. This added structure helps LLMs parse the prompt more easily and makes the output more structured and readable. Sabrina uses Claude to demonstrate using delimiters with a complex prompt. (See the XML-tag sketch below.)
  • Personas
    Sabrina explains a technique for assigning a persona to a model. Personas instruct the model to identify with a specific role. They don't give the model extra capabilities but provide a perspective that steers the model toward a subset of its training data. (An illustrative persona prompt follows this list.)
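
One way to apply the structured-output technique is to embed the target schema directly in the prompt. The schema and prompt below are an illustrative sketch:

    STRUCTURED_OUTPUT = """Extract metadata for the prompt below and reply with
    JSON only, matching this schema:

    {"title": string, "category": string, "tags": [string], "rating": number | null}

    Prompt: Refactor this function to use async/await and add error handling."""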
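
Chain-of-thought prompting simply asks the model to lay out intermediate steps before answering. A small illustrative example with invented numbers (25% of 240 is 60, and a third of 60 is 20):

    CHAIN_OF_THOUGHT = """A prompt library holds 240 prompts. 25% are rated 4 stars
    or higher, and a third of those are tagged "coding". How many prompts are both
    rated 4+ stars and tagged "coding"?

    Think through the problem step by step, showing your reasoning before giving
    the final answer."""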
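
The emotional-prompt research appended short emotional stimuli to otherwise ordinary prompts. A sketch of the pattern; the stimulus wording here is an assumption, not quoted from the course:

    BASE_PROMPT = "Summarize the attached incident report in five bullet points."
    # Hypothetical emotional stimulus appended to the base prompt.
    EMOTIONAL_PROMPT = BASE_PROMPT + " This is very important to my career."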
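
Delimiters give the model unambiguous boundaries between instructions and data. A sketch using XML-style tags; any of the delimiter styles from the lesson (quotes, dashes, markdown) would work the same way:

    DELIMITED = """<instructions>
    Summarize the bug report below in two sentences, then list reproduction steps.
    </instructions>

    <bug_report>
    Exporting the prompt library to JSON silently drops prompts with emoji in
    their titles. Reproduced in version 1.3.2 on Chrome and Firefox.
    </bug_report>"""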
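
A persona prompt steers the model toward a role-specific slice of its training data without granting new capabilities. An illustrative sketch:

    PERSONA = """You are a senior accessibility engineer reviewing a web form.
    Identify the most serious accessibility issues in the HTML below and explain
    how to fix each one.

    <form><input placeholder="Email"><button>Go</button></form>"""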

Wrapping Up

Section Duration: 1 minute
