What Senior Engineers Need to Know About AI Coding Tools

Marc Grabanski

Sabrina Goldfarb, an engineer on GitHub Copilot, noticed something odd. She was getting great results from AI tools at GitHub, but the senior engineers around her kept saying the tools were terrible. Same models, wildly different experiences.

“I kept hearing the more senior engineers at my company being like, this is terrible, right? I’m not getting the right outputs,” she explains in her course. “And I was just like, why? If I’m getting really good outputs, why are you not getting really good outputs?”

So she dug in. And what she found is that prompt engineering isn’t magic. It’s a learnable technique.

Here’s the thing: senior engineers actually have a massive advantage with these tools once they learn the basics. They know what questions to ask. They understand the edge cases, the architecture decisions, the hundred small things required to ship real software. A junior developer might accept the first output an AI gives them. A senior developer knows what’s missing.

The gap isn’t about AI aptitude. It’s just technique. And the techniques work whether you’ve been coding for 20 years or 20 days.

Here’s a small example of what she means. In her course, she asks an AI to build a Prompt Library app with a simple request: save and delete functionality, clean and professional, HTML/CSS/JavaScript. The result? The AI added search functionality nobody asked for, an export button that wasn’t requested, and a save button that didn’t work.

Same task with a more specific prompt (spelling out exactly what each button should do, what to store, and what not to add), and suddenly it works. “The quality of the question directly relates to the quality of the answer,” she says. “This shouldn’t be surprising to us, right? This is the same thing with us as humans.”
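Here’s roughly what that difference looks like in practice. These prompts are illustrative, not Sabrina’s exact wording from the course:

```text
Vague:
  Build a Prompt Library app. Save and delete functionality,
  clean and professional, HTML/CSS/JavaScript.

Specific:
  Build a Prompt Library app as a single HTML file using plain
  HTML, CSS, and JavaScript. Requirements:
  - A text area and a "Save" button that adds the prompt to a
    list persisted in localStorage.
  - Each saved prompt has a "Delete" button that removes it from
    the list and from localStorage.
  - No other features: no search, no export, no tags.
  - Styling: minimal and professional, system fonts.
```

The second prompt tells the model what to store, what each button does, and (just as importantly) what not to add.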

The Fundamentals Actually Matter

The research on this is striking. There’s a technique called chain-of-thought prompting that boils down to adding “let’s think step by step” to your prompts. That’s it. Five words.

In one study Sabrina references, accuracy on reasoning tasks jumped from 17.7% to 78.7% just by adding that phrase.

“I cannot think of five words in the English language that could possibly help more with your prompts,” she says.
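In practice, this zero-shot form of chain-of-thought just means appending the phrase to whatever you were going to ask. The example below is illustrative, not from the study itself:

```text
Without chain-of-thought:
  A cache has a 90% hit rate. Hits take 1ms, misses take 50ms.
  What is the average access time?

With chain-of-thought:
  A cache has a 90% hit rate. Hits take 1ms, misses take 50ms.
  What is the average access time? Let's think step by step.
```

The added phrase nudges the model to write out its intermediate reasoning before committing to an answer, which is where the accuracy gains on reasoning tasks come from.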

This is the unsexy truth about AI-assisted development: the fundamentals matter more than the features. Prompting patterns, context management, knowing when to let an agent run versus when to just write the code yourself. Get these right and every tool gets better.

A Path Through All of This

We didn’t want to create courses about AI theory or speculation. We wanted to show you exactly how working engineers are using these tools right now, in production, to get real work done.

If you’ve never learned to prompt properly, start with Sabrina Goldfarb’s Practical Prompt Engineering. Zero-shot, one-shot, few-shot techniques, chain-of-thought prompting, structured outputs. Three hours and 43 minutes that will change how you interact with every AI tool you use.
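The “shot” terminology simply counts how many worked examples you include in the prompt. A rough sketch (the task and labels here are my own, not from the course):

```text
Zero-shot (no examples, just the task):
  Classify this commit message as "feature", "fix", or "chore":
  "bump lodash to 4.17.21"

Few-shot (worked examples first, then the real input):
  Classify each commit message as "feature", "fix", or "chore".
  "add dark mode toggle" -> feature
  "correct off-by-one in pagination" -> fix
  "bump lodash to 4.17.21" ->
```

One-shot is the same pattern with a single example. The examples show the model the exact output format you want instead of leaving it to guess.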

If you want to understand how agents actually work, Scott Moss from Netflix walks you through building agents from scratch. Not using a framework. Building the thing yourself so you understand exactly what’s happening when you hand off work to an AI agent.

If you’re already using Cursor or Claude Code but feel like you’re fighting the tools, Steve Kinney from Temporal shows you his professional AI dev setup. When to use inline edits versus background agents. How to set up guardrails. How to get unstuck when agents go off track. As one student put it: “He gives a lot of good tips and a realistic view of the capabilities of AI tools.” That realistic view is what separates useful instruction from hype.

If you want to connect AI to your actual workflows, Brian Holt from Databricks built an MCP course because he’s actively using the Model Context Protocol in his work. One student, Daniel W., said the course “set me off on my journey to create my company’s workflow MCP server, which could be used by other devs within my work community.” That’s the goal: not theoretical knowledge, but tools you use the next day.

If you want to understand the fundamentals beneath all of this, Will Sentance’s Hard Parts of Neural Networks takes you under the hood of how AI models are trained. Hand-build neural networks. Understand how prediction actually works. One student said the course “made some of the concepts in the field of AI less intimidating while building great mental models for understanding.” This depth matters because AI keeps evolving and understanding the foundations helps you adapt to whatever comes next.

Why This Matters for Senior Engineers

The companies hiring right now want people who can prompt effectively, who understand when to use agents versus when to write code themselves, who can debug when AI tools hallucinate or go off track. They want engineers who understand the technology, not just use it blindly.

Senior engineers already have the hard part: the judgment, the taste, the understanding of what good software looks like. The prompting techniques are the easy part. A few hours of learning the fundamentals, and all that experience becomes leverage.

Check out the full AI Learning Path and start with Practical Prompt Engineering. The developers who’ve spent years learning what to build are exactly the ones who’ll benefit most from learning how to ask for it.


