A number of the big players are coming out with their own AI coding assistants (e.g., OpenAI’s Codex, Anthropic’s Claude Code, and Google’s Gemini CLI). However, one of the advantages of using a third-party tool like Cursor is that you have the option to choose from a wide selection of models. The downside—of course—is that, as Uncle Ben would always say, “With great power comes great responsibility.”
Cursor doesn’t just give you a single AI model and call it a day—it hands you a buffet. You’ve got heavy hitters like OpenAI’s GPT series (now including the newly released GPT-5), Anthropic’s Claude models (including the shiny new Opus 4.1), Google’s Gemini, along with Cursor’s own hosted options and even local models you can run on your machine.
Different models excel in different areas, and selecting wisely has a significant impact on quality, latency, and cost. Think of it like picking the right guitar for the gig—you could play metal riffs on a nylon-string classical, but wouldn’t you rather have the right tool for the job?
A Word on “Auto” Mode
Cursor also offers Auto mode, which will pick a model for you based on the complexity of your query and current server reliability. It’s like autopilot—but if you care about cost or predictability, it’s worth picking models manually. Cursor’s documentation describes it as selecting “the premium model best fit for the immediate task” and “automatically switch[ing] models” when output quality or availability dips. In practice, it’s a reliability‑first, hands‑off default so you can keep coding without thinking about providers.
Use Auto when you want to stay in flow and avoid babysitting model choice. It’s especially handy for day‑to‑day edits, smaller refactors, explanation/QA over the codebase, and any situation where provider hiccups would otherwise force you to switch models manually. Because Auto can detect degraded performance and hop to a healthier model, it reduces stalls during outages or rate‑limit blips.
Auto is also a good “first try” when you’re unsure which model style fits—Cursor’s guidance explicitly calls it a safe default. If you later notice the conversation needs a different behavior (more initiative vs. tighter instruction‑following), you can switch and continue. But, with that said, let’s dive into the differences between the models themselves for those situations where you want to take control of the wheel.
Nota bene: A lot of evaluating how “good” a model is for a given task is a subjective art. So, for this post, we’re going to strike a careful balance between my own experience and a requisite amount of reading other people’s hot takes on Reddit, so that you don’t have to subject yourself to that.
Claude Models (Sonnet, Opus, Opus 4.1)
Claude has become a fan favorite in Cursor, especially for frontend work, UI/UX refactoring, and code simplification. I will say, I like to think that I am pretty good at this whole frontend engineering schtick, but even so, it still manages to impress me sometimes.
- Claude 3.5 Sonnet – Often the “default choice” for coding tasks. It’s fast, reliable, and has a knack for simplifying messy code without losing nuance.
- Claude 4 Opus – Anthropic’s flagship for deep reasoning. Excellent for architectural planning and critical refactors, though slower and pricier.
- Claude 4.1 Opus – The newest version, with sharper reasoning and longer context windows. This is the model you pull out when you’re dealing with a sprawling repo or thorny system design and you want answers that feel almost like a senior architect wrote them.
Trade-off: Claude models are sometimes cautious—they’ll decline tasks that a GPT model might at least attempt. But the output is usually more focused and aligned with best practices. I’ve also noticed that Claude has a tendency to get sidetracked and work on other tangentially related tasks that I didn’t explicitly ask for. That said, I’m guilty of this too.
GPT Models (GPT-3.5, GPT-4, GPT-4o, o3, GPT-5)
OpenAI’s GPT line has been the workhorse of AI coding.
- GPT-3.5 – Blazing fast and cheap, perfect for boilerplate generation and small tasks.
- GPT-4 / GPT-4o – Solid all-rounders. Great for logic-heavy work, nuanced refactors, and design patterns. GPT-4o is especially nice as a “daily driver” because it balances cost, speed, and capability.
- o3 – A variant tuned for better reasoning and structured answers. Handy for debugging or step-by-step problem solving.
- GPT-5 – The new heavyweight. Think GPT-4 but with significantly deeper reasoning, longer context, and a much better grasp of codebases at scale. It’s particularly strong at handling multi-file architectural changes and design discussions. If GPT-4 was like working with a diligent senior dev, GPT-5 feels closer to having a staff engineer who can keep the whole system in their head.
Trade-off: GPT models sometimes get “lazy”—they’ll sketch a partial solution instead of finishing the job. But when you want factual grounding or logic-intensive brainstorming, they’re hard to beat. GPT-5 in particular tends to go slower and check in more often. So, it’s a bit more of a hands-on experience than the Claude models. That said, given Claude’s tendency to go on side quests, I am not sure this is a bad thing. GPT-5 will often do the bare minimum but then come to you with suggestions for what it ought to do next, and I find myself either agreeing or choosing a subset of its suggestions.
Gemini Models (Gemini 2.5 Pro)
Google’s Gemini slots in nicely for certain tasks: complex design, deep bug-hunting, and rapid completions. It’s more of a specialist tool—less universal than Claude or GPT, but very effective when you hit the right workload. Historically, one of Gemini’s perks was its massive context window (around 2 million tokens). In the months since it was released, however, other models have caught up—namely Opus and GPT-5. Even Sonnet 4 now rocks a 1 million token context window.
I typically find myself using Gemini for research tasks. “Hey Gemini, look over my code base and come up with some suggestions for how I can make my tests less flaky and go write them to this file.” Its large context window makes it great for these kinds of tasks. It’s no slouch at day-to-day coding either. I just usually find myself reaching for something lighter—and cheaper.
DeepSeek Coder
Cursor also offers DeepSeek Coder, a leaner, cost-effective option hosted directly by Cursor. It’s good for troubleshooting and analysis, and useful if you want more privacy and predictable costs. That said, it doesn’t quite match the top-tier frontier models for heavy generative work.
Local Models (Llama 2 Derivatives, etc.)
Sometimes you just need to keep everything on your own machine. Cursor supports local models, which are slower and less powerful but guarantee maximum privacy. These shine if you’re working with highly sensitive code or under strict compliance requirements. This is not my area of expertise. Mainly because my four-year-old MacBook can’t run these models at the same speed as one of OpenAI’s datacenters can.
Model Selection Strategy
Here are some general heuristics I’ve found useful:
- For small stuff (boilerplate, stubs, quick utilities): GPT-4o or a local model keeps things fast and cheap.
- For day-to-day coding: Claude Sonnet 4 and GPT-4.1 are solid defaults. They balance reliability with performance. Gemini 2.5 Flash is also a strong contender in this department.
- For heavy lifting (large refactors, architecture, critical business logic): GPT-5 or Claude Opus 4.1 are the power tools. They’re not cheap, but it often costs less to get it right the first time. What I’ll typically do is have them write their plan to a Markdown file, review it, and then let a lighter-weight model take over from there.
- When stuck: Swap models. If Claude hesitates, try GPT. If GPT spins in circles, Claude often cuts to the chase. This is not a super scientific approach, but it’s wildly effective—or at least it feels that way.
- Privacy first: Use local models or Cursor-hosted DeepSeek when your code should never leave your machine. I’ve traditionally worked on open-source stuff. So, this hasn’t been a huge concern of mine, personally.
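If it helps to see those rules of thumb in one place, here is a rough lookup table in TypeScript. It’s purely illustrative: the model names are just the ones mentioned above, and the “right” defaults will have drifted by the time you read this.

```ts
// A rough lookup table for the heuristics above. Purely illustrative;
// the sensible defaults will drift as new models ship.
type Workload = "boilerplate" | "daily-driver" | "heavy-lifting" | "privacy-first";

const modelFor: Record<Workload, string[]> = {
  "boilerplate": ["gpt-4o", "a local model"],
  "daily-driver": ["claude-sonnet-4", "gpt-4.1", "gemini-2.5-flash"],
  "heavy-lifting": ["gpt-5", "claude-opus-4.1"],
  "privacy-first": ["a local model", "deepseek (Cursor-hosted)"],
};

console.log(modelFor["heavy-lifting"]); // ["gpt-5", "claude-opus-4.1"]
```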
Editor’s note: If you really want to level up with your AI coding skills, you should go from here right to Steve’s course: Cursor & Claude Code: Professional AI Setup.
Evaluating New Models
New models drop all the time, which raises the question: How should you think about evaluating a new model release to see if it’s a good fit for your workflow?
- Capability – Can it actually ship fixes in your codebase, not just talk about them? Reasoning‑forward models like OpenAI’s o3 and hybrid “thinking” models like Claude 3.7 Sonnet are pitched for deeper analysis; use them when you expect layered reasoning or ambiguous requirements.
- Behavior – Does it take initiative or wait for explicit instructions? Cursor’s model guide groups “thinking models” (e.g., o3, Gemini 2.5 Pro) versus “non‑thinking models” (e.g., Claude‑4‑Sonnet, GPT‑4.1) and spells out when each style helps. Assertive models are great for exploration and refactors; obedient models shine on surgical edits.
- Context – Do you need a lot of context right now? If you’re touching broad cross‑cutting concerns, enable Max Mode on models that support 1M‑token windows and observe whether plan quality improves enough to justify the slower, pricier runs. Having a bigger context window isn’t always a good thing. Regardless of the model’s maximum context window size, the more you load into that window, the longer it takes to process all of those tokens. Generally speaking, having the right context beats having more context.
- Cost and reliability – Cursor bills at provider API rates; Auto exists to keep you moving when a provider hiccups. New models often carry different throughput/price curves—compare under your real workload, not just benchmarks. Cost is tricky to evaluate because if a model costs more per token but can accomplish the task in fewer tokens, it might end up being cheaper when all is said and done (see the quick sketch below).
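To make that last point concrete, here is a back-of-the-envelope sketch. Every number in it is hypothetical; the prices and token counts are placeholders, so plug in the real rates from your provider’s pricing page and the token usage from your own runs.

```ts
// A back-of-the-envelope cost comparison. The prices and token counts below
// are made-up placeholders: swap in current rates for the models you're comparing.
interface ModelRun {
  name: string;
  inputPricePerMTok: number;  // USD per 1M input tokens (hypothetical)
  outputPricePerMTok: number; // USD per 1M output tokens (hypothetical)
  inputTokens: number;        // tokens the run actually consumed
  outputTokens: number;       // tokens the run actually produced
}

function effectiveCost(run: ModelRun): number {
  return (
    (run.inputTokens / 1_000_000) * run.inputPricePerMTok +
    (run.outputTokens / 1_000_000) * run.outputPricePerMTok
  );
}

// A pricier-per-token model that finishes the job in one tight pass...
const frontier: ModelRun = {
  name: "big-frontier-model",
  inputPricePerMTok: 15,
  outputPricePerMTok: 75,
  inputTokens: 40_000,
  outputTokens: 6_000,
};

// ...versus a cheaper model that needs several retries and a lot more context.
const budget: ModelRun = {
  name: "cheaper-model",
  inputPricePerMTok: 3,
  outputPricePerMTok: 15,
  inputTokens: 400_000,
  outputTokens: 60_000,
};

console.log(frontier.name, effectiveCost(frontier).toFixed(2)); // ≈ $1.05
console.log(budget.name, effectiveCost(budget).toFixed(2));     // ≈ $2.10
```

In this (made-up) scenario, the “expensive” model ends up roughly half the price because it didn’t burn tokens spinning its wheels.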
Here is my pseudo-scientific guide for kicking the tires on a new model.
- Freeze variables. Use the same branch, same repo state, and the same prompt for each run. Turn Auto off when you’re pinning a candidate so you’re not measuring routing noise. Cursor’s guide confirms Auto isn’t task‑aware and excludes o3—so when you test o3 or any very new model, pin it.
- Pick three task archetypes. Choose one surgical edit, one bug‑hunt, and one broader refactor. That trio exposes obedience, reasoning, and context behavior in a single pass. Cursor’s “modes” page clarifies that Agent can run commands and do multi‑file edits—ideal for these trials.
- As Peter Drucker (or John Doerr, but I digress) used to say: measure what matters. For each task and model, record whether the tests passed, how much code it modified, whether it followed your constraints, how many agent tool calls and shell runs it made, and the wall‑clock duration. Cursor’s headless CLI can stream structured events that include the chosen model and per‑request timing—perfect for quick logging (see the sketch below).
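Here is a minimal sketch of what that logging could look like. The field names, the model-evals.jsonl filename, and the example values are all assumptions on my part; adapt them to whatever you actually record, or wire them up to whatever the headless CLI emits.

```ts
// model-eval-log.ts: a minimal sketch for logging per-run results to a JSONL file.
// The field names below are suggestions, not anything Cursor prescribes.
import { appendFileSync } from "node:fs";

interface EvalRun {
  model: string;            // the model you pinned for this run
  task: "surgical-edit" | "bug-hunt" | "refactor";
  testsPassed: boolean;
  filesModified: number;
  followedConstraints: boolean;
  toolCalls: number;        // agent tool calls / shell runs you counted
  durationSeconds: number;  // wall-clock time for the run
  notes?: string;
}

function logRun(run: EvalRun, file = "model-evals.jsonl"): void {
  const entry = { timestamp: new Date().toISOString(), ...run };
  appendFileSync(file, JSON.stringify(entry) + "\n");
}

// Example: record a single trial after you've run it and checked the results yourself.
logRun({
  model: "gpt-5",
  task: "bug-hunt",
  testsPassed: true,
  filesModified: 3,
  followedConstraints: true,
  toolCalls: 12,
  durationSeconds: 240,
  notes: "Found the race condition but also reformatted an unrelated file.",
});
```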
Repeat this process with Max Mode if the model you’re evaluating advertises giant context. You’re testing whether the larger window yields better plans or just slower ones.
Wrapping Up
Model choice in Cursor isn’t just about “which AI is best”—it’s about matching the right tool to the task. Claude excels at simplifying and clarifying, GPT shines at reasoning and factual grounding, Gemini offers design chops, and local models guard your privacy.
And with GPT-5 and Opus 4.1 now in the mix, we’re entering a phase where models can reason about your codebase almost like a human teammate. The trick is knowing when to bring in the heavy artillery and when a lighter model will do the job faster and cheaper.