{"id":8618,"date":"2026-02-19T10:07:25","date_gmt":"2026-02-19T15:07:25","guid":{"rendered":"https:\/\/frontendmasters.com\/blog\/?p=8618"},"modified":"2026-02-19T10:07:26","modified_gmt":"2026-02-19T15:07:26","slug":"ai-hates-ambiguity-a-guide-to-probability","status":"publish","type":"post","link":"https:\/\/frontendmasters.com\/blog\/ai-hates-ambiguity-a-guide-to-probability\/","title":{"rendered":"AI Hates Ambiguity: A Guide to Probability"},"content":{"rendered":"\n<p>We often treat Large Language Models (LLMs) like magic chat boxes. Brilliant colleagues, ever awaiting our next question. This isn&#8217;t terribly far off these days. We&#8217;re certainly past the <a href=\"https:\/\/bsky.app\/profile\/damonberes.com\/post\/3mdjwxhps5c2r\">stochastic parrot<\/a> phase. As with a human colleague, the more specific we are in our questions and requests, the more useful the conversation will be.<\/p>\n\n\n\n<p>Every time you send a prompt, you are defining a <strong>probability distribution<\/strong>. You provide the context (the input tokens), and the model calculates the most likely\/useful next set of output tokens based on the weights in its neural network.<\/p>\n\n\n\n<p>When you <strong>constrain<\/strong> that distribution effectively, the result feels magical: precise, production-ready code. But when we force the AI to <strong>guess<\/strong> our intent \u2014 because we were vague, assumed it &#8220;knew the context,&#8221; or skipped the edge cases \u2014 we <strong>widen the continuation space<\/strong>. We force the model to choose from a massive array of potential answers, most of which are mediocre.<\/p>\n\n\n\n<p>In engineering, we often call bogus results &#8220;hallucinations,&#8221; but in practice, they&#8217;re <strong>probability drift<\/strong> caused by unclear guidance. 
It happens because we failed to anchor the model\u2019s completion path to a specific, high-quality distribution.<\/p>\n\n\n\n<p>To fix this, we need to stop &#8220;chatting&#8221; and start <strong>architecting<\/strong>. Let\u2019s look at exactly what happens when we ignore the mechanism.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The &#8220;Hostile&#8221; Prompt<\/h2>\n\n\n\n<p>I call the prompt below &#8220;hostile&#8221; because it ignores the model&#8217;s statistical reality. It treats the AI like a mind reader rather than a pattern matcher.<\/p>\n\n\n\n<p>The Prompt:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote ticss-a2439d42 prompt is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Write a function to fetch user data from an API and save it to state.<\/p>\n<\/blockquote>\n\n\n\n<p>In this simple request, there are three distinct <strong>specification gaps <\/strong>that will break your production app:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>The Tutorial Bias: <\/strong>The model\u2019s training data is dominated by simple tutorials where APIs never fail. 
Without constraints, it defaults to this &#8220;Happy Path&#8221; because it is statistically the most common pattern.<\/li>\n\n\n\n<li><strong>Type Blindness: <\/strong>It generates generic JavaScript instead of strict TypeScript because the constraints weren&#8217;t negotiated.<\/li>\n\n\n\n<li><strong>The State Gap: <\/strong>It writes the fetch logic immediately (prediction) without handling intermediate states (loading\/error), causing UI flashes.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">\ud83d\udee0 The &#8220;Before&#8221; Output (The Drift)<\/h2>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-1\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript\"><span class=\"hljs-comment\">\/\/ The AI's \"Default\" Response<\/span>\nuseEffect(<span class=\"hljs-function\"><span class=\"hljs-params\">()<\/span> =&gt;<\/span> {\n  fetch(<span class=\"hljs-string\">'\/api\/users'<\/span>)\n    .then(<span class=\"hljs-function\"><span class=\"hljs-params\">res<\/span> =&gt;<\/span> res.json())\n    .then(<span class=\"hljs-function\"><span class=\"hljs-params\">data<\/span> =&gt;<\/span> setUsers(data)); <span class=\"hljs-comment\">\/\/ Risky: No loading state, no error handling, race conditions.<\/span>\n}, &#91;]);<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-1\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>This code isn&#8217;t &#8220;broken&#8221;; it&#8217;s just optimized for brevity, not production. It lacks cancellation, error boundaries, and loading indicators.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Theory (Why This Happens)<\/h3>\n\n\n\n<p>Why did the AI give us such mediocre code? 
It wasn&#8217;t because it&#8217;s &#8220;dumb.&#8221; It was a failure of <strong>Contextual Anchoring<\/strong>.<\/p>\n\n\n\n<p>To fix this, we need to respect the architecture. The model operates on a complex Transformer architecture, but the behavior can be summarized as:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Ingest: <\/strong>It maps your input tokens into a high-dimensional vector space.<\/li>\n\n\n\n<li><strong>Attention: <\/strong>It calculates &#8220;attention weights&#8221; \u2014 deciding which previous tokens are most relevant to predicting the next one.<\/li>\n\n\n\n<li><strong>Sampling: <\/strong>It selects the next token based on the calculated probabilities.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">The Limitation: Attention Decay<\/h3>\n\n\n\n<p>Critically, the model&#8217;s attention mechanism is finite. It suffers from <strong>Token Locality<\/strong>. As a conversation grows longer, the influence of earlier tokens (like your initial instructions) can dilute.<\/p>\n\n\n\n<p>If you paste a 500-line file and ask for a refactor at the bottom, the model is statistically less likely to &#8220;attend&#8221; to the specific style guide you pasted at the very top. To combat this, effective engineers re-inject critical constraints (like &#8220;Remember to use strict TypeScript&#8221;) closer to the generation point.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The &#8220;Likelihood&#8221; Mistake<\/h3>\n\n\n\n<p>In our hostile prompt, we ignored this mechanism. We provided a short, vague input, which left the &#8220;search space&#8221; for the answer too wide. When you say &#8220;Write a function&#8221;, the model maximizes likelihood by choosing the path of least resistance: the tutorial snippet. 
Every time we leave a constraint undefined, the model fills the gap with the most common pattern in its dataset.<\/p>\n\n\n\n<p>We need to move from <strong>Implied Intent <\/strong>(hoping it gets it) to <strong>Explicit Constraint <\/strong>(forcing the distribution toward quality).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Solutions<\/h2>\n\n\n\n<p>We are going to make three specific negotiations with the AI to ensure stability.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>The Persona Negotiation (System Context)<\/strong>: The AI isn&#8217;t &#8220;wrong&#8221; when it gives you a simple script; it is fulfilling the pattern of a &#8220;helpful assistant&#8221;. We need to narrow the distribution to &#8220;Senior Engineering.&#8221; The fix here is to use a system prompt of sorts, or global context, such as a <code>claude.md<\/code> or <code>agents.md<\/code> file. It could have context like this: <em>You are a Senior Frontend Engineer who prioritizes defensive coding. You reject &#8220;happy path&#8221; code. You always implement error handling, type safety, and cleanup functions.<\/em><\/li>\n\n\n\n<li><strong>The Format Negotiation (Output Constraints): <\/strong>Just as <a href=\"https:\/\/frontendmasters.com\/blog\/the-browser-hates-surprises\/\">the browser struggles<\/a> when elements load out of order, the AI struggles when it mixes conversational filler with code. We need to force a strict structure. The fix here is to include extra information in our prompts, to the tune of: <em>Return only the code block. Do not add introductory text. Ensure the code is strictly typed TypeScript.<\/em><\/li>\n\n\n\n<li><strong>The Logic Anchor (Chain of Thought): <\/strong>This is the most critical technique. Normally, the AI predicts code token-by-token immediately. This is like coding without a plan. 
By asking for a &#8220;plan&#8221; first, we force the model to generate tokens that <strong>bias the attention weights<\/strong> for the subsequent code. If the model writes &#8220;I must handle the race condition&#8221; <em>before<\/em> it writes the code, the probability of it actually writing AbortController skyrockets. The fix here is to ask for a plan first, like: <em>Before generating the code, list the 3 most likely failure modes for this feature (e.g., race conditions, network errors) and explain how you will prevent them.<\/em><\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">The &#8220;Stable&#8221; Interaction<\/h3>\n\n\n\n<p>Here is a prompt asking for the same thing we asked for originally, but negotiated correctly.<\/p>\n\n\n\n<p>The Orchestrated Prompt:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote prompt is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Write a useUsers hook.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Use AbortController to handle unmounting (Race Condition).<\/li>\n\n\n\n<li>Strictly type the response.<\/li>\n\n\n\n<li>Return a tuple: [data, loading, error]<\/li>\n<\/ol>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udee0 The &#8220;After&#8221; Output<\/h3>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-2\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript\"><span class=\"hljs-comment\">\/\/ The AI's \"Production\" Response<\/span>\n<span class=\"hljs-keyword\">const<\/span> useUsers = (): &#91;User&#91;], boolean, <span class=\"hljs-built_in\">Error<\/span> | <span class=\"hljs-literal\">null<\/span>] =&gt; {\n  <span class=\"hljs-keyword\">const<\/span> &#91;data, setData] = useState&lt;User&#91;]&gt;(&#91;]);\n  <span class=\"hljs-keyword\">const<\/span> &#91;loading, setLoading] = useState&lt;boolean&gt;(<span 
class=\"hljs-literal\">false<\/span>);\n  <span class=\"hljs-keyword\">const<\/span> &#91;error, setError] = useState&lt;<span class=\"hljs-built_in\">Error<\/span> | <span class=\"hljs-literal\">null<\/span>&gt;(<span class=\"hljs-literal\">null<\/span>);\n\n  useEffect(<span class=\"hljs-function\"><span class=\"hljs-params\">()<\/span> =&gt;<\/span> {\n    <span class=\"hljs-keyword\">const<\/span> controller = <span class=\"hljs-keyword\">new<\/span> AbortController();\n    setLoading(<span class=\"hljs-literal\">true<\/span>);\n    setError(<span class=\"hljs-literal\">null<\/span>); <span class=\"hljs-comment\">\/\/ Reset error state on new fetch<\/span>\n    <span class=\"hljs-keyword\">const<\/span> fetchData = <span class=\"hljs-keyword\">async<\/span> () =&gt; {\n      <span class=\"hljs-keyword\">try<\/span> {\n        <span class=\"hljs-keyword\">const<\/span> res = <span class=\"hljs-keyword\">await<\/span> fetch(<span class=\"hljs-string\">\"\/api\/users\"<\/span>, {\n          <span class=\"hljs-attr\">signal<\/span>: controller.signal,\n        });\n        <span class=\"hljs-keyword\">if<\/span> (!res.ok) <span class=\"hljs-keyword\">throw<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-built_in\">Error<\/span>(<span class=\"hljs-string\">`Error: <span class=\"hljs-subst\">${res.status}<\/span>`<\/span>);\n        <span class=\"hljs-keyword\">const<\/span> result = <span class=\"hljs-keyword\">await<\/span> res.json();\n        setData(result);\n      } <span class=\"hljs-keyword\">catch<\/span> (err) {\n        <span class=\"hljs-keyword\">if<\/span> (err.name !== <span class=\"hljs-string\">\"AbortError\"<\/span>) {\n          setError(err <span class=\"hljs-keyword\">as<\/span> <span class=\"hljs-built_in\">Error<\/span>);\n        }\n      } <span class=\"hljs-keyword\">finally<\/span> {\n        <span class=\"hljs-keyword\">if<\/span> (!controller.signal.aborted) {\n          setLoading(<span 
class=\"hljs-literal\">false<\/span>); <span class=\"hljs-comment\">\/\/ Guard against unmounted state updates<\/span>\n        }\n      }\n    };\n    fetchData();\n    <span class=\"hljs-keyword\">return<\/span> <span class=\"hljs-function\"><span class=\"hljs-params\">()<\/span> =&gt;<\/span> controller.abort(); <span class=\"hljs-comment\">\/\/ Cleanup negotiated.<\/span>\n  }, &#91;]);\n\n  <span class=\"hljs-keyword\">return<\/span> &#91;data, loading, error]; <span class=\"hljs-comment\">\/\/ The tuple promised by the signature<\/span>\n};\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-2\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>Notice the difference. It handles the loading state. It resets errors on retry. It cleans up after itself. It feels like it was written by a human engineer because we gave the model the constraints it needed <em>before <\/em>it started predicting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">A Note on Limitations<\/h3>\n\n\n\n<p>It is important to acknowledge that prompt engineering is not magic. Even the most perfectly constrained prompt cannot force a model to solve a problem that exceeds its training data or reasoning capabilities. If the model simply doesn&#8217;t know a library, no amount of &#8220;persona setting&#8221; will teach it.<\/p>\n\n\n\n<p>However, for the vast majority of daily engineering tasks, the failure point is not the model&#8217;s capability \u2014 it is the prompt&#8217;s ambiguity.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Prompt Engineering is not about &#8220;tricking&#8221; the machine; it is about <strong>constraining <\/strong>the machine. Every bug, every generic tutorial script, and every &#8220;hallucination&#8221; is a signal that the continuation space was too wide. 
We failed to give the AI the context it needed to converge on the right answer.<\/p>\n\n\n\n<p>Stop hoping for good code. Start <strong>architecting <\/strong>for it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The more effort you put in to what you put in, the higher quality you&#8217;re going to get out.<\/p>\n","protected":false},"author":43,"featured_media":8647,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"sig_custom_text":"","sig_image_type":"featured-image","sig_custom_image":0,"sig_is_disabled":false,"inline_featured_image":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[104],"class_list":["post-8618","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog-post","tag-ai"],"acf":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/frontendmasters.com\/blog\/wp-content\/uploads\/2026\/02\/architect.jpg?fit=2000%2C1200&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/posts\/8618","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/users\/43"}],"replies":[{"embeddable":true,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/comments?post=8618"}],"version-history":[{"count":21,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/posts\/8618\/revisions"}],"predecessor-version":[{"id":8653,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/posts\/8618\/revisions\/8653"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/media\/8647"}],"wp:attachment":[{"href":"https:\/\/frontendmasters.com\/blog\/
wp-json\/wp\/v2\/media?parent=8618"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/categories?post=8618"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/frontendmasters.com\/blog\/wp-json\/wp\/v2\/tags?post=8618"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}