Field notes
The brief is the bottleneck. AI can't fix bad intake
25 April 2026
Three months ago I spoke with an ops manager at a mid-size digital agency. They had invested in AI tooling across the board: content generation, reporting, social scheduling. Productivity was supposed to be up. Instead, revision rounds had increased. The team was busier than before. She couldn't work out why.
I asked to see the client intake form. It had four fields: client name, deadline, budget, and a text box labelled "tell us about the project." That text box was doing all the work. And AI was generating first drafts from it at speed. Which meant the agency was producing more polished wrong work, faster.
Every agency I've worked with that has adopted AI for content or campaign delivery has encountered the same ceiling. The tools work. The output is competent. But revision rounds don't go down. They go sideways. Work gets done quickly, then redone.
The cause is almost never the AI. It's what went into the AI. A vague brief, handed to a human copywriter, produces a vague first draft that goes through two or three back-and-forths before it lands. Hand that same brief to an AI system and it produces a vague first draft in forty seconds: polished enough to feel finished, wrong enough to require a full rewrite. The time saving on generation is spent three times over on correction.
Garbage in, garbage out is not new advice. What's changed is the speed. This is one specific case of the broader pattern I described in *why most AI pilots fail before they ship*. Process problems get amplified, not solved, when you put AI tooling on top of them.
A brief that works for a human (one that conveys intent, tone, and rough direction) doesn't necessarily work for an AI-assisted workflow. Humans read between the lines. They ask follow-up questions in the room. They've worked with the client before and carry context in their heads.
AI systems have none of that. They process what's there. So "target audience: property investors" becomes the sum total of audience instruction, and the output reflects it: generic, undifferentiated, structurally fine, commercially useless.
A machine-parseable brief is explicit about what a human brief leaves implied:
```yaml
brief:
  client: "Northgate Property Group"
  product: "Three-bed new builds, SE London"
  target_audience:
    primary: "First-time buyers, 28-38, dual income, pre-approved mortgage"
    exclude: "Buy-to-let investors"
  tone:
    voice: "Warm, factual, not aspirational lifestyle copy"
    avoid: ["luxury", "exclusive", "dream home"]
  deliverable:
    type: "Google Ads headlines + descriptions"
    character_limits: { headline: 30, description: 90 }
    quantity: 5
  success_metric: "Click-to-enquiry rate on landing page"
  approver: "Sarah Chen, Head of Marketing"
  deadline_copy: "2026-05-02"
  deadline_approved: "2026-05-05"
```

Notice what's explicit here that most briefs leave vague: who to exclude, what words to avoid, what the actual success metric is, and who signs off. None of that requires a new CMS. It requires the intake process to ask for it.
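Structured limits also become enforceable. A minimal sketch of the check that structure makes possible; the field names mirror the example brief above and are illustrative, not a standard schema:

```python
# Check generated ad copy against the structured character limits in
# the brief, so anything over limit is flagged before it reaches review.
def over_limit(copy: dict[str, str], limits: dict[str, int]) -> dict[str, int]:
    """Return {field: excess_chars} for any field that exceeds its limit."""
    return {
        field: len(text) - limits[field]
        for field, text in copy.items()
        if field in limits and len(text) > limits[field]
    }

limits = {"headline": 30, "description": 90}
draft = {
    "headline": "Three-bed new builds in SE London, ready now",  # 44 chars
    "description": "Warm, factual copy under ninety characters fits fine.",
}
print(over_limit(draft, limits))  # {'headline': 14}
```

With an unstructured brief, this check can't exist, because the limit lives in someone's head.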
You don't need to rebuild your entire onboarding flow. Across three engagements I've observed, the biggest reduction in revision rounds came from adding just three fields to the existing brief.
1. Exclusions. Not just what the client wants. What they explicitly don't want. Tone words to avoid. Competitor territory to stay clear of. Audience segments who should not be targeted. This single field eliminates an entire category of AI output that looks right but isn't.
2. The single success metric. "Increase brand awareness" is not a metric. "Drive enquiry form completions on the landing page" is. When the AI-assisted workflow knows what success looks like, it can weight its output toward that outcome. When it doesn't, it optimises for plausibility, which is not the same thing.
3. Named approver. Not a team. Not a department. One name. This matters for the workflow, not just accountability. When approval routing is automated, with a draft moving from AI output to review folder to approver, it needs a destination. "Marketing team" is not a destination.
These three fields take an existing brief from human-readable to machine-actionable without requiring anyone to learn a new tool.
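The three fields above can be enforced with a few lines of glue code at intake. A minimal sketch, assuming the brief arrives as a dictionary shaped like the example earlier; the paths are illustrative:

```python
# Minimal intake-gate check: flag a brief that skips the three fields
# that most reduce revision rounds. Paths mirror the example brief
# above and are illustrative, not a standard schema.
REQUIRED_PATHS = [
    ("target_audience", "exclude"),   # 1. exclusions
    ("success_metric",),              # 2. single success metric
    ("approver",),                    # 3. named approver
]

def missing_fields(brief: dict) -> list[str]:
    """Return dotted paths for any required field that is absent or blank."""
    missing = []
    for path in REQUIRED_PATHS:
        node = brief
        for key in path:
            node = node.get(key) if isinstance(node, dict) else None
        if not node or not str(node).strip():
            missing.append(".".join(path))
    return missing

brief = {
    "success_metric": "Click-to-enquiry rate on landing page",
    "approver": "",  # blank, so it should be flagged
}
print(missing_fields(brief))  # ['target_audience.exclude', 'approver']
```

If this list is non-empty, work doesn't start. That's the whole gate.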
The instinct is to wait for the right system: a new project management tool, a proper CMS, the budget to build a custom portal. I'd push back on that.
The intake gate is a form. It can live in Typeform, Notion, a Google Form, or a shared Word document with mandatory fields. The structure matters more than the platform. Start with a required field list. Make the exclusions field and the success metric field non-skippable. Route completion to wherever your workflow starts: a Slack notification, a Trello card, a folder in your shared drive.
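The routing step is similarly small. A sketch assuming a Slack incoming webhook as the destination (the URL below is a placeholder, not a real endpoint); a Trello card or shared-drive folder would slot in the same way:

```python
# Sketch: route a completed intake form to wherever the workflow
# starts. Assumes a Slack incoming webhook as the notification
# target; the URL is a placeholder.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder

def build_summary(brief: dict) -> str:
    """One-glance summary for the channel where work begins."""
    return (
        f"New brief: {brief['client']}\n"
        f"Success metric: {brief['success_metric']}\n"
        f"Approver: {brief['approver']}"
    )

def notify_intake_complete(brief: dict) -> None:
    """Post the summary to Slack; swap this body for any other destination."""
    payload = json.dumps({"text": build_summary(brief)}).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add retries in production
```

The destination is interchangeable precisely because the brief is structured: every target gets the same fields in the same shape.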
What this creates is not just better AI output. It creates a constraint that forces the client conversation to happen upstream, at intake, rather than downstream, in revision rounds. The brief becomes the gate. Work doesn't start without it. That shift alone is worth more than any prompt engineering.
The AI sits downstream of all of this. When structured data arrives, it produces structured, defensible output. When structured data doesn't arrive, no amount of prompt refinement rescues the situation. And when reports get generated from vague briefs, the downstream consequence is the failure mode I covered in *your AI reporting tool is making up your client's numbers*.
Here's the commercial implication that most agencies miss when they start marketing AI-assisted services: if you're promising faster delivery, you're implicitly promising that your intake process is tight enough to support it.
A client who sends a vague email brief and gets a polished PDF back in four hours is not experiencing AI productivity. They're experiencing AI speed applied to their ambiguity, which means revision round one arrives faster than it used to. That is not a selling point. It's a liability.
The agencies I'd back long-term are the ones that make the structured brief part of the client offer. "Here's how we work: before anything starts, you complete a 15-minute intake form. It's the thing that makes everything else fast." That's honest, it's defensible, and it moves the accountability for vagueness to the right place: early, visible, and fixable before a single word is written.
The brief was always the bottleneck. AI just made it impossible to ignore.
If you're trying to identify the right intake structure for your specific workflow, that's the work of an AI Workflow Audit: a focused review of one process and the structural change that makes everything downstream of it cheaper and more reliable.