The one thing your prompts are missing
Every marketer knows the dread: the blinking cursor on a blank doc.
For the blank-canvas problem, ChatGPT is the best antidote marketers have ever had.
With a single prompt, that empty page explodes into ideas, outlines, even full drafts.
But here’s the catch: not all outputs are created equal.
Why do some people get outputs that read like McKinsey slides while others get outputs that sound like jargon thrown into a blender?
The answer is context.
Think of it like photography. Two people can stand in the same location, holding the exact same camera. One takes a shot that ends up framed in a gallery.
The other takes a blurry photo of their thumb.
The difference isn’t the gear. It’s the aperture, the shutter speed, the framing, the eye for detail.
Prompts work the same way. The tool stays constant.
But the way you set the stage, the context you feed in, changes everything.
Good context is built from a handful of elements. Think of them like Lego blocks you can assemble differently depending on the task.
Here are the big ones:
Role: Tell the model who it is supposed to be. A consultant, a copywriter, a developer, a B2B CMO, a stand-up comedian. Roles change the entire flavor of the response.
Tone: Specify how it should sound. Formal, playful, sarcastic, data-driven. Without this, you get LinkedIn Beige by default - technically fine, instantly forgettable.
Format: Is the output a 200-word essay? A table? A 3-step plan? A tweet? Context isn’t just about what you say, but how you want it delivered.
Audience: Who is this for? A CEO? A customer? Different audiences require different levels of detail and persuasion.
Constraints: Word-count limits, jargon and clichés to avoid, sources to include. Constraints sharpen the output.
Examples: Few things help more than saying, “Write it like this example.” Models love patterns. Feed them one and they’ll follow.
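If you like to see ideas as code, here's a minimal sketch of stacking those Lego blocks into one prompt string. Everything here (the `build_prompt` function and its parameter names) is illustrative, not part of any library:

```python
# Sketch: assembling a prompt from the context "Lego blocks" above.
# All names are made up for illustration.

def build_prompt(task, role=None, tone=None, fmt=None,
                 audience=None, constraints=None, example=None):
    """Stack whichever context blocks you have into one prompt."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if audience:
        parts.append(f"You are writing for {audience}.")
    parts.append(task)
    if tone:
        parts.append(f"Tone: {tone}.")
    if fmt:
        parts.append(f"Format: {fmt}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    if example:
        parts.append(f"Write it like this example:\n{example}")
    return "\n".join(parts)


prompt = build_prompt(
    task="Draft a product announcement for our new analytics dashboard.",
    role="a B2B SaaS copywriter",
    tone="confident but plain-spoken",
    fmt="three short paragraphs",
    audience="busy marketing directors",
    constraints=["under 150 words", "no jargon or clichés"],
)
print(prompt)
```

Drop any block you don't need; even two or three of them beats a bare one-liner.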
Common prompt frameworks, aka the Context field guide:
C.R.I.S.P.
Clarity: Be clear about the task.
Role: Assign the AI a persona.
Instructions: Spell out steps or constraints.
Scope: Define the boundaries (length, detail).
Purpose: State the goal or why you need it.
P.R.O.M.P.T.
Persona: Who should the AI act as?
Role or task definition.
Output format (bullet list, table, narrative).
Modifiers: Tone, style, voice.
Parameters: Constraints, limits, do’s and don’ts.
Target: Audience or intended reader.
R.E.A.C.T. (for reasoning tasks)
Reasoning: Ask the model to think step by step.
Examples: Give samples to learn from.
Action: Specify what you want done.
Constraints: Set boundaries.
Test or Verify: Ask it to self-check or validate.
Chain of thought (CoT)
Ask the model to think out loud, show its reasoning, or break a problem into steps before giving the final answer. This improves performance on logical and complex tasks.
A.C.T.
Assume a role.
Create with constraints.
Tailor to audience.
P.A.S. / A.I.D.A. adaptations
Borrow from copywriting frameworks like Problem–Agitate–Solution or Attention–Interest–Desire–Action, baked into prompts.
Roleplay / simulation
“Pretend you are X.”
“Act as if you are conducting a workshop with Y audience.”
Deliberate iteration
Step 1: Generate a baseline.
Step 2: Critique the output.
Step 3: Refine the prompt with new constraints.
Step 4: Repeat until good enough.
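The four steps above form a loop, which can be sketched like so. The `generate` and `critique` functions here are stand-ins (a real model call and your own review would go where the stubs are; nothing below is a real API):

```python
# Sketch of the deliberate-iteration loop, with stubbed-out steps.

def generate(prompt):
    # Stand-in for a model call.
    return f"draft based on: {prompt}"

def critique(draft):
    # Stand-in for your own review; returns a new constraint, or None.
    if "under 100 words" not in draft:
        return "keep it under 100 words"
    return None

prompt = "Write a welcome email for new trial users."
for _ in range(4):                       # Step 4: repeat until good enough
    draft = generate(prompt)             # Step 1: generate a baseline
    issue = critique(draft)              # Step 2: critique the output
    if issue is None:
        break
    prompt += f" Constraint: {issue}."   # Step 3: refine with new constraints
print(prompt)
```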
Meta-prompting
Ask the AI to propose better prompts or suggest ways to refine your input.
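To make one of these frameworks concrete, here's C.R.I.S.P. as a fill-in-the-blanks template. The field names follow the framework; the sample values are invented for illustration:

```python
# Sketch: the C.R.I.S.P. framework as a fill-in-the-blanks template.

CRISP_TEMPLATE = (
    "Clarity: {clarity}\n"
    "Role: {role}\n"
    "Instructions: {instructions}\n"
    "Scope: {scope}\n"
    "Purpose: {purpose}"
)

prompt = CRISP_TEMPLATE.format(
    clarity="Rewrite this landing-page headline.",
    role="Act as a senior conversion copywriter.",
    instructions="Give three options and explain each in one line.",
    scope="Headlines under 10 words.",
    purpose="We want more demo signups from cold traffic.",
)
print(prompt)
```

Swap in P.R.O.M.P.T. or R.E.A.C.T. fields the same way; the point is that each letter becomes a slot you consciously fill.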
But here’s the thing: you don’t need to use every trick in the book.
Start small. Pick one framework that feels natural.
And the next time you type into that blank chat box, ask yourself:
Am I giving this the detail it deserves?
Or am I about to get back a blurry picture of my thumb?
Yours Promptly,
Manu

