Last Tuesday we shipped a new post to the Market Correct blog, built a six-image LinkedIn carousel for the agency, and ran a Google Ads search terms audit for a client. All three, finished before lunch.
That isn't a flex. It's what happens when the AI you work in already knows your brand voice, your SEO playbook, and what a clean negative keyword recommendation actually looks like at our agency. Most teams open ChatGPT, type a prompt, paste the result, and start over the next morning. We don't. We run on Claude skills.
A Claude skill is a small instruction pack the model loads on demand. Each one teaches Claude how we want a specific job done. We have around 30 of them installed across the agency right now. We touch about 12 of them every single day. The ones doing the most work are the custom ones we wrote ourselves, against our own brand voice, our own SEO checklist, and our own way of running an account audit.
This post is the actual stack. What we use, what each skill does, how we build new ones, and what it looks like when the work compounds instead of restarting from scratch every morning.
What are Claude skills?
A Claude skill is a packaged set of instructions and references that Claude Code loads on demand when a task matches its description. Skills sit in a folder. They fire by name or by intent. They run inside the conversation itself, no separate tool required. They're how Anthropic lets you teach Claude a specific job without retraining the model.
The format is simple. Each skill is a folder with a SKILL.md file. The frontmatter tells the model when to load the skill. The body teaches the model the rules. Some skills include reference files, templates, or example outputs. Some are short and pointed. Others are long, opinionated playbooks. The whole thing lives on disk, gets versioned in git, and travels with the project or the user.
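To make that concrete, here's a minimal SKILL.md skeleton. The skill name, rules, and file names are hypothetical stand-ins, and the exact frontmatter fields are whatever Anthropic's current docs specify, so treat this as a sketch of the shape rather than a spec:

```markdown
---
name: blog-post-production
description: Use when drafting or editing posts for the company blog. Covers keyword targeting, structure, and deliverable format.
---

# Blog post production

## Rules
- Open with the answer, not a preamble.
- Every post includes an FAQ block and internal links to service pages.

## References
- See `style-guide.md` in this folder for the full voice rules.
```

The description in the frontmatter is what tells the model when to load the skill; the body is the job description it follows once loaded.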
Anthropic publishes a growing public catalog at github.com/anthropics/skills. That repo is the starting point for any new category of work we take on. We pull what's useful, fork what almost fits, and write our own when nothing matches the way we actually do the job.
The mental model that helped us most was this. A prompt is a request. A skill is a job description. The first lives for one conversation. The second lives across thousands. For a service business that does similar kinds of work day after day, the skill is the unit that compounds.
Why a marketing agency runs on skills
An agency is a repetition machine dressed up as creative work. Every blog post follows roughly the same shape. Every Google Ads audit checks roughly the same things. Every paid social brief lists roughly the same fields. The art is in the judgment. The work is in the consistency. AI prompting, on its own, doesn't give you that consistency. The same person typing the same request twice in two different sessions will get two different outputs, structured two different ways, with two different sets of conventions.
That's fine for solo experimentation. It's a problem at agency scale. When five people on a team are producing client-facing content, the variance between their outputs becomes the agency's brand whether anyone meant for it to or not.
Skills fix that. Once a skill exists for a job, every person on the team gets the same rules applied the same way. Banned words stay banned. The opening paragraph follows the format. The internal links go to the correct service pages. Schema markup matches the spec. The output isn't perfect on the first pass, but it's consistent, which is the precondition for editing being useful instead of being a rewrite.
Of the roughly 30 skills installed across the agency, about 12 are touched every working day. The rest run weekly, monthly, or on demand for specific client situations.
The other reason we run on skills is that they encode the agency's institutional memory in a place the model can read. The way we audit a Google Ads account isn't in someone's head. It's in a skill. The way we structure a paid social brief isn't tribal knowledge. It's in a skill. When we hire someone new, the skill stack is part of the onboarding. When the team grows, the rules don't drift.
The skills we use every day
Here's the actual working set, broken out by what each one owns. The ones marked custom are skills we wrote ourselves. The ones marked official are pulled from Anthropic's public catalog and either used as-is or lightly modified.
| Skill | What it owns | Source |
|---|---|---|
| market-correct | Internal. Our brand context, editorial rules, agency facts. Loads on our own marketing work, not client work. | Custom |
| market-correct-blog | Internal. End-to-end production for posts shipping to the Market Correct blog only. | Custom |
| market-correct-humanizer | Internal. Enforces our own brand voice on our own content. Client work uses each client's voice instead. | Custom |
| seo-geo | SEO and Generative Engine Optimization for ranking and AI citation | Custom (forked) |
| ai-seo | Optimizing content for ChatGPT, Perplexity, and Google AI Overviews | Official |
| claude-ads suite | Audits and creative for Google, Meta, LinkedIn, TikTok, Microsoft, Apple Ads | Official |
| brand-identity | Brand strategy and identity design for new client engagements | Official |
| ui-ux-pro-max | Design intelligence with curated styles, palettes, and font pairings | Official |
| typeset | Typography fixes, hierarchy, sizing, weight consistency on UI builds | Official |
| colorize | Adding strategic color when a layout reads too monochromatic | Official |
| copywriting | Long-form persuasive writing in the Ogilvy and direct-response tradition | Official |
| skill-creator | The meta-skill that builds new skills, the one we use most when adding to the stack | Official |
The custom ones do the heavy lifting
The four custom skills at the top of that table account for most of our internal output, the work that ships under the Market Correct name. They're not loaded for client work, which has its own brand voice, its own SEO playbook, and its own deliverable format. Worth saying that up front so the rest of this section reads correctly.
The market-correct skill is the brand context, the spine. It tells Claude who we are, what we sell, what numbers to preserve verbatim ($550M+ managed, 400+ clients, 12+ years), and which clients we've worked with. Every conversation about our own marketing benefits from that context being present without anyone re-typing it. It does not load on client engagements.
The market-correct-blog skill is the production engine for posts shipping to the Market Correct blog. Keyword research, on-page SEO requirements, the GEO patterns from the Princeton research, the required components a post must include, and the deliverable format we publish to our own site. Client blog work, when we do it, runs on the client's own voice and structure rules, usually captured in a client-specific custom skill we build for that engagement.
The market-correct-humanizer is the voice enforcer for our content. It's based on the patterns from Wikipedia's Signs of AI writing guide, plus the rules we layered on top. Banned punctuation. Mandatory contractions. The list of words that get rewritten on sight. The opening paragraph rules. The paragraph rhythm checks. Every piece of Market Correct content runs through it before publishing. Client content uses the client's voice, which means a different skill or a different review pass.
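To show the kind of mechanical checks a humanizer skill encodes, here's a rough sketch in Python. The banned-word list, punctuation rules, and contraction list are hypothetical stand-ins, not our actual lists, and the real skill runs as model instructions rather than code:

```python
import re

# Hypothetical stand-ins for the real lists the skill carries.
BANNED_WORDS = {"delve", "leverage", "robust", "seamless"}
BANNED_PUNCTUATION = {"—"}  # em dash
UNCONTRACTED = ("do not", "it is", "we are")

def lint_voice(text: str) -> list[str]:
    """Return a list of human-readable voice violations found in text."""
    issues = []
    for word in BANNED_WORDS:
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            issues.append(f"banned word: {word}")
    for mark in BANNED_PUNCTUATION:
        if mark in text:
            issues.append(f"banned punctuation: {mark}")
    # Flag uncontracted phrases the skill would rewrite as contractions.
    for phrase in UNCONTRACTED:
        if re.search(rf"\b{phrase}\b", text, re.IGNORECASE):
            issues.append(f"missing contraction: {phrase}")
    return issues
```

Running `lint_voice("We leverage tools — and we do not stop.")` flags the banned word, the em dash, and the missing contraction; clean copy comes back with an empty list.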
The seo-geo skill combines traditional SEO and the newer practice of optimizing for AI search. The Princeton GEO research, available on arxiv.org, ranks nine optimization methods by visibility lift. Citations, statistics, authoritative tone, fluency. We use it on our own content. For client SEO work it's the same methodology applied to their domain instead of ours.
The official ones we lean on hardest
From the Anthropic catalog, the claude-ads suite earns the most use. It's a stack of platform-specific audit skills, one for each of Google, Meta, LinkedIn, TikTok, Microsoft, and Apple Ads, plus cross-platform tools for budget allocation, creative review, competitor analysis, and PPC math. When a new client hands us a paid media account, the audit work that used to take a day now takes an hour.
The ai-seo skill from Anthropic's catalog covers the methodology for getting cited in AI answers. The structural pillars, the schema markup, the content-block patterns. We use it as the input to our internal seo-geo skill, which adds agency-specific opinions and the format conventions we ship to clients.
The design skills (brand-identity, ui-ux-pro-max, typeset, colorize) run on a different rhythm. Not every day, but every week. They're the ones we reach for when we're building a landing page, refining a creative brief, or fixing a layout that doesn't feel right and we can't immediately say why.
The copywriting skill is the off-the-shelf workhorse for persuasive long-form, drawing on the Ogilvy and direct-response traditions. We use it for ad copy variants, sales pages, and email sequences. It pairs cleanly with the humanizer because the copywriting skill is about persuasion mechanics and the humanizer is about voice. Different jobs, complementary outputs.
How a typical day looks through Claude skills
The clearest way to show what a skill stack does for an agency is to walk a normal day through it. This is roughly how a Tuesday at Market Correct runs. The internal-only skills fire on internal work. Client work runs on a different set.
Morning. We're shipping a new post to the Market Correct blog about a Google Ads topic. Claude loads market-correct for our brand context, market-correct-blog for production rules, seo-geo for SEO and AI search optimization, and copywriting for the persuasive structure. The first draft is in our format, with the right schema markup, the right component blocks, and the right number of FAQs. Time from start to draft, about 25 minutes.
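For reference, "the right schema markup" here means a JSON-LD block in the page head. A minimal Article example looks something like this, with every value hypothetical:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Hypothetical post title about a Google Ads topic",
  "author": { "@type": "Organization", "name": "Market Correct" },
  "datePublished": "2025-01-14",
  "description": "Hypothetical one-sentence summary of the post."
}
```

The blog skill's job is to emit this block with the right values every time, so nobody hand-assembles it per post.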
Mid-morning. The same draft runs through market-correct-humanizer. Banned words flagged and rewritten. Em dashes stripped. Contractions enforced. Paragraph rhythm broken up where it had drifted into sameness. The output is a humanized, brand-consistent post, ready for human edit. Time, about 10 minutes.
Late morning. New client onboarding. Their Google Ads account needs an audit before the kickoff call. None of the market-correct skills load here; this is client work. We load the claude-ads Google audit skill instead, point it at the export, and get back a structured audit covering conversion tracking, account structure, keywords, Quality Score, ad assets, Performance Max, bidding, and settings. The same audit, done by hand, used to take half a day.
Lunch. The same client also runs a small programmatic budget. We load the claude-ads budget allocation and competitor analysis skills, plus the cross-platform creative audit. Three more skills, three more passes, one combined recommendation deck before the afternoon. Still client work, still no internal Market Correct skills in the mix.
Afternoon. New brand work for a different client. brand-identity handles the identity strategy. ui-ux-pro-max picks design directions. typeset pairs the type. colorize adds restraint where needed. These are all generic, client-applicable skills. The deliverable lands in something close to its final state by end of day, instead of going through three rounds of vague design feedback.
End of day. A new internal process needs documenting. We load skill-creator, talk Claude through the rules of the new process, and ship a first version of a skill that captures the work. The next time the same job comes up, the skill is already there.
Nothing about that day is unusual. What's unusual is that none of those steps required someone to remember our brand voice, the SEO checklist, the client audit framework, or the design standards. The skills carried that load, and the right skills loaded for the right context.
How we build custom skills
Every skill we've written started the same way. We did the job by hand. We saved the conversation. Then we turned the conversation into a skill.
The pattern is reliable because the hard part of writing a skill isn't the file format. The frontmatter is six lines of YAML. The hard part is being specific enough about what a good output looks like that the model can produce it without you in the room. Trying to write a skill in the abstract, before you've done the job a few times, almost always produces a vague document that doesn't actually help.
The version we use to bootstrap new skills is Anthropic's skill-creator. It walks you through the structure, asks the right questions about scope and triggers, and writes a first draft of the SKILL.md you can then edit by hand. The official documentation lives at docs.anthropic.com and is worth reading once before you write your first one from scratch.
The structure of every custom skill we build follows roughly the same shape.
- Start with a SKILL.md that has a one-paragraph description, a name, and the trigger conditions.
- Write the rules in plain language. Imagine briefing a sharp junior employee who wants to do the job well but has never seen the format before.
- Add reference files for anything too long to live in the SKILL.md, like a 500-word style guide or a list of banned words.
- Write a "what good looks like" section with one or two real examples, so the model has a target to hit.
- Test the skill on a real job. Adjust the parts that came out wrong. Re-test. Ship.
That last loop is where most of the work lives. The first version of a skill is almost never the one that ends up running every day. You learn what's missing only by using it on real work and watching where the output drifts. The skill doesn't get better by being designed harder. It gets better by being used.
When we build a custom skill instead of using an existing one
- The same prompt has been retyped or copy-pasted three times in a month
- A task has rules that need to be enforced consistently across multiple people
- An off-the-shelf skill exists but it doesn't know our brand voice or service shape
- A workflow combines several existing skills and we keep loading them in the same order
- The output of a job needs to follow a specific deliverable format we ship to clients
The opposite case is also worth saying out loud. We don't build a skill for one-off curiosity work, for tasks where the rules genuinely vary every time, or for jobs where the existing official skill already covers everything we need. Skill bloat is real. A stack of 200 skills nobody remembers is worse than a focused stack of 30 that the team actually uses.
Why custom skills beat clever prompts
The most common pushback we hear from peers is, why bother with skills when you can just write a really good prompt? The answer is that a really good prompt is a one-time win. A skill is the same win every time, for everyone on the team, on every job that fits, with no decay over a year of use.
Three things compound when work moves from prompts into skills.
First, consistency. The brand voice doesn't drift between team members. The SEO rules don't get half-applied because someone forgot a step. The audit framework runs the same way for every account because the framework is the skill, not the operator's memory of the framework.
Second, speed. A new task that fits an existing skill starts at minute one with the rules already loaded. Tasks that used to take an hour of setup before any real work happened now start producing useful output immediately. Across a year, that adds up to weeks of recovered time per person.
Third, reach. A skill written once benefits every person on the team and every conversation they have where it applies. A clever prompt benefits the person who wrote it, in the moment they used it, and never again unless they remember to dig it back up. The economics of those two paths are not comparable.
The other side of this is honesty. A skill is not a substitute for knowing the work. We've watched smart prompts fail in interesting ways when the operator didn't know enough to spot the wrong answer. Skills don't fix that. They make the experienced operator faster and the junior operator more consistent. They don't make the inexperienced operator into a senior strategist. The judgment is still ours. The acceleration is what's new.
What "always growing" actually looks like
The skill stack is treated like any other piece of agency infrastructure. It gets reviewed. It gets refactored. It gets pruned when something stops earning its place. Every couple of weeks, we run a stocktake on the stack. Which skills got used. Which didn't. Which produced output that needed heavy editing. Which produced output that shipped clean.
Skills that stopped getting used either get rewritten or deleted. Skills that produced consistently weak output get reviewed for what's missing, usually a clearer "what good looks like" section or a tighter list of banned patterns. Skills that produced consistently strong output get studied, and the patterns inside them get pulled into other skills where they apply.
The other shape this growth takes is forking and combining. A skill that worked well for one kind of client work gets forked into a sibling skill for an adjacent kind of work, with the rules adjusted. A skill that was doing two jobs gets split into two skills that each do one job better. The directory of skills slowly evolves to match how the work is actually shaped, instead of how we thought it would be shaped when we started.
If you read our companion post on how we use Granola and Pocket as our meeting stack, the pattern is the same. The tools we run aren't static. They get pruned, rewritten, and combined as we learn what actually compounds in agency work and what just feels productive in the moment.
The bottom line
If you run a service business that does similar work day after day, Claude skills are the unit of automation that actually compounds. Prompts are interesting. Skills are operational. The difference shows up in throughput, in consistency across a team, and in how much institutional knowledge is encoded somewhere the next conversation can read.
Start with the official catalog at github.com/anthropics/skills. Find the two or three skills closest to the work you do most. Use them on real jobs. When they almost fit but don't quite, fork them. When nothing fits, write your own with skill-creator. The first one will take an afternoon. The fifth one will take 30 minutes. The fiftieth one will already be paying back the time you spent building any of them.
For an agency, the real win isn't speed. It's that the work the team produces starts looking like the agency, every time, instead of looking like whoever happened to be in front of the keyboard that morning. That consistency is what clients pay for. Skills are how we keep delivering it at the rate the work demands.