Claude skills, the way our marketing agency actually uses them

TLDR

Claude skills are reusable instruction packs the model loads on demand. We run about 30 across the agency and touch around 12 every day. The ones doing the most work are the custom ones we wrote against our own brand voice, SEO playbook, and the way we audit ad accounts.

  • Skills load automatically when a task matches their description, no copy-paste prompting required.
  • Our most-used skills are custom, not off-the-shelf, because the off-the-shelf versions don't know what good looks like for our agency.
  • Building a skill takes 30 to 60 minutes once you've done it a few times. Maintaining one is the actual work.

Last Tuesday we shipped a new post to the Market Correct blog, built a six-image LinkedIn carousel for the agency, and ran a Google Ads search terms audit for a client. All by end of day, and most of it before lunch.

That isn't a flex. It's what happens when the AI you work in already knows your brand voice, your SEO playbook, and what a clean negative keyword recommendation actually looks like at our agency. Most teams open ChatGPT, type a prompt, paste the result, and start over the next morning. We don't. We run on Claude skills.

A Claude skill is a small instruction pack the model loads on demand. Each one teaches Claude how we want a specific job done. We have around 30 of them installed across the agency right now. We touch about 12 of them every single day. The ones doing the most work are the custom ones we wrote ourselves, against our own brand voice, our own SEO checklist, and our own way of running an account audit.

This post is the actual stack. What we use, what each skill does, how we build new ones, and what it looks like when the work compounds instead of restarting from scratch every morning.

What are Claude skills?

A Claude skill is a packaged set of instructions and references that Claude Code loads on demand when a task matches its description. Skills sit in a folder. They fire by name or by intent. They run inside the same conversation as the model. They're how Anthropic lets you teach Claude a specific job without retraining the model.

The format is simple. Each skill is a folder with a SKILL.md file. The frontmatter tells the model when to load the skill. The body teaches the model the rules. Some skills include reference files, templates, or example outputs. Some are short and pointed. Others are long, opinionated playbooks. The whole thing lives on disk, gets versioned in git, and travels with the project or the user.
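
As a sketch of that format, a minimal SKILL.md might look like the following. The skill name, trigger wording, and rules here are illustrative, not pulled from one of our real skills, so treat the exact frontmatter fields as an assumption to check against Anthropic's documentation:

```markdown
---
name: blog-voice
description: >
  Enforce the house voice on blog drafts. Load whenever a draft
  is being written or edited for the company blog.
---

# Blog voice rules

- Use contractions. "Don't" beats "do not" everywhere it fits.
- Open with a concrete claim, never a throat-clearing summary.
- Keep paragraphs under four sentences.

See banned-words.md in this folder for the full rewrite-on-sight list.
```

The frontmatter tells the model when to fire; everything below it is the briefing the model reads once the skill loads.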

Anthropic publishes a growing public catalog at github.com/anthropics/skills. That repo is the starting point for any new category of work we take on. We pull what's useful, fork what almost fits, and write our own when nothing matches the way we actually do the job.

The mental model that helped us most was this. A prompt is a request. A skill is a job description. The first lives for one conversation. The second lives across thousands. For a service business that does similar kinds of work day after day, the skill is the unit that compounds.

Why a marketing agency runs on skills

An agency is a repetition machine dressed up as creative work. Every blog post follows roughly the same shape. Every Google Ads audit checks roughly the same things. Every paid social brief lists roughly the same fields. The art is in the judgment. The work is in the consistency. AI prompting, on its own, doesn't give you that consistency. The same person typing the same request twice in two different sessions will get two different outputs, structured two different ways, with two different sets of conventions.

That's fine for solo experimentation. It's a problem at agency scale. When five people on a team are producing client-facing content, the variance between their outputs becomes the agency's brand whether anyone meant for it to or not.

Skills fix that. Once a skill exists for a job, every person on the team gets the same rules applied the same way. Banned words stay banned. The opening paragraph follows the format. The internal links go to the correct service pages. Schema markup matches the spec. The output isn't perfect on the first pass, but it's consistent, which is the precondition for editing being useful instead of being a rewrite.

~30 Claude skills installed across the agency

About 12 of them are touched every working day. The rest run weekly, monthly, or on demand for specific client situations.

The other reason we run on skills is that they encode the agency's institutional memory in a place the model can read. The way we audit a Google Ads account isn't in someone's head. It's in a skill. The way we structure a paid social brief isn't tribal knowledge. It's in a skill. When we hire someone new, the skill stack is part of the onboarding. When the team grows, the rules don't drift.

The skills we use every day

Here's the actual working set, broken out by what each one owns. The ones marked custom are skills we wrote ourselves. The ones marked official are pulled from Anthropic's public catalog and either used as-is or lightly modified.

| Skill | What it owns | Source |
| --- | --- | --- |
| market-correct | Internal. Our brand context, editorial rules, agency facts. Loads on our own marketing work, not client work. | Custom |
| market-correct-blog | Internal. End-to-end production for posts shipping to the Market Correct blog only. | Custom |
| market-correct-humanizer | Internal. Enforces our own brand voice on our own content. Client work uses each client's voice instead. | Custom |
| seo-geo | SEO and Generative Engine Optimization for ranking and AI citation | Custom (forked) |
| ai-seo | Optimizing content for ChatGPT, Perplexity, and Google AI Overviews | Official |
| claude-ads suite | Audits and creative for Google, Meta, LinkedIn, TikTok, Microsoft, Apple Ads | Official |
| brand-identity | Brand strategy and identity design for new client engagements | Official |
| ui-ux-pro-max | Design intelligence with curated styles, palettes, and font pairings | Official |
| typeset | Typography fixes, hierarchy, sizing, weight consistency on UI builds | Official |
| colorize | Adding strategic color when a layout reads too monochromatic | Official |
| copywriting | Long-form persuasive writing in the Ogilvy and direct-response tradition | Official |
| skill-creator | The meta-skill that builds new skills, the one we use most when adding to the stack | Official |

The custom ones do the heavy lifting

The four custom skills at the top of that table account for most of our internal output, the work that ships under the Market Correct name. They're not loaded for client work, which has its own brand voice, its own SEO playbook, and its own deliverable format. Worth saying that up front so the rest of this section reads correctly.

The market-correct skill is the brand context, the spine. It tells Claude who we are, what we sell, what numbers to preserve verbatim ($550M+ managed, 400+ clients, 12+ years), and which clients we've worked with. Every conversation about our own marketing benefits from that context being present without anyone re-typing it. It does not load on client engagements.

The market-correct-blog skill is the production engine for posts shipping to the Market Correct blog. Keyword research, on-page SEO requirements, the GEO patterns from the Princeton research, the required components a post must include, and the deliverable format we publish to our own site. Client blog work, when we do it, runs on the client's own voice and structure rules, usually captured in a client-specific custom skill we build for that engagement.

The market-correct-humanizer is the voice enforcer for our content. It's based on the patterns from Wikipedia's Signs of AI writing guide, plus the rules we layered on top. Banned punctuation. Mandatory contractions. The list of words that get rewritten on sight. The opening paragraph rules. The paragraph rhythm checks. Every piece of Market Correct content runs through it before publishing. Client content uses the client's voice, which means a different skill or a different review pass.

The seo-geo skill combines traditional SEO and the newer practice of optimizing for AI search. The Princeton GEO research, available on arxiv.org, ranks nine optimization methods by visibility lift. Citations, statistics, authoritative tone, fluency. We use it on our own content. For client SEO work it's the same methodology applied to their domain instead of ours.

The official ones we lean on hardest

From the Anthropic catalog, the claude-ads suite earns the most use. It's a stack of platform-specific audit skills, one for each of Google, Meta, LinkedIn, TikTok, Microsoft, and Apple Ads, plus cross-platform tools for budget allocation, creative review, competitor analysis, and PPC math. When a new client hands us a paid media account, the audit work that used to take a day now takes an hour.

The ai-seo skill from Anthropic's catalog covers the methodology for getting cited in AI answers. The structural pillars, the schema markup, the content-block patterns. We use it as the input to our internal seo-geo skill, which adds agency-specific opinions and the format conventions we ship to clients.

The design skills (brand-identity, ui-ux-pro-max, typeset, colorize) run on a different rhythm. Not every day, but every week. They're the ones we reach for when we're building a landing page, refining a creative brief, or fixing a layout that doesn't feel right and we can't immediately say why.

The copywriting skill is the off-the-shelf workhorse for persuasive long-form, drawing on the Ogilvy and direct-response traditions. We use it for ad copy variants, sales pages, and email sequences. It pairs cleanly with the humanizer because the copywriting skill is about persuasion mechanics and the humanizer is about voice. Different jobs, complementary outputs.

How a typical day looks through Claude skills

The clearest way to show what a skill stack does for an agency is to walk a normal day through it. This is roughly how a Tuesday at Market Correct runs. The internal-only skills fire on internal work. Client work runs on a different set.

Morning. We're shipping a new post to the Market Correct blog about a Google Ads topic. Claude loads market-correct for our brand context, market-correct-blog for production rules, seo-geo for SEO and AI search optimization, and copywriting for the persuasive structure. The first draft is in our format, with the right schema markup, the right component blocks, and the right number of FAQs. Time from start to draft, about 25 minutes.

Mid-morning. The same draft runs through market-correct-humanizer. Banned words flagged and rewritten. Em dashes stripped. Contractions enforced. Paragraph rhythm broken up where it had drifted into sameness. The output is a humanized, brand-consistent post, ready for human edit. Time, about 10 minutes.

Late morning. New client onboarding. Their Google Ads account needs an audit before the kickoff call. None of the market-correct skills load here, this is client work. We load the claude-ads Google audit skill instead, point it at the export, and get back a structured audit covering conversion tracking, account structure, keywords, Quality Score, ad assets, Performance Max, bidding, and settings. The same audit, done by hand, used to take half a day.

Lunch. The same client also runs a small programmatic budget. We load the claude-ads budget allocation and competitor analysis skills, plus the cross-platform creative audit. Three more skills, three more passes, one combined recommendation deck before the afternoon. Still client work, still no internal Market Correct skills in the mix.

Afternoon. New brand work for a different client. brand-identity handles the identity strategy. ui-ux-pro-max picks design directions. typeset pairs the type. colorize adds restraint where needed. These are all generic, client-applicable skills. The deliverable lands in something close to its final state by end of day, instead of going through three rounds of vague design feedback.

End of day. A new internal process needs documenting. We load skill-creator, talk Claude through the rules of the new process, and ship a first version of a skill that captures the work. The next time the same job comes up, the skill is already there.

Nothing about that day is unusual. What's unusual is that none of those steps required someone to remember our brand voice, the SEO checklist, the client audit framework, or the design standards. The skills carried that load, and the right skills loaded for the right context.

How we build custom skills

Every skill we've written started the same way. We did the job by hand. We saved the conversation. Then we turned the conversation into a skill.

The pattern is reliable because the hard part of writing a skill isn't the file format. The frontmatter is six lines of YAML. The hard part is being specific enough about what a good output looks like that the model can produce it without you in the room. Trying to write a skill in the abstract, before you've done the job a few times, almost always produces a vague document that doesn't actually help.

The tool we use to bootstrap new skills is Anthropic's skill-creator. It walks you through the structure, asks the right questions about scope and triggers, and writes a first draft of the SKILL.md you can then edit by hand. The official documentation lives at docs.anthropic.com and is worth reading once before you write your first one from scratch.

The structure of every custom skill we build follows roughly the same shape.

  1. Start with a SKILL.md that has a one-paragraph description, a name, and the trigger conditions.
  2. Write the rules in plain language. Imagine briefing a sharp junior employee who wants to do the job well but has never seen the format before.
  3. Add reference files for anything too long to live in the SKILL.md, like a 500-word style guide or a list of banned words.
  4. Write a "what good looks like" section with one or two real examples, so the model has a target to hit.
  5. Test the skill on a real job. Adjust the parts that came out wrong. Re-test. Ship.
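
Sketched as a folder, the result of those five steps looks roughly like this. The skill name and file names are illustrative, not one of our real skills:

```
paid-social-brief/
├── SKILL.md              # frontmatter, plain-language rules, trigger conditions
├── references/
│   ├── style-guide.md    # anything too long to live in SKILL.md
│   └── banned-words.md
└── examples/
    └── good-brief.md     # the "what good looks like" target
```

Everything in the folder is plain text, so the whole skill versions cleanly in git.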

That last loop is where most of the work lives. The first version of a skill is almost never the one that ends up running every day. You learn what's missing only by using it on real work and watching where the output drifts. The skill doesn't get better by being designed harder. It gets better by being used.

When we build a custom skill instead of using an existing one

  • The same prompt has been retyped or copy-pasted three times in a month
  • A task has rules that need to be enforced consistently across multiple people
  • An off-the-shelf skill exists but it doesn't know our brand voice or service shape
  • A workflow combines several existing skills and we keep loading them in the same order
  • The output of a job needs to follow a specific deliverable format we ship to clients

The opposite case is also worth saying out loud. We don't build a skill for one-off curiosity work, for tasks where the rules genuinely vary every time, or for jobs where the existing official skill already covers everything we need. Skill bloat is real. A stack of 200 skills nobody remembers is worse than a focused stack of 30 that the team actually uses.

Want to see what a Claude-skill-driven agency engagement actually looks like in practice?

Talk to us

Why custom skills beat clever prompts

The most common pushback we hear from peers is, why bother with skills when you can just write a really good prompt? The answer is that a really good prompt is a one-time win. A skill is the same win every time, for everyone on the team, on every job that fits, with no decay over a year of use.

Three things compound when work moves from prompts into skills.

First, consistency. The brand voice doesn't drift between team members. The SEO rules don't get half-applied because someone forgot a step. The audit framework runs the same way for every account because the framework is the skill, not the operator's memory of the framework.

Second, speed. A new task that fits an existing skill starts at minute one with the rules already loaded. Tasks that used to take an hour of setup before any real work happened now start producing useful output immediately. Across a year, that adds up to weeks of recovered time per person.

Third, reach. A skill written once benefits every person on the team and every conversation they have where it applies. A clever prompt benefits the person who wrote it, in the moment they used it, and never again unless they remember to dig it back up. The economics of those two paths are not comparable.

The other side of this is honesty. A skill is not a substitute for knowing the work. We've watched smart prompts fail in interesting ways when the operator didn't know enough to spot the wrong answer. Skills don't fix that. They make the experienced operator faster and the junior operator more consistent. They don't make the inexperienced operator into a senior strategist. The judgment is still ours. The acceleration is what's new.

What "always growing" actually looks like

The skill stack is treated like any other piece of agency infrastructure. It gets reviewed. It gets refactored. It gets pruned when something stops earning its place. Every couple of weeks, we run a stocktake on the stack. Which skills got used. Which didn't. Which produced output that needed heavy editing. Which produced output that shipped clean.

Skills that stopped getting used either get rewritten or deleted. Skills that produced consistently weak output get reviewed for what's missing, usually a clearer "what good looks like" section or a tighter list of banned patterns. Skills that produced consistently strong output get studied, and the patterns inside them get pulled into other skills where they apply.

The other shape this growth takes is forking and combining. A skill that worked well for one kind of client work gets forked into a sibling skill for an adjacent kind of work, with the rules adjusted. A skill that was doing two jobs gets split into two skills that each do one job better. The directory of skills slowly evolves to match how the work is actually shaped, instead of how we thought it would be shaped when we started.

If you read our companion post on how we use Granola and Pocket as our meeting stack, the pattern is the same. The tools we run aren't static. They get pruned, rewritten, and combined as we learn what actually compounds in agency work and what just feels productive in the moment.

The bottom line

If you run a service business that does similar work day after day, Claude skills are the unit of automation that actually compounds. Prompts are interesting. Skills are operational. The difference shows up in throughput, in consistency across a team, and in how much institutional knowledge is encoded somewhere the next conversation can read.

Start with the official catalog at github.com/anthropics/skills. Find the two or three skills closest to the work you do most. Use them on real jobs. When they almost fit but don't quite, fork them. When nothing fits, write your own with skill-creator. The first one will take an afternoon. The fifth one will take 30 minutes. The fiftieth one will already be paying back the time you spent building any of them.

For an agency, the real win isn't speed. It's that the work the team produces starts looking like the agency, every time, instead of looking like whoever happened to be in front of the keyboard that morning. That consistency is what clients pay for. Skills are how we keep delivering it at the rate the work demands.

AI-Native Agency

Want a marketing program built and run on a real Claude skill stack?

We use Claude skills as the spine of how we run client work, from blog production to ad audits to landing page builds. If you want a paid program operated with that same discipline, talk to us.

Talk to us about your campaigns

FAQ

Questions about Claude skills, custom skills, and our agency stack

What is a Claude skill?

A Claude skill is a packaged set of instructions, references, and rules that Claude Code loads on demand when a task matches its description. Skills sit in a folder, fire by name or by intent, and run inside the same conversation as the model. They're how Anthropic lets you teach Claude a specific job, like writing in your brand voice or auditing a Google Ads account, without retraining the model. The official documentation lives at docs.anthropic.com.

How is a skill different from a prompt?

A prompt is a one-off instruction. A skill is a reusable, version-controlled instruction pack the model loads automatically when the task fits. The practical difference is that a prompt has to be retyped or copy-pasted every time you want the same output style. A skill is on file, evolves over time, and applies the same rules the same way across thousands of tasks. For an agency that does the same kinds of work repeatedly, that's the difference between cleverness and consistency.

Which skills do you actually use every day?

The five we touch every single day are our custom market-correct brand context skill, market-correct-blog for blog production, market-correct-humanizer for voice enforcement, seo-geo for SEO and AI search optimization, and the claude-ads suite for paid media work. We also run brand-identity, ui-ux-pro-max, and skill-creator regularly. About 30 skills are installed across the agency in total, and roughly 12 see real daily use.

Do you use Anthropic's official skills or write your own?

Both. Anthropic publishes a public skills repo at github.com/anthropics/skills with strong defaults like ai-seo, copywriting, brand-identity, and humanizer. We use those as a starting point for new categories of work. Then we fork and customize them, or write our own from scratch, when the official version doesn't match how we actually run an agency. The market-correct family of skills, for example, is entirely custom. They wouldn't make sense for anyone else.

What does a skill actually consist of?

A skill is a folder with a SKILL.md file at the root, plus optional reference files and templates. The SKILL.md has frontmatter that tells Claude when to invoke it, then the body teaches the model the rules of the job. We usually start by writing the skill the way we'd brief a junior employee. What's the goal, what's the output format, what are the non-negotiables, what's banned. Anthropic's skill-creator skill walks you through the structure if you've never built one.

Where do skills live on disk?

By default, personal skills live in ~/.claude/skills/ and load automatically whenever you start Claude Code. Project-scoped skills live in .claude/skills/ inside a specific repo and only load when you're working in that project. Plugin skills, including Anthropic's official set, install into a separate plugin directory. The Claude Code documentation at docs.anthropic.com covers the exact paths and how to verify a skill is registered.
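
Those locations can be sketched in a quick shell session. This uses a scratch directory in place of the real home directory, and the paths mirror the description above rather than anything verified against a specific Claude Code version, so check them against the docs before relying on them:

```shell
# Sketch of the default skill locations described above, using a
# scratch dir instead of the real home directory. Skill names are
# illustrative; paths follow the doc's description.
SCRATCH="$(mktemp -d)"

# Personal skills: ~/.claude/skills/<skill-name>/SKILL.md
mkdir -p "$SCRATCH/.claude/skills/brand-voice"
printf -- '---\nname: brand-voice\ndescription: Enforce house voice on drafts.\n---\n' \
  > "$SCRATCH/.claude/skills/brand-voice/SKILL.md"

# Project-scoped skills: <repo>/.claude/skills/<skill-name>/SKILL.md
mkdir -p "$SCRATCH/client-site/.claude/skills/client-voice"
printf -- '---\nname: client-voice\ndescription: Client voice rules.\n---\n' \
  > "$SCRATCH/client-site/.claude/skills/client-voice/SKILL.md"

# A registered skill is just a folder with a SKILL.md at its root
find "$SCRATCH" -name SKILL.md | sort
```

Either way, the unit is the same: one folder, one SKILL.md at its root.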

Can a non-developer write a skill?

Yes. A skill is a markdown file. If you can write a thorough document describing how a job should be done, you can write a skill. The technical part, the frontmatter that triggers loading, is six lines of YAML you can copy from any existing skill. The hard part isn't the format. The hard part is being specific enough about what good output looks like that the model can follow it without you in the loop. That's a writing problem, not a coding problem.

How long does it take to build a skill?

A first usable version of a skill takes us about 30 to 60 minutes if the underlying process is already documented somewhere internally. If we're starting from a vague mental model, it takes longer because we have to figure out the rules before we can teach them to the model. The faster way is to do a job by hand, save the chat, and then turn the conversation into a skill. The slower way is to try to design a skill in the abstract.

Do skills work in the Claude web app or just in Claude Code?

Both. Skills were originally a Claude Code feature, but they're now supported across the Claude product family, including the web app at claude.ai. The exact loading behavior varies a bit between the CLI and the web client. The official documentation at docs.anthropic.com is the place to confirm what works where on the day you're checking.

How is a skill different from an MCP server?

A skill is instructions. An MCP server is a connection. Skills teach Claude how to do a job. MCP servers, built on the Model Context Protocol that Anthropic published, give Claude access to external data or tools. The two work well together. A skill might say, when asked to audit a Google Ads account, query the Google Ads MCP server for the campaign data and apply the audit rules below. We use both heavily.

When do you build a skill instead of just writing a prompt?

The rule we use is simple. If we've written essentially the same prompt three times in a month, that's a skill. If a task has rules we want enforced consistently across the team, that's a skill. If a task is a one-off curiosity, it stays a prompt. The cost of building a skill is small once you've done it a few times. The cost of relying on prompt memory across a team of people is much larger over time.

Do clients see the skills you use on their work?

Skills aren't a client deliverable, but we don't hide them. Clients who ask see how we run the work. The claude-ads suite, the copywriting skill, brand-identity, ui-ux-pro-max, all visible. The market-correct family of skills is internal-only, used for our own marketing rather than any client engagement, but we'll explain those too if anyone asks. The practical reason we share is that transparency about workflow is part of the trust we build. The other reason is that no skill replaces the judgment, the experience, or the $550M in managed spend we've built over more than 12 years. The skills make us faster. They don't make us replaceable. Read more about how we think about agency tech and the proprietary tools myth if you want the longer version.

How do skills stay up to date?

Two ways. Anthropic's official skills update through the standard plugin mechanism, the same way other software updates. Our custom skills sit in a git repo, and we revise them when something stops working or when we learn a better way to do the job. We also have a meta-skill that scans for skills with stale rules or outdated references and flags them for review. The skill stack is treated like code. It evolves, it's reviewed, it ships.

Are there privacy or security concerns with skills?

Skills run in the same context as the conversation, so they don't have any access the model doesn't already have. The thing to watch is what gets baked into a skill. If a skill includes a real client name, real account ID, or real revenue numbers, that becomes part of every conversation that loads the skill. Our rule is that example values inside a skill are anonymized. Real client data only enters a session through controlled context, not through skill files. Anthropic's documentation on Claude Code privacy at docs.anthropic.com is worth reading on this.

What's the difference between a skill and a plugin?

They're related but not identical. A plugin in the Claude Code ecosystem can bundle skills, MCP servers, slash commands, and hooks together as a single installable package. A skill is one piece of that. You can have a plugin that installs nothing but skills, like Anthropic's anthropic-skills plugin. You can also have skills that aren't part of any plugin. The simplest way to think about it is that a skill is a unit of teaching, and a plugin is a unit of distribution.

What's the single most valuable skill you run?

The market-correct-humanizer. It enforces our brand voice rules across every piece of Market Correct content we publish. No em dashes, no banned AI words, mandatory contractions, opening rules, paragraph rhythm checks, the whole list. It's the difference between our own content sounding like Market Correct wrote it and sounding like a generic AI tool wrote it. Every other internal skill produces drafts. The humanizer is what makes those drafts shippable under our name. If you read this post and felt it sounded human, that's the skill at work.