We didn't set out to run an AI SEO experiment. We needed a site that ranked. We had opinions about how performance marketing agencies should price their work. And we had Claude. So we built the whole thing with an AI-assisted workflow and watched what happened.
The honest version of this story has an embarrassing early chapter. Then it gets interesting around week five. By week eight, the impressions curve was doing something we didn't expect from a brand-new domain with zero backlinks and no paid promotion.
This post is the full account. The workflow, the early failures, what we changed, and what the Google Search Console data actually shows. We're including the screenshots because the argument falls apart without them.
The theory we were testing
The question was simple: can a structured, humanized, schema-rich AI content workflow compete in organic search? Not "can AI write content?" That question is settled. The real question is whether AI-assisted content, done right, actually ranks.
There's a lot of noise on this topic. Some people say Google can detect and penalize AI content. Others say it's fine and performs normally. Neither camp tends to show their GSC data, which makes the whole debate theoretical. We wanted real numbers from our own domain. So we published posts using the workflow and let Google sort it out.
One framing note worth stating upfront: mrktcorrect.com isn't a test site. It's our actual agency site. Every post had to be useful and credible on its own terms, not just technically optimized. That constraint turned out to be the right one.
What the workflow actually looks like
We use Claude as the primary drafting tool. But "using Claude" means something specific here. It's not opening a chat window and asking for a blog post. The production system has four distinct layers, and the drafting is only one of them.
Layer 1: Keyword research and source gathering
Before drafting starts, we run web searches on the target keyword to see what's already ranking and what angle those pages take. We look for data gaps in competing content. Specific numbers, named scenarios, and real account examples are where we can win. We also collect at least three credible external sources to cite inline. That's a hard delivery requirement, not a suggestion.
Layer 2: Structured drafting with a skill file
Claude doesn't draft from a blank prompt. Every post is produced against a skill file that encodes our SEO requirements, brand voice rules, schema patterns, and design system. The skill file specifies required components, banned vocabulary, punctuation rules, and GEO optimization methods. This is the difference between "Claude wrote a blog post" and "Claude executed a documented production process."
Each post must include a comparison or pricing table, at least one stat callout block, a verdict or callout box, a red-flag or green-check list where relevant, and an accordion FAQ with a minimum of 10 questions. Missing any of these blocks is a delivery blocker.
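For a concrete sense of the schema side of that list, here's a minimal sketch of generating an FAQPage JSON-LD block while enforcing the 10-question floor. The helper name and its signature are invented for illustration; the JSON shape follows schema.org's FAQPage type.

```python
import json

def build_faq_schema(qa_pairs: list[tuple[str, str]]) -> str:
    """Render (question, answer) pairs as a schema.org FAQPage JSON-LD block.

    Hypothetical helper for illustration. Enforces the 10-question minimum
    described above as a hard failure, not a warning.
    """
    if len(qa_pairs) < 10:
        raise ValueError(f"FAQ needs at least 10 questions, got {len(qa_pairs)}")

    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(schema, indent=2)
        + "\n</script>"
    )
```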
Layer 3: The humanizer pass
This is non-negotiable, and it's where early posts failed. Raw AI drafts have patterns that are easy to spot and easy for Google to rank below something better. Not because they're "AI content" in some abstract sense, but because they're thin and generic. Every paragraph the same length. The same vocabulary appearing in clusters. Negative parallelism constructions ("it's not X, it's Y"). Generic conclusions that say nothing specific.
The humanizer pass rewrites at the sentence level. It enforces contractions, strips the banned vocabulary list, breaks up metronomic paragraph rhythm, and replaces vague framing with specific experience-backed claims. It's the pass that makes a draft sound like something a person thought through, not the statistical average of everything the model has ever read.
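The banned list and thresholds are internal to the skill file, but a toy sketch of the kind of check this pass runs looks something like this. The word list and the uniformity threshold below are invented for illustration:

```python
import re
import statistics

# Invented examples; the real banned list lives in the skill file.
BANNED = {"furthermore", "moreover", "delve", "leverage", "landscape"}

def lint_draft(text: str) -> list[str]:
    """Flag the two tells the humanizer pass targets most often:
    banned vocabulary and metronomic paragraph rhythm."""
    issues = []

    words = set(re.findall(r"[a-z']+", text.lower()))
    for word in sorted(BANNED & words):
        issues.append(f"banned word: {word!r}")

    # Paragraphs that are all nearly the same length read as machine rhythm.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    if len(lengths) >= 3 and statistics.pstdev(lengths) < 0.15 * statistics.mean(lengths):
        issues.append("paragraph lengths too uniform; vary the rhythm")

    return issues
```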
Layer 4: Python delivery checklist
Before any file ships, a Python script validates every hard requirement. Meta description under 160 characters. GTM snippets present in both head and body. FAQ count at or above 10. External link count at or above 3. No em dashes anywhere. Schema blocks present and structurally correct. Closing tags verified. Nothing ships without a clean run.
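The production script is longer than this, but a condensed sketch of the hard checks gives the idea. The regexes and the attribute order they assume are simplifications, not our exact implementation:

```python
import re
import sys

def validate(html: str) -> list[str]:
    """Run the hard delivery checks; any failure blocks shipping."""
    failures = []

    meta = re.search(r'<meta name="description" content="([^"]*)"', html)
    if not meta or len(meta.group(1)) > 160:
        failures.append("meta description missing or over 160 characters")

    if html.count("googletagmanager.com") < 2:  # head snippet + body noscript
        failures.append("GTM snippet missing from head or body")

    if html.count('"@type": "Question"') < 10:
        failures.append("fewer than 10 FAQ questions in schema")

    # Crude external-link check: any absolute URL not on our own domain.
    external = re.findall(r'href="https?://(?!mrktcorrect\.com)', html)
    if len(external) < 3:
        failures.append("fewer than 3 external links")

    if "\u2014" in html:  # em dash
        failures.append("em dash found")

    return failures

if __name__ == "__main__":
    problems = validate(open(sys.argv[1], encoding="utf-8").read())
    for problem in problems:
        print("BLOCKER:", problem)
    sys.exit(1 if problems else 0)
```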
Research, structured drafting, humanizer pass, and automated delivery checklist. Skip any one of them and it shows up in the data.
What we got wrong early on
The first posts shipped without the humanizer pass. We were testing velocity: if we published raw AI drafts faster, would volume compensate for quality? It didn't.
The early posts were technically correct. Right schema, right keyword targets, right structure. But the writing read like AI. Not obviously robotic, but in the subtler way where every paragraph is three sentences, the transitions are "Furthermore" and "In addition," and the conclusion says something like "In today's competitive landscape, having the right partner matters." That's not a penalty trigger. It's just content that Google has no reason to surface over something better. The specific mistakes:
- Skipping the humanizer pass and shipping raw drafts
- Publishing early posts without FAQPage schema
- Only one or two external links per post instead of three minimum
- No stat callout blocks, so no GEO-optimized number anchoring
- Generic conclusions that gave Google nothing to prefer us over anyone else
We fixed all of these. Some early posts got retroactively updated with proper schema and humanized copy. The posts that didn't get updated are still sitting near the bottom of the impressions data. The ones that did are climbing.
What we settled on and why
The full production sequence exists because we learned what happens when you skip parts of it. Here's how the early workflow compares to what we run now.
| Element | Early workflow | Current workflow |
|---|---|---|
| Drafting tool | Claude, blank prompt | Claude + structured skill file |
| Humanizer pass | Skipped | Mandatory, sentence-level |
| FAQPage schema | Inconsistent | 10+ Q&A pairs on every post |
| External links | 1 to 2 per post | 3 minimum, hard delivery blocker |
| Delivery validation | Manual spot-check | Python checklist, automated |
| GEO optimization | Not applied | Built in during draft, not retrofitted |
| llms.txt / llms-full.txt | Not deployed | Live at site root, included in sitemap |
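One gloss on that last row: llms.txt is a plain markdown file served from the site root that gives AI crawlers a curated index of the site, and llms-full.txt inlines the full post content. A minimal sketch of generating one, with illustrative titles and URLs rather than our actual file:

```python
from pathlib import Path

# Illustrative structure per the llms.txt convention: an H1, a blockquote
# summary, then H2 sections of links. Titles and URLs here are placeholders.
LLMS_TXT = """\
# mrktcorrect

> Performance marketing agency. Posts on pricing, SEO, and AI search.

## Posts

- [Does AI-assisted content rank?](https://mrktcorrect.com/blog/example-post): GSC case study
- [Agency pricing models](https://mrktcorrect.com/blog/example-pricing): how we price work
"""

Path("llms.txt").write_text(LLMS_TXT, encoding="utf-8")
```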
The GEO optimization deserves a specific note. Princeton GEO research quantified the visibility lift from different optimization methods. Citing authoritative external sources adds roughly 40% visibility in AI search. Specific statistics add about 37%. Authoritative tone adds 25%. We apply all of these during drafting, not as a post-draft pass. That ordering matters because retrofitting GEO structure into a finished draft produces awkward copy. Building it in from the start produces content that reads naturally and ranks for both traditional and AI search simultaneously.
Google's helpful content guidance doesn't mention AI once. It mentions originality, expertise, and whether the content actually helps someone. Those are the real bars, and our workflow is designed around clearing them, not around gaming any specific signal.
What the GSC data actually shows
Here's the Google Search Console impressions chart for mrktcorrect.com since late January 2026.
Google Search Console, mrktcorrect.com impressions, Jan 25 to Mar 29, 2026. Near-zero through mid-February, then a clear step-change upward beginning in early March.
Near-zero from January 25 through mid-February. The site was indexed and Google had found it, but it wasn't showing up for anything meaningful. A slow climb started around February 15. Clear acceleration through early March. Then a step-change from March 8 onward, where daily impressions jumped from the 30 to 50 range into 60 to 80, and then consistently into 120 to 160.
That curve shape matters. It's not a spike from one viral post. It's not a paid push. It's compounding impression growth from a body of posts gaining ground together. That's what a content strategy looks like when the underlying quality is consistent across the whole site, not just one good post.
That's 120 to 160 daily impressions, up from near-zero 60 days earlier. New domain, no backlinks, no paid promotion. Structured AI-assisted workflow only.
And here's the GA4 new users chart for the same period.
Google Analytics 4, mrktcorrect.com new users, last 90 days. The dashed baseline (previous period) barely registers. March did more than the previous three months combined.
275 new users over 90 days. The previous period baseline is the dashed line that barely shows up on the chart. January and most of February are flat. Late February starts moving. March ramps hard, peaks around March 15, and settles into a new baseline that's well above where the site was at the start of the year.
275 new users isn't a big number on its own. We know that. But the point of showing it isn't scale. It's the direction and the shape. The previous period line is essentially zero. Every week in March outperformed every week in January. The only thing that changed was the production workflow.
What this means in practice
The debate about whether AI content ranks is mostly a distraction. The real question is whether the content is good. As noted earlier, Google's guidance measures that through originality, expertise, and usefulness, not through which tool produced the draft.
What we found is that AI-assisted content clears those bars when the production process enforces them. The humanizer pass isn't about hiding that you used AI. It's about making sure the content says something specific and sounds like someone thought it through. The schema requirements aren't about gaming structured data. They're about making content legible to the AI systems that now mediate a growing share of how people find information.
The honest takeaway
AI-assisted content ranks when it's built to a documented standard that enforces quality at every step. Raw AI drafts don't rank well because they're not good content, not because they're AI. The workflow is the variable, not the tool.
One more thing worth saying: AI search is real, but traditional Google search still dwarfs AI search in volume by roughly 373 to 1, according to SparkToro. Optimizing for AI citation matters, but it complements traditional SEO rather than replacing it. Schema, content depth, authoritative external linking, and clear E-E-A-T signals help in both channels at the same time. You don't have to choose.
The site will keep compounding. Every post published under this workflow adds to a body of content that Google is already valuing. The impressions curve is the early evidence that it's working. If you want the short version of what made the difference:
- Structured skill file that encodes SEO, GEO, and voice rules in one place
- Mandatory humanizer pass that rewrites at the sentence level before anything ships
- FAQPage schema with 10+ Q&A pairs on every post for AI citation lift
- Python delivery checklist that catches every hard requirement before delivery
- llms.txt and llms-full.txt deployed at site root for AI crawler indexing
- Hard external link minimum (3 per post, no exceptions) to signal topical credibility