
We Built This Site With AI. Here's What Google Actually Did With It.

March 29, 2026  ·  Market Correct  ·  10 min read

A real-data account of building mrktcorrect.com with an AI-assisted content workflow. What we tried, what failed, what we fixed, and what the AI SEO results actually look like.

We didn't set out to run an AI SEO experiment. We needed a site that ranked. We had opinions about how performance marketing agencies should price their work. And we had Claude. So we built the whole thing with an AI-assisted workflow and watched what happened.

The honest version of this story has an embarrassing early chapter. Then it gets interesting around week five. By week eight, the impressions curve was doing something we didn't expect from a brand-new domain with zero backlinks and no paid promotion.

This post is the full account. The workflow, the early failures, what we changed, and what the Google Search Console data actually shows. We're including the screenshots because the argument falls apart without them.

The theory we were testing

The question was simple: can a structured, humanized, schema-rich AI content workflow compete in organic search? Not "can AI write content," because that's settled. The real question is whether AI-assisted content, done right, actually ranks.

There's a lot of noise on this topic. Some people say Google can detect and penalize AI content. Others say it's fine and performs normally. Neither camp tends to show their GSC data, which makes the whole debate theoretical. We wanted real numbers from our own domain. So we published posts using the workflow and let Google sort it out.

One framing note worth stating upfront: mrktcorrect.com isn't a test site. It's our actual agency site. Every post had to be useful and credible on its own terms, not just technically optimized. That constraint turned out to be the right one.

What the workflow actually looks like

We use Claude as the primary drafting tool. But "using Claude" means something specific here. It's not opening a chat window and asking for a blog post. The production system has four distinct layers, and the drafting is only one of them.

Layer 1: Keyword research and source gathering

Before drafting starts, we run web searches on the target keyword to see what's already ranking and what angle those pages take. We look for data gaps in competing content. Specific numbers, named scenarios, and real account examples are where we can win. We also collect at least three credible external sources to cite inline. That's a hard delivery requirement, not a suggestion.

Layer 2: Structured drafting with a skill file

Claude doesn't draft from a blank prompt. Every post is produced against a skill file that encodes our SEO requirements, brand voice rules, schema patterns, and design system. The skill file specifies required components, banned vocabulary, punctuation rules, and GEO optimization methods. This is the difference between "Claude wrote a blog post" and "Claude executed a documented production process."

Each post must include a comparison or pricing table, at least one stat callout block, a verdict or callout box, a red-flag or green-check list where relevant, and an accordion FAQ with a minimum of 10 questions. Missing any of these blocks is a delivery blocker.
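To make that concrete, here's a minimal sketch of how the required-blocks section of a skill file might be encoded. The block names and thresholds mirror the requirements just listed, but the structure is illustrative, not our actual file.

```python
# Illustrative sketch of a skill file's required-blocks section.
# Block names and minimum counts mirror the requirements described
# above; the real skill file is internal and formatted differently.
REQUIRED_BLOCKS = {
    "comparison_or_pricing_table": 1,
    "stat_callout": 1,
    "verdict_or_callout_box": 1,
    "accordion_faq_questions": 10,  # minimum FAQ count per post
}

def missing_blocks(found_counts: dict[str, int]) -> list[str]:
    """Return every required block that falls below its minimum count."""
    return [
        name
        for name, minimum in REQUIRED_BLOCKS.items()
        if found_counts.get(name, 0) < minimum
    ]
```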

Layer 3: The humanizer pass

This is non-negotiable, and it's where early posts failed. Raw AI drafts have patterns that are easy to spot and easy for Google to rank below something better. Not because they're "AI content" in some abstract sense, but because they're thin and generic. Every paragraph the same length. The same vocabulary appearing in clusters. Negative parallelism constructions. Generic conclusions that say nothing specific.

The humanizer pass rewrites at the sentence level. It enforces contractions, strips the banned vocabulary list, breaks up metronomic paragraph rhythm, and replaces vague framing with specific experience-backed claims. It's the pass that makes a draft sound like something a person thought through, not the statistical average of everything the model has ever read.
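Some of those tells are mechanically detectable, which is roughly how a pass like this can flag candidates before the sentence-level rewrite. Here's a hedged sketch of two checks, metronomic paragraph rhythm and banned vocabulary, with illustrative thresholds; the word list is the same one cited in the FAQ below.

```python
import re
from statistics import pstdev

# Subset of the banned vocabulary list (see the FAQ below).
BANNED = {"leverage", "delve", "tapestry", "comprehensive"}

def rhythm_is_metronomic(paragraphs: list[str], min_stdev: float = 8.0) -> bool:
    """Flag drafts where every paragraph runs roughly the same length.

    A low standard deviation in word counts across paragraphs is the
    'every paragraph the same length' tell. The 8-word threshold is
    illustrative, not a tuned value.
    """
    lengths = [len(p.split()) for p in paragraphs if p.strip()]
    return len(lengths) >= 4 and pstdev(lengths) < min_stdev

def banned_vocabulary_hits(text: str) -> list[str]:
    """List every banned word appearing in the draft, case-insensitively."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(words & BANNED)
```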

Layer 4: Python delivery checklist

Before any file ships, a Python script validates every hard requirement. Meta description under 160 characters. GTM snippets present in both head and body. FAQ count at or above 10. External link count at or above 3. No em dashes anywhere. Schema blocks present and structurally correct. Closing tags verified. Nothing ships without a clean run.
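A minimal sketch of what that kind of checklist can look like. This isn't the production script; the regexes and thresholds simply mirror the requirements listed above, and a real version would also validate schema structure and closing tags.

```python
import re
import sys

def run_checklist(html: str) -> list[str]:
    """Validate the hard delivery requirements; return a list of failures."""
    failures = []

    # Meta description must exist and stay under 160 characters.
    meta = re.search(r'<meta name="description" content="([^"]*)"', html)
    if meta is None or len(meta.group(1)) >= 160:
        failures.append("meta description missing or 160+ characters")

    # GTM appears twice: the head script and the body noscript fallback.
    if html.count("googletagmanager.com") < 2:
        failures.append("GTM snippet not present in both head and body")

    # FAQ schema must carry at least 10 Question entries.
    if len(re.findall(r'"@type":\s*"Question"', html)) < 10:
        failures.append("fewer than 10 FAQ questions in schema")

    # At least 3 links pointing off-domain.
    if len(re.findall(r'href="https?://(?!mrktcorrect\.com)', html)) < 3:
        failures.append("fewer than 3 external links")

    if "\u2014" in html:
        failures.append("em dash found")

    return failures

if __name__ == "__main__":
    problems = run_checklist(open(sys.argv[1], encoding="utf-8").read())
    for problem in problems:
        print("FAIL:", problem)
    sys.exit(1 if problems else 0)
```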

4 production layers before any post publishes

Research, structured drafting, humanizer pass, and automated delivery checklist. Skip any one of them and it shows up in the data.

What we got wrong early on

The first posts shipped without the humanizer pass. We were testing velocity: could we publish faster with raw AI drafts and let volume compensate for quality? It couldn't.

The early posts were technically correct. Right schema, right keyword targets, right structure. But the writing read like AI. Not obviously robotic. In the subtler way where every paragraph is three sentences, the transitions are "Furthermore" and "In addition," and the conclusion says something like "In today's competitive landscape, having the right partner matters." That's not a penalty trigger. It's just content that Google has no reason to surface over something better.

We fixed all of these. Some early posts got retroactively updated with proper schema and humanized copy. The posts that didn't get updated are still sitting near the bottom of the impressions data. The ones that did are climbing.

What we settled on and why

The full production sequence exists because we learned what happens when you skip parts of it. Here's how the early workflow compares to what we run now.

| Element | Early workflow | Current workflow |
|---|---|---|
| Drafting tool | Claude, blank prompt | Claude + structured skill file |
| Humanizer pass | Skipped | Mandatory, sentence-level |
| FAQPage schema | Inconsistent | 10+ Q&A pairs on every post |
| External links | 1 to 2 per post | 3 minimum, hard delivery blocker |
| Delivery validation | Manual spot-check | Python checklist, automated |
| GEO optimization | Not applied | Built in during draft, not retrofitted |
| llms.txt / llms-full.txt | Not deployed | Live at site root, included in sitemap |

The GEO optimization deserves a specific note. Princeton GEO research quantified the visibility lift from different optimization methods. Citing authoritative external sources adds roughly 40% visibility in AI search. Specific statistics add about 37%. Authoritative tone adds 25%. We apply all of these during drafting, not as a post-draft pass. That ordering matters because retrofitting GEO structure into a finished draft produces awkward copy. Building it in from the start produces content that reads naturally and ranks for both traditional and AI search simultaneously.

Google's helpful content guidance doesn't mention AI once. It mentions originality, expertise, and whether the content actually helps someone. Those are the real bars, and our workflow is designed around clearing them, not around gaming any specific signal.

What the GSC data actually shows

Here's the Google Search Console impressions chart for mrktcorrect.com since late January 2026.

[Screenshot: Google Search Console impressions chart for mrktcorrect.com]

Google Search Console, mrktcorrect.com impressions, Jan 25 to Mar 29, 2026. Near-zero through mid-February, then a clear step-change upward beginning in early March.

Near-zero from January 25 through mid-February. The site was indexed and Google had found it, but it wasn't showing up for anything meaningful. A slow climb started around February 15. Clear acceleration through early March. Then a step-change from March 8 onward, where daily impressions jumped from the 30 to 50 range into 60 to 80, and then consistently into 120 to 160.

That curve shape matters. It's not a spike from one viral post. It's not a paid push. It's compounding impression growth from a body of posts gaining ground together. That's what a content strategy looks like when the underlying quality is consistent across the whole site, not just one good post.

~150 daily impressions by late March 2026

From near-zero 60 days earlier. New domain, no backlinks, no paid promotion. Structured AI-assisted workflow only.

And here's the GA4 new users chart for the same period.

[Screenshot: Google Analytics 4 new users chart for mrktcorrect.com]

Google Analytics 4, mrktcorrect.com new users, last 90 days. The dashed baseline (previous period) barely registers. March did more than the previous three months combined.

275 new users over 90 days. The previous period baseline is the dashed line that barely shows up on the chart. January and most of February are flat. Late February starts moving. March ramps hard, peaks around March 15, and settles into a new baseline that's well above where the site was at the start of the year.

275 new users isn't a big number on its own. We know that. But the point of showing it isn't scale. It's the direction and the shape. The previous period line is essentially zero. Every week in March outperformed every week in January. The only thing that changed was the production workflow.

What this means in practice

The debate about whether AI content ranks is mostly a distraction. The real question is whether the content is good. As noted above, Google's helpful content guidance never mentions AI; it asks for originality, expertise, and usefulness.

What we found is that AI-assisted content clears those bars when the production process enforces them. The humanizer pass isn't about hiding that you used AI. It's about making sure the content says something specific and sounds like someone thought it through. The schema requirements aren't about gaming structured data. They're about making content legible to the AI systems that now mediate a growing share of how people find information.

The honest takeaway

AI-assisted content ranks when it's built to a documented standard that enforces quality at every step. Raw AI drafts don't rank well because they're not good content, not because they're AI. The workflow is the variable, not the tool.

One more thing worth saying: AI search is real, but traditional Google search still dwarfs AI search in volume by roughly 373 to 1, according to SparkToro. Optimizing for AI citation matters, but it complements traditional SEO rather than replacing it. Schema, content depth, authoritative external linking, and clear E-E-A-T signals help in both channels at the same time. You don't have to choose.

The site will keep compounding. Every post published under this workflow adds to a body of content that Google is already valuing. The impressions curve is the early evidence that it's working.

Flat-fee performance marketing, no percentage games

We manage Google Ads, paid social, and programmatic for brands that are done paying agencies to spend more of their money. Flat fee. Senior management. Real accountability.

Get in touch

Questions about AI SEO and this workflow

Does AI-generated content rank on Google?

Yes, with the right workflow. Google doesn't penalize AI-generated content as a category. What it penalizes is thin, unhelpful content regardless of how it was produced. Our AI-assisted posts went from near-zero impressions to roughly 150 per day within 60 days of publishing, based on our Google Search Console data. The difference is humanization, schema markup, and AEO structure, not avoiding AI altogether.

What is AEO?

AEO stands for Answer Engine Optimization. It's the practice of structuring content so AI search engines like ChatGPT, Perplexity, and Google's AI Overviews can extract, summarize, and cite it directly. FAQPage schema gives roughly a 40% higher AI citation rate according to Princeton GEO research. Writing answer-first, using specific statistics, and structuring content around questions all improve both traditional rankings and AI visibility at the same time.

What's the difference between SEO, AEO, and GEO?

SEO targets traditional Google rankings and blue-link results. AEO targets AI-generated answer boxes and featured snippets. GEO (Generative Engine Optimization) is the broader practice of optimizing for AI systems that synthesize and generate responses, including ChatGPT, Perplexity, and Claude. In 2026, all three matter. Traditional rankings still drive the majority of traffic, but AI-referred traffic converts at meaningfully higher rates.

Does Google penalize AI content?

Google's official position is that it rewards helpful content regardless of how it was produced. Unedited AI drafts with no original insight tend to rank poorly because they're thin and generic, not because they're AI-generated. Content that combines an AI-assisted structure with real human experience, specific data, and proper technical markup can rank competitively. Our own GSC results confirm this.

What is FAQPage schema and why does it matter?

FAQPage schema is structured JSON-LD markup that labels your question-and-answer content so search engines and AI systems can parse it directly. Princeton GEO research shows it boosts AI citation rates by approximately 40%. For traditional Google rankings it's a supporting signal. For AI visibility in ChatGPT, Perplexity, and Google's AI Overviews, it's one of the most reliable tactics available.
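For reference, here's a minimal sketch of generating a FAQPage block in Python. The structure follows the published schema.org FAQPage pattern; the sample pair is a placeholder.

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD script tag from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

# Placeholder pair; real posts need 10 or more.
print(faq_schema([("Does AI content rank?", "Yes, with the right workflow.")]))
```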

How long does AI-assisted content take to show up in search?

Based on our experience with mrktcorrect.com, the first GSC impressions appeared within two to three weeks of publishing. Meaningful daily impression volume started building around week five to six. The growth wasn't linear: it looked flat for a while, then stepped up. Domain authority, schema quality, and content depth all affect the timeline. Newer domains will generally see slower initial indexing.

How do you spot unedited AI writing?

The most common tells are paragraph rhythm (every paragraph the same length), banned vocabulary like "leverage," "delve," "tapestry," and "comprehensive," negative parallelism constructions, rule-of-three stacking, and generic conclusions that say nothing specific. The fix isn't better prompting. It's a structured humanizer pass that rewrites at the sentence level and varies the rhythm deliberately.

What is llms.txt?

llms.txt is a plain-text file at the root of your domain that tells AI crawlers what your site is and what content matters most. It's modeled on robots.txt but for large language models rather than search bots. We deploy both llms.txt and llms-full.txt on mrktcorrect.com, with the full version listed in our sitemap. Whether it meaningfully affects AI citation rates is still being studied, but the implementation cost is low and the potential upside is real.
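For readers who haven't seen one, here's a sketch of the shape an llms.txt typically takes under the llmstxt.org convention: a title, a one-line summary, and sections of key links. The entries below are illustrative placeholders, not our actual file.

```
# Market Correct
> Flat-fee performance marketing agency. Blog covers AI SEO, AEO/GEO, and paid media pricing.

## Blog
- [Example post title](https://mrktcorrect.com/blog/example-post): one-line summary of the post
```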

How do you measure AI search visibility?

The most practical starting point is Google Search Console for AI Overview appearances, and GA4 referral traffic filtered for sources like perplexity.ai and chatgpt.com. Dedicated tools like Otterly.ai and Semrush's AI Visibility toolkit track brand mentions across ChatGPT, Perplexity, Gemini, and Copilot. It's still an immature measurement space; directional data is useful, but don't treat it as precise.

Is traditional Google search still bigger than AI search?

Yes. Google processes roughly 14 billion search queries per day. ChatGPT handles around 37.5 million. The ratio is 373 to 1 in Google's favor, according to SparkToro research. AI search is growing fast but traditional rankings still drive the majority of organic traffic. The smart play is optimizing for both at once, since the tactics overlap significantly. Schema, content depth, E-E-A-T signals, and authoritative external links help in both channels.

What does your AI content workflow look like?

We use Claude as the primary drafting tool with a structured skill file that encodes our SEO rules, voice standards, schema requirements, and humanizer pass criteria. Every post goes through keyword research, a full HTML draft matching our design system, a mandatory humanizer pass to strip AI patterns and enforce brand voice, and a Python delivery checklist that validates schema, meta length, external link count, FAQ count, and GTM presence before anything ships.

What hurt your SEO performance early on?

Three things hurt early performance. First, skipping the humanizer pass and shipping raw AI drafts that gave Google no reason to prefer them. Second, publishing without FAQPage schema, which left AI citation potential on the table from day one. Third, thin external linking. Early posts had one or two external links, below the threshold that signals topical credibility. All three are now hard blockers in our delivery checklist.