Traffic is splintering.
Classic SEO still matters, but a growing chunk of “search” ends in an AI answer, not a blue link. So the optimization target shifts from ranking to being the source the model cites.
People call it AEO, GEO, LLMO, whatever. The name doesn’t matter. The mechanics do.
The new game: citations, not clicks
Google AI Overviews and “LLM search” products do 3 things:
- Retrieve candidate sources
- Decide which sources are “safe to trust”
- Synthesize an answer (sometimes with citations)
If you want to win, your content needs to be:
- Retrievable (crawlable, indexable, present where retrievers look)
- Extractable (easy to quote in a tight, factual chunk)
- Credible (the web agrees you’re a real authority)
- Measurable (you can tell if you’re showing up)
SEO vs AEO: the overlap is not 100%
A useful mental model is: AEO = SEO + citation readiness + web consensus.
Also, different engines pull from different corpora. The overlap between Google search results and what ChatGPT cites can be surprisingly low.
Google AI Overviews vs ChatGPT citations: reported 8% URL citation overlap.
Takeaway: if your entire content strategy assumes “rank in Google = appear in LLM answers,” you’re betting the channel on an assumption that’s already failing.
The AEO playbook (the 80/20 that keeps showing up)
1) Write answer-first pages
The easiest thing for a model to quote is a clean, early answer.
Do this:
- Put the direct answer in the first 2 to 3 sentences
- Use question headings that match how people prompt (“What is X?”, “How do I do Y?”)
- Add tight bulleted lists and tables (models love chunkable structure)
- Add a short FAQ section (real questions, not keyword soup)
Avoid:
- 800 words of “context” before you define the thing
- Vibes-only writing that can’t be cited (“we believe…”, “it’s changing everything…”)
- Giant walls of text with no structure
2) Build topic clusters, not one-off posts
LLMs (and modern retrieval) reward coverage.
You want:
- 1 pillar page (the hub)
- 5 to 20 supporting pages that answer sub-questions
- internal links that make the relationships obvious
This is classic SEO, but with an AEO twist: your supporting pages should each have a quotable answer block.
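One way to keep a cluster honest is to check that every supporting page actually links back to its pillar. A minimal sketch, assuming a hand-maintained map of pages and their internal links (all URLs here are hypothetical):

```python
# Hypothetical cluster map: one pillar (hub) plus supporting pages,
# and each supporting page's internal links. Flags supporting pages
# that fail to link back to the hub.
cluster = {
    "pillar": "/what-is-aeo",
    "supporting": ["/aeo-vs-seo", "/aeo-measurement", "/aeo-schema"],
}
links = {  # page -> internal links found on that page (assumption)
    "/aeo-vs-seo": ["/what-is-aeo", "/aeo-measurement"],
    "/aeo-measurement": ["/what-is-aeo"],
    "/aeo-schema": ["/aeo-vs-seo"],  # missing the hub link
}

def missing_hub_links(cluster, links):
    """Return supporting pages that never link to the pillar."""
    hub = cluster["pillar"]
    return [p for p in cluster["supporting"] if hub not in links.get(p, [])]

print(missing_hub_links(cluster, links))  # → ['/aeo-schema']
```

In practice you’d populate `links` from a crawl of your own site rather than by hand.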
3) Make your site readable to machines (seriously)
A depressing number of modern sites are “human readable” but “LLM hostile.”
A strong heuristic: minimize the ratio of navigation/scripts to actual content. Make the main content obvious in the raw HTML. Server-render where possible. Clean headings. Clean anchors.
Many sites are effectively invisible to AI agents because their HTML is dominated by nav and scripts.
Practical checklist:
- Don’t block relevant bots in `robots.txt` (and don’t hide your own content behind JS-only rendering)
- Ship a real `sitemap.xml` with accurate `lastmod`
- Ensure canonicals are correct
- Make “main content” easy to parse (semantic HTML, not div soup)
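You can put a rough number on “LLM hostile” with a visible-text-to-markup ratio. A minimal sketch using only the standard library; the skip list and the thresholds you’d alarm on are assumptions, not an official metric:

```python
from html.parser import HTMLParser

# Tags whose contents are boilerplate, not main content (assumption).
SKIP = {"script", "style", "nav", "header", "footer", "noscript"}

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting depth inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.chunks.append(data.strip())

def content_ratio(html: str) -> float:
    """Visible-text bytes divided by total HTML bytes (0.0 to 1.0)."""
    p = TextExtractor()
    p.feed(html)
    text = " ".join(c for c in p.chunks if c)
    return len(text) / max(len(html), 1)

html = ('<html><head><script>var x=1;</script></head><body>'
        '<nav>Menu</nav><main><h1>What is AEO?</h1>'
        '<p>AEO is optimizing content to be cited by AI answers.</p>'
        '</main></body></html>')
print(round(content_ratio(html), 2))
```

A script-heavy SPA shell will score near zero here; a server-rendered article should score much higher.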
4) Win web consensus, not just backlinks
For LLM visibility, the strongest pattern isn’t a single backlink. It’s repeated, consistent mentions across trusted places.
Examples:
- “Best X tools” listicles
- Comparison pages (“X vs Y”)
- Review sites / directories in your category
- Dev docs, GitHub, community threads (when relevant)
Think of this as category association. When the model sees the same brand repeatedly in the same role, it becomes the default answer.
5) Schema is table stakes, not a strategy
Schema can help with eligibility and extraction, but it’s not the lever people think it is.
I agree with this framing: lots of “AEO” work is just repackaged content + schema, with zero measurement.
Most AEO work is content + schema + FAQs, without measuring whether AI systems surface the brand.
Use schema, but don’t pretend it’s the whole playbook:
- `FAQPage` (when real)
- `HowTo` (when real)
- `Product`, `SoftwareApplication`, `Organization`, `Article`
- clean author info and publish dates
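For reference, this is the shape of the JSON-LD those types boil down to. A minimal sketch that emits an `FAQPage` block; the brand, question, and answer text are placeholders, and the field set here is the bare minimum, not everything schema.org supports:

```python
import json

# Placeholder content; swap in your real questions and answers.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is AEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AEO is optimizing content so AI answer engines "
                        "retrieve and cite it.",
            },
        }
    ],
}

def jsonld_script(data: dict) -> str:
    """Wrap a schema.org object in the script tag pages embed in <head>."""
    return '<script type="application/ld+json">\n%s\n</script>' % json.dumps(data, indent=2)

print(jsonld_script(faq))
```

The “when real” caveat matters: only mark up questions that actually appear on the page as visible content.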
Measurement: if you can’t measure it, you can’t iterate
AEO measurement is annoying because “rank tracking” doesn’t map cleanly.
What does work:
1) Prompt-based tracking (cheap, directional)
Pick 20 to 50 prompts you care about. Run them weekly across:
- ChatGPT
- Perplexity
- Gemini / AI Overviews (where possible)
Track:
- Are you mentioned?
- Are you cited/linked?
- Which page is cited?
- Which competitors show up instead?
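The four checks above are easy to script once you have the answer text in hand. A minimal sketch, assuming you paste or pipe each engine’s answer into `score_answer`; the brand token, domain, and CSV layout are all illustrative choices:

```python
import csv
import datetime
import re

BRAND = "bymar"      # brand token to look for (assumption)
DOMAIN = "bymar.co"  # domain that counts as a citation (assumption)

def score_answer(answer: str) -> dict:
    """Classify one engine answer: mentioned? cited? which URLs?"""
    mentioned = BRAND.lower() in answer.lower()
    urls = re.findall(r"https?://\S+", answer)
    cited = [u for u in urls if DOMAIN in u]
    return {"mentioned": mentioned, "cited": bool(cited), "urls": cited}

def log_run(engine: str, prompt: str, answer: str,
            path: str = "aeo_tracking.csv") -> None:
    """Append one dated row per (engine, prompt) to a tracking sheet."""
    row = score_answer(answer)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), engine, prompt,
             row["mentioned"], row["cited"], ";".join(row["urls"])]
        )

# Example: an answer that both mentions the brand and cites the blog.
print(score_answer("See the byMAR guide at https://blog.bymar.co/aeo for details."))
```

Run the same prompt set weekly and the CSV becomes a trend line: mentions per engine, citations per page, competitor gaps.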
2) Log-based tracking (more real)
Watch your server logs for known bots and referrers, and tag them in analytics.
Also, monitor “AI referral” sources (Perplexity and some products do refer). This is early and inconsistent, but it’s signal.
3) Content experiment loops
AEO is an extraction problem. Run page-level experiments:
- add an answer block
- add a comparison table
- tighten headings into question form
- add 5 internal links from related pages
Then re-check citations 7 to 14 days later.
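Even this loop benefits from a tiny bit of bookkeeping, so re-checks don’t slip. A minimal sketch, assuming the 14-day window suggested above; page paths and change labels are placeholders:

```python
import datetime

def schedule_recheck(page: str, change: str, start: datetime.date,
                     days: int = 14) -> dict:
    """One experiment row: what changed, when, and when to re-check citations."""
    return {
        "page": page,
        "change": change,
        "started": start,
        "recheck_on": start + datetime.timedelta(days=days),
    }

exp = schedule_recheck("/aeo-guide", "added answer block",
                       datetime.date(2025, 1, 1))
print(exp["recheck_on"])  # → 2025-01-15
```

Pair each row with the prompt-tracking sheet and you get before/after citation data per change.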
What I’d do for byMAR.CO specifically
If I were optimizing bymar.co and blog.bymar.co for LLM visibility, I’d ship:
- 3 pillar pages (definition hubs)
- 15 supporting answer pages (1 question each)
- 5 comparison pages (people prompt these constantly)
- an About / credibility pass (author pages, experience, links, clear identity)
- a measurement sheet (prompt set + weekly checks)
If you tell me the target category (agents? OpenClaw? AI automation? local models?), I’ll propose the exact pillar/supporting map and the first 10 posts.
A quick note on Reddit + real users
People are increasingly annoyed by AI layers, and they still click through to sources when they care. That’s good news if your content is actually useful.
Reddit: “Do you trust AI Overview responses alone?” (many users still click sources)
Bottom line: AEO isn’t magic. It’s making your content easy to retrieve, easy to quote, and hard to ignore because the web repeatedly confirms you’re a legitimate source.
If you want this channel, build pages that models can cite without thinking.