AI search has changed the rules. Instead of clicking links, users get direct AI-generated answers — and those answers increasingly cite and quote specific sources. The emerging practice of Generative Engine Optimisation (GEO) focuses on ensuring you are the source that those AI engines pick.
What should you do about it?
In this article I've only shared insights and tactics that are backed by peer-reviewed research and high-authority sources. Because, frankly, there's a lot of conjecture out there right now.
What follows isn’t speculation. It’s what current peer-reviewed research and industry data tell us is happening, along with a summary of the tactics needed to adapt.
Studies show that AI answer engines prefer content that supports claims with clearly attributed sources and short, quotable statements. Keyword density does little here, and can actually make inclusion worse.
Best tactics:
Place citations directly next to the claims they support (a rough self-check sketch follows this list).
Write short, quotable statements an engine can lift verbatim.
Back statistics with a named, linkable source.
Don't chase keyword density; it does little and can hurt inclusion.
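To audit the first tactic at scale, here's a minimal sketch, entirely my own heuristic rather than anything from the paper cited below, that flags sentences making numeric claims with no citation marker nearby. Both regexes are crude starting points to adapt:

```python
import re

# Heuristic: a "statistic" is any number (optionally with %/percent/million),
# and a "citation marker" is a URL, a [n] reference, or an "et al." mention.
STAT = re.compile(r"\b\d+(?:\.\d+)?\s*(?:%|percent|million|billion)?", re.I)
CITATION = re.compile(r"https?://|\[\d+\]|\bet al\.", re.I)

def uncited_claims(text: str) -> list[str]:
    """Return sentences that contain a numeric claim but no citation marker."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if STAT.search(s) and not CITATION.search(s)]

sample = ("Sites can lose up to 79% of clicks under AI summaries. "
          "Authoritas measured this (https://www.authoritas.com).")
print(uncited_claims(sample))
# -> ['Sites can lose up to 79% of clicks under AI summaries.']
```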
Source:
Aggarwal et al., “GEO: Generative Engine Optimization” (https://arxiv.org/pdf/2311.09735)

Semantic HTML, clear heading structure, structured data, and fresh metadata strongly correlate with improved AI citation. Which is actually good for those of us already trying to follow structural best practices - it looks like we can keep most of what we already do, with a few tweaks.
Tactics:
Use semantic HTML (a single <h1>, logical headings, lists); add valid structured data (Article/TechArticle/FAQPage, breadcrumbs, canonicals, social cards) matching on-page content. A JSON-LD sketch follows.
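To make the structured-data part concrete, here's a minimal sketch that emits Article JSON-LD from page fields. The values are placeholders, and the properties you need vary by page type (FAQPage and TechArticle take different ones), so treat it as a starting shape, not a complete schema:

```python
import json
from datetime import date

def article_jsonld(headline: str, author: str, published: str) -> str:
    """Emit an Article JSON-LD <script> block that mirrors visible page content."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,                      # must match the on-page <h1>
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
        "dateModified": date.today().isoformat(),  # see the freshness section below
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

print(article_jsonld("What GEO Means for Your Content", "Jane Example", "2025-01-10"))
```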
Source:
Kumar & Palkhouski, “AI Answer Engine Citation Behavior (GEO-16)” (https://arxiv.org/pdf/2509.10762)

AI engines show significant authority bias: they are more likely to cite third-party coverage (e.g. editorial reviews, news mentions, government/NGO domains) than brand-owned content.
Key tactics:
Tight scope per page; strong internal linking with descriptive anchors (a quick checker sketch follows this list); avoid duplicates via canonicals.
Add editorial review for stats/regulatory claims; disclosures where needed.
Earn third-party coverage (editorial reviews, news mentions) rather than relying solely on brand-owned pages.
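On descriptive anchors: generic link text like "read more" tells a retrieval system nothing about the target page. This sketch flags it; the phrase list is my own starting point, and for production HTML you'd want a real parser rather than a regex:

```python
import re

GENERIC_ANCHORS = {"click here", "read more", "here", "this page", "learn more"}
LINK = re.compile(r'<a\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def weak_anchors(html: str) -> list[tuple[str, str]]:
    """Return (href, anchor_text) pairs whose anchor text is generic."""
    return [(href, text.strip())
            for href, text in LINK.findall(html)
            if text.strip().lower() in GENERIC_ANCHORS]

print(weak_anchors('<a href="/geo-guide">Read more</a> about GEO.'))
# -> [('/geo-guide', 'Read more')]
```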
Source:
Kumar & Palkhouski, “AI Answer Engine Citation Behavior (GEO-16)” (https://arxiv.org/pdf/2509.10762)

Generative search prefers content that is structured, concise, and easy to lift verbatim.
Key tactic - include the best-performing formats identified by retrieval tests (see the sketch after this list). These are:
Data tables with sources
Short, referenced definitions
FAQs with direct answers
Summary boxes explaining “what we know”
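As an example of the first format, here's a sketch that renders (metric, value, source) rows as a markdown table with an explicit source column; swap the output format for whatever your templating stack uses. The figures are the ones cited later in this article:

```python
def sourced_table(rows: list[tuple[str, str, str]]) -> str:
    """Render (metric, value, source) triples as a markdown data table."""
    lines = ["| Metric | Value | Source |", "|---|---|---|"]
    lines += [f"| {metric} | {value} | {source} |" for metric, value, source in rows]
    return "\n".join(lines)

print(sourced_table([
    ("Click loss when ranked beneath an AI summary", "up to 79%", "Authoritas"),
    ("Clicks per 100 searches with an AI overview", "~1", "Pew Research Center"),
]))
```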
Source:
Breuer, “Large Language Models for Information Retrieval” (https://link.springer.com/article/10.1007/s13222-025-00503-x)

AI search engines favour more recent sources with explicit update timestamps and revised facts. That last bit is important - visible freshness.
Tactics:
Show a visible "last updated" date and keep the dateModified in your schema in sync with it (a sketch follows).
Refresh statistics and other time-sensitive facts on a schedule, and note what changed.
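One way to keep those dates honest is to bump the modified date only when the content actually changes. This sketch hashes the page source and records state in a JSON file; the freshness.json layout is an assumption for illustration:

```python
import hashlib
import json
from datetime import date
from pathlib import Path

STATE = Path("freshness.json")  # hypothetical per-site state file

def last_modified(page: str) -> str:
    """Return the page's modified date, bumping it only on a real content change."""
    digest = hashlib.sha256(Path(page).read_bytes()).hexdigest()
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    record = state.get(page, {})
    if record.get("hash") != digest:
        record = {"hash": digest, "modified": date.today().isoformat()}
        state[page] = record
        STATE.write_text(json.dumps(state, indent=2))
    return record["modified"]  # feed both the visible date and dateModified
```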
Source:
Kumar & Palkhouski, “AI Answer Engine Citation Behavior (GEO-16)” (https://arxiv.org/pdf/2509.10762)

Research from analytics firm Authoritas shows that sites that previously ranked first can lose up to 79% of clicks when their result appears beneath an AI-generated summary. A Pew Research Center study similarly found that users clicked a link in only one of every 100 searches that included an AI overview.
Tactics:
Aim to be cited inside the AI summary itself, not just ranked beneath it.
Track citations and mentions in AI answers as a KPI alongside organic clicks.
Source:
The Guardian, reporting (https://www.theguardian.com/technology/2025/jul/24/ai-summaries-causing-devastating-drop-in-online-news-audiences-study-finds)

Pulling all of that together:

| Priority | What to do | Why it works |
|---|---|---|
| High | Add cited facts, quotes, and references near key claims | Models need grounding sources | 
| High | Use structured data (Article/FAQ/HowTo schema) | Improves machine readability | 
| High | Earn third-party, non-brand citations | Authority bias in LLM retrieval | 
| Medium | Show update history & current stats | Recency is a ranking boost | 
| Medium | Create tables, definitions, FAQs | Easy for engines to lift | 
| Ongoing | GEO testing across engines and paraphrases (sketch below) | Retrieval behaviour varies by phrasing |
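For that last row, a test harness can be as simple as asking the same question several ways and counting how often your domain is cited. ask_engine below is a stub, because AI search engines expose different (or no) public APIs; wire in your own client or a headless browser:

```python
PARAPHRASES = [
    "What is generative engine optimisation?",
    "How do I get my site cited by AI search engines?",
    "GEO vs SEO: what actually changes?",
]

def ask_engine(query: str) -> list[str]:
    """Stub: return the URLs the AI answer cites for `query`."""
    raise NotImplementedError("plug in your engine's API or a headless browser")

def citation_rate(domain: str) -> float:
    """Fraction of paraphrases whose answer cites `domain` at least once."""
    hits = sum(any(domain in url for url in ask_engine(q)) for q in PARAPHRASES)
    return hits / len(PARAPHRASES)
```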
SEO isn’t dead — but the highest-value clicks now come from being the source AI answers choose to trust.
To win in this environment:
Ground every key claim in a citable source, and make it quotable.
Structure pages for machines as well as humans: semantic HTML plus valid schema.
Earn third-party, non-brand citations.
Keep content visibly fresh, and keep testing how AI engines actually cite you.