Citations shape AI answers and provide a direct link from those answers to your brand and products. Naturally, teams want to be the source AI links to. Securing a citation (or earning placement in a cited source) is one thing. Knowing how long it will last—and what helps it stick—is another.
In partnership with Stacker, we analyzed 3.5 million citation events across AI platforms from September 2025 to March 2026. We tracked weekly cohorts (groups of sources first cited in a given week) and measured how many were still being cited in each subsequent week. From that survival curve, we calculated a half-life: the number of weeks it takes for 50% of a cohort's citations to disappear.
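For readers who want to apply the same math to their own monitoring data, here's a minimal sketch of the half-life calculation. The survival numbers below are made up for illustration; the real analysis aggregates 3.5 million citation events.

```python
# Half-life from a cohort survival curve: find the week where the share of
# still-cited sources first drops to 50%, interpolating between weekly points.
# The survival data here is hypothetical, not from the Stacker dataset.

def citation_half_life(survival):
    """survival[0] is week 0 (always 1.0); survival[k] is the fraction of the
    cohort still cited k weeks later. Returns weeks until 50%, or None if the
    curve never crosses 0.5 in the observed window."""
    for week in range(1, len(survival)):
        if survival[week] <= 0.5:
            prev, curr = survival[week - 1], survival[week]
            # Linear interpolation inside the week where the curve crosses 0.5.
            return (week - 1) + (prev - 0.5) / (prev - curr)
    return None

# Hypothetical weekly cohort: all sources cited at week 0, decaying after.
survival = [1.0, 0.85, 0.68, 0.55, 0.48, 0.41]
print(round(citation_half_life(survival), 1))  # ~3.7 weeks for this cohort
```

A cohort that never falls below 50% within the tracking window returns None, which is why a study window longer than the expected half-life matters.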
Our headline finding: for the average source in our dataset, citation activity drops by half in about 4–5 weeks. But averages hide big differences. Platform, industry, and source type all influence how long citations persist. The upshot for teams building AI search, SEO, and AI-agent strategies: you need to play both offense (earn) and defense (retain).
Platform plays a major role in citation durability
Across all sources cited by AI platforms, citation activity drops by half in roughly 4.5 weeks. That’s a meaningful window, but also a fast-moving one—without ongoing effort, many AI citations are effectively ephemeral.
Platforms differ:
ChatGPT cycles through sources the fastest at 3.4 weeks.
Perplexity citations are the most durable at 5.8 weeks, roughly 70% longer than ChatGPT's.
Google’s three surfaces—AI Mode, Gemini, and AI Overviews—cluster in the mid-range at 4.3–4.8 weeks, suggesting a relatively consistent refresh cycle across Google’s AI ecosystem.
If you’re investing in AI search visibility, the platform you’re optimizing for directly affects how frequently you need to “re-earn” citations.
Industry differences exist—but platform choice matters more
Industry-level gaps are narrower than platform gaps, but still real:
Insurance (5.0 weeks) and financial services (4.8 weeks) are stickier, likely reflecting authoritative, evergreen content.
Healthcare (4.1 weeks) and retail & ecommerce (4.3 weeks) turn over faster—categories where timeliness and recency drive more frequent rotation.
Key implication: platform strategy has more leverage than vertical. A healthcare brand on Perplexity (5.8 weeks) can still outperform an insurance brand on ChatGPT (3.4 weeks).
Some sources have longer shelf lives
We partnered with Stacker in part to understand whether content syndicated via editorial sources (news outlets) impacts durability. It does.
Citations from domains in the Stacker Partner Network last roughly 2x as long as those from the average non-Network domain.
The advantage holds across all AI platforms and industries.
On Perplexity and Gemini, partner domains see the biggest absolute gains—more than 6.5 additional weeks of durability.
Even on ChatGPT (the lowest-durability platform), partner sources last 2.4x as long as non-Network sources.
The strongest combination in our dataset was Perplexity + partner domain at 12.3 weeks—more than 3x the average non-Network ChatGPT citation and over 2x the average non-Network Perplexity citation.
What this means for SEO, content, and AI-agent strategies
A few takeaways:
About 4.5 weeks is the effective refresh window for average AI citations.
ChatGPT has the fastest turnover; plan more frequent refreshes if ChatGPT visibility is a priority.
Perplexity is the most durable; citations there compound longer.
Industry matters at the margin: insurance and financial services enjoy a durability edge; healthcare and retail & ecommerce must work harder to keep visibility.
Editorial placements in trusted, frequently cited outlets deliver a clear durability advantage.
How to earn and retain citations:
Monitor citation performance routinely
Citations can disappear quickly. Make monitoring a weekly habit so you know which sources are driving responses for your must-win prompts, when those sources shift, and where to act first. Get step-by-step directions in the Scrunch guide: How to track citations in AI search.
Prioritize acquisition and retention with Influence Score
Scrunch’s Influence Score highlights which sources to prioritize by combining two signals: how broadly a source is cited across responses and how many unique prompts it appears on. High Influence Score sources should be your primary targets—either to earn a placement or to replace with stronger owned content. Note: Influence Score measures how broadly and consistently a source shapes AI responses, not the editorial quality of the source itself.
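To make the prioritization concrete, here's a small sketch using the simple formula described in this piece (Citation Consistency multiplied by unique prompts). The function name, domains, and counts are illustrative, not Scrunch's actual implementation.

```python
# Sketch of an Influence Score: how broadly a source is cited across
# responses (consistency) times how many unique prompts it appears on.
# All names and numbers here are hypothetical.

def influence_score(citing_responses, total_responses, unique_prompts):
    consistency = citing_responses / total_responses  # citation consistency
    return consistency * unique_prompts

sources = {
    "example-news.com": influence_score(180, 400, 60),  # broad and consistent
    "niche-blog.com": influence_score(30, 400, 5),      # narrow footprint
}

# Rank sources by score to build a prioritized earn/replace target list.
for domain, score in sorted(sources.items(), key=lambda kv: -kv[1]):
    print(domain, round(score, 2))
```

The ranking step is the point: a source cited consistently across many prompts outranks one cited heavily on a single prompt, which is exactly the behavior you want when choosing where to earn placements first.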
Balance durability and demand
Durability without demand is like a billboard on an empty road. Pair prompt monitoring with AI search trend analysis to focus effort where interest is growing.
Regularly update and optimize content
Recency likely plays a role, but quality and structure matter just as much. Start by improving pages AI already crawls and cites; they’re faster to fix than building net-new. When no page addresses a prompt, create focused content that does. Scrunch’s Deep AI Audit can help diagnose why a known page isn’t earning citations and what to improve.
Make citing your content as easy as possible
If agents can’t read your content, they can’t cite it. Audit for bot accessibility, JavaScript rendering issues, and weak page structures. Go further by optimizing for how AI consumes content, then deliver AI-ready experiences directly to agent traffic with Scrunch’s Agent Experience Platform (AXP). AXP automatically serves an AI-optimized experience to AI user agents without changing the human experience.
Scale reach with editorial placements
Our data shows citations from trusted editorial sources last longer. Tools like Stacker help turn your content into syndicated placements across a vetted publisher network. We also partner with Noble to help brands scale mentions in frequently cited sources.
How Scrunch measures impact and competitive visibility
Scrunch Monitoring & Insights is built to quantify your brand’s presence in AI answers and show how you stack up:
Cross-platform visibility: Track the same prompts across multiple platforms in aggregate, then filter by platform (e.g., ChatGPT, Claude, Gemini, Perplexity, Google AI Mode, Google AI Overviews, Meta AI, Microsoft Copilot) to see where performance diverges.
Citation analytics: In the Citations tab, see Prompts (unique prompts citing a source), Responses (citations across responses), Citation Consistency (percent of responses citing a source), and Influence Score (consistency multiplied by unique prompts).
Competitive benchmarking: Monitor brand mentions and share of voice versus competitors across prompts and platforms to spot wins, gaps, and displacement events.
Trend tracking: Monitor changes over consistent 2–3 week periods to separate real trends from one-off noise.
Traffic signals: Analyze AI agent traffic and referral traffic from cited sources to connect visibility with potential impact.