18 Months of AI Overviews: What Healthcare Tells Us About Where Finance Is Headed

BrightEdge data reveals Google uses the same YMYL playbook for both industries. The difference isn't how Google treats them — it's how people search.

It's been 18 months since Google launched AI Overviews. We now have enough data to see patterns — and make predictions.

On the surface, Healthcare and Finance look completely different. Healthcare sits at 88% AI Overview coverage. Finance is at 21%. But 18 months of BrightEdge Generative Parser™ data reveals something deeper: Google applies the same logic to both categories. The gap isn't about how Google treats YMYL content — it's about how people search in these industries.

Data Collected

Using BrightEdge Generative Parser™, we analyzed AI Overview presence across Healthcare and Finance from May 2024 through December 2025 to understand:

  • How AI Overview coverage evolved in each industry over 18 months
  • Which query types saw the fastest expansion
  • Where Google kept AI out — and why
  • Whether the two YMYL categories follow the same underlying pattern

Key Finding

Google uses the same playbook for both industries. Finance just has a different query mix.

A large portion of Finance search is real-time queries — stock prices, tickers, market data. Healthcare doesn't have an equivalent. Google keeps AI out of real-time data for good reason: you need accuracy, not synthesis.

But when you compare similar query types — the educational, explainer, "help me understand" searches — the trajectory is nearly identical:

  • Healthcare educational: 82% → 93% (near saturation)
  • Finance educational: 16% → 67% (climbing fast)

Finance educational content is where Healthcare was 12-18 months ago. At current growth rates, Finance will reach Healthcare-level saturation (90%+) by late 2026.
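The saturation estimate above is straightforward linear extrapolation. A minimal sketch of the arithmetic, using only figures quoted in this report (the constant-rate assumption is ours, not BrightEdge's model):

```python
# Linear extrapolation of Finance educational AI Overview coverage.
# Figures from the report: 16% (May 2024) -> 67% (Dec 2025), an 18-month span.
start_pct, end_pct, months = 16.0, 67.0, 18
rate_per_month = (end_pct - start_pct) / months  # ~2.83 pp/month

target = 90.0
months_to_target = (target - end_pct) / rate_per_month

print(f"Average growth: {rate_per_month:.2f} pp/month")
print(f"Months from Dec 2025 to {target:.0f}%: {months_to_target:.1f}")
```

At the average 18-month rate, the 90% threshold is only about 8 months out; the report's late-2026 window is the more conservative read, since growth typically slows as a category approaches saturation.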

The Headline Gap: Why Finance Looks So Different

At first glance, the numbers tell a simple story:

Industry | May 2024 | December 2025 | Change
Healthcare | 72% | 88% | +16pp
Finance | 6% | 21% | +15pp

Both industries grew roughly the same amount in absolute terms. But Healthcare started high; Finance started low.

The reason isn't Google's caution with financial content. It's the query mix.

Finance Query Composition

Finance search includes a massive real-time component that Healthcare simply doesn't have:

Query Type | Example Keywords | % of Finance Queries | AIO Rate (Dec '25)
Stock Tickers | "AAPL stock price," "NASDAQ," "SPY" | ~70% | 8%
Educational | "what is a Roth IRA," "how do bonds work" | ~13% | 67%
Trading | "premarket futures," "stock market today" | ~4% | 44%
Tools/Calculators | "mortgage calculator," "401k calculator" | ~3% | 11%

When someone searches "AAPL stock price," they don't need AI synthesis. They need a live price chart. Google's traditional SERP features — the stock widget, the market summary — already do this job perfectly.

Healthcare doesn't have an equivalent category. There's no "diabetes ticker" that needs real-time data. The vast majority of Healthcare searches are educational — symptoms, conditions, treatments — where AI synthesis adds genuine value.
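The query-mix effect can be sanity-checked with a weighted average of the table above. The shares and rates come from the report; the treatment of the unlisted remainder is our assumption, since the four listed categories cover only about 90% of Finance queries:

```python
# Weighted contribution of each listed Finance category to the headline AIO rate.
# (share of Finance queries, AIO rate in Dec 2025), from the report's table.
categories = {
    "stock_tickers": (0.70, 8.0),
    "educational":   (0.13, 67.0),
    "trading":       (0.04, 44.0),
    "tools":         (0.03, 11.0),
}
listed_contribution = sum(share * rate for share, rate in categories.values())
listed_share = sum(share for share, _ in categories.values())
print(f"Listed categories contribute ~{listed_contribution:.1f} pp "
      f"of the 21% headline, from {listed_share:.0%} of queries")
```

The listed categories contribute roughly 16.4 points; the remaining ~10% of queries would supply the rest of the 21% headline. The takeaway holds either way: the ~70% ticker share at an 8% AIO rate drags the headline far below the 67% educational rate.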

The Parallel: Educational Queries Tell the Real Story

When you isolate educational queries in both industries, the pattern becomes clear:

Healthcare Educational (Specialty Care: Conditions, Symptoms, Treatments)

Period | AIO Rate
May 2024 | 82%
September 2024 | 75%
December 2024 | 91%
May 2025 | 92%
September 2025 | 90%
December 2025 | 93%

Healthcare launched high and reached near-saturation within 18 months.

Finance Educational (Tax, Retirement, Planning, Credit)

Period | AIO Rate
May 2024 | 16%
September 2024 | 24%
December 2024 | 27%
May 2025 | 37%
September 2025 | 66%
December 2025 | 67%

Finance educational started low but grew 51 percentage points in 18 months — accelerating sharply in 2025.

What This Means

The gap between Healthcare (93%) and Finance (67%) educational queries is now just 26 percentage points — down from 66 points in May 2024.

At Finance's current growth rate, educational content will reach 90%+ saturation by late 2026.

Where Google Expanded in Finance

The growth wasn't uniform. Some Finance categories exploded while others barely moved.

Biggest Movers (May 2024 → December 2025)

Category | Example Keywords | May '24 | Dec '25 | Growth
Tax Planning | "tax brackets," "capital gains tax," "tax refund" | 0% | 63% | +63pp
Cash Management | "high yield savings account," "money market" | 13% | 79% | +66pp
Financial Planning | "mortgage rates," "CD rates," "compound interest" | 6% | 73% | +67pp
Credit & Debt | "student loan forgiveness," "how much house can I afford" | 5% | 62% | +57pp
Fixed Income | "treasury bills," "bond rates," "annuity" | 12% | 72% | +60pp
Retirement | "Roth IRA," "401k," "social security" | 33% | 61% | +28pp

What Stayed Flat

Category | Example Keywords | May '24 | Dec '25 | Growth
Stock Tickers | "AAPL stock," "SPY," "TSLA price" | 4% | 8% | +4pp
Brand/Navigational | "Fidelity," "Charles Schwab," "Vanguard" | 2% | 14% | +12pp

The Pattern

Google went all-in on the same query types in Finance that it dominated in Healthcare: educational, explainer, planning content. The categories that grew fastest are the ones where AI synthesis genuinely helps users understand complex topics.

Meanwhile, Google kept AI out of real-time data and navigational queries — the same approach it takes in every industry.

Where Google Kept AI Out — In Both Industries

The most revealing pattern isn't where Google expanded. It's where Google deliberately kept AI out — and how consistent that logic is across both YMYL categories.

Local "Near Me" Queries

Industry | Query Type | May '24 | Dec '25
Healthcare | "doctor near me," "urgent care near me" | 0% | 11%
Finance | "bank near me," "financial advisor near me" | 0% | 20%

Google tested AI Overviews on local queries in both industries — then pulled back. These queries belong to Maps and the local pack, not AI synthesis.

Real-Time Data

Industry | Query Type | AIO Rate (Dec '25)
Finance | Stock tickers, market prices | 8%
Healthcare | N/A (no equivalent) | N/A

Finance has a massive category of queries where real-time accuracy matters more than synthesis. Healthcare doesn't have an equivalent — which is why Healthcare's overall number is so much higher.

The Logic

Google applies the same framework everywhere:

  • AI where synthesis adds value: Educational content, explainers, planning queries
  • Traditional results where accuracy matters: Real-time data, local queries, navigational searches

The 18-Month Trajectory: Side by Side

Healthcare

Period | Conditions/Symptoms | General Education | Local
May 2024 | 82% | 50% | 0%
September 2024 | 75% | 48% | 0%
December 2024 | 91% | 64% | 4%
May 2025 | 92% | 71% | 14%
September 2025 | 90% | 70% | 7%
December 2025 | 93% | 74% | 11%

Finance

Period | Educational | Real-Time (Tickers) | Local
May 2024 | 16% | 4% | 0%
September 2024 | 24% | 3% | 0%
December 2024 | 27% | 4% | 0%
May 2025 | 37% | 4% | 0%
September 2025 | 66% | 6% | 0%
December 2025 | 67% | 8% | 20%

What This Means

The trajectories follow the same pattern:

  1. Educational content saturates first — Healthcare conditions hit 90%+ by December 2024; Finance educational is on the same path
  2. Local queries get tested, then pulled back — Both industries saw Google experiment with AI on "near me" queries, then reduce coverage
  3. Real-time/transactional stays flat — Stock tickers in Finance, navigational queries in both industries

Why This Matters: The Prediction

Based on 18 months of data across both YMYL categories, here's what we expect:

Finance Educational Content Will Hit 90%+ by Late 2026

Finance educational queries grew 51 percentage points in 18 months (16% → 67%). At this rate, saturation matching Healthcare (90%+) is 12-18 months away.

The Headline Gap Will Close

As Finance educational content saturates, the overall Finance AI Overview rate will climb toward Healthcare's level. The 67-point gap (21% vs 88%) will narrow significantly — not because Google is changing its approach, but because the query mix effect will diminish as more categories reach saturation.

Local Will Stay Local

Google tested AI Overviews on "near me" queries in both industries and pulled back. This is Maps territory. Don't expect AI to take over local search in YMYL categories.

Real-Time Will Stay Traditional

Stock tickers, market data, and live prices will remain in traditional SERP features. Google won't risk AI synthesis where accuracy matters most.

What This Means for Financial Services Marketers

1. Educational Content Is AI Territory — Optimize Now

Tax explainers, retirement planning guides, mortgage education, credit fundamentals — these query types are already at 60-70% AI Overview coverage and climbing. If you're not optimizing for AI visibility on educational content, you're ceding ground.

2. The Playbook Is Clear: Healthcare Shows the Way

Healthcare's trajectory is Finance's future. The categories that saturated first in Healthcare (conditions, symptoms, treatments) are analogous to what's saturating now in Finance (tax, retirement, planning). Look at where Healthcare is today to see where Finance will be in 12-18 months.

3. Real-Time and Local Are Different Games

If your strategy is focused on stock-related queries or local branch visibility, traditional SEO still applies. AI Overviews aren't taking over these spaces — Google is deliberately keeping them in specialized SERP features.

4. Track Query Intent, Not Just Industry Averages

The headline number (21% for Finance) is misleading. Educational Finance queries are already at 67%. Knowing which of YOUR queries fall into which category — and how AI Overview coverage is changing for each — is essential for strategy.

Technical Methodology

Data Source: BrightEdge Generative Parser™

Analysis Period: May 2024 through December 2025 (6 measurement points: May '24, September '24, December '24, May '25, September '25, December '25)

Sample Size:

  • Finance: 2,580 keywords tracked consistently across all periods
  • Healthcare: 2,760 keywords tracked consistently across all periods

Categorization:

  • Finance queries categorized by L1/L2 taxonomy: Stocks & Trading, Finance & Investing (subdivided into Tax Planning, Retirement, Financial Planning, etc.), Tools, Brands
  • Healthcare queries categorized by L1/L2 taxonomy: Specialty Care (Ortho, Neuro, Gastro, etc.), General Health, Primary Care

AI Overview Detection: Keywords classified as having AI Overview if Sge State ≠ "none"

Local Query Identification: Keywords containing "near me" flagged as local intent
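A hypothetical sketch of the two flagging rules described above. Function and field names are illustrative, not BrightEdge's actual schema:

```python
def classify(keyword: str, sge_state: str) -> dict:
    """Apply the report's two flagging rules to one tracked keyword.

    sge_state: the parser's AI Overview state field; any value other
    than "none" counts as an AI Overview being present.
    """
    return {
        "has_aio": sge_state != "none",
        "is_local": "near me" in keyword.lower(),
    }

print(classify("urgent care near me", sge_state="none"))
```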

Key Takeaways

Google Uses the Same YMYL Playbook for Healthcare and Finance: AI where synthesis adds value (educational content). Traditional results where accuracy matters (real-time data, local queries). The logic is identical across both industries.

The Gap Is About Query Mix, Not Google's Approach: Finance has a large real-time component (stock tickers) that Healthcare doesn't. Remove that from the equation, and the trajectories are nearly parallel.

Finance Educational Content Is Climbing Fast: 16% → 67% in 18 months. The acceleration in 2025 (37% → 67% in just 7 months) suggests Google has gained confidence in AI for financial education content.

Prediction: Finance Hits Healthcare Levels by Late 2026: Educational Finance content will reach 90%+ AI Overview coverage within 12-18 months, matching where Healthcare is today.

The Opportunity Is in Educational Content: For financial services marketers, the path is clear: educational, explainer, and planning content is where AI Overviews are expanding. Optimize for these query types now — before saturation makes it harder to earn visibility.

Download the Full Report

Download the full AI Search Report — 18 Months of AI Overviews: What Healthcare Tells Us About Where Finance Is Headed


Published on January 29, 2026

Gemini's December Surge: What Citation Data Reveals About Where It's Sending Traffic

BrightEdge data shows Gemini surpassing Perplexity with a 25% referral traffic lead. Gemini grew 33% MoM in December after October upgrades, marking a key moment in AI Darwinism.

This week, BrightEdge released data showing Gemini officially overtook Perplexity in referral traffic share — a 25% lead and a landmark moment in what we're calling AI Darwinism. Gemini's referral traffic grew 33% month-over-month in December, signaling a meaningful increase in user engagement following the October model and product upgrades.

But referral growth only tells us that more users are engaging with Gemini. It doesn't tell us where Gemini is likely sending them. To understand that, we went deeper.

Data Collected

Using BrightEdge AI Catalyst™, we analyzed Gemini's citation behavior throughout November and December to understand:

  • Whether citation depth and source diversity changed as usage scaled
  • How week-to-week citation patterns shifted during December
  • Which query intents saw increases or decreases in citation exposure
  • What this means for SEOs trying to earn visibility in Gemini's answers

Key Finding

Gemini scaled without closing off. Despite 33% referral growth, citation depth, domain diversity, and publisher concentration all remained flat. But under the hood, Gemini became more dynamic — week-to-week variance increased 34% — and citation exposure shifted toward research and planning queries. Gemini is scaling as an open discovery layer, not a compressed answer engine.

 

The Stability Story: Gemini Grew Without Compressing

When AI platforms scale rapidly, there's a concern that they'll compress answers, reduce citations, or concentrate traffic to fewer publishers. Gemini didn't do any of that.

Citation Depth:

  • November: 8.09 average citations per answer
  • December: 8.06 average citations per answer
  • Change: -0.3% (effectively flat)

Domain Diversity:

  • November: 4.85 unique domains per answer
  • December: 4.85 unique domains per answer
  • Change: 0%

Top 10 Publisher Concentration:

  • November: 23.7% of all citations
  • December: 23.7% of all citations
  • Change: 0%

Share of Source-Heavy Answers (10+ citations):

  • November: 33.6%
  • December: 33.3%
  • Change: -0.3 percentage points (effectively flat)

What This Means

As Gemini's audience expanded in December, the platform maintained consistent openness to the web. No collapse in citation depth. No concentration among fewer publishers. No reduction in how often it produced highly grounded, multi-source answers.

For SEOs, this is encouraging: opportunities to be cited in Gemini — regardless of your site's size — remained unchanged even as usage surged.

 

Under the Hood: Gemini Got More Dynamic

While overall citation volume stayed flat, Gemini's answer composition became more variable week-to-week in December.

Week-to-Week Standard Deviation:

  • November: 0.117
  • December: 0.156
  • Change: +34% increase in relative variability

Coefficient of Variation:

  • November: 1.44%
  • December: 1.93%

Weekly Citation Fluctuation

Week | Avg Citations | WoW % Change
Nov Wk 1 | 8.06 | n/a
Nov Wk 2 | 8.20 | +1.7%
Nov Wk 3 | 7.90 | -3.6%
Nov Wk 4 | 8.08 | +2.3%
Nov Wk 5 | 8.07 | -0.1%
Dec Wk 1 | 7.88 | -2.3%
Dec Wk 2 | 7.99 | +1.3%
Dec Wk 3 | 8.21 | +2.8%
Dec Wk 4 | 8.18 | -0.4%
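The dispersion figures can be approximated from the rounded weekly averages in the table above. The results land near, but not exactly on, the report's 0.117 and 0.156, which presumably come from unrounded data:

```python
from statistics import mean, stdev

# Rounded weekly average citations from the table above.
nov = [8.06, 8.20, 7.90, 8.08, 8.07]
dec = [7.88, 7.99, 8.21, 8.18]

for label, weeks in (("Nov", nov), ("Dec", dec)):
    sd = stdev(weeks)             # sample standard deviation
    cov = sd / mean(weeks) * 100  # coefficient of variation, in percent
    print(f"{label}: std dev {sd:.3f}, CoV {cov:.2f}%")
```

From the rounded values this gives roughly 0.107 (Nov) and 0.157 (Dec) for the standard deviations, and CoVs of about 1.33% and 1.95%; December is clearly the more variable month either way.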

What This Means

The increased variance suggests active tuning and optimization as usage ramped. Gemini wasn't just scaling volume — it was adjusting how it composed answers week-to-week. This is consistent with a platform in active development, testing what works as adoption accelerates.

 

Query Intent Changes Everything

The most actionable finding: Gemini's citation increases were concentrated in specific query types.

How-To / Instructional Queries:

  • November: 8.34 average citations
  • December: 8.77 average citations
  • Change: +5.2% (the largest category-level increase)

Travel & Planning Queries:

  • November: 9.88 average citations
  • December: 9.90 average citations
  • Change: +0.2%

Comparison / Shopping Queries:

  • November: 6.14 average citations
  • December: 6.11 average citations
  • Change: -0.5%

The Pattern

Gemini increased source grounding in "decision-support" moments — the queries where users move from exploration toward action. How-to queries, travel planning, instructional content. These are high-value research stages where users are learning, evaluating, and forming intent.

Meanwhile, transactional queries (comparison shopping, pricing, "best deals") saw no increase in citation exposure. Gemini held flat in checkout-adjacent moments.

What This Means

If you're only optimizing for transactional queries, you're missing where Gemini is building its connective tissue to the open web. The opportunity is in the research phase — the how-to's, the planning queries, the instructional content that helps users move from awareness to decision.

 

Why This Matters Beyond Gemini

Gemini isn't just the standalone Gemini app. It's the foundational model powering Google's AI products:

  • Google AI Mode runs on Gemini
  • Google AI Overviews runs on Gemini
  • Apple's Siri is set to be powered by Gemini

Understanding how Gemini cites and connects users to the web matters beyond just one product. As Gemini's reach expands across Google's ecosystem and into Apple's, the citation behaviors we're tracking now will have implications across multiple surfaces where users encounter AI-generated answers.

 

What This Means for SEOs

Gemini Scaled Without Closing Off: 33% more referral traffic, but citation depth, domain diversity, and publisher concentration all stayed flat. The open web stayed open.

Decision-Support Moments Are the Opportunity: How-to queries (+5.2%) and travel/planning queries (+0.2%) saw citation increases. Transactional queries stayed flat. Optimize for the research phase, not just the purchase moment.

Active Tuning Means Active Opportunity: The 34% increase in week-to-week variance suggests Gemini is still testing and optimizing. Citation patterns aren't locked in — there's room to earn visibility as the platform evolves.

Monitor Gemini Now: With Gemini set to power Siri and already powering AI Mode and AI Overviews, this platform's citation behavior is about to matter a lot more. Anyone with AI Catalyst can track how Gemini's patterns are shifting in their own vertical.

 

Technical Methodology

Data Source: BrightEdge AI Catalyst™

Analysis Approach:

  • Gemini citation data analyzed across November and December 2025
  • Weekly aggregation of average citations per answer
  • Query intent categorization: How-To/Instructional, Travel & Planning, Comparison/Shopping, and others
  • Volatility measured using week-over-week standard deviation and coefficient of variation
  • Domain diversity and publisher concentration tracked across the analysis period

Time Period: November 2025 (Weeks 44-48) through December 2025 (Weeks 49-52) and early January 2026 (Week 1)

 

Key Takeaways

Gemini Scaled as an Open Discovery Layer: Despite 33% referral growth, citation depth (-0.3%), domain diversity (0%), and publisher concentration (0%) all remained flat. Gemini grew without becoming more closed, more shallow, or more concentrated.

Week-to-Week Variance Increased 34%: Overall citation volume stayed flat, but answer composition became more dynamic. Gemini was actively tuning as usage scaled.

Research Queries Saw the Biggest Gains: How-to/instructional queries: +5.2% citation increase. Travel & planning: +0.2%. Comparison/shopping: -0.5%. The increases are concentrated in decision-support moments.

Gemini Powers More Than Gemini: AI Mode, AI Overviews, and soon Siri all run on Gemini. Citation behavior here has implications across Google's AI ecosystem and beyond.

The Opportunity Is in the Research Phase: Gemini is strengthening its role as connective tissue to the open web during learning, planning, and evaluation moments. Brands that optimize for these stages — not just transactional queries — will capture more visibility as Gemini scales.

Download the Full Report

Download the full AI Search Report — Gemini's December Surge: What Citation Data Reveals About Where It's Sending Traffic


Published on January 22, 2026

Finance AI Citations: How ChatGPT and Google Define Trust Differently

As we continue our analysis of critical YMYL categories, this week we're examining Finance. When AI answers questions about money, investments, and taxes, accuracy directly impacts users' financial decisions. So who does each platform actually trust?

Data Collected

Using BrightEdge AI Catalyst™, we analyzed finance citations across ChatGPT, Google AI Mode, and Google AI Overviews to understand:

  • Which source types each platform cites for finance queries
  • How citation patterns differ by query type (stock lookups, tax questions, retirement planning, educational queries)
  • Platform stability and volatility in finance citations
  • Where Google is experimenting with new content formats like video

Key Finding

Unlike healthcare — where we saw a two-way split between ChatGPT and Google — finance reveals three completely different trust philosophies. ChatGPT trusts financial data aggregators (70%+). Google AI Mode trusts trading platforms (40% — 7x higher than ChatGPT). Google AI Overviews trusts consumer education and video (34% video citations vs. 0% for ChatGPT). Same YMYL category, three fundamentally different approaches to authority.

The Trust Gap: Three Platforms, Three Philosophies

ChatGPT Trusts the Data. AI Mode Trusts Where You Trade. AI Overviews Trusts Where You Learn.

When we analyzed where finance citations actually come from, the platforms diverged sharply:

Financial Data/News: ChatGPT leads at 70%, followed by AI Mode at 51% and AI Overviews at 44%.

Trading Platforms: AI Mode dominates at 40%, with AI Overviews at 30% and ChatGPT far behind at just 6%. Trading platforms appear 7x more often in AI Mode than in ChatGPT.

Consumer Finance Education: AI Overviews leads dramatically at 96% combined share, compared to AI Mode at 42% and ChatGPT at 24%.

Video Content: AI Overviews cites video at 34% — the #2 source category. AI Mode shows 7%. ChatGPT cites zero video for finance queries.

Government (.gov): All three platforms show similar trust levels — ChatGPT at 11%, AI Mode at 12.5%, and AI Overviews at 17%.

ChatGPT leans heavily on financial data aggregators and market news — the platforms that provide real-time stock data, market analysis, and financial journalism.

Google AI Mode goes all-in on trading platforms — the brokerages and investment apps where people actually execute trades.

Google AI Overviews favors consumer finance education and video content — the explainers and educational resources.

The One Thing They Agree On

Government sources (.gov) are trusted at similar rates across all three platforms (11-17%). Regulatory and official sources like IRS, SEC, and FINRA form a baseline of trust. Everything above that baseline diverges dramatically.

Why the Difference? The Interface Explains the Trust Model

This three-way split actually makes sense when you consider the user experience:

AI Overviews still sits atop traditional search results. Users are in browse-and-learn mode — they expect to click through to educational content and watch video explainers. The trust signals reflect that intent.

ChatGPT and AI Mode are chat interfaces where users want direct, data-backed answers. ChatGPT leans toward authoritative data feeds. AI Mode — still within Google's ecosystem — bridges toward transactional sources where users might take action.

The interface drives the trust model.

Query Type Matters: Different Questions, Different Sources

Stock Lookups: Everyone Agrees

When users ask for stock prices or ticker information, all three platforms converge on financial data/news sources — but ChatGPT concentrates trust more heavily:

ChatGPT: 72% from financial data/news, less than 1% from trading platforms.

AI Mode: 50% from financial data/news, 8% from trading platforms.

AI Overviews: 47% from financial data/news, 4% from trading platforms.

ChatGPT trusts fewer sources more deeply for stock data. Google spreads trust wider, giving trading platforms a meaningful share.

Tax Queries: ChatGPT Trusts Government 2x More

For tax-related questions, ChatGPT shows the strongest preference for government sources:

ChatGPT: 50% government, 14% consumer education.

AI Mode: 37% government, 10% consumer education.

AI Overviews: 26% government, 14% consumer education.

ChatGPT is twice as likely to cite IRS and other government sources for tax queries compared to AI Overviews.

Retirement Planning: Same Pattern

For retirement and 401(k) queries, ChatGPT again leads with government sources:

ChatGPT: 39% government, 18% trading platforms, 24% consumer education.

AI Mode: 32% government, 19% trading platforms, 9% consumer education.

AI Overviews: 17% government, 16% trading platforms, 15% consumer education.

ChatGPT takes a government-first approach. Google's products split between government sources AND the platforms where you'd actually open a retirement account.

Educational "How-To" Queries: The Big Divergence

For educational finance queries, the platforms diverge most dramatically:

ChatGPT: 40% consumer education, 0% video, 13% government.

AI Mode: 15% consumer education, 0% video, 12% government.

AI Overviews: 14% consumer education, 9% video, 7% government.

ChatGPT trusts established consumer education publishers. AI Overviews pulls video into financial education queries — a content format ChatGPT isn't touching at all.

Trading/Investment Concepts: AI Overviews Bets on Video

For queries about options, ETFs, dividends, and other investment concepts:

ChatGPT: 23% consumer education, 0% video, 31% financial data/news.

AI Mode: 16% consumer education, 0% video, 15% financial data/news.

AI Overviews: 17% consumer education, 11% video, 12% financial data/news.

AI Overviews is betting that video can explain complex trading concepts. ChatGPT cites zero video for these queries.

The Pattern

For data queries (stock lookups): Everyone agrees — financial news wins.

For sensitive YMYL queries (tax, retirement): ChatGPT trusts government 2x more than AI Overviews.

For educational queries: ChatGPT trusts publishers; AI Overviews trusts video.

Query intent drives trust signals within each platform.

The Volatility Factor: Stability vs. Experimentation

ChatGPT Has Conviction. Google Is Testing.

Beyond source preferences, the platforms differ dramatically in stability:

ChatGPT: 65% average citation volatility (most stable).

AI Mode: 75% average citation volatility.

AI Overviews: 95% average citation volatility (most volatile).

Google's finance citations — especially in AI Overviews — are significantly more volatile than ChatGPT's. What you see in Google AI results today may shift. ChatGPT's citations have remained more stable over our tracking period.

What This Means

ChatGPT appears to have made decisions about finance authority and is sticking with them. Google is still actively experimenting — testing different source mixes, adjusting weights, and iterating on what works. AI Overviews shows the most experimentation, which aligns with its role as a newer feature sitting atop traditional search.

Google's Video Experiment in Finance

The Video Gap Is Massive

ChatGPT: 0% video citations.

AI Mode: 7% video citations.

AI Overviews: 34% video citations.

Google AI Overviews cites video as its #2 source category for finance queries. ChatGPT cites none. This is the most dramatic divergence between the platforms.

Which Query Types Trigger Video?

When Google does cite video for finance, it's concentrated in certain query types:

Trading/Investment Concepts: 11% video in AI Overviews.

How-To/Educational: 9% video.

Retirement Planning: 5% video.

Stock Lookups: 4% video.

Tax Related: Less than 1% video.

Video shows up most on conceptual and educational queries — not tax or stock data. Google appears to be testing where video adds value, starting with explanatory content before expanding to more sensitive YMYL queries.

What This Means for Financial Services Marketers

Track Citations Across All Three Platforms: ChatGPT, AI Mode, and AI Overviews have fundamentally different trust signals. You may be winning citations in one platform and invisible in the others. Measure all three.

Query Intent Matters: Stock lookups, tax questions, and retirement queries all pull from different source mixes. Understand which query types your content targets — and who gets cited for each.

Know Your Source Advantage:

  • Financial data aggregators have an edge on ChatGPT
  • Trading platforms have an edge on AI Mode
  • Consumer education sites have an edge on AI Overviews
  • Video content has an edge on AI Overviews (and no presence on ChatGPT)
  • Government sources are trusted across all three — the baseline

Google Is Still Testing — ChatGPT Has Decided: Google's higher volatility means your visibility there may shift. ChatGPT is more stable — what you see now is likely what you'll get. Plan accordingly.

Video Is a Major Opportunity (On Google): If you're considering video for finance content, know that AI Overviews is already citing video heavily (34%). But ChatGPT isn't touching it. Your video strategy is a Google play, not a ChatGPT play — at least for now.

Technical Methodology

Data Source: BrightEdge AI Catalyst™

Analysis Approach:

  • Finance URL-prompt pairs analyzed across three platforms: ChatGPT, Google AI Mode, Google AI Overviews
  • Source categorization by type: Financial Data/News, Trading Platforms, Consumer Finance Education, Government (.gov), Video, Social/Community, Reference, and others
  • Query intent categorization: Stock Lookup (Ticker), Stock Lookup (Company), How-To/Educational, Tax Related, Retirement Planning, Trading/Investment Concepts, Credit/Banking, Market Data
  • Volatility measured using average percentage changes across citation sources

Data Volume:

  • ChatGPT: 11,000+ URL-prompt pairs
  • Google AI Mode: 35,000+ URL-prompt pairs
  • Google AI Overviews: 30,000+ URL-prompt pairs

Key Takeaways

Three Different Trust Philosophies: ChatGPT trusts financial data aggregators (70%+). AI Mode trusts trading platforms (40% — 7x higher than ChatGPT). AI Overviews trusts consumer education and video (34%). Not a two-way split — a three-way divergence.

Query Intent Drives Trust Signals: Stock lookups converge on financial news. Tax and retirement queries show ChatGPT trusting government 2x more than AI Overviews. Educational queries show AI Overviews pulling in video while ChatGPT trusts publishers.

The Video Gap Is Massive: AI Overviews cites video at 34% (the #2 category). ChatGPT cites 0%. Video is a Google play, not a ChatGPT play.

Government Is the Baseline: All three platforms trust .gov sources at similar rates (11-17%). That's the floor. Everything above it diverges.

ChatGPT Is More Stable: Google's finance citations show higher volatility (75-95%) vs. ChatGPT (65%). Google is experimenting; ChatGPT has conviction.

The Interface Explains the Trust Model: AI Overviews sits atop search results where users browse and learn. Chat interfaces serve users who want direct, data-backed answers. The UX drives the source mix.

Download the Full Report

Download the full AI Search Report — Finance AI Citations: How ChatGPT and Google Define Trust Differently


Published on January 15, 2026

Healthcare AI Citations: How ChatGPT and Google Define Trust Differently

As we head into the new year, we're taking a closer look at critical YMYL categories — starting with Healthcare. When AI answers health questions, accuracy isn't optional. So who does each platform actually trust?

Data Collected

Using BrightEdge AI Catalyst™, we analyzed healthcare citations across ChatGPT, Google AI Mode, and Google AI Overviews over 14 weeks (October 2025 – January 2026) to understand:

  • Which source types each platform cites for healthcare queries
  • How citation patterns differ by query type (symptoms, definitions, treatments)
  • Platform stability and volatility in healthcare citations
  • Where Google is experimenting with new content formats like video

Key Finding

ChatGPT and Google don't agree on what makes a healthcare source authoritative. ChatGPT pulls 27% of its healthcare citations from government sources (.gov) — and just 1% from elite hospital systems. Google AI Overviews flips that entirely: 33% from elite hospital systems, only 10% from government. Same YMYL category, fundamentally different trust signals.

The Trust Gap: Different Platforms, Different Authorities

ChatGPT Trusts the Institution. Google Trusts the Brand.

When we analyzed where healthcare citations actually come from, the platforms diverged sharply:

|Source Type|ChatGPT|Google AI Overviews|
|Government (.gov)|27%|10%|
|Elite Hospital Systems|1%|33%|
|Medical Specialty Orgs|17%|2%|
|Consumer Health Media|7%|6%|

ChatGPT leans heavily on government sources like CDC, NIH, and FDA — plus medical specialty organizations (professional associations for cardiology, oncology, orthopedics, etc.).

Google AI Overviews goes all-in on elite hospital systems — major academic medical centers and nationally-ranked health systems.

Neither platform relies heavily on consumer health media (popular health information websites). Both prefer institutional authority — they just define it differently.

The Government vs. Consumer Ratio

How much does each platform prefer official sources over consumer health websites?

  • ChatGPT: 4.2:1 ratio (strongly favors official sources)
  • Google AI Mode: 1.8:1 ratio
  • Google AI Overviews: 2.3:1 ratio

ChatGPT is roughly twice as likely as Google to favor .gov sources over consumer health media.
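The comparison above is just a ratio of ratios. A quick sanity check using the figures listed, purely as arithmetic on the published numbers:

```python
# Each platform's preference for .gov over consumer health media,
# expressed as the ratios quoted above. Dividing ChatGPT's ratio by
# Google's shows how much more strongly ChatGPT skews official.
chatgpt_ratio = 4.2   # ChatGPT: .gov vs. consumer health media
aio_ratio = 2.3       # Google AI Overviews
ai_mode_ratio = 1.8   # Google AI Mode

print(round(chatgpt_ratio / aio_ratio, 1))      # 1.8x vs. AI Overviews
print(round(chatgpt_ratio / ai_mode_ratio, 1))  # 2.3x vs. AI Mode
```

So the "roughly twice" figure spans about 1.8x (against AI Overviews) to 2.3x (against AI Mode).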

Query Type Matters: Different Questions, Different Sources

Symptom Queries Show the Widest Gap

When users ask about symptoms — arguably the most sensitive healthcare searches — the platforms diverge dramatically:

|Platform|% Citations from Major Hospital Systems|
|ChatGPT|57%|
|Google AI Mode|18%|
|Google AI Overviews|20%|

ChatGPT concentrates trust heavily for symptom queries, pulling nearly three times as many citations from major hospital systems as Google does.

Definition Queries

For "what is" style educational queries:

|Platform|Elite Hospitals|Government|Other|
|ChatGPT|36%|15%|29%|
|Google AI Mode|12%|11%|54%|
|Google AI Overviews|16%|12%|45%|

Treatment Queries

For queries about treatments and procedures:

|Platform|Elite Hospitals|Government|Other|
|ChatGPT|52%|8%|19%|
|Google AI Mode|17%|15%|46%|
|Google AI Overviews|20%|11%|44%|

The Pattern

ChatGPT concentrates citations on fewer, more authoritative sources — especially for sensitive queries like symptoms and treatments. Google distributes citations across a wider mix of sources.

The Volatility Factor: Stability vs. Experimentation

ChatGPT Has Conviction. Google Is Testing.

Beyond source preferences, the platforms differ dramatically in stability:

|Metric|ChatGPT|Google AI Mode|Google AI Overviews|
|Citation Volatility (CV)|2.1%|17.1%|20.2%|
|Week-to-Week Churn|0.42pp|2.52pp|6.94pp|

Google's healthcare citations are 8-10x more volatile than ChatGPT's. What you see in Google AI results today may shift significantly. ChatGPT's citations have remained remarkably stable over our tracking period.
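A minimal sketch of the two stability metrics used here (coefficient of variation and week-to-week churn), applied to hypothetical weekly citation-share series; the numbers are illustrative, not BrightEdge data:

```python
# Sketch of the two stability metrics: coefficient of variation (CV)
# and average week-over-week churn, in percentage points.
from statistics import mean, pstdev

def coefficient_of_variation(shares):
    """Population std dev divided by mean, as a percentage."""
    return 100 * pstdev(shares) / mean(shares)

def avg_weekly_churn(shares):
    """Mean absolute week-over-week change, in percentage points."""
    return mean(abs(b - a) for a, b in zip(shares, shares[1:]))

# A stable series (ChatGPT-like) vs. a volatile one (AI Overviews-like),
# each tracking one source type's weekly citation share.
stable = [27.0, 27.2, 26.9, 27.1, 27.0, 26.8]
volatile = [33.0, 26.0, 40.0, 29.0, 37.0, 25.0]

print(coefficient_of_variation(stable) < coefficient_of_variation(volatile))  # True
```

A low CV means a platform's source mix barely moves from week to week; high churn means whatever you measure today may look different next week.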

What This Means

ChatGPT appears to have made decisions about healthcare authority and is sticking with them. Google is still actively experimenting — testing different source mixes, adjusting weights, and iterating on what works.

Google's Video Experiment

Where Is Google Testing Video in Healthcare?

One notable difference: Google is experimenting with video content for healthcare answers. ChatGPT isn't citing any video.

  • ChatGPT: 0% video citations
  • Google AI Overviews: 2.7% video citations

But the volatility tells the real story: video citations in Google AI Overviews swung more than 50 percentage points over just three months (ranging from 22% to 73% of certain result sets). Google hasn't decided whether video belongs in YMYL healthcare content.

Which Query Types Trigger Video?

When Google does cite video for healthcare, it's concentrated in certain query types:

|Query Type|% of Video Citations|
|General Health|51.5%|
|Symptom Exploration|21%|
|Condition-Specific|11.4%|
|Definition/Educational|5.4%|
|Treatment/Remedies|5%|

Video shows up most on general health queries — not the most sensitive clinical content. Google appears to be testing carefully, starting with lower-stakes queries before expanding to symptoms and treatments.

Citation Priority: What Users See First

Different Platforms Lead With Different Sources

Beyond which sources get cited, we analyzed which sources appear first (lower rank = higher priority):

ChatGPT Citation Priority:

  1. Crisis/Support Resources (avg rank 1.7)
  2. Wikipedia (1.8)
  3. Medical Specialty Orgs (1.8)
  4. Elite Hospitals (2.0)
  5. Government (2.3)

Google AI Overviews Citation Priority:

  1. Elite Hospitals (avg rank 3.3)
  2. Wikipedia (3.4)
  3. Medical Specialty Orgs (4.3)
  4. Government (5.0)
  5. Consumer Media (6.3)

ChatGPT puts crisis and support resources first — a deliberate safety choice. Google leads with elite hospital content.
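The rankings above use average citation rank: the position at which a source type appears in an answer's citation list, with lower meaning earlier. A minimal sketch of that metric over hypothetical (source type, rank) records:

```python
# Average citation rank per source type: rank 1 = cited first in an
# answer. Records below are hypothetical, for illustration only.
from collections import defaultdict
from statistics import mean

citations = [
    ("crisis_support", 1), ("government", 3), ("elite_hospital", 2),
    ("crisis_support", 2), ("wikipedia", 2), ("government", 2),
    ("elite_hospital", 2), ("wikipedia", 3),
]

ranks = defaultdict(list)
for source_type, rank in citations:
    ranks[source_type].append(rank)

avg_rank = {s: round(mean(r), 1) for s, r in ranks.items()}
# Lowest average rank = highest priority in the answer.
priority = sorted(avg_rank, key=avg_rank.get)
print(priority[0])  # crisis_support (avg rank 1.5)
```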

What This Means for Healthcare Marketers

Track Citations Across Both Platforms

ChatGPT and Google have fundamentally different trust signals. You may be winning citations in one platform and invisible in the other. Measure both.

Query Type Matters

Symptom queries, definitions, and treatment searches all pull from different source mixes. Understand which query types your content targets — and who gets cited for each.

Know Your Source Advantage

  • Hospital systems: You have an edge on Google AI Overviews
  • Government agencies: You have an edge on ChatGPT
  • Medical specialty organizations: You have an edge on ChatGPT
  • Consumer health media: Neither platform favors you heavily

Google Is Still Testing — ChatGPT Has Decided

Google's 10x higher volatility means your visibility there may shift. ChatGPT is more stable — what you see now is likely what you'll get. Plan accordingly.

Video Is Unsettled

If you're considering video for healthcare content, know that Google's approach is still evolving rapidly. The 50+ percentage point swings suggest this is early experimentation, not settled strategy.

Technical Methodology

Data Source: BrightEdge AI Catalyst™

Analysis Approach:

  • Healthcare URL-prompt pairs analyzed across three platforms: ChatGPT, Google AI Mode, Google AI Overviews
  • 14 weeks of citation data tracked (October 2025 – January 2026)
  • Source categorization by type: Government (.gov), Elite Hospital Systems, Medical Specialty Organizations, Consumer Health Media, Video, Crisis/Support, Wikipedia, and others
  • Query intent categorization: Definition, Symptom, Treatment, Condition-Specific, Find Care/Provider, General Health
  • Volatility measured using coefficient of variation (CV) and week-over-week percentage point changes

Data Volume:

  • ChatGPT: 13,181 URL-prompt pairs
  • Google AI Mode: 52,876 URL-prompt pairs
  • Google AI Overviews: 47,500 URL-prompt pairs

Measurement Period:

  • Citation tracking: October 2025 – January 2026
  • Volatility analysis: 14-week rolling window

Key Takeaways

  1. Different Trust Anchors: ChatGPT pulls 27% from government, 1% from elite hospitals. Google AI Overviews pulls 33% from elite hospitals, 10% from government. Fundamentally different definitions of healthcare authority.
  2. Symptom Queries Diverge Most: ChatGPT cites major hospital systems for 57% of symptom queries vs. Google's 18-20%. ChatGPT concentrates trust; Google distributes it.
  3. ChatGPT Is 10x More Stable: Google's healthcare citations show 20% volatility vs. ChatGPT's 2%. Google is experimenting; ChatGPT has decided.
  4. Neither Trusts Consumer Health Media: Both platforms prefer official sources over popular health websites by 2-4x ratios.
  5. Video Is Google's Experiment: Google is testing video (2.7% of citations) while ChatGPT cites none. But 50pp swings suggest Google hasn't settled on whether video belongs in YMYL healthcare.
  6. Query Type Determines Source Mix: Different query intents (symptoms vs. definitions vs. treatments) pull from different source distributions. One-size-fits-all optimization won't work.

Download the Full Report

Download the full AI Search Report — Healthcare AI Citations: How ChatGPT and Google Define Trust Differently


Published on January 08, 2026

Finance and AI Overviews: How Google Applies YMYL Principles to Financial Search

Last week we covered Healthcare; this week we turn to Finance, the other major YMYL category. We analyzed finance keywords across three annual snapshots and found that Google applies the same rules, adapted to how people search for financial information.

Data Collected

We analyzed finance keywords at the same point each December across three years to understand:

  • Year-over-year changes in AI Overview deployment rates
  • Query type patterns (educational vs. transactional vs. real-time data)
  • Category-level expansion patterns within finance
  • Comparisons to Healthcare AI Overview deployment

Key Finding

Google draws clear lines in Finance. AI Overview treatment depends entirely on query type: educational queries like "what is an IRA" hit 91% coverage — nearly identical to Healthcare. But real-time price queries like stock tickers sit at just 7%. And just like Healthcare, Google pulled back sharply from local "near me" queries. Same YMYL principles, adapted to different search behaviors.

Google Draws Clear Lines in Finance

AI Treatment Depends on Query Type

Unlike Healthcare's broad 89% coverage across clinical content, Finance shows distinct zones based on what users are trying to accomplish:

  • Educational queries ("what is an IRA"): 91% have AI Overviews
  • Rate and planning queries: 67% have AI Overviews
  • Stock tickers and real-time prices: 7% have AI Overviews

Google uses AI where synthesis adds value — explaining concepts, comparing options, guiding decisions. For live market data where real-time accuracy matters, Google keeps AI out entirely.

The Real-Time Data Decision

Stock ticker queries represent a deliberate exclusion. Queries like "AAPL stock," "Tesla price," and "Microsoft share price" request real-time price data, not information synthesis.

Google's treatment has been consistent:

  • December 2023: 5% had AI Overviews
  • December 2024: 4% had AI Overviews
  • December 2025: 7% have AI Overviews

This isn't a gap Google is trying to close — it's a line they've drawn. Real-time accuracy matters more than AI synthesis for price lookups.

Where Finance Matches Healthcare

Educational Content Gets the Same Treatment

Google treats educational queries nearly identically across both YMYL industries:

  • Finance "what is" queries: 91% have AI Overviews
  • Healthcare "what is" queries: 95% have AI Overviews

Examples of finance educational queries with AI Overviews:

  • "what is an IRA"
  • "how does compound interest work"
  • "what is dollar cost averaging"
  • "what is a derivative"
  • "ebitda meaning"
  • "what is a bond"

Planning and Guidance Content

Queries seeking financial planning guidance show similar patterns to healthcare treatment queries:

  • Retirement planning queries: 61% have AI Overviews
  • Rate information queries: 67% have AI Overviews
  • Tax-related queries: 55% have AI Overviews

Examples include "rmd table," "capital gains tax rate," "traditional ira vs roth ira," and "what is long term care insurance."

The Local Pullback — Same Story as Healthcare

Google Removed AI From Local Finance Queries

Last week's biggest Healthcare finding: Google pulled AI Overviews from "near me" queries entirely, dropping from 100% coverage in 2023 to 0% in 2025.

Finance shows the same pattern:

  • December 2023 (SGE Labs): 90% of local finance queries had AI Overviews
  • December 2024: 0% of local finance queries had AI Overviews
  • December 2025: ~10% of local finance queries have AI Overviews

The only local query retaining AI coverage is "tap to pay atm near me" — which is more educational about the feature than truly local intent.

Queries Google Removed AI From

  • "Chase bank near me" — NO AIO
  • "Wells Fargo near me" — NO AIO
  • "Bank of America financial center near me" — NO AIO
  • "Financial advisors near me" — NO AIO
  • "ATM near me" — NO AIO
  • "Moneygram near me" — NO AIO
  • "Edward Jones near me" — NO AIO

The Implication

Google tested AI on local queries in both Healthcare and Finance during the SGE Labs period. After broader rollout, they pulled back from both. For "near me" queries, it's all about local pack and maps presence — no AI layer to compete with.

Where AI Is Expanding Fast

Post-Google I/O Growth Trajectory

The SGE-to-AIO transition in May 2024 reset the baseline. Since then, Finance educational and planning content has expanded rapidly.

Post-Google I/O growth (June 2024 → December 2025):

  • Educational queries: 70% → 91% (+21 percentage points)
  • Fixed income (bonds, CDs, treasury bills): 28% → 72% (+44 percentage points)
  • Rate information (mortgage rates, HELOC rates): 26% → 67% (+41 percentage points)
  • Tax questions: 0% → 55% (+55 percentage points)

Category Examples

Educational queries include "what is an IRA," "what is dollar cost averaging," "how does compound interest work," and "ebitda meaning."

Fixed income queries include "what is a bond," "treasury bill rates," "cd rates," and "fixed income investments."

Rate information queries include "interest rates today," "current mortgage rates," "heloc rates," "money market rates," and "home equity loan rates."

Tax queries include "capital gains tax rate," "federal tax brackets," and "tax free municipal bonds."

What This Growth Means

If you rank for educational finance content — retirement planning guides, tax explainers, investment education — you're now competing for citations, not just clicks. Traffic may decline even if rankings hold steady, because AI is answering these queries directly.

What Google Keeps AI Out Of

Google has drawn clear lines around query types where AI Overviews don't appear:

Real-Time Price Data

  • Individual stock tickers: 7% have AI Overviews
  • Market indices: Low AI coverage
  • Live price queries: Traditional results dominate

Examples: "AAPL stock," "Tesla price," "dow jones industrial average today," "S&P 500 futures"

Google recognizes that real-time accuracy matters more than AI synthesis for price lookups. These high-volume queries still play by traditional ranking rules.

Local Services

  • "Chase bank near me" — NO AIO
  • "Wells Fargo near me" — NO AIO
  • "Financial advisors near me" — NO AIO
  • "ATM near me" — NO AIO

Google determined that local pack results and Maps integrations serve users better than AI summaries for finding bank branches, ATMs, and financial advisors.

Calculator and Tool Queries

  • Finance calculator queries: 9% have AI Overviews
  • Examples without AI: "401k calculator," "mortgage calculator," "investment calculator," "compound interest calculator"

Users want interactive tools, not AI explanations. Google serves the tools directly.

Brand and Login Queries

  • Brand login queries: 0-4% have AI Overviews
  • Examples: "fidelity login," "charles schwab login," "td ameritrade login," "robinhood"

Navigational queries go straight to the destination — no AI summary needed.

Finance vs Healthcare: Side-by-Side Comparison

||Healthcare|Finance|
|Educational Query AIO Rate|95%|91%|
|Local "Near Me" AIO Rate|0%|~10%|
|Real-Time Data AIO Rate|N/A|7% (tickers)|
|Overall Trajectory|Near saturation (89%)|Rapid expansion|
|YoY Growth (2024→2025)|+4.6pp|Growing fast in educational categories|

The pattern is consistent: Google uses AI where synthesis adds value (education, explanation, planning) and keeps AI out where real-time accuracy or local relevance matters (live prices, market data, "near me" searches).

Technical Methodology

Data Source: BrightEdge Generative Parser™

Analysis Approach:

  • Finance keywords analyzed at equivalent points in December 2023, December 2024, and December 2025
  • AI Overview presence tracked by category (L1/L2 taxonomy) and query type
  • Query types segmented: educational, rate/planning, real-time/transactional, local intent
  • Local intent queries identified and tracked separately
  • Trendline data analyzed for post-I/O growth trajectory

Measurement Periods:

  • December 2023 snapshot: December 29, 2023
  • December 2024 snapshot: December 29, 2024
  • December 2025 snapshot: December 26, 2025
  • Trendline data: Daily tracking November 2023 through December 2025

Key Takeaways

  • Google Draws Clear Lines: AI treatment in Finance depends entirely on query type — 91% for educational, 67% for rates/planning, 7% for stock tickers.
  • Educational Content Matches Healthcare: "What is" queries hit 91% AI Overview coverage in Finance — nearly identical to Healthcare's 95%.
  • Real-Time Data Stays Out: Stock tickers, market indices, and live price queries sit at 7% AI coverage. Google keeps AI out where real-time accuracy matters.
  • Local Pullback Confirmed: Google removed AI from "near me" finance queries just like Healthcare. Bank branches, ATMs, and financial advisors "near me" show minimal AI coverage.
  • Rapid Expansion Underway: Educational and planning categories grew 20-55 percentage points in one year. Tax, fixed income, and rate queries are expanding fast.
  • Same Principles, Different Application: Both YMYL industries show identical patterns — AI for education, traditional results for local and real-time data.

Industry Implications:

This research confirms that Google applies consistent YMYL principles across industries while adapting to category-specific search behaviors.

For financial services marketers, the implications are significant:

Educational content = New metrics. Retirement planning guides, tax explainers, investment education, "what is" content — expect AI Overviews on most of these queries. If traffic declined while rankings held steady, AI — not your content — is likely the cause. Track citations, not just clicks.

Real-time data = Traditional rankings. Stock prices, market indices, ticker lookups — Google keeps AI out. These high-volume queries still play by traditional ranking rules. Position tracking tells the full story here.

Local "near me" = Maps territory. Bank branches, ATMs, financial advisors — no AI layer to compete with. Google Business Profile optimization and local pack presence drive these results.

The expansion continues. Unlike Healthcare (near saturation at 89%), Finance educational content is still growing rapidly. Categories at 55-70% today could reach 80-90% by late 2026. Plan your measurement strategy accordingly.
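The late-2026 estimate can be reproduced with a simple linear extrapolation of the post-I/O growth figures. This is a rough bound under the assumption of constant monthly gains, which real adoption curves rarely sustain near saturation:

```python
# Back-of-envelope linear extrapolation of AI Overview coverage,
# assuming the observed monthly gain continues, capped at 100%.
def project_coverage(start_pct, end_pct, months_observed, months_ahead, cap=100.0):
    monthly_gain = (end_pct - start_pct) / months_observed
    return min(cap, end_pct + monthly_gain * months_ahead)

# Tax queries: 0% (June 2024) -> 55% (December 2025), i.e. 18 months.
# Projecting 12 further months, to late 2026:
print(round(project_coverage(0, 55, 18, 12), 1))  # 91.7

# Educational queries (70% -> 91% over the same window) would hit the cap:
print(project_coverage(70, 91, 18, 12))  # 100.0
```

On these assumptions, categories in the 55-70% band today land in the 80-90%+ range by late 2026, consistent with the trajectory described above.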

Two metrics now matter. For educational and planning content, track AI citation presence alongside traditional rankings. For real-time and local queries, traditional position tracking still tells the full story.

Same strategy, different emphasis. Quality content and technical SEO excellence work across both AI and traditional results. Understanding WHERE your visibility lives helps you measure success correctly and allocate resources appropriately.

Download the Full Report

Download the full AI Search Report — Finance and AI Overviews: How Google Applies YMYL Principles to Financial Search


Published on January 03, 2026

Healthcare and AI Overviews: How Google Sharpened Its Approach Over Three Years

Using BrightEdge Generative Parser™, we tracked healthcare keywords over three annual snapshots (Dec 2023, 2024, and 2025). The data shows Google expanding AI Overview coverage across clinical topics while pulling back from local and sensitive health queries.

Data Collected

We analyzed healthcare keywords at the same point each December across three years to understand:

  • Year-over-year changes in AI Overview deployment rates
  • Query type patterns (clinical vs. local vs. sensitive topics)
  • Specialty-level expansion patterns within healthcare
  • Stability and volatility trends over time

Key Finding

Google expanded AI Overview coverage in healthcare from 59% to 89% over two years — but drew clear boundaries. Clinical content is now near-saturated (93-100% coverage), while local "near me" queries saw a complete reversal: from 100% AI Overview presence in 2023 to 0% today. Google isn't treating all healthcare queries the same. They're matching AI presence to query type — and they've stabilized their approach.

The Three-Year Transformation

December 2023: 59% of Keywords Had AI Overviews

In late 2023, Google was still in expansion mode with AI Overviews (then called SGE). Healthcare showed moderate coverage, with roughly six in ten queries triggering an AI-generated response.

This was the testing phase. Google was learning where AI helped users and where it created problems — particularly in a Your Money Your Life (YMYL) category like healthcare.

December 2024: 84% of Keywords Had AI Overviews

The biggest expansion happened between 2023 and 2024. AI Overview presence jumped 25 percentage points in a single year as Google rolled out the feature broadly following Google I/O 2024.

Healthcare saw aggressive growth across most clinical categories during this period.

December 2025: 89% of Keywords Have AI Overviews

Growth continued but decelerated significantly — just 5 percentage points year-over-year. This suggests Google has reached near-saturation for the query types it intends to cover.

The slower growth isn't hesitation. It's refinement. Google drew lines on where AI belongs in healthcare search.

The Intent Interpretation

The 30 percentage point expansion from 2023 to 2025 wasn't uniform. It reflects Google's learning about healthcare search intent:

  • Clinical information intent → Heavy AI treatment (symptoms, treatments, conditions)
  • Local provider intent → AI removed entirely (Google reversed course)
  • Sensitive health topics → AI consistently excluded (crisis, mental health)

Google is matching SERP treatment to query risk profile, not just search volume or category.

Where Google Went All-In: Clinical Content

Query Type Breakdown (December 2025)

Treatment and procedure queries now show 100% AI Overview presence — up from 45% in 2023. This represents a 55 percentage point increase.

Pain-related queries show 98% AI Overview presence — up from 58% in 2023. This represents a 40 percentage point increase.

Symptoms and conditions queries show 93% AI Overview presence — up from 57% in 2023. This represents a 36 percentage point increase.

Medical coding queries (ICD-10, CPT) show 90% AI Overview presence — up from 67% in 2023. This represents a 23 percentage point increase.

What This Means for Clinical Content

If you're measuring organic traffic on treatment pages, symptom guides, or condition explainers — and seeing declines while rankings hold steady — AI Overviews are likely the cause.

A Position 1 ranking on "rotator cuff surgery recovery" doesn't deliver the same traffic it did in 2023. The AI Overview answers the query before users reach organic results.

The Specialty Breakdown

Google's expansion varied significantly by medical specialty. Core clinical specialties saw the highest coverage:

  • Genetic/Genomic content: 97% have AI Overviews (up from 63% in 2023)
  • Cardiology content: 96% have AI Overviews (up from 63% in 2023)
  • Urology content: 96% have AI Overviews (up from 68% in 2023)
  • Gastroenterology content: 94% have AI Overviews (up from 41% in 2023)
  • Orthopedics content: 94% have AI Overviews (up from 63% in 2023)
  • Neurology content: 89% have AI Overviews (up from 64% in 2023)

Lower coverage categories include:

  • Primary Care content: 74% have AI Overviews (up from 66% in 2023)
  • Mental Health/Behavioral content: 63% have AI Overviews (up from 58% in 2023)
  • Eating Disorders content: 63% have AI Overviews (up from 32% in 2023)

The pattern: clinical specialties with clear diagnostic and treatment pathways saw the most aggressive expansion. Categories intersecting with sensitive topics or local intent saw more conservative treatment.

Where Google Pulled Back: The Reversal Story

Local "Near Me" Queries: 100% → 0%

This is the most significant finding in the data. Google completely reversed its approach to local healthcare searches.

  • December 2023: 100% of local/provider intent queries had AI Overviews
  • December 2024: 14% of local/provider intent queries had AI Overviews
  • December 2025: 0% of local/provider intent queries have AI Overviews

Queries affected include searches like "dermatologist near me," "cardiologist near me," "pediatric dentist near me," and "best family doctor near me."

Google tested AI Overviews on these queries. Then removed them entirely.

Why This Matters

For healthcare practices, clinics, and hospital systems: your local SEO investment still pays off. Google has decided — at least for now — that local healthcare provider searches should be handled by traditional local results (Maps, local pack, organic listings) rather than AI summaries.

The queries that drive new patient acquisition remain traditional SEO territory.

Visual and Imaging Content: 45% → 30%

Queries seeking medical images, X-rays, or diagnostic visuals also saw Google pull back:

  • December 2023: 45% had AI Overviews
  • December 2025: 30% have AI Overviews

Queries like "broken ankle x-ray," "MRI results interpretation," and "skin cancer pictures" don't lend themselves to text-based AI summaries. Google recognized this and reduced coverage.

This creates an opportunity: image-rich medical content faces less AI competition than text-based clinical content.

Where Google Consistently Excludes AI

Some healthcare topics have shown zero or near-zero AI Overview presence across all three years. Google has clearly made policy decisions to keep AI out of these areas:

Consistently Excluded (0% AI Overviews)

  • Self-harm and suicide-related queries
  • Specific eating disorder names (anorexia, bulimia)
  • Crisis intervention terminology
  • Addiction-related queries

Consistently Low Coverage

  • Mental health crisis terminology
  • Abuse-related queries
  • Visual diagnostic content

The Implication

If your content focuses on sensitive mental health topics, traditional SEO fundamentals still apply. Rankings matter. Featured snippets matter. AI isn't intercepting these users — Google has determined that human-curated results are more appropriate.

The Stability Factor: Volatility Is Over

Early 2024: High Volatility

Through early 2024, monthly volatility in AI Overview deployment ran at roughly 20 percentage points, as Google rapidly added and removed coverage across healthcare categories during the rollout.

Late 2025: Stable Deployment

By late 2025, monthly volatility dropped to approximately 5 percentage points. Google has settled into a steady state.

What This Means for Planning

The wild experimentation phase is over. What you see now is Google's established approach to healthcare AI Overviews. You can plan content strategy around these patterns with reasonable confidence they'll persist.

Search Volume Analysis

AI Overview deployment in healthcare does not significantly vary by search volume:

  • High volume keywords (100K+ monthly searches): 89% have AI Overviews
  • Medium volume keywords (10K-100K monthly searches): 88% have AI Overviews
  • Low volume keywords (1K-10K monthly searches): 89% have AI Overviews
  • Very low volume keywords (under 1K monthly searches): 100% have AI Overviews

Google isn't reserving AI Overviews for high-traffic queries. Even long-tail, low-volume medical queries now predominantly feature AI-generated responses.

The decision factor isn't volume — it's query type and intent.

Technical Methodology

Data Source: BrightEdge Generative Parser™

Analysis Approach:

  • Healthcare keywords analyzed at equivalent points in December 2023, December 2024, and December 2025
  • AI Overview presence tracked by specialty area (L2 taxonomy) and query type
  • Query intent patterns categorized by clinical, local, sensitive, and visual content types
  • Stability measured by monthly variance in AI Overview deployment rates

Measurement Periods:

  • December 2023 snapshot: December 22, 2023
  • December 2024 snapshot: December 22, 2024
  • December 2025 snapshot: December 22, 2025
  • Trendline data: Daily tracking November 2023 through December 2025

All figures expressed as percentages to protect proprietary dataset composition.

Key Takeaways

  • The Coverage Expansion: Healthcare AI Overview presence grew from 59% to 89% over two years — a 30 percentage point increase. Most growth occurred between 2023 and 2024.
  • The Clinical Saturation: Treatment queries (100%), pain-related queries (98%), and symptom queries (93%) are now near-fully covered by AI Overviews. Clinical content is AI territory.
  • The Local Reversal: "Near me" and provider-finding queries went from 100% AI Overview coverage in 2023 to 0% in 2025. Google completely reversed course on local healthcare search.
  • The Sensitivity Boundaries: Self-harm, eating disorder names, crisis terms, and addiction queries remain consistently excluded from AI Overviews across all three years.
  • The Specialty Pattern: Clinical specialties (Cardiology, Gastro, Ortho) show 94-97% coverage. Mental health and primary care show 63-74% coverage.
  • The Stability Shift: Monthly volatility dropped from ~20 percentage points in early 2024 to ~5 percentage points in late 2025. Google's approach has stabilized.

Industry Implications:

This research confirms that Google's AI Overview strategy for healthcare has moved from experimentation to established policy. The testing phase produced clear conclusions that now govern deployment.

For healthcare marketers and SEO professionals, the implications are significant:

The landscape has bifurcated. Clinical information queries are AI territory — expect AI Overviews on nearly every symptom, treatment, or condition query. Local provider queries remain traditional SEO territory — Google removed AI entirely.

Traffic declines may not indicate content problems. If your clinical content shows declining traffic while rankings hold steady, AI Overviews — not content quality — are likely the cause. Adjust your success metrics accordingly.

Local SEO still delivers. For practices, clinics, and hospital systems focused on patient acquisition, Google Business Profile optimization and local search fundamentals still drive results. Google kept AI out of these queries.

Sensitive content follows traditional rules. Mental health crisis content, eating disorder resources, and addiction support pages compete in a traditional SERP environment. E-E-A-T signals and authoritative sourcing remain your primary levers.

Two metrics now matter. For clinical content, track AI citation presence alongside traditional rankings. For local content, traditional position tracking still tells the full story.

The system has stabilized. Unlike the volatile deployment of early 2024, Google's current approach appears settled. Plan your 2026 strategy around these patterns with confidence.

Download the Full Report

Download the full AI Search Report — Healthcare and AI Overviews: How Google Sharpened Its Approach Over Three Years


Published on December 24, 2025

Black Friday vs. Cyber Monday 2024 vs. 2025: How Google's AI Overview Strategy Evolved in One Year

Using BrightEdge AI Catalyst, we analyzed AI Overview presence around Black Friday and Cyber Monday, comparing 2024 to 2025. After a year of testing, Google has drawn lines around where AI Overviews belong in shopping — and the data reveals an intent-based strategy.

Data Collected

We analyzed keywords at the same point heading into Black Friday and Cyber Monday across both years to understand:

  • Year-over-year changes in AI Overview deployment rates
  • Pixel height changes (how much screen real estate AIOs consume)
  • Category-level expansion patterns within eCommerce
  • Before/during/after timing patterns around each holiday
  • Search volume distribution across keywords with AIOs

Key Finding

Google expanded AI Overviews 3x more aggressively for Black Friday than Cyber Monday — and made them 65% larger year-over-year. The pattern reveals an intent-based strategy: AIOs dominate research moments (Black Friday) while traditional results still own purchase moments (Cyber Monday). Google isn't stepping aside for shopping anymore. They're leaning in — strategically.

The Year-Over-Year Expansion

Black Friday: +106% More Keywords with AIOs

Black Friday saw the most dramatic expansion. The number of keywords triggering AI Overviews more than doubled from 2024 to 2025.

This aligns with Black Friday's role as a research-heavy shopping moment. Consumers are hunting for deals, comparing options, and evaluating purchases. Google's AI Overviews are built for exactly this use case.

Cyber Monday: +37% More Keywords with AIOs

Cyber Monday saw meaningful expansion too — but at roughly one-third the rate of Black Friday.

Cyber Monday has evolved into a more transactional holiday. Shoppers have done their research; now they're ready to buy. Google's more conservative AIO expansion here suggests they recognize that traditional shopping results (carousels, PLAs, retail organic) better serve purchase-ready users.

The Intent Interpretation

The 3x expansion gap isn't random. It reflects Google's learning over the past year:

Research intent → Heavy AIO treatment (Black Friday mindset)

Purchase intent → Traditional results dominate (Cyber Monday mindset)

Google is matching SERP treatment to user intent, not just query volume or category.

The Size Shift: 65% More Screen Real Estate

Pixel Height Comparison

Black Friday AIOs averaged 826px in 2024 and grew to 1,368px in 2025 — a 65.6% increase.

Cyber Monday AIOs averaged 831px in 2024 and grew to 1,370px in 2025 — a 64.9% increase.

AI Overviews aren't just appearing on more keywords — they're consuming significantly more screen real estate when they do appear.

What This Means for Organic Visibility

A Position 1 organic ranking in 2025 sits roughly 540 pixels lower than the same ranking in 2024 — even if your actual position didn't change.

For context: 540 pixels is approximately half a standard desktop viewport. Content that was "above the fold" last holiday season may now require scrolling to reach.

The Q4 Trend

This wasn't a sudden holiday spike. Q4 2025 averaged 1,272px compared to Q4 2024's 823px — a 54.6% increase across the entire quarter. Google committed to larger AIOs heading into the shopping season and maintained that size throughout.
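The year-over-year figures above are ordinary relative growth calculations. As a quick sanity check, this sketch recomputes them from the reported averages (the pixel heights are the report's numbers; the helper function is ours):

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from old to new, as a percentage."""
    return (new - old) / old * 100

# Average AIO pixel heights reported above: (2024, 2025)
heights = {
    "Black Friday": (826, 1368),
    "Cyber Monday": (831, 1370),
    "Q4 average":   (823, 1272),
}

for label, (h_2024, h_2025) in heights.items():
    print(f"{label}: {h_2024}px -> {h_2025}px ({pct_change(h_2024, h_2025):+.1f}%)")
# Black Friday: +65.6%, Cyber Monday: +64.9%, Q4 average: +54.6%
```

The Black Friday delta (1368 − 826 = 542px) is also where the "roughly 540 pixels lower" figure for a Position 1 organic result comes from.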

The Ramp-Up Pattern: Google's Pre-Holiday Expansion

2024: Flat Throughout November

In 2024, AI Overview pixel height remained remarkably consistent throughout the holiday shopping window. November 20 saw 821px, the day before Black Friday hit 832px, Black Friday itself was 826px, and Cyber Monday was 831px.

Variation of only ~10px across the entire period. Google appeared to be in testing mode — maintaining a steady presence without significant expansion.

2025: Deliberate Ramp-Up

2025 told a different story. Google progressively expanded AIO presence in the weeks leading up to Black Friday.

Four weeks before Black Friday, pixel height averaged 1,204px. Three weeks before, it grew to 1,298px. Two weeks before, it reached 1,307px. One week before, it hit 1,340px. During holiday week, it peaked at 1,371px.

That's a 14% increase in just four weeks — a clear signal that Google intentionally expanded AIO coverage heading into peak shopping.
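The ramp can be reconstructed directly from the weekly averages quoted above. This small sketch recomputes the four-week increase and the week-over-week deltas (the pixel values are the report's; the labels are ours):

```python
# 2025 weekly average AIO pixel heights heading into Black Friday
weekly_px = [
    ("4 weeks out",  1204),
    ("3 weeks out",  1298),
    ("2 weeks out",  1307),
    ("1 week out",   1340),
    ("holiday week", 1371),
]

start, end = weekly_px[0][1], weekly_px[-1][1]
ramp = (end - start) / start * 100
print(f"Four-week ramp: {start}px -> {end}px ({ramp:.1f}%)")  # ~13.9%, the ~14% cited

# Week-over-week deltas show steady growth rather than a one-day spike:
for (_, prev), (label, cur) in zip(weekly_px, weekly_px[1:]):
    print(f"{label}: {cur - prev:+d}px")
```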

The Strategic Implication

For 2026 planning: the optimization window may need to shift earlier. Content that earns AIO citations 3-4 weeks before peak shopping could establish presence before the real estate gets crowded. Waiting until November may be too late.

Category Expansion: Where Google Leaned In

eCommerce Share of AIOs Grew Significantly

On Black Friday, eCommerce's share of keywords with AIOs grew from 6.8% in 2024 to 10.1% in 2025 — a 49% increase. On Cyber Monday, it grew from 7.7% to 10.2% — a 32% increase.

Google didn't just expand AIOs overall — they specifically expanded coverage of shopping-related queries. eCommerce's share of the AIO pie grew by roughly a third or more.

High-Volume Shopping Keywords

Keywords with 50,000+ monthly search volume that triggered AIOs grew from approximately 1.7% in 2024 to 2.7% in 2025 — a 59% increase in Google's willingness to serve AIOs on high-stakes, high-volume shopping queries.

The Categories That Saw the Biggest Expansion

TV & Home Theater saw the most dramatic growth at +242% for Black Friday and +57% for Cyber Monday. Electronics followed with +75% for Black Friday and +44% for Cyber Monday. Small Kitchen Appliances grew +43% for Black Friday and +42% for Cyber Monday.

These are core gift categories — and exactly where shoppers do the most research before purchasing. Queries like "best 65 inch tv," "ninja vs vitamix," and "top chromebooks" now live in AIO territory.

What Stayed Flat

Apparel saw slight contraction (-4% to -9%) and Home Furnishings declined (-29% to -30%). Google appears more aggressive with AIOs in categories where research and comparison shopping dominate, less aggressive where visual browsing and personal taste drive decisions.

The "No Pullback" Finding

Did Google Step Aside During Peak Shopping?

One theory heading into 2025 was that Google might reduce AIO presence during actual shopping days — letting transactional results take over when users are ready to buy.

The data says otherwise.

Black Friday 2025 Timing Distribution

Keywords with AIOs were distributed almost perfectly across the window: 33.3% the day before, 33.3% on Black Friday itself, and 33.4% the day after. Google maintained consistent AIO presence throughout the shopping window — no pullback during the peak.

Cyber Monday 2025 Timing Distribution

Same pattern: 33.5% the day before, 33.4% on Cyber Monday, and 33.2% the day after. Google isn't temporarily stepping aside for shopping moments anymore. AIOs are a persistent feature of the shopping SERP.

What Google Learned (And What It Means)

The 2024 Testing Phase

Last year's flat pixel heights and moderate expansion suggested Google was still experimenting — testing where AIOs helped users and where they got in the way.

The 2025 Commitment

This year's data shows Google has reached conclusions:

AIOs help shoppers research. The massive Black Friday expansion (research mode) vs. moderate Cyber Monday expansion (purchase mode) shows Google matching AIO presence to intent.

Bigger AIOs don't hurt engagement. The 65% size increase suggests Google's internal metrics support larger AI-generated content blocks for shopping queries.

Category matters. Electronics, appliances, and TV/home theater saw aggressive expansion. Apparel and furnishings didn't. Google is selective about where AIOs add value.

Timing matters less than expected. No significant pullback during peak shopping days means AIOs are now a permanent fixture, not a situational feature.

Strategic Implications for Brands

Two Metrics Now Matter

The old model: track your rank position, measure traffic.

The new model: track two different metrics depending on query intent.

For research queries (Black Friday mindset): Track citation share in AI Overviews and visibility within AIO content. These queries saw the heaviest AIO expansion.

For purchase queries (Cyber Monday mindset): Track traditional rank position, Shopping carousel presence, and PLA performance. These queries saw more conservative AIO growth.

Same content calendar. Different success metrics depending on intent.

The Optimization Window Shifted

2024: Optimize in November, compete during peak shopping.

2025+: Optimize in October (or earlier), establish AIO presence before the ramp-up begins.

Google's 14% pre-holiday expansion suggests that waiting until the shopping season to optimize for AIOs puts you behind brands that started earlier.

Category-Specific Planning

If you're in TV/Electronics/Small Appliances: AIOs are now a major part of your competitive landscape. These categories saw the biggest expansion — TV & Home Theater grew +242%, Electronics +75%, and Small Kitchen Appliances +43%. Plan for AIOs as a persistent SERP feature.

If you're in Apparel/Home Furnishings: AIOs expanded less aggressively — Apparel contracted slightly and Home Furnishings declined significantly. Traditional SEO may still deliver more impact than AIO-specific strategies in these categories.

Technical Methodology

Data Source: BrightEdge AI Catalyst

Analysis Approach:

  • Keywords analyzed at equivalent points around Black Friday and Cyber Monday in 2024 and 2025
  • AI Overview presence tracked by date, industry, and category
  • Pixel height data tracked daily throughout Q4 2024 and Q4 2025
  • Search volume distribution analyzed across keywords with AIOs
  • Category-level breakdowns by L0/L1 taxonomy

Measurement Periods:

  • Black Friday 2024: November 28-29, 2024
  • Cyber Monday 2024: December 1-3, 2024
  • Black Friday 2025: November 27-29, 2025
  • Cyber Monday 2025: November 30 - December 2, 2025

Pixel Height Tracking: Daily averages across tracked keyword set

Key Takeaways

The Expansion Gap: Black Friday saw +106% more keywords with AIOs YoY; Cyber Monday saw +37%. Google expanded 3x more aggressively for research moments than purchase moments.

The Size Shift: AI Overviews grew 65% larger YoY (~826px → ~1,368px). Same rankings now mean less visibility above the fold.

The Ramp-Up Pattern: 2025 showed a 14% increase in AIO pixel height in the 4 weeks before Black Friday. 2024 was flat. Google now deliberately expands heading into peak shopping.

The Category Story: TV & Home Theater (+242%), Electronics (+75%), and Small Kitchen Appliances (+43%) saw the biggest AIO expansion. Core gift categories are now AIO territory.

The No-Pullback Reality: Google maintained consistent AIO presence before, during, and after both holidays. No stepping aside for peak shopping.

The Intent Strategy: Google matches AIO presence to user intent — heavy for research, lighter for purchase. Your metrics should do the same.

Industry Implications:

This research confirms that Google's AI Overview strategy for shopping has moved from experimentation to execution. The 2024 testing phase produced clear conclusions that drove 2025's deliberate expansion.

For SEO and digital marketing professionals, the implications are significant:

The SERP you optimized for last holiday season no longer exists. Position 1 sits 540+ pixels lower than it did in 2024. AIO citation share is now a critical metric for research-phase queries.

Intent segmentation is essential. A single "holiday SEO strategy" no longer makes sense. Research queries and purchase queries require different optimization approaches and different success metrics.

The optimization window moved earlier. Google's pre-holiday ramp-up means establishing AIO presence in October, not November. Brands that wait until peak shopping to optimize are already behind.

Google is committed to AIOs for shopping. The theory that Google would step aside during peak transactional moments has been disproven. AIOs are a permanent fixture of the shopping SERP — plan accordingly for Holiday 2026.

Download the Full Report

Download the full AI Search Report — Black Friday vs. Cyber Monday 2024 vs. 2025: How Google's AI Overview Strategy Evolved in One Year


Published on December 10, 2025

Who Does AI Trust When You Search for Deals? Google vs. ChatGPT Citation Patterns Reveal Different Shopping Philosophies

Using BrightEdge AI Catalyst, we analyzed tens of thousands of eCommerce prompts across Google AI Overviews and ChatGPT during the holiday shopping season. The citation patterns reveal two fundamentally different approaches to helping users shop.

Data Collected

We analyzed citation sources across both AI platforms to understand:

  • Which domains each AI cites for shopping queries
  • Retailer vs. third-party citation distribution
  • How the same queries produce different source selections
  • The underlying philosophy driving each platform's approach

Key Finding

Google AI Overviews cite retailers only ~4% of the time, leaning heavily on YouTube, Reddit, and editorial sources. ChatGPT cites retailers ~36% of the time — a 9x difference. The reason? Google's AIOs sit above Shopping carousels that handle transactions, so they can focus on research. ChatGPT has to serve both needs in one response.

The Citation Divide

Google AI Overviews: "The Crowd"

Top cited sources for shopping queries: 

  1. YouTube
  2. Reddit
  3. Quora
  4. Amazon
  5. Facebook

Retailer share of citations: ~4%

Google's AI Overviews lean heavily on user-generated content and editorial sources. YouTube product reviews, Reddit discussions, and Quora Q&A threads dominate the citation landscape for eCommerce queries.

ChatGPT: "The Store"

Top cited sources for shopping queries:

  1. Amazon
  2. Target
  3. Walmart
  4. Home Depot
  5. Best Buy

Retailer share of citations: ~36%

ChatGPT's citations skew heavily toward retailers and product pages. Major marketplaces and retail sites appear at the top, with editorial and UGC sources playing a smaller role.

Same Query, Different Sources

Example: "Costco TV Sale"

When users search for a specific retailer's deals, Google still routes them through community discussion first. ChatGPT goes directly to the source.

Example: "Best Immersion Blender"

Google AI Overview cites:

  • Serious Eats
  • Food & Wine
  • The Spruce Eats
  • CNET
  • YouTube

ChatGPT cites:

  • Amazon
  • Consumer Reports

For research-phase queries, Google cites editorial reviews and recipe sites. ChatGPT cites the marketplace where you can buy immediately.

Why the Difference? Context Matters

Google's Advantage: The Full SERP

Google AI Overviews don't exist in isolation. They sit atop a full search results page that includes:

  • Shopping carousels with product listings
  • Product listing ads (PLAs)
  • Organic retailer results
  • Price comparisons

Because the transactional elements already exist below the AI Overview, Google can afford to make the AIO purely informational. The citation strategy reflects this: cite the crowd for research, let Shopping results handle the purchase.

ChatGPT's Reality: One Response

ChatGPT doesn't have a Shopping carousel underneath. The AI response IS the entire experience. This forces ChatGPT to serve both needs simultaneously:

  • Provide helpful product information
  • Give users a path to purchase

The higher retailer citation rate reflects this necessity — ChatGPT has to be both the research assistant AND the shopping guide.

What Each AI Prioritizes

Google AI Overviews Prioritize:

User-Generated Content

  • YouTube product reviews and unboxings
  • Reddit community discussions (r/BuyItForLife, r/Appliances, brand-specific subreddits)
  • Quora Q&A threads

Editorial Reviews

  • Serious Eats, Food & Wine, The Spruce Eats (kitchen products)
  • Rtings, CNET, Consumer Reports (electronics)
  • Wirecutter, TechRadar, Tom's Guide (tech products)

Why: Google assumes users in the AI Overview are still researching. The shopping infrastructure below handles conversion.

ChatGPT Prioritizes:

Major Retailers

  • Amazon (dominant across categories)
  • Target, Walmart (general merchandise)
  • Home Depot, Lowe's (home improvement)
  • Best Buy (electronics)

Brand Sites

  • Manufacturer product pages
  • Brand-specific information

Editorial (Secondary)

  • Consumer Reports
  • Category-specific review sites

Why: ChatGPT needs to provide a complete answer, including where to buy. Retailer citations serve that need directly.

The Philosophy Behind the Citations

Google's Approach: "What Do Real People Say?"

Google's citation pattern reveals a philosophy of objectivity through community consensus. By citing YouTube reviewers, Reddit discussions, and editorial reviews, AI Overviews position themselves as aggregators of authentic opinion.

This makes strategic sense: Google doesn't need to push users toward purchase — the Shopping results, PLAs, and retail organic listings already do that. The AIO can focus purely on being helpful for research.

ChatGPT's Approach: "Where Can You Get This?"

ChatGPT's citation pattern reveals a philosophy of utility through directness. By citing retailers and product pages, ChatGPT aims to shorten the path from question to purchase.

This also makes strategic sense: without a commercial infrastructure below the response, ChatGPT serves users best by including purchase pathways directly in the answer.

Strategic Implications for Brands

Understanding Visibility Across Platforms

The same content can surface differently depending on which AI a user asks. Brands should monitor visibility across both platforms:

On Google AI Overviews, look for:

  • YouTube content citations
  • Reddit/community mentions
  • Editorial review coverage
  • UGC and social proof signals

On ChatGPT, look for:

  • Retail listing citations
  • Product page references
  • Marketplace presence
  • Brand site mentions

Content That Works Across Both

Quality content surfaces on both platforms — but in different contexts:

  • Product reviews on YouTube → Cited by Google AIOs
  • Strong Amazon listings → Cited by ChatGPT
  • Editorial coverage → Cited by both (Google more frequently)
  • Brand product pages → Cited by ChatGPT primarily

The Monitoring Framework

Rather than building separate strategies, brands should track where their content appears:

| Content Type | Google AIO Visibility | ChatGPT Visibility |
|---|---|---|
| YouTube reviews | High | Low |
| Reddit presence | High | Low |
| Editorial coverage | High | Medium |
| Retail listings | Low | High |
| Product pages | Low | High |

Your content surfaces differently on each platform. Use AI search tracking tools to monitor:

  • Which of your pages get cited on Google AIOs
  • Which get cited on ChatGPT
  • How citation patterns shift over time
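The methodology section below describes classifying cited domains by type (retailer, UGC/social, editorial, brand). A minimal sketch of that step might look like the following — the domain lists and sample citation pulls are illustrative assumptions, not BrightEdge's actual classifier or data:

```python
from collections import Counter

# Illustrative domain-to-type map; a real classifier would cover far more domains.
DOMAIN_TYPES = {
    "amazon.com": "retailer", "target.com": "retailer", "walmart.com": "retailer",
    "homedepot.com": "retailer", "bestbuy.com": "retailer",
    "youtube.com": "ugc", "reddit.com": "ugc", "quora.com": "ugc",
    "seriouseats.com": "editorial", "cnet.com": "editorial",
    "consumerreports.org": "editorial",
}

def retailer_share(citations: list[str]) -> float:
    """Fraction of cited domains classified as retailers."""
    types = Counter(DOMAIN_TYPES.get(d, "other") for d in citations)
    return types["retailer"] / len(citations)

# Hypothetical citation pulls for the same shopping prompt on each platform:
google_aio = ["youtube.com", "reddit.com", "quora.com", "cnet.com", "amazon.com"]
chatgpt = ["amazon.com", "target.com", "walmart.com", "consumerreports.org"]

print(f"Google AIO retailer share: {retailer_share(google_aio):.0%}")  # 20%
print(f"ChatGPT retailer share:    {retailer_share(chatgpt):.0%}")     # 75%
```

Aggregated over tens of thousands of prompts, the same per-platform share calculation produces the ~4% vs. ~36% divide reported above.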

Understand What Drives Citations

For Google AIO visibility:

  • Invest in YouTube product content
  • Engage authentically in Reddit communities
  • Pursue editorial review coverage
  • Create content that answers "which should I buy?" questions

For ChatGPT visibility:

  • Optimize retail listings and product pages
  • Ensure strong marketplace presence
  • Keep product information current and comprehensive
  • Maintain accurate brand site content

Let Each Platform Do Its Job

Google cites the crowd because Shopping results handle transactions. ChatGPT cites retailers because it has to do both. Your content strategy doesn't need to force either platform to behave differently — optimize your content, and each AI will use it where it fits.

Coming Next

This analysis focused on citation patterns — which sources each AI trusts for shopping queries.

Up next: Our full Black Friday and Cyber Monday post-mortem, examining how Google's AI Overview strategy evolved from 2024 to 2025 and what it signals for holiday shopping in 2026.

Technical Methodology

Data Source: BrightEdge AI Catalyst

Analysis Approach:

  • eCommerce prompts analyzed across Google AI Overviews and ChatGPT
  • Citation sources extracted and categorized by domain
  • Domains classified by type (retailer, UGC/social, editorial, brand)
  • Same prompts compared across platforms where possible
  • Week-over-week citation changes tracked

Measurement Periods:

Holiday Shopping Season 2025

Key Takeaways

  • The 4% vs. 36% Divide: Google AIOs cite retailers ~4% of the time; ChatGPT cites retailers ~36% — a 9x difference
  • The Source Hierarchy: Google leads with YouTube/Reddit/Quora; ChatGPT leads with Amazon/Target/Walmart
  • The Context Explanation: Google can focus on research because Shopping results handle purchase; ChatGPT has to do both
  • The Same-Query Difference: Identical searches produce different source selections based on each platform's philosophy
  • The Strategic Reality: Brands should monitor visibility on both platforms and understand where their content surfaces

Industry Implications:

This research reveals that "AI search optimization" isn't a single discipline — it's platform-specific based on how each AI approaches user intent.

Google's AI Overviews operate as a research layer above a transactional infrastructure. ChatGPT operates as a complete response that must serve multiple needs simultaneously. These different contexts drive fundamentally different citation strategies.

For brands, the implication is clear: the same content can win on both platforms, but understanding where and why each AI cites sources helps you measure success accurately. Track your visibility across both scoreboards, and optimize content that serves users at every stage of the journey.

Download the Full Report

Download the full AI Search Report — Who Does AI Trust When You Search for Deals? Google vs. ChatGPT Citation Patterns Reveal Different Shopping Philosophies


Published on December 03, 2025