Where AI Engines Agree on Brands. And Where They Don't.

BrightEdge AI Catalyst analysis across five AI search engines shows that the brands AI engines recommend converge tightly in some categories and diverge sharply in others. Where consumers transact, the engines agree. Where consumers research, the engines develop distinct preferences for which brands belong in the answer.

BrightEdge AI Catalyst analysis reveals that ChatGPT, Perplexity, Gemini, Google AI Mode, and Google AI Overviews recommend a strikingly similar set of brands at the aggregate level. Pairwise overlap in top-named brands across the dataset falls in a tight 36% to 55% band. But that aggregate average masks meaningful category-level variance. In retail, travel, and tech, brand agreement across all five engines runs between 88% and 97%. In healthcare and finance, it drops to 60% and 71%. The same engines that converge on the same retailers, hotels, and consumer electronics brands disagree substantially about which hospitals, financial institutions, and authoritative publishers to recommend.

This is the follow-up to our recent BrightEdge AI Catalyst research on cross-engine source and brand convergence. The original study found that AI engines pull from wildly different sources but recommend the same brands. That finding holds at the dataset level. This installment splits the same dataset by category to show where the agreement is real, where the divergence concentrates, and what each engine's editorial signature looks like inside the categories where the engines part ways.

Data Collected

| Data Point | Description |
| --- | --- |
| Brand mention share by engine | Share of each engine's total brand mentions directed to each named brand, across all analyzed prompts |
| Category-level brand overlap | Pairwise top-30 overlap in named brands across all five engines, calculated within each category |
| Brand category classification | Each named brand classified by industry vertical: retail, travel, tech, finance, healthcare, education, news and editorial |
| Same-brand cross-engine comparison | Share of mentions for individual brands tracked across all five engines to surface dramatic treatment differences |
| Engine concentration by category | Share of voice held by leading brands within each category, by engine |
| Brand absence patterns | Brands present in one engine's top 15 within a category but missing from another engine's top 100 within the same category |

| Data Point | Description |
| --- | --- |
| Pairwise brand overlap range | Highest and lowest overlap percentages across all 10 engine pairs, by category |
| Engine editorial signature | Pattern of brand types each engine favors within high-divergence categories: associations, publishers, institutional players, editorial media |
| Average brand rank by category | Position at which engines name brands within each category, indicating engine commitment to a shortlist versus a broad list |
| Industry coverage | Analysis spans retail, travel, tech, finance, healthcare, education, and news and editorial |

Key Finding

The aggregate brand convergence finding is real, but it is not evenly distributed across categories. When the same dataset is split by industry vertical, the spread of pairwise top-30 brand overlap looks very different from category to category. Retail shows 97% average pairwise brand agreement across all five engines. Travel shows 94%. Tech shows 88%. Finance drops to 71%. Healthcare drops to 60%, with a low-to-high range of 40% to 77% across engine pairs. The categories where consumers transact directly with brands (retail, travel, tech) draw from a smaller universe of well-established brands, and the engines pick from that same shared pool. The categories where consumers conduct deep informational research (healthcare, finance) have a much wider universe of credible brands, and each engine has developed its own preferences for which brands belong in the answer. The implication for brand strategy is that one playbook works across all five engines in transactional categories, while research-driven categories require engine-by-engine measurement and category-specific positioning.

Brand Agreement Sits at Different Points by Category

The pairwise top-30 brand overlap across all five engines varies materially by category. Where the buying decision is the dominant query intent, the engines converge. Where the research process is the dominant query intent, they diverge.

| Category | Pairwise Brand Overlap (Avg) | Range |
| --- | --- | --- |
| Retail | 97% | 92% to 100% |
| Travel | 94% | 90% to 100% |
| Tech | 88% | 83% to 97% |
| Finance | 71% | 60% to 90% |
| Healthcare | 60% | 40% to 77% |

In retail, every engine pulls from a small, stable universe of major retailers and consumer brands. Amazon, Walmart, Target, Best Buy, Home Depot, and a handful of category leaders dominate every engine's top 30, leaving little room for divergence. Travel behaves the same way: a tight cluster of online travel agencies, hotel groups, and airlines (Expedia, Booking, Marriott, Hilton, Delta, United) carries across all five engines. Tech sits one step behind at 88% because the pool of relevant tech brands is slightly larger and the engines start to differentiate on which platforms, tools, and services they elevate. Finance and healthcare are the two categories where the divergence becomes the story.
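As a rough sketch, the pairwise top-N overlap metric behind these category numbers can be computed as follows. The engine names and brand lists here are illustrative placeholders, not the study's data, and the exact normalization BrightEdge uses is an assumption:

```python
from itertools import combinations

# Hypothetical top-brand lists per engine (illustrative only, not study data).
# In the actual analysis these would be top-30 lists within one category.
top_brands = {
    "ChatGPT":    ["Amazon", "Walmart", "Target", "Best Buy"],
    "Perplexity": ["Amazon", "Walmart", "Target", "Home Depot"],
    "Gemini":     ["Amazon", "Walmart", "Best Buy", "Costco"],
}

def pairwise_overlap(list_a, list_b):
    """Share of brands two top-N lists have in common,
    normalized by the shorter list's length."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / min(len(a), len(b))

# One overlap value per engine pair; 5 engines would yield 10 pairs.
overlaps = {
    (e1, e2): pairwise_overlap(top_brands[e1], top_brands[e2])
    for e1, e2 in combinations(top_brands, 2)
}
avg_overlap = sum(overlaps.values()) / len(overlaps)
```

Running this per category (rather than on the aggregate dataset) is what separates the 97% retail figure from the 60% healthcare figure.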

Same Brand. Same Study. Different Treatment.

Inside the high-divergence categories, the same brand often receives dramatically different treatment from one engine to another. The gap is not subtle. Several flagship brands show share-of-mention differences of 9x to 22x across engines.

Mayo Clinic. Mayo Clinic is the most universally recognized healthcare brand in the dataset. Gemini directs 13.1% of its healthcare brand mentions to Mayo Clinic. Google AI Overviews directs only 1.5%. The same flagship authority commands roughly 9x more share on Gemini than on AI Overviews.

Cleveland Clinic. Cleveland Clinic captures 6.5% of Perplexity's healthcare brand mentions. On ChatGPT, that figure drops to 0.3%, a roughly 22x gap. Two engines querying the same healthcare prompts treat the same major medical institution very differently.

Goldman Sachs. Goldman Sachs ranks in the top 10 most-mentioned finance brands on Google AI Mode. On ChatGPT, Gemini, and Perplexity, Goldman Sachs does not crack the top 100. Institutional banking presence on AI Mode is materially different from institutional banking presence on the other engines.

Nasdaq. Nasdaq holds 4.5% share of finance brand mentions on ChatGPT. On AI Mode, that figure drops to 0.4%, an 11x gap. Engines that draw from exchange and regulatory sources elevate Nasdaq strongly. Engines that draw from institutional banking sources do not.

Bloomberg. Bloomberg captures 2.9% of finance brand mentions on Gemini. On AI Mode, Bloomberg has zero presence in the top 100. Editorial finance brands are heavily represented on Gemini and largely absent from AI Mode.

These gaps are not measurement noise. They are systematic patterns that reflect the editorial signature each engine has developed inside high-divergence categories.
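The treatment gaps described above reduce to a simple share-of-mention calculation compared across engines. A minimal sketch, using made-up mention counts rather than the study's data:

```python
from collections import Counter

# Made-up mention counts per engine (illustrative only, not study data).
healthcare_mentions = {
    "Gemini":       Counter({"Mayo Clinic": 131, "NIH": 95, "WebMD": 20}),
    "AI Overviews": Counter({"Mayo Clinic": 15, "WebMD": 40, "NIH": 30}),
}

def share_of_mentions(counter, brand):
    """A brand's share of one engine's total brand mentions."""
    total = sum(counter.values())
    return counter[brand] / total if total else 0.0

gemini_share = share_of_mentions(healthcare_mentions["Gemini"], "Mayo Clinic")
aio_share = share_of_mentions(healthcare_mentions["AI Overviews"], "Mayo Clinic")

# Cross-engine treatment gap: how many times more share one
# engine directs to the same brand than another does.
gap = gemini_share / aio_share
```

The 9x and 22x figures in this section are ratios of exactly this kind, computed on the real mention counts.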

Each Engine Has a Personality Inside High-Divergence Categories

Inside the categories where the engines part ways, each engine favors a recognizable type of brand. The patterns hold up across the dataset and offer a usable mental model for predicting how each engine will treat brands in any high-divergence category.

ChatGPT favors specialty associations and exchanges. ChatGPT's healthcare top brands include the American Academy of Orthopaedic Surgeons (AAOS), the American Urological Association (AUA), the American College of Gastroenterology (ACG), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP). Its finance top brands lead with Nasdaq, the SEC, and major brokerages (Fidelity, Schwab, Vanguard). The pattern is consistent: ChatGPT elevates professional associations, regulatory bodies, and exchanges that carry institutional weight within their specific subcategory.

Perplexity leans on consumer health publishers and trading research tools. Perplexity is the only engine where consumer health publishers (Healthline, WebMD, Medical News Today) appear prominently in healthcare mentions. In finance, Perplexity uniquely elevates trading research tools like Stock Analysis, Investing.com, and Marketbeat alongside the standard exchanges. The pattern reflects Perplexity's broader posture as a research-oriented engine that surfaces both institutional and consumer-facing reference sources.

Gemini hyper-concentrates on flagship authorities and editorial finance. Gemini directs 13.1% of healthcare brand mentions to Mayo Clinic and 9.5% to NIH, two brands that account for nearly a quarter of all Gemini healthcare mentions. In finance, Gemini elevates Bloomberg, Wall Street Journal, Reuters, Forbes, and Investopedia, the editorial finance media set. The pattern is consistent: Gemini behaves like a concentrated authority recommender that leans heavily on a small set of flagship brands rather than producing broad lists.

Google AI Mode quietly favors institutional banks. AI Mode's finance top brands lead with JPMorgan, Wells Fargo, UBS, Goldman Sachs, Barclays, and Citigroup. The institutional banking presence is unique to AI Mode and is not mirrored on any other engine in the dataset. This pattern is not visible in aggregate share-of-voice reports, but it is clearly visible when finance brand mentions are isolated by engine.

Google AI Overviews spreads thin across providers. AI Overviews has the lowest brand concentration in high-divergence categories. No single brand commands more than 1.5% of healthcare mentions, and no finance brand exceeds 0.8%. The engine's top-15 lists in these categories include a wider variety of brands at lower share-of-voice levels, consistent with AIO's broader UGC-first sourcing posture.
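Concentration differences like Gemini's flagship weighting versus AI Overviews' flat distribution can be quantified with a top-k share or a Herfindahl-style index. A sketch on made-up share distributions (not the study's data):

```python
def top_share(shares, k=1):
    """Combined share of voice held by the k largest brands."""
    return sum(sorted(shares.values(), reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman-style index: the sum of squared shares.
    Higher values mean mentions concentrate in fewer brands."""
    return sum(s * s for s in shares.values())

# Illustrative distributions: one engine concentrates its mentions
# on a few flagship brands, the other spreads them evenly.
concentrated = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}
flat = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}
```

Under either metric, a Gemini-style distribution scores higher than an AI Overviews-style one, which is the pattern the engine personality profiles describe.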

What Marketers Need to Know

Brand agreement is real, but it is category-dependent. The aggregate finding that AI engines mostly agree on brands holds up. But the agreement is not evenly distributed. Retail and travel converge at 94% to 97% pairwise overlap. Healthcare and finance drop to 60% and 71%. Where your category sits on that spectrum determines whether one strategy works across all five engines or whether you need to look engine by engine.

Where consumers buy, engines converge. Where consumers research, engines diverge. Transactional categories pull from a small pool of well-established brands, and the engines pick from the same pool. Research-heavy categories have a much wider universe of credible brands, and each engine has developed its own preferences inside that universe. The strategic implication is straightforward: a single AI search strategy is portable across engines if you operate in retail, travel, or tech. If you operate in healthcare, finance, or other research-driven categories, you need to examine how each engine treats your brand.

Each engine has a recognizable personality inside high-divergence categories. ChatGPT favors specialty associations and exchanges. Perplexity favors consumer health publishers and trading research tools. Gemini favors flagship authorities and editorial finance media. AI Mode favors institutional banks. AI Overviews spreads thin across providers. Knowing the editorial signature of the engines that matter to your buyers is half the work of building visibility in a high-divergence category.

Audit by engine within your category, not just in aggregate. The aggregate share-of-voice number can hide a lot if you are operating in a high-divergence category. A flagship competitor "missing" from one engine but dominant on another is not a measurement error. It is that engine's editorial signature in your space, and it is actionable. Brand teams operating in healthcare, finance, or any research-driven category should be measuring share of voice engine by engine and treating the divergence as a strategic input rather than a reporting inconvenience.

Match your authority strategy to the engines that matter. If your buyers rely heavily on Gemini, your strategy should prioritize being covered by the flagship authorities and editorial media that Gemini elevates. If your buyers use ChatGPT, your priority is presence in the specialty associations, regulatory bodies, and exchanges relevant to your subcategory. If your buyers use AI Mode in finance, institutional banking visibility is the lever that moves the needle. The three-layer source framework from the prior study still applies. The category-level engine personalities tell you where to put the emphasis.

A flagship competitor missing from one engine is signal, not noise. When a major brand in your category dominates one engine and is absent from another, that is meaningful information about how each engine has built its editorial signature. Mayo Clinic at 13.1% on Gemini and 1.5% on AIO is not random. Goldman Sachs in AI Mode's top 10 and absent from ChatGPT, Gemini, and Perplexity is not random. These patterns reflect each engine's source layer weighting and editorial posture. They are also the most actionable input for prioritizing PR, content, and authority-building investment.

Technical Methodology

| Parameter | Detail |
| --- | --- |
| Data Source | BrightEdge AI Catalyst |
| Engines Analyzed | ChatGPT, Perplexity, Gemini, Google AI Mode, Google AI Overviews |
| Industries Covered | Retail, travel, tech, finance, healthcare, education, news and editorial |
| Brand Categorization | Each named brand classified into an industry vertical using a domain-level taxonomy |
| Overlap Methodology | Pairwise top-30 brand mention lists compared across all 10 engine pairs within each category |
| Same-Brand Comparison | Share-of-mention values for individual brands tracked across all five engines to identify dramatic treatment differences |
| Engine Personality Analysis | Top-15 brand lists per engine per category analyzed for systematic patterns in brand type and editorial posture |
| Data Cleaning | Citation artifacts attributable to search engine result page disclaimers were removed from Google surfaces to avoid inflation |

Key Takeaways

| Finding | Detail |
| --- | --- |
| Brand agreement is category-dependent | Aggregate pairwise overlap (36% to 55%) masks category-level variance from 60% in healthcare to 97% in retail |
| Transactional categories converge | Retail (97%), travel (94%), and tech (88%) show high pairwise brand agreement across all five engines |
| Research categories diverge | Healthcare (60%) and finance (71%) show meaningfully lower agreement, with healthcare ranging from 40% to 77% across engine pairs |
| Same-brand treatment varies by 9x to 22x | Mayo Clinic (9x), Cleveland Clinic (22x), Nasdaq (11x), Goldman Sachs (top 10 vs. absent), Bloomberg (2.9% vs. zero) |
| ChatGPT favors specialty associations and exchanges | AAOS, AUA, ACG, AAP, Nasdaq, SEC, major brokerages |
| Perplexity favors consumer health publishers and trading research | Healthline, WebMD, Medical News Today, Stock Analysis, Investing.com, Marketbeat |
| Gemini concentrates on flagship authorities and editorial finance | Mayo Clinic, NIH, Bloomberg, WSJ, Reuters, Forbes, Investopedia |
| AI Mode favors institutional banks | JPMorgan, Wells Fargo, UBS, Goldman Sachs, Barclays, Citigroup |
| AI Overviews spreads thin across providers | Lowest brand concentration of any engine in high-divergence categories |
| Audit by engine in research-driven categories | Aggregate share-of-voice hides engine-specific treatment that matters for strategy |


Download the Full Report

Download the full AI Search Report — Where AI Engines Agree on Brands. And Where They Don't.


Published on May 07, 2026