Key Takeaways
  • **Perplexity:** 2 citations
  • **Claude:** 10 citations
  • **ChatGPT:** 6 citations
  • **Gemini:** 14 citations
  • **Total across all platforms:** 32 citations

Progressive Insurance, one of the most recognized brands in American insurance with billions in annual ad spend, earned just 32 total citations across Perplexity, ChatGPT, Claude, and Google Gemini in our AI visibility audit. That is not a typo. A company that saturates television, radio, digital ads, and sports sponsorships managed only 32 mentions when AI engines answered insurance-related questions. Meanwhile, third-party comparison sites like Insurify and MoneyGeek were pulling in more citations than Progressive itself, largely because those sites were publishing the exact comparison content that Progressive never created. The lesson from this AI visibility case study is blunt: brand size and ad budget do not translate into AI search visibility, and if you do not create your own comparison content, someone else will create it for you and capture the citations you should be earning.

Watch the full breakdown: our video walkthrough of this audit on YouTube covers the data in real time.

This case study comes directly from Chapter 10 of the GetCited ebook, where we break down real audit data from real brands to show how AI visibility works in practice. Progressive was selected because it represents a scenario we see constantly: a dominant brand with enormous market presence that assumes its visibility will carry over into AI search. It does not. AI engines do not care about your media budget. They care about whether your content answers the question better than someone else's content. And right now, for Progressive, someone else's content is winning.

The Audit: What We Tested and How

GetCited ran this audit across all four major AI search engines: Perplexity, ChatGPT, Claude, and Google Gemini. We queried each platform with identical insurance-related prompts, the kinds of questions real consumers ask when they are shopping for auto insurance, comparing providers, or trying to understand coverage options. We then tracked every citation, counted how many times Progressive's domain appeared in each engine's responses, and compared those numbers against competing domains.

Here is the platform-by-platform breakdown for Progressive Insurance:

  • **Perplexity:** 2 citations
  • **Claude:** 10 citations
  • **ChatGPT:** 6 citations
  • **Gemini:** 14 citations
  • **Total across all platforms:** 32 citations

Those numbers tell a story on their own, but the story gets sharper when you look at them in context. Progressive is one of the top three auto insurers in the United States. It spends over $2 billion annually on advertising. Its brand recognition is effectively universal among American adults. And yet, when consumers ask AI engines about car insurance, Progressive appears in the response barely a third of the time, and on some platforms, it is almost entirely absent.

Two citations on Perplexity. Two. For a brand that insures more than 27 million drivers.

Why Each Platform Told a Different Story

One of the most valuable things about a multi-engine AI visibility audit is that it reveals how differently each platform selects sources. Progressive's numbers are not just low in aggregate. They are wildly inconsistent across engines, which tells us that the problem is not a single fixable issue. It is a structural gap in how Progressive's content performs in AI retrieval contexts.

Gemini: The Strongest Showing (14 Citations)

Gemini gave Progressive its best numbers, which makes sense when you understand how Gemini works. Google Gemini draws heavily on Google's existing search index. Progressive has invested heavily in traditional SEO for years and ranks well for many insurance-related queries in standard Google search. That existing search authority translates directly into Gemini citations.

But 14 citations is still not dominant. For a brand of Progressive's size, you would expect Gemini to be citing it in almost every insurance-related response. The fact that it does not suggests that even Progressive's traditional search presence has gaps, particularly in the kind of long-form, comparison-oriented content that AI engines favor.

Claude: Respectable but Incomplete (10 Citations)

Claude showed a moderate level of trust in Progressive as a source. Claude tends to favor authoritative, well-established domains, and Progressive certainly qualifies. But Claude also demands information density. It wants pages that thoroughly answer questions, not marketing pages that redirect users to a quote form. Progressive's 10 citations on Claude suggest that some of its content meets that threshold, but a lot of it does not.

The pages Claude cited from Progressive tended to be its informational resources, pages explaining types of coverage, definitions of insurance terms, and educational content. The pages Claude ignored were product pages, landing pages, and anything that read more like an ad than an article. This pattern is consistent with what we see across every Claude audit we run. If your content prioritizes conversion over information, Claude will skip it.

ChatGPT: Middle of the Road (6 Citations)

ChatGPT's 6 citations put Progressive in an awkward middle ground. Not invisible, but not authoritative either. ChatGPT tends to pull from a broad mix of sources and weights brand recognition as one signal among many. Progressive's brand recognition is helping it get some mentions, but the lack of comprehensive, question-answering content on Progressive's own domain is limiting how often ChatGPT reaches for it.

What stood out in the ChatGPT results was that comparison sites were frequently cited alongside or instead of Progressive. When a user asks ChatGPT "is Progressive cheaper than State Farm," ChatGPT does not go to progressive.com to find the answer. It goes to whichever site has already published a detailed, data-backed comparison of those two companies. And Progressive has not published that content. Other sites have.

Perplexity: Nearly Invisible (2 Citations)

Two citations on Perplexity is essentially invisible. Perplexity is a search-first AI engine that aggressively crawls the web and cites sources in real time. It tends to favor recently updated, well-structured, content-rich pages. Progressive's near-absence from Perplexity results tells us something specific: the content Progressive publishes does not match what Perplexity's retrieval system is looking for.

Perplexity rewards pages that are structured for extraction. Clear headings, direct answers in the first paragraph, schema markup, comprehensive coverage of a topic from multiple angles. Progressive's web content is largely built around driving users to get a quote, not around answering the informational queries that Perplexity surfaces. That design choice is understandable from a conversion standpoint, but it is costing Progressive visibility in the AI search engine that cites the most sources per response.

The Real Problem: Third-Party Sites Are Telling Progressive's Story

Here is where this AI visibility case study gets uncomfortable for Progressive and any brand in a similar position.

When we looked at which domains were actually earning citations for Progressive-related queries, a pattern jumped out immediately. Sites like Insurify, MoneyGeek, NerdWallet, and similar comparison platforms were getting cited more than Progressive itself. These sites were the ones AI engines turned to when users asked questions like "Progressive vs State Farm," "is Progressive good for young drivers," or "cheapest car insurance companies."

The reason is simple. Those comparison sites had already published detailed, structured, data-driven content comparing Progressive to its competitors. Progressive had not.

Think about that for a moment. There is an entire category of high-intent consumer queries that include Progressive's brand name, and Progressive does not own the content that answers those queries. Instead, Insurify publishes "Progressive vs State Farm: Which Is Cheaper in 2026?" and MoneyGeek publishes "Progressive Auto Insurance Review: Pros, Cons, and Alternatives." Those pages are structured, comprehensive, regularly updated, and packed with the kind of comparative data that AI engines love to cite.

Progressive, meanwhile, has no equivalent content on its own domain. If you go to progressive.com and look for a page that honestly compares Progressive to State Farm or GEICO, you will not find one. Progressive's site is built to sell insurance, not to answer the comparative questions that increasingly drive both AI citations and consumer decisions.

This is not unique to Progressive. We see this pattern in almost every AI citation case study we run. Brands assume that being the subject of a query means they will be the cited source. They will not. Being the subject of a query and being the best source for answering that query are two completely different things. AI engines do not care that Progressive is the brand being asked about. They care about which page best answers the question. And right now, the best answers are being published by sites that do not sell insurance at all.

The Comparison Content Gap

The strategic failure here is specific and fixable. Progressive has a comparison content gap. It has not published content that directly addresses how it stacks up against competitors on price, coverage, customer service, or claims experience.

There are reasons brands avoid this kind of content. Legal teams worry about making claims that could be challenged. Marketing teams do not want to acknowledge competitors by name. Brand guidelines say the website should focus on "our story" rather than "us versus them." These concerns are understandable but increasingly irrelevant in an AI search landscape.

Here is the math. When a consumer asks an AI engine "how does Progressive compare to State Farm," one of two things happens:

  1. Progressive has published its own honest, detailed comparison page. The AI engine finds it, evaluates it alongside other sources, and potentially cites Progressive's own data and framing.

  2. Progressive has not published that content. The AI engine finds Insurify's version, MoneyGeek's version, or NerdWallet's version instead. Those sites control the narrative, choose the data points, and frame the comparison however they see fit.

Option 1 gives Progressive a seat at the table. Option 2 gives that seat to someone else. Right now, Progressive is living in Option 2, and the audit data proves it.

This pattern is not limited to "versus" queries. It extends to any question where the user wants an objective evaluation rather than a sales pitch. "Is Progressive worth it?" "What are the downsides of Progressive?" "Does Progressive have good customer service?" These are all queries where third-party sites dominate AI citations because they provide the balanced, evaluative content that AI engines are looking for.

Content Patterns of the Pages That Win Citations

To understand what Progressive would need to create to close this gap, we analyzed the content patterns of the pages that are currently winning AI citations in the insurance vertical. The data points to a clear profile.

Word Count: Average 3,960 Words

The pages earning the most AI citations in insurance are long. Not long for the sake of length, but long because they cover the topic comprehensively. A 500-word "Progressive review" that hits three bullet points and links to a quote form is not going to compete with a 4,000-word breakdown that covers pricing by state, coverage options, customer satisfaction scores, claims process, discounts, and direct comparisons to three or four competitors.

AI engines are trying to synthesize answers from the best available sources. A source that covers a topic from eight angles gives the AI engine more material to work with than a source that covers it from two. The AI engine is more likely to find a relevant passage to cite, more likely to trust the source's depth of coverage, and more likely to return to that source for related queries.

This does not mean every page needs to be 4,000 words. It means the pages targeting high-value, comparison-oriented queries need to be comprehensive enough to serve as a genuine resource. Progressive's current web content does not meet that bar for most insurance comparison queries.

Schema Markup: 76% Use Article Schema, 56% Use FAQ Schema

Structured data is not optional for AI visibility. Among the top-cited pages in our insurance audit, 76% used Article schema markup and 56% used FAQ schema. These are not coincidental numbers.

Article schema tells AI crawlers exactly what a page is: an article with a headline, author, publication date, and defined body of content. FAQ schema goes further, explicitly marking up questions and their answers in a format that AI engines can extract directly. When an AI engine encounters a page with FAQ schema that asks "Is Progressive cheaper than State Farm?" and provides a structured answer, that engine has a pre-formatted question-answer pair it can incorporate into its response with minimal processing.

Progressive's site uses schema markup for its product pages and quote flows, but it largely lacks the Article and FAQ schema that characterizes the pages winning AI citations. This is a technical gap that a development team can fix relatively quickly, but only if the underlying content exists to mark up. Schema on a thin page does not help. Schema on a comprehensive, well-structured article helps a lot.
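To make the FAQ schema pattern concrete, here is a minimal sketch in Python of the JSON-LD structure such a page would embed. The question and answer text are illustrative placeholders, not content from any actual Progressive or comparison-site page.

```python
import json

# Minimal FAQPage JSON-LD sketch following the schema.org vocabulary.
# The question/answer content is hypothetical, for illustration only.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is Progressive cheaper than State Farm?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "It depends on driver profile and state; a real "
                    "comparison page would state its finding here directly."
                ),
            },
        }
    ],
}

# The serialized object is what goes inside a
# <script type="application/ld+json"> tag in the page's <head>.
print(json.dumps(faq_schema, indent=2))
```

The point of the structure is that each question-answer pair is explicitly labeled, so a crawler does not have to infer where the answer starts and ends in the surrounding prose.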

Content Structure: Direct Answers in the First Paragraph

Another consistent pattern among top-cited pages is that they front-load the answer. The first paragraph of a high-performing page typically contains a direct, concise answer to the primary question the page addresses. The rest of the page expands on that answer with data, context, comparisons, and nuance.

This structure matters because AI engines extract passages from pages, and the passages they extract tend to come from the beginning of the content. A page that buries its key finding in paragraph seven is less likely to have that finding surfaced by an AI engine than a page that states it upfront.

This is the same principle that has driven journalism for over a century: lead with the news. AI engines have adopted this preference because it works. The inverted pyramid structure, where the most important information comes first and supporting details follow, maps perfectly onto how AI retrieval systems identify and extract relevant content.

What Progressive Should Do: A Practical Roadmap

Based on the audit data from this AI citation case study, here is what Progressive would need to do to close its AI visibility gap. These recommendations apply to any brand in a similar position.

1. Publish Comparison Content on progressive.com

This is the single highest-impact action. Progressive needs to create detailed, honest, data-backed comparison pages that address the queries consumers are already asking. "Progressive vs State Farm" is the obvious starting point, but the list extends to every major competitor: GEICO, Allstate, USAA, Liberty Mutual, and others.

Each comparison page should be at least 3,000 words and cover pricing, coverage options, customer satisfaction, claims experience, discounts, and availability by state. The tone should be informative, not promotional. The data should be sourced and current. The page should use Article schema and FAQ schema.

Will this feel uncomfortable for a brand that prefers to talk about itself rather than its competitors? Probably. But the alternative is letting Insurify and MoneyGeek control the narrative indefinitely. Progressive has more data about its own products, pricing, and customer experience than any third-party site ever will. Using that data to create genuinely useful comparison content is not a risk. It is an opportunity that Progressive is currently handing to its competitors.

2. Build Out Educational Content That Answers Real Questions

Beyond comparison pages, Progressive needs more content that answers the informational queries driving AI citations. Questions like "how much does car insurance cost for a 25-year-old," "what does full coverage insurance include," and "how to lower your car insurance rate" are all queries where AI engines cite sources extensively. Progressive has the expertise and data to create the definitive answers to these questions, but it has not done so.

The content should be structured for AI extraction: clear headings, direct answers, supporting data, FAQ sections, and schema markup throughout. Every page should be built with the assumption that an AI engine will read it, extract passages from it, and potentially cite it in a response.

3. Fix the Perplexity Problem

Two citations on Perplexity is a fixable problem, but it requires understanding what Perplexity values. Perplexity favors recently updated content, clear page structure, and content that explicitly answers specific questions. Progressive should ensure its key pages are updated regularly with fresh data, that its site structure allows Perplexity's crawler to access and index content efficiently, and that its robots.txt file is not inadvertently blocking PerplexityBot.

A GetCited audit can identify the specific technical barriers that may be suppressing Progressive's Perplexity visibility. In many cases, the fix is as simple as updating crawler permissions and adding Last-Modified headers to key pages.
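As a sanity check on the crawler-permission side, a site owner can parse their own robots.txt and confirm that PerplexityBot is allowed to reach informational pages. A minimal sketch using Python's standard library, with a hypothetical robots.txt and example URLs rather than Progressive's actual file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: AI crawler allowed everywhere,
# quote funnel blocked for everyone else.
robots_txt = """\
User-agent: PerplexityBot
Allow: /

User-agent: *
Disallow: /quote/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Verify the AI crawler can reach an informational page.
allowed = rp.can_fetch(
    "PerplexityBot", "https://example.com/resources/coverage-basics"
)
print("PerplexityBot allowed:", allowed)
```

In practice you would fetch the live file with `RobotFileParser.set_url(...)` and `read()` instead of parsing a string, but the check is the same: if `can_fetch` returns False for your key educational pages, the crawler never sees them, and no amount of content quality will earn a citation.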

4. Monitor Citation Performance Across All Four Engines

A single audit is a snapshot. Progressive needs ongoing monitoring to track whether its content changes are translating into improved citations. Each AI engine updates its models and retrieval systems on its own schedule, so citation performance can shift without warning. Recurring audits through GetCited would allow Progressive to track its citation rate, competitive rank, and platform-specific performance over time and catch regressions before they compound.

5. Reclaim the Brand Narrative

Every day that Progressive does not publish its own comparison content is a day that Insurify, MoneyGeek, and NerdWallet get to frame Progressive's story for AI engines. Those sites are not hostile to Progressive, but they are not Progressive. They choose which data points to highlight, which competitors to compare against, and how to frame strengths and weaknesses. Progressive should want to own that framing.

This goes beyond content marketing. It is a brand strategy issue. In a world where AI engines answer consumer questions by synthesizing information from the open web, the brands that publish the most comprehensive, authoritative information about themselves and their competitive landscape will control how AI engines talk about them. Brands that stay silent will have their stories told by others.

What This Case Study Means for Your Brand

Progressive is not an outlier. It is a pattern. In audit after audit, GetCited finds that large, well-known brands with massive advertising budgets and strong traditional SEO perform surprisingly poorly in AI citation tracking. The reasons are almost always the same:

They build websites for conversion, not information. Product pages, quote forms, and landing pages do not earn AI citations. Information-dense, question-answering content does.

They avoid comparison content. Whether out of legal caution or brand strategy, most major brands refuse to publish content that directly compares them to competitors. Third-party sites fill that gap and capture the citations.

They ignore platform-specific differences. A brand might have decent visibility on Gemini because of its existing Google search rankings but be nearly invisible on Perplexity because its content structure does not match what Perplexity's crawler is looking for. Without a multi-engine audit, they never discover the gap.

They assume brand size equals AI visibility. It does not. AI engines do not weight citations by ad spend, revenue, or market share. They weight citations by content quality, structure, comprehensiveness, and relevance to the specific question being asked.

If any of these patterns sound familiar, your brand is probably in the same position as Progressive. The good news is that the fix is well understood. The content patterns that earn AI citations are measurable and replicable. The technical requirements are documented. The platform differences are known. What most brands lack is not a strategy but visibility into the problem itself.

That is exactly what GetCited's AI visibility audits are designed to provide. You cannot fix what you cannot see, and most brands cannot see their AI citation performance at all. The first step is measurement. Everything else follows from there.

The Bigger Lesson: You Cannot Outsource Your Own Story

The most important takeaway from this Progressive Insurance AI visibility study is not about schema markup or word count or Perplexity's crawler preferences. It is about who controls the narrative.

Every brand operates in a competitive landscape where consumers ask questions. Those questions used to flow through Google and end up on your website. Now they flow through AI engines and get answered in the response itself, with citations pointing to whichever source provided the best answer.

If you do not publish the best answer to questions about your own brand, someone else will. And that someone else will get cited. They will shape how AI engines describe you, compare you, and recommend you. They will control the narrative, and you will not even know it is happening unless you run an audit.

Progressive Insurance is a $50 billion company with effectively unlimited marketing resources. It could create the best insurance comparison content on the internet if it chose to. The fact that it has not, and that mid-size comparison sites are earning more AI citations as a result, is a warning to every brand in every industry. Size does not protect you in AI search. Content does.


Frequently Asked Questions

What is an AI visibility case study?

An AI visibility case study is an in-depth analysis of how a specific brand or domain performs across AI search engines when users ask relevant questions. Unlike a traditional SEO audit that measures keyword rankings on Google, an AI visibility case study tracks citations across multiple AI platforms, in this case Perplexity, ChatGPT, Claude, and Google Gemini, to understand how often, where, and why a brand gets mentioned in AI-generated responses. The Progressive Insurance AI visibility case study examined here uses real audit data from GetCited to show how a massive brand with strong traditional search presence can still underperform in AI citations due to content gaps and structural issues.

Why does Progressive Insurance get so few AI citations despite being a huge brand?

Progressive's low citation count (32 total across four platforms) comes down to a content problem, not a brand recognition problem. AI engines do not cite brands based on how well-known they are. They cite the pages that best answer the specific question a user asked. Progressive's website is built primarily for conversion, with product pages, quote funnels, and marketing content that AI engines pass over in favor of more informational sources. Most critically, Progressive has not published comparison content addressing queries like "Progressive vs State Farm," so third-party sites that have published that content are capturing the citations instead. Brand size and advertising spend do not influence AI citation behavior the way they influence traditional brand awareness metrics.

Which AI platform cited Progressive the most and why?

Google Gemini cited Progressive 14 times, making it the strongest platform for Progressive by a wide margin. This is because Gemini's citation behavior tracks closely with Google's existing search index. Progressive has strong traditional SEO performance and ranks well for many insurance queries in standard Google search, which gives it a built-in advantage on Gemini. In contrast, Perplexity cited Progressive only twice because Perplexity favors recently updated, well-structured, content-rich pages that answer specific questions directly, and most of Progressive's web content does not match that profile. This platform-level variation is why a multi-engine audit matters. A brand checking only one AI engine would get a misleading picture of its overall AI visibility.

How can brands prevent third-party sites from dominating their AI citations?

The most effective approach is to publish the content yourself before third parties do, or to create better versions of content that third parties have already published. For comparison queries (Brand A vs Brand B), you need detailed, honest, data-driven comparison pages on your own domain. For evaluative queries ("is Brand A worth it" or "Brand A pros and cons"), you need balanced, comprehensive content that addresses both strengths and weaknesses. The technical layer matters too: use Article and FAQ schema markup, structure content with clear headings, put the key finding in the first paragraph, and aim for comprehensive coverage around 3,500 to 4,000 words. You cannot stop third-party sites from writing about you, but you can make sure your own content is the best available source when an AI engine goes looking for answers.

How do I find out if my brand has the same AI visibility problem as Progressive?

Run a multi-engine AI visibility audit. GetCited audits your brand across Perplexity, ChatGPT, Claude, and Google Gemini simultaneously, tracking citation counts, competitive rankings, and platform-specific performance. The audit will show you exactly which queries your competitors are getting cited for and you are not, which platforms treat you well and which ignore you, and where third-party sites are capturing citations that should be going to your domain. The Progressive case study in this article came directly from this kind of audit. Without the data, Progressive would have no way to know that Insurify and MoneyGeek were outperforming it in AI citations. Most brands are in the same blind spot until they measure it.