According to BrightEdge Research, for every 100 visits a site receives from human organic search, it receives an average of 88 from AI agents. The takeaway is that AI agents have crossed from background noise into a channel that rivals human search in volume. At the current growth rate, agent requests could surpass human ones before the end of this year. And approximately 95% of that agent activity already comes from OpenAI.
For digital marketers, that number has a practical implication that goes beyond strategy. If you haven't already had conversations with your IT, dev, or ops teams about AI agents, there's a good chance those agents are being treated the same as any other bot -- meaning they may be throttled, blocked, or otherwise restricted from reaching your content. The systems that control this sit outside marketing's control, and closing that gap sooner rather than later is no longer optional.
Before that conversation can happen, it helps to understand what these agents are actually doing when they show up at your site. Some are there to learn about your brand over time, building the AI model's understanding of your products, your expertise, and what you are a trusted source for. And some are acting on behalf of a specific customer at a specific moment, trying to retrieve the information that person needs right now to make a decision.
Those jobs are different, and the infrastructure decisions that affect one do not necessarily affect the others. That is why a blanket policy applied to all of them can create problems, usually invisibly, and usually at the worst possible moment in the customer journey. For example, if a policy blocks an agent that is trying to gather pricing information to help a customer build a short list of services to consider, your brand could be left out of a new RFP entirely.
The Three Agents Are Not the Same
When you talk to your IT team about AI agents, the first thing to clarify is that the agents visiting your site serve different purposes and need to be managed differently.
BrightEdge data shows that only 19% of enterprise sites have any specific directives for ChatGPT-related agents. The rest are applying legacy crawler policies that were never designed with AI agents in mind. Among sites that do have directives, the breakdown looks like this:
77% block GPTBot, the training agent
Only 21% have addressed OAI-SearchBot, the search agent
38% have a directive for ChatGPT-User, the user-facing retrieval agent
OAI-SearchBot determines whether your content surfaces in ChatGPT search results. If it cannot access your pages, you are less visible when users query ChatGPT for topics where you should appear.
ChatGPT-User is the most time-sensitive of the three. It operates on behalf of a specific person at a specific moment, retrieving current information to answer a question they are asking right now. When a user asks ChatGPT about your product or service, this agent visits your site to get the answer. If it gets blocked or encounters an error, that user gets an incomplete response.
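If the decision is to allow these agents, the robots.txt side of the fix is small. A sketch follows; the user-agent tokens are the ones OpenAI publishes, and the Disallow path is a placeholder standing in for whatever legacy rules your file already carries:

```
# Explicitly allow OpenAI's three agents. A specific User-agent group
# overrides the wildcard group below, so these agents are not caught
# by legacy blanket rules.
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

# Legacy rules for everything else stay in place (path is illustrative).
User-agent: *
Disallow: /internal/
```

The key detail is that robots.txt groups are matched by the most specific user-agent token, so adding named groups for these agents changes nothing for other crawlers.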
Unblocking Is Not Enough
Updating robots.txt to allow these agents is a great starting point. But there’s more you can do to roll out the welcome mat for these new visitors. AI agents are a lot like human users in many ways: they hit obstacles, run into errors, and sometimes simply cannot get through. The difference is that none of this activity shows up in Google Analytics or any standard web analytics platform. The visits happen, the problems happen, and traditional web analytics won’t surface them because agents aren’t tracked the way a human is.
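To see this traffic at all, you have to go to the raw access logs. Here is a minimal Python sketch using made-up log lines and simple substring matching on the agent tokens named above; a real pipeline would read your actual log files:

```python
from collections import Counter

# The three OpenAI agent tokens; the log lines below are invented
# examples in common-log format, purely for illustration.
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User"]

sample_log = """\
203.0.113.5 - - [10/May/2025:12:01:33 +0000] "GET /pricing HTTP/1.1" 200 5123 "-" "Mozilla/5.0; compatible; ChatGPT-User/1.0"
203.0.113.9 - - [10/May/2025:12:02:10 +0000] "GET /blog/post HTTP/1.1" 200 8120 "-" "Mozilla/5.0 (compatible; GPTBot/1.2)"
198.51.100.7 - - [10/May/2025:12:03:45 +0000] "GET / HTTP/1.1" 403 0 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"
192.0.2.44 - - [10/May/2025:12:04:02 +0000] "GET /about HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (Windows NT 10.0) Chrome/124.0"
"""

def count_agent_visits(log_text):
    """Count requests per AI agent by matching user-agent tokens."""
    counts = Counter()
    for line in log_text.splitlines():
        for agent in AI_AGENTS:
            if agent in line:
                counts[agent] += 1
                break  # attribute each request line to one agent
    return counts

print(count_agent_visits(sample_log))
```

Even a rough count like this is often enough to show an IT team that the traffic exists and is worth a closer look.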
BrightEdge has been analyzing agent activity across thousands of websites, and some of what we have observed is striking. When we look specifically at ChatGPT-User, the agent acting as a real-time proxy for a human customer, nearly 1 in 6 interactions hits a wall. Those failures fall into three categories.
The door is locked. Nearly two thirds of ChatGPT-User errors are 403 responses, which means the server is explicitly refusing the request. This is almost always the result of security rules that were put in place to block malicious scrapers and are now catching AI agents in the same net.
The lights went out. About 29% of errors are 503 responses, meaning the server was simply unavailable when the agent arrived. This is not a policy issue. It is a reliability issue. The site could not handle the request. No security rule change will fix this one. It requires a separate conversation about infrastructure capacity and how AI agent traffic is being handled at the server level.
The line was too long. The remaining errors are largely 429 responses, which means the site told the agent it was making too many requests and cut it off. Rate limiting rules designed for crawlers that sweep thousands of pages can end up being applied to AI agents that are making a small number of targeted requests on a specific customer's behalf. If you know this is happening, you can help IT be surgical in how these safety precautions are applied.
Each of these is a moment where a customer asked a question using AI search and your brand could not answer it.
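The three failure categories above can be tallied straight from log data. A small illustration, using a hypothetical set of (status code, user-agent) pairs rather than real measurements:

```python
from collections import Counter

# Hypothetical (status, user_agent) pairs pulled from a server log;
# the error mix loosely mirrors the 403 / 503 / 429 breakdown above.
requests = [
    (200, "ChatGPT-User"), (200, "ChatGPT-User"), (200, "ChatGPT-User"),
    (200, "ChatGPT-User"), (200, "ChatGPT-User"),
    (403, "ChatGPT-User"),  # the door is locked: security rule refused it
    (503, "ChatGPT-User"),  # the lights went out: server unavailable
    (429, "ChatGPT-User"),  # the line was too long: rate limit tripped
    (200, "GPTBot"),
]

def error_breakdown(entries, agent="ChatGPT-User"):
    """Return error counts by status code for one agent, plus its failure rate."""
    agent_hits = [status for status, ua in entries if ua == agent]
    errors = Counter(s for s in agent_hits if s >= 400)
    failure_rate = sum(errors.values()) / len(agent_hits)
    return errors, failure_rate

errors, rate = error_breakdown(requests)
print(errors)         # which walls the agent hit, and how often
print(f"{rate:.0%}")  # share of this agent's requests that failed
```

A breakdown like this tells IT which conversation to have: 403s point at security rules, 503s at capacity, and 429s at rate limiting thresholds.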
The Visibility Problem Is the Ongoing Challenge
The reason this has gone unaddressed is not indifference. It is that nobody could see it. Standard web analytics does not capture AI agent traffic. Infrastructure teams have not been looking for it. Marketing teams had no signal that anything was wrong.
BrightEdge AI Agent Insights was built specifically to surface this data. It uses your log files to surface which agents are visiting your site, what content they are accessing, and where they are running into problems. It provides the visibility layer that makes it possible to monitor AI agent health as an ongoing practice rather than a one-time configuration fix.
BrightEdge AI Agent Insights shows you exactly where AI agents may be having trouble with your site
The agents are already visiting your site and already shaping what customers hear about your brand. How well they can do that job depends on decisions being made in your infrastructure today, most of them without full information about what is at stake. Visibility into agent activity is what changes that.
What to Bring to the Conversation
When you talk to your IT or infrastructure team, a few specific action items will move things forward faster than a general request to allow AI agents.
Use a capability like BrightEdge AI Agent Insights to spot where agents are having issues with your site, and bring that analysis to the meeting.
Ask them to review robots.txt directives for GPTBot, OAI-SearchBot, and ChatGPT-User, as well as AI agents from Claude and Google Gemini, and confirm that any blocking reflects an active decision rather than a default.
Ask them to check whether WAF or CDN rules are catching AI agent traffic as a false positive, and whether those agents can be treated separately from malicious crawlers.
Ask them to review rate limiting thresholds and confirm they are appropriate for retrieval-pattern agents rather than high-volume crawlers.
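On the rate limiting point, one common approach your team could consider is keying the limit so that identified AI agents land in a gentler bucket than generic bots. A sketch for nginx follows; the zone name, rate, and burst values are placeholders for your infrastructure team to tune:

```nginx
# Sketch only: flag requests from the named AI agent tokens.
map $http_user_agent $is_ai_agent {
    default                                0;
    ~*(GPTBot|OAI-SearchBot|ChatGPT-User)  1;
}

# AI agents get an empty key, which nginx does not account against
# this zone; everyone else is limited per IP as before.
map $is_ai_agent $limit_key {
    0  $binary_remote_addr;
    1  "";
}

limit_req_zone $limit_key zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
        # ... rest of the site config
    }
}
```

Exempting agents entirely, as this sketch does, is only one option; a separate, looser zone keyed on the agent token works just as well if IT prefers to keep some ceiling in place.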
AI agents may not be the bots your infrastructure was built to manage. They are new to the scene for many IT teams. But make no mistake: they are active participants in the customer journey. The conversation between marketing and IT about how to handle them is one most organizations have not had yet. And it may be one of the most important ones you can have this year.