Your rank tracker is lying to you. Not because the people who built it are dishonest, but because the foundation it was built on just got demolished.
If you manage SEO for clients and you noticed keywords silently disappearing from your dashboards around September 2025, you are not imagining things. Google rank tracking broke in a way that most agencies still do not fully understand, and SEO tool accuracy across the industry took a hit that nobody is talking about loudly enough.
I build an SEO tool for a living. I could quietly pretend this problem does not affect the industry. Instead, I want to explain exactly what happened, why it matters, and what you should do about it.
What Did Google Actually Change?
Google removed support for the &num=100 URL parameter, which let any tool or user load 100 organic search results on a single page instead of the default 10. The change landed around September 10-11, 2025, with zero announcement, zero documentation, and zero warning.
That one parameter was the backbone of how virtually every rank tracking tool on the market collected data. Semrush, Ahrefs, Moz, AccuRanker, and dozens of smaller tools all depended on it. A single request could capture positions 1 through 100 for any keyword. Now, retrieving the same data requires 10 separate requests, each returning just 10 results.
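To make the mechanics concrete, here is a minimal sketch of the before-and-after request pattern. The `num` and `start` parameters are Google's standard search pagination parameters; everything else about how a real tracker fetches, parses, and proxies results is simplified away.

```python
# Minimal sketch of the request-volume change (illustrative only; real rank
# trackers add proxies, headers, parsing, and rate limiting on top of this).

from urllib.parse import urlencode

BASE = "https://www.google.com/search"

def serp_urls_before(keyword: str) -> list[str]:
    # Before September 2025: one request with num=100 covered positions 1-100.
    return [f"{BASE}?{urlencode({'q': keyword, 'num': 100})}"]

def serp_urls_after(keyword: str) -> list[str]:
    # After the change: ten paginated requests of 10 results each,
    # stepping through the SERP with the start offset.
    return [
        f"{BASE}?{urlencode({'q': keyword, 'num': 10, 'start': offset})}"
        for offset in range(0, 100, 10)
    ]

if __name__ == "__main__":
    kw = "project management software"
    print(len(serp_urls_before(kw)), "request before the change")   # 1
    print(len(serp_urls_after(kw)), "requests after the change")    # 10
```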
This is not a minor technical adjustment. It is a structural collapse of the data pipeline the entire SEO tool industry was built on.
Why Does This Break SEO Tool Accuracy?
The cost of collecting the same ranking data jumped 10x overnight. That is not a metaphor. Tools that previously needed one server request per keyword now need ten. Multiply that across millions of keywords tracked daily and the infrastructure costs become brutal.
Here is what actually happens in practice:
- Cost explosion: Infrastructure costs increased 10x for every rank tracking provider. Those costs get absorbed, passed on to customers, or offset by quietly reducing tracking scope.
- Data gaps: Many tools now prioritize tracking the top 10-20 positions reliably and let deeper positions become inconsistent or simply disappear from reports.
- Silent keyword loss: Keywords your clients rank for at positions 21-100 may show as "not ranking" in your dashboard. They are still ranking. Your tool just stopped checking.
An analysis of 319 Search Console properties found that 87.7% of sites experienced drops in impressions and 77.6% lost visibility for unique keyword phrases after the change. Short-tail and mid-tail keywords took the biggest hit because those are the terms that rank tracking tools scrape most frequently.
Here is the uncomfortable part: those impression numbers in Google Search Console were never entirely real to begin with. Bot-driven rank checks from SEO tools were generating artificial impressions that inflated your data. With the parameter gone, the phantom impressions disappeared too.
Your actual clicks never changed. Human traffic stayed exactly the same. Only the inflated impression counts dropped.
How Did Semrush and Ahrefs Respond?
Both major platforms acknowledged the disruption. Semrush publicly stated the change makes their processes "significantly more resource-intensive." Ahrefs invested heavily in proxy networks and alternative data collection methods.
I respect both teams for taking the hit and rebuilding their systems. But here is what agencies need to understand: the workarounds these tools implemented are fundamentally more expensive and less reliable than the old method. The data you get today from any SERP-scraping rank tracker is structurally different from the data you received before September 2025.
If you are comparing year-over-year ranking data from Semrush or Ahrefs across that September 2025 boundary, you are comparing two completely different data collection methodologies. That comparison is not valid.
No one is telling agencies this clearly enough.
Why Were SEO Tools Built on Scraping in the First Place?
This is the dirty secret the headline refers to. The entire rank tracking industry was built on a single URL parameter that Google controlled and could remove at any time. Thousands of SEO tools, from industry leaders to niche platforms, built their core product around something they had zero control over.
I have thought about this a lot as someone building an SEO platform for agencies. The lesson is clear: any tool that depends entirely on scraping Google's search results is one policy change away from a data crisis.
This is not the first time Google tightened the screws. In January 2025, Google began enforcing JavaScript on search result pages specifically to block scrapers. Tools like SimilarWeb, Rank Ranger, and SE Ranking experienced data blackouts. The &num=100 removal was the second punch in a one-two combination.
Google is actively hiring engineers focused on combating SERP scrapers. This is not a one-time event. It is a deliberate, ongoing campaign.
What Should Google Search Console Be in Your Workflow?
Google Search Console should be your primary source of truth for ranking data. Full stop.
GSC provides first-party data directly from Google's servers. It reflects how real users find and click on your site. It is not perfect. It has a 48-hour data lag, it only retains 16 months of history, it does not offer SERP feature visibility, and it reports average position across all impressions rather than a snapshot of where you ranked at a specific moment. But the data it does provide is the closest thing to ground truth that exists.
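If you want that data programmatically rather than through the GSC interface, the Search Console API exposes it. Here is a minimal sketch using the official Python client; it assumes you already have a verified property and valid credentials, and the helper name is mine, not a standard one.

```python
# Minimal sketch: pull query-level clicks, impressions, and average position
# from the Search Console API. Assumes google-api-python-client is installed
# and `creds` holds valid credentials for a verified property.

from googleapiclient.discovery import build

def fetch_gsc_queries(creds, site_url: str, start_date: str, end_date: str):
    service = build("searchconsole", "v1", credentials=creds)
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": start_date,   # e.g. "2025-10-01"
            "endDate": end_date,       # remember the ~48-hour data lag
            "dimensions": ["query"],
            "rowLimit": 1000,
        },
    ).execute()
    # Each row: keys=[query], clicks, impressions, ctr, position (averaged).
    return response.get("rows", [])
```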
After the &num=100 removal, GSC data actually became more accurate, not less. The artificial impressions from bot-driven scraping disappeared. What remains is a cleaner picture of real human search behavior.
Here is how I recommend structuring your data sources:
1. Google Search Console as your primary ranking and impression data source
2. GA4 for validating that ranking improvements translate to traffic and conversions
3. Third-party rank trackers as a secondary directional signal, not the source of truth
4. Manual spot checks in incognito mode for your 10-20 most critical keywords
If your client reporting depends entirely on third-party rank tracker data without GSC validation, you are building reports on a shaky foundation.
What Does This Mean for Agency Reporting?
This is where it gets practical. If you run an agency, here is what I would do right now:
Audit Your Keyword Tracking Scope
Check how many of your tracked keywords fall in positions 21-100. If a significant portion of your "ranking keywords" were in that range, your dashboard may now show dramatic losses that do not reflect reality. Cross-reference with GSC.
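Here is a rough sketch of what that cross-reference can look like in practice. The CSV column names and the "not ranking" markers are assumptions (every tracker exports differently), and the GSC rows are assumed to be shaped like the Search Console API response above, so treat it as a starting point rather than a drop-in script.

```python
# Rough sketch: flag keywords your rank tracker reports as "not ranking"
# that still earn impressions or clicks in Search Console. Assumes a CSV
# export with "keyword" and "position" columns (format varies by tool).

import csv

def flag_phantom_losses(tracker_csv: str, gsc_rows: list[dict]) -> list[dict]:
    gsc_by_query = {row["keys"][0].lower(): row for row in gsc_rows}
    flagged = []
    with open(tracker_csv, newline="") as f:
        for row in csv.DictReader(f):
            keyword = row["keyword"].strip().lower()
            not_ranking = row.get("position", "").strip().lower() in ("", "-", "not ranking")
            gsc = gsc_by_query.get(keyword)
            if not_ranking and gsc and gsc["impressions"] > 0:
                flagged.append({
                    "keyword": keyword,
                    "gsc_impressions": gsc["impressions"],
                    "gsc_clicks": gsc["clicks"],
                    "gsc_avg_position": round(gsc["position"], 1),
                })
    return flagged
```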
Stop Reporting Vanity Position Data
A keyword ranking at position 47 was never driving meaningful traffic. The top 3 positions absorb roughly 75% of organic clicks. Page 2 drops to under 1% CTR. Position 47 might as well be position 470 in terms of business impact.
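Run the arithmetic yourself. The CTR curve below is an assumed, rounded approximation rather than a published benchmark, but it makes the gap obvious.

```python
# Back-of-the-envelope expected clicks for a keyword with 1,000 searches a
# month. The CTR values are assumed approximations, not a published study;
# swap in your own curve derived from GSC data if you have one.

ASSUMED_CTR = {1: 0.32, 2: 0.25, 3: 0.18, 11: 0.01, 47: 0.0005}

monthly_searches = 1_000
for position, ctr in ASSUMED_CTR.items():
    print(f"Position {position:>2}: ~{monthly_searches * ctr:.0f} clicks/month")

# Position  1: ~320 clicks/month
# Position  2: ~250 clicks/month
# Position  3: ~180 clicks/month
# Position 11: ~10 clicks/month
# Position 47: ~0 clicks/month  -> effectively invisible
```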
Focus your reporting on metrics that matter: clicks, conversions, and revenue from organic. Not position counts.
Explain the Data Break to Clients
If you have not already told your clients about this change, do it now. Any year-over-year comparison that spans September 2025 needs context. Impressions dropped. Average positions appeared to improve. Neither reflects a real change in performance.
The agencies that proactively explain this build trust. The agencies that hope clients do not notice are one awkward reporting call away from losing credibility.
Diversify Your Data Sources
Do not rely on a single tool for ranking data. Use GSC as the foundation, supplement with a rank tracker for competitive intelligence, and validate with actual traffic data from GA4.
If you are evaluating tools, compare how they handle data accuracy transparently. Vantacron vs Ahrefs and similar comparisons should include how each tool sources its ranking data and what happens when that source breaks.
A Checklist for Agencies Dealing With Broken Rank Data
Here is what to do this week:
- [ ] Connect Google Search Console to your reporting dashboard if you have not already
- [ ] Compare your rank tracker data with GSC for your top 50 keywords
- [ ] Flag any keywords showing "not ranking" in your tool but still generating impressions/clicks in GSC
- [ ] Update client report templates to lead with clicks and conversions, not raw position counts
- [ ] Add a data methodology note to any report comparing data across the September 2025 boundary
- [ ] Identify which tracked keywords in positions 21-100 actually drove clicks in the past 6 months (most will not have driven any)
- [ ] Set up manual spot-check routines for your 10 highest-value keywords per client
- [ ] Review your tool costs and consider whether tracking 1,000 keywords to position 100 still makes financial sense
Is the &num=100 Removal Actually a Good Thing?
Honestly, yes. I think it is.
The old system was a house of cards. Tools were scraping Google at massive scale, generating artificial impressions that polluted Search Console data, and agencies were reporting those inflated numbers to clients as if they meant something. We built an industry-wide habit of tracking keywords at position 87 and treating that data as meaningful.
Position 87 was never meaningful. It never drove a click. It never generated revenue. It was a comfort metric that let SEOs feel like they were making progress.
The removal forces the industry to get honest about what data actually matters. Track the keywords where you can win. Measure by traffic and conversions. Stop padding reports with 500 keywords that rank on page 6.
And beyond traditional rankings, this shift pushes the industry toward something I care deeply about: understanding how your content performs in AI search. Over 40% of search queries now flow through conversational AI interfaces. If your entire measurement strategy is still "what position am I on Google's blue links," you are measuring the past.
At Vantacron, we check 15 GEO factors per page, track whether AI crawlers can access your content, and score your AI search readiness. Not because traditional SEO is dead, but because the measurement landscape just proved it can change overnight. You need data sources that are not built on someone else's scraping loophole.
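As one concrete example of that kind of check, here is a hedged sketch that tests whether a site's robots.txt allows a few well-known AI crawlers. The user-agent list is a point-in-time snapshot, and this only inspects robots.txt, not firewall or CDN bot rules, so it is a starting point rather than a description of how we actually do it.

```python
# Quick check: does robots.txt allow common AI crawlers to fetch a URL?
# User-agent names are a snapshot and may change; this only inspects
# robots.txt, not CDN or firewall-level bot blocking.

from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def ai_crawler_access(site: str, path: str = "/") -> dict[str, bool]:
    parser = RobotFileParser()
    parser.set_url(urljoin(site, "/robots.txt"))
    parser.read()
    url = urljoin(site, path)
    return {agent: parser.can_fetch(agent, url) for agent in AI_CRAWLERS}

# Example: ai_crawler_access("https://example.com", "/blog/some-post")
```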
The Bigger Picture: Google Is Closing the Open Web
This is worth saying plainly. Google is systematically reducing what third-party tools can observe about its search results. The &num=100 removal. JavaScript enforcement on SERPs. Increased bot detection. Hiring dedicated anti-scraping engineers.
The direction is clear: Google wants to be the sole provider of search performance data, delivered through Google Search Console and its APIs on Google's terms.
For agencies, this means reducing dependency on any single data pipeline you do not control. It means treating Google Search Console integration as non-negotiable in your tool stack. And it means building your reporting and strategy around first-party data wherever possible.
The tools that survive this shift will be the ones that are transparent about where their data comes from, honest about its limitations, and built to integrate with first-party sources like GSC rather than replace them.
That is the standard I hold Vantacron to. And it is the standard you should hold every tool in your stack to.
Frequently Asked Questions
Why did Google remove the &num=100 parameter?
Google removed the &num=100 parameter around September 10-11, 2025, without any public announcement. The move targeted large-scale SERP scraping by rank tracking tools and AI systems that were hammering Google's servers with massive data requests. It also eliminated artificial bot-driven impressions that were distorting Search Console data for millions of websites.
Are my rankings actually lower since the &num=100 removal?
No. Your actual search rankings have not changed because of this parameter removal. What changed is how tools collect and report that data. Keywords may appear as "not ranking" in third-party dashboards because the tool stopped checking positions 21-100 reliably. Cross-check with Google Search Console. If clicks and impressions for those keywords remain stable, your real performance is unchanged.
Should I still use Semrush or Ahrefs for rank tracking in 2026?
Both Semrush and Ahrefs rebuilt their data collection systems after the &num=100 removal and remain valuable tools for competitive intelligence, keyword research, and directional rank tracking. But treat their position data as a secondary signal, not your source of truth. Always validate against Google Search Console, which provides first-party data directly from Google's servers.
What is the best source of truth for SEO ranking data in 2026?
Google Search Console is the most accurate source for your own site's ranking data because it uses first-party data from Google's servers. It shows real human impressions, clicks, and average positions without the distortion that third-party scraping introduced. Supplement GSC with GA4 for traffic validation and a rank tracker for competitive insights.
How does the &num=100 removal affect SEO tool accuracy for agencies?
Agencies are the hardest hit. Rank trackers now cost more to operate (10x more API calls), report less complete data for positions beyond the top 20, and create year-over-year comparison gaps across the September 2025 boundary. Agencies should proactively explain these data shifts to clients, shift reporting toward clicks and conversions, and integrate GSC as the primary data source in every client dashboard.