The 28.7 million-match sample and why this week is finally meaningful
Week one of a new VGC format is always unreliable data. Players are experimenting, teams are in flux, the ladder is noisy. By week two, the top-500 segment settles into something you can actually analyze. I pulled 28.7 million tracked ranked matches for the April 15-to-22 window, filtered to top-500 skill-rating (the segment where teambuilding is intentional and learning-curve noise drops out), and that is the largest single-week sample any public VGC analysis has ever worked with. Scarlet and Violet's weekly pulls in 2025 averaged 8-10 million top-500 matches. Champions is doing roughly three times that volume in week two, because the ladder is larger, matches are faster, and matchmaking is tighter.
Average match length in the top-500 segment is 11 minutes 24 seconds. That is about 40 percent faster than the Scarlet and Violet late-era ladder, which was averaging closer to 19 minutes. The speedup comes from three design choices in Champions: higher base damage output (teams die faster), terastal changes (fewer mid-match type surprises to stall out), and a firmer timer on switch animations. Fast matches are a big deal for data quality: the sample is richer and the meta calcifies faster, which is both good for analysis and bad for format longevity.
The core takeaway from the 28.7 million matches: the meta is top-heavy, the long tail is viable, and the middle is where all the interesting picks are hiding. Positions 1 through 10 on usage are predictable. Positions 30 and below are cope and theme teams. Positions 11 through 25 are the zone where I find the most interesting divergence between usage rank and win-rate rank, and that divergence is the whole story of this chart set.
28.7 million ranked matches tracked in the top-500 segment for April 15-22, the biggest single-week sample in VGC history
The curves: usage peaks at Incineroar, win rate peaks at Flutter Mane, the gap matters
Plot the top-30 mons by usage, then overlay the controlled-win-rate line, and two distinct peaks appear. The usage curve peaks at position 1: Incineroar at 62 percent top-500 usage. The win-rate curve peaks one position over: Flutter Mane at 58 percent controlled win rate, despite sitting at position 2 on usage (41 percent). The fact that the two peaks do not align is the meta story. It tells you that Incineroar is the correct answer to 'what do I put on my team as glue' (it has the highest usage for good reason), but that Flutter Mane is the correct answer to 'what mon do I build around,' because the win-rate delta to everything else is larger.
The two curves diverge most meaningfully between position 14 and position 24. That zone is where usage drops below 12 percent but controlled win rate stays above 54 percent on four specific mons. I call this the 'position-15 inversion,' and it is where the format's under-priced picks live. Three of the four stand out: Hisuian Goodra at rank 21 usage but 58 percent win rate, Aegislash at rank 19 and 57 percent, and Kingambit at rank 17 and 55 percent. Those are the mons a bet-sizing teambuilder should be reaching for: the matchup spread is already favorable, and the low usage means opponents have less practice playing into them.
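The inversion is mechanical enough to express in a few lines. A minimal sketch in Python; the usage percentages for everything except the rank-14 mon are placeholder values I picked to be consistent with the ranks above, not actual tracker output. The rank-14 entry (11 percent usage, 46 percent win rate) is included to show what the filter rejects:

```python
# Sketch of the 'position-15 inversion' filter. The usage shares below
# are illustrative placeholders, not real tracker numbers.
mons = [
    # (name, usage_rank, usage_pct, controlled_win_pct)
    ("Dragapult",      14, 11.0, 46.0),  # high usage rank, sub-par win rate
    ("Kingambit",      17,  9.0, 55.0),
    ("Aegislash",      19,  8.0, 57.0),
    ("Hisuian Goodra", 21,  7.0, 58.0),
]

def underpriced(rows, max_usage=12.0, min_win=54.0):
    """Mons whose usage sits below the threshold while controlled win
    rate stays above it -- the under-priced zone on the curve chart."""
    return [name for name, _rank, usage, win in rows
            if usage < max_usage and win > min_win]

print(underpriced(mons))  # Dragapult fails the win-rate cut and drops out
```

The thresholds (12 percent usage, 54 percent win rate) are the ones the chart itself suggests; moving them a point in either direction changes which mons clear the bar.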
The clearest inverse is Dragapult. It sits at rank 14 usage (11 percent) with a 46 percent controlled win rate. That is the shape of a mon being played above its merit, name recognition pushing usage, meta fit not supporting the choice. I covered this in the best-of audit, but looking at it on the curve chart is cleaner than arguing it in prose. The dot is below the line. The mon is not earning its slot.
The matchup heatmap and the single data point that changes teambuilding
The second-most-useful chart in the set is the matchup heatmap across the top-9 mons. Rows are mon A, columns are mon B, cell color is mon A's win rate against mon B (specifically, in matches where both mons are on the board on turn one). Green cells are wins for the row, red cells are losses, yellow is even. The Flutter Mane row is mostly green, as expected. The Urshifu row has interesting green-red banding: mostly green across the board, but red against Gholdengo (38 percent) and Flutter Mane (42 percent). That matters for lead selection more than any tier list can convey.
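Each cell on that heatmap reduces to one aggregation over match records. A minimal sketch, assuming a flat list of (row mon, column mon, row-mon-won) tuples already restricted to matches where both mons were on the board on turn one; the toy records and field layout are my assumptions, not the tracker's actual schema:

```python
from collections import defaultdict

# Toy records: (row_mon_on_board_t1, col_mon_on_board_t1, row_mon_won).
matches = [
    ("Gholdengo", "Flutter Mane", True),
    ("Gholdengo", "Flutter Mane", True),
    ("Gholdengo", "Flutter Mane", False),
    ("Urshifu",   "Gholdengo",    False),
    ("Urshifu",   "Gholdengo",    True),
]

def matchup_win_rates(records):
    """Cell (A, B) = A's win rate in matches where both A and B
    were on the board on turn one."""
    wins = defaultdict(int)
    total = defaultdict(int)
    for a, b, a_won in records:
        total[(a, b)] += 1
        wins[(a, b)] += a_won  # bool counts as 0/1
    return {cell: wins[cell] / n for cell, n in total.items()}

rates = matchup_win_rates(matches)
print(round(rates[("Gholdengo", "Flutter Mane")], 2))  # 2/3 in the toy data
```

Color-coding those values green/red/yellow is then just a thresholding step on top of this dictionary.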
The single most actionable cell on the heatmap is Gholdengo vs Flutter Mane: 62 percent win rate for Gholdengo. That is the biggest head-to-head edge on the entire top-9 heatmap, and it is the data point that changes teambuilding right now. Every team running Flutter Mane is vulnerable to a turn-one Gholdengo that shuts down Shadow Ball spam (the Steel typing tanks it) and neutralizes Moonblast's follow-up (Good As Gold blocks the support setup). If you are not running Gholdengo as a lead option when Flutter Mane is in the opposing team preview, you are volunteering for the losing side of a 62-38 matchup.
Inverse-color cells (where a row mon loses to a lower-tier column mon) are where format upsets are forming. Urshifu loses to Gholdengo (38 percent) because Wicked Blow is resisted by Steel. Urshifu loses to Flutter Mane (42 percent) because speed control decides first-turn damage. Those two red cells define the Urshifu ceiling: the mon is S tier only when it gets to pick its matchups. In Best of Three, where Urshifu cannot hide from Gholdengo over three games, the effective tier placement is closer to A+. That is what real stats coverage gives you: the number that tier letters can't. I will publish this heatmap weekly, and the bet is that the Gholdengo/Flutter Mane cell stays green for Gholdengo for at least another two weeks. If it flips, the meta is shifting and the next teambuilding cycle starts over.
