Before we get to the topic at hand, let's go to the way-back machine and review the infancy of the Calpreps.com system.
- All teams started each year at 0.0
- It was California-only
- Results from each region were generally weighted evenly
- Early predictions were erratic, but got better as the season progressed
- Sutter would sometimes end the year rated higher than Long Beach Poly
Take a moment with that last one.
It didn't take long before the programmers began to understand the importance of sample diversity for a system like this. If a region was too enclosed, its top team(s) could appear artificially strong. Eventually, a scaling adjustment was put on the regions so that the end results would make more sense, and it was later applied to the various states after the system went national. Within California, this adjustment generally works well, although I think we do see some NorCal teams underrated a little for some of the bowl games in relation to our SoCal brethren. But, generally, the in-state ratings do work. Out-of-state comparisons are still difficult, but they're getting better.
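Calpreps has never published its code, so take this as nothing more than my own back-of-the-napkin illustration of the concept: if a region schedules mostly within itself, the handful of cross-region results it does have can be used to shift the whole region up or down. Every name and number below is invented.

```python
# Hypothetical illustration of region scaling; Calpreps' actual method is not public.
# Idea: anchor an enclosed region's ratings using the few cross-region
# results that do exist.

def scale_region(ratings, predicted_margins, actual_margins, damping=0.5):
    """Shift every rating in a region by a fraction of the average error
    seen in that region's cross-region games.

    ratings           -- {team: rating} for one region
    predicted_margins -- predicted margins (region team minus outside team)
    actual_margins    -- actual margins from the same cross-region games
    damping           -- how much to trust the small cross-region sample
    """
    errors = [a - p for p, a in zip(predicted_margins, actual_margins)]
    offset = damping * sum(errors) / len(errors)  # positive means the region was underrated
    return {team: r + offset for team, r in ratings.items()}

# Example: an enclosed region whose teams kept losing cross-region games
# they were "favored" in gets shifted down as a group.
region = {"Team A": 30.0, "Team B": 22.0, "Team C": 15.0}
print(scale_region(region, predicted_margins=[7, 10, 3], actual_margins=[-3, 0, -7]))
```

The point is just that the correction is applied to the region as a group, which is why an enclosed region's "king" can come back to earth once outside results are factored in.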
So now, Calpreps has decided to make another adjustment for its predictions. They believe they see trends indicating that some teams improve more at the end of the year than others, translating into greater playoff success. Here are some examples of their predictions reflecting this adjustment:
regular season (neutral field)
[2024] St. Francis (Mountain View, CA) 28 (72%), [2024] Soquel (CA) 20 (28%)
playoffs (neutral field)
[2024] Soquel (CA) 24 (58%), [2024] St. Francis (Mountain View, CA) 21 (42%)
regular season (neutral field)
[2024] Westlake (Westlake Village, CA) 28 (57%), [2024] Birmingham (Van Nuys, CA) 26 (43%)
playoffs (neutral field)
[2024] Birmingham (Van Nuys, CA) 34 (81%), [2024] Westlake (Westlake Village, CA) 20 (19%)
regular season (neutral field)
[2024] Rocklin (CA) 34 (62%), [2024] Grant (Sacramento, CA) 28 (38%)
playoffs (neutral field)
[2024] Grant (Sacramento, CA) 35 (73%), [2024] Rocklin (CA) 26 (27%)
Now, I don't mean to pick on any of these teams, but Soquel, Birmingham, and Grant are examples of teams that are getting this "playoff bump". The idea is that these teams 'turn it up' during the playoffs, so Calpreps wants its playoff predictions to reflect that and be "more accurate". I can understand the desire to make an adjustment, and I even applaud their willingness to deviate from their normal path so they don't get stuck in a mindset.
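Calpreps hasn't said exactly how the adjustment works, but the flipped predictions above behave as if each "bump" team gets extra rating points tacked on whenever the game is flagged as a playoff game. A toy sketch (the ratings and bump values are mine, chosen only to roughly reproduce the Rocklin/Grant numbers):

```python
# Rough sketch of a per-team "playoff bump"; all values invented for
# illustration. Calpreps has not published how its adjustment actually works.

def predicted_margin(rating_a, rating_b, bump_a=0.0, bump_b=0.0, playoffs=False):
    """Margin for team A over team B on a neutral field.
    In the playoffs, each team's bump is added to its effective rating."""
    if playoffs:
        rating_a += bump_a
        rating_b += bump_b
    return rating_a - rating_b

# Roughly reproducing the Rocklin/Grant example above: Rocklin by 6 in the
# regular season flips to Grant by 9 once a large bump kicks in.
print(predicted_margin(32.0, 26.0))                                          # 6
print(predicted_margin(32.0, 26.0, bump_a=0.0, bump_b=15.0, playoffs=True))  # -9
```

That's all it takes to turn a 6-point favorite into a 9-point underdog.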
However, does anything stick out about these three teams? Anything similar about them?
I would point out that, while all three have had playoff success in recent years, they were in either lower-division brackets (Soquel in CCS D-II, Grant in SJS D-III) or just a weak one (Birmingham in LACS-Open). At this moment, the 72-team LA City Section has exactly 6 members with positive Calpreps ratings. Heck, the 39-team Northern Section has 7! I'm not questioning whether last year's Soquel or Grant teams were good, just that this recognition of playoff success may be strongly tied to their respective playoff fields. Could Grant beat Rocklin in this year's SJS D-II playoffs? Sure, but why the 15-point swing above, from Rocklin by 6 in the regular-season prediction to Grant by 9 in the playoff one?
The ratings boosts these teams get in the playoffs are also amplified, since playoff games are weighted more heavily than regular season games. Soquel's run through the sectional, NorCal, and state 4AA bowl games pushed their rating up to 37.2. Palma made a similar run en route to a state 4A bowl win, finishing at 27.9. I find that interesting because Salinas ended the season at 27.4, and these teams actually met in the regular season: the Cowboys beat the Knights 27-0 and the Chieftains 35-14. In the playoffs, Soquel played Carmel, Monterey, and Christopher; Palma played Hollister, Menlo-Atherton, and Alisal; Salinas played NorCal Open rep Serra. Slight difference? Yes, some may suggest that Soquel really did improve toward the end of the year, but by this many rating points?
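To see why the weighting amplifies things, here's a toy rating update (the weights and numbers are invented; the real formula isn't public). The same result moves a rating noticeably further when it's counted as a playoff game:

```python
# Toy weighted rating update, purely illustrative of the amplification effect.

def rating_update(rating, opponent_rating, margin, weight):
    """Nudge a rating toward what a single result implies.
    'weight' controls how much one game counts."""
    implied = opponent_rating + margin   # the rating this one result suggests
    return rating + weight * (implied - rating)

# The same 4-point win over a 30.0-rated opponent moves a 20.0-rated team
# further under a heavier playoff weight (toy weights: 0.10 vs 0.25).
print(rating_update(20.0, 30.0, 4, weight=0.10))  # 21.4 (regular season)
print(rating_update(20.0, 30.0, 4, weight=0.25))  # 23.5 (playoffs)
```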
Yes, I am a Salinas backer as an alum, but no, I do not harbor any ill will towards the Soquel program. I was in the stands cheering them on to their state bowl win in Pasadena (always good to see a local team have success). But I'm certain there are several examples like this to be found around the state now that most of the CIF sections have moved to some sort of competitive equity model... and that's where the problem lies for Calpreps. Those who have followed my posts throughout the years know that I am generally a pretty strong supporter of the website and the algorithm. However, this playoff adjustment is the wrong one, in my opinion.
I believe it's not so much that teams advancing in the playoffs should be getting a predictive boost, but rather that each respective division needs to be scaled just as the overall sections and states have been. Should the winner of SS D-III be rated higher than most of the teams making the SS D-II field? Should the CCS D-II winner be rated higher than 5 CCS D-I/Open teams? Generally speaking, I don't think so. The system and its predictions should reflect just how much more difficult it gets as teams move up to higher divisions. Yes, there will end up being a good deal of overlap among some of the middle divisions, as those are the teams in the middle of the bell curve, but overall, this is the adjustment that would improve the final results of the algorithm.
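If it helps, here's a crude sketch of the alternative I'm describing: instead of boosting individual teams, pin an offset to each playoff division, derived the same way the section and state scales already are, so a lower-division rating is read in context. The division labels and offset values below are strictly placeholders:

```python
# Sketch of division scaling (my proposal, not Calpreps' method): anchor each
# bracket with an offset so a lower-division champion isn't automatically
# rated above most of the field one division up. All offsets here are invented.

DIVISION_OFFSET = {
    "Open": 0.0,
    "D-I": -6.0,
    "D-II": -12.0,
    "D-III": -18.0,
}

def scaled_rating(raw_rating, division):
    """Rating adjusted for the strength of the playoff bracket the team is in."""
    return raw_rating + DIVISION_OFFSET[division]

# A D-II champion's playoff-inflated raw rating gets pulled back toward
# what its bracket strength supports; an Open team's rating stands as-is.
print(scaled_rating(37.2, "D-II"))  # 25.2
print(scaled_rating(30.0, "Open"))  # 30.0
```

However the offsets are actually derived, the principle is that bracket strength, not team identity, drives the adjustment.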
For decades, Calpreps has fought for recognition, acceptance, and credibility for their ratings. Although I applaud their willingness to try something new, I believe they're going down the incorrect path on this.
So, as the playoffs quickly approach and you start to see some interesting (if not strange) predictions, this is why.