The use of the Calpreps.com system by the CIF sections has grown significantly over the last few years. While it initially garnered interest with a separate rating list that did not include MoV, the full system in all its glory is now being utilized. In the SS and LACS, as examples, the ratings are used exclusively to determine divisional separations and seeding placements, while in the CCS, they are incorporated into the seeding process. While I agree that the system with MoV is a much more accurate algorithm than the one without, it's not without issues, and I believe those issues tend to detract from the ratings of NorCal teams more often than those of SoCal teams.
For those who have not spent a great deal of time trying to understand how it works, the ratings are based on the results of games. The system awards points based on MoV within a certain window, with some base minimum credit for a win itself. The average of those results (for the most part) ends up being a team's rating. There are a couple of extra facets that I will cover shortly, one of which is the focus of this post.
Specifics:
Teams get a base rating value for a win or loss based on the opponent's rating. This is to say that if Team A beats Team B and B's rating was 20, Team A's game rating value is B's 20 + the MoV. If Team A's rating was 25, then Team B's game rating would be 25 - MoV. There are limits to this, and the actual values remain fluid until all of the results for the week get entered into the system.
Let's say in the example above Team A wins 24-20. There is a base 15-point minimum rating boost just for winning the game, so the game rating for A would be 35, while B's would be 10. This would boost A's rating a bit and lower B's, so the numbers would not be exact, but generally this is how it works. These game ratings would remain the same even if the score were 35-20 (thus, the 15-point minimum boost/detraction). Had A won by 17, then its game rating would be 37, while B's would be 8, etc.
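To make the arithmetic concrete, here is a minimal sketch of that per-game calculation. This is my own reconstruction from the description above, not Calpreps' actual code; the function name is mine, and the 15-point floor is the value cited in this post:

```python
def game_ratings(winner_rating, loser_rating, margin):
    """Return (winner_game_rating, loser_game_rating) for one result.

    A win is credited with at least a 15-point margin, even if the
    actual MoV was smaller.
    """
    effective_margin = max(margin, 15)  # minimum credit just for winning
    winner_game = loser_rating + effective_margin
    loser_game = winner_rating - effective_margin
    return winner_game, loser_game

# The example above: A (rated 25) beats B (rated 20) 24-20, a 4-point win
print(game_ratings(25, 20, 4))   # -> (35, 10)
# A 17-point win clears the floor, so the actual margin counts
print(game_ratings(25, 20, 17))  # -> (37, 8)
```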
But let's say Team A wins in a blowout; there is a cap (30 points) beyond which the margin no longer matters. Had the score been 56-10, Team A's game rating would be 50, while B's would be -5. It could have been 100-0 and these numbers would remain the same.
A key exception to this blowout scenario applies when A's rating is more than 30 points higher than B's. In that case, a blowout would have been generally expected, so the system greatly lowers the impact of the result. This protects the integrity of the ratings. If B was a program that was really struggling and had a rating of -60, it wouldn't be fair for A to receive a game rating of 0 for beating them 60-0, nor realistic for B to get a -35. These results are not considered significant.
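Extending the earlier sketch with the cap and the significance rule looks something like the following. Again, this is an assumption-laden reconstruction from the post, not Calpreps' code; the exact floor/cap values (15 and 30) are the ones cited here, and as noted later they can shift:

```python
def game_ratings_capped(winner_rating, loser_rating, margin,
                        floor=15, cap=30):
    """Per-game ratings with the minimum-credit floor and the blowout
    cap, plus a flag for whether the result counts as significant."""
    effective_margin = min(max(margin, floor), cap)
    winner_game = loser_rating + effective_margin
    loser_game = winner_rating - effective_margin
    # A result is insignificant when a blowout was expected (rating gap
    # over the cap) and the winner did in fact win by the cap or more.
    significant = not (winner_rating - loser_rating > cap and margin >= cap)
    return winner_game, loser_game, significant

# The 56-10 blowout between A (25) and B (20): margin capped at 30
print(game_ratings_capped(25, 20, 46))  # -> (50, -5, True)
# A (rated 0) routs B (rated -60) 60-0: the blowout was expected, so
# the result is flagged insignificant and carries little weight
print(game_ratings_capped(0, -60, 60))  # -> (-30, -30, False)
```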
There is one more specific game result scenario, but I will get to that in just a second...
Overall, a team's rating eventually becomes the general average of the year's game ratings.
The last exception to the average result is the undefeated team rule. An undefeated team must remain rated higher than any team it has defeated. That gap typically depends on the rest of the season each team has. It could be a lot of points, but it typically has a minimum of 0.2. A perfect example of this exists with Long Beach Poly (current rating of 64.2) and Mission Viejo (64.0), due to the fact that the Jackrabbits defeated the Diablos on September 2nd. LBP's average for significant games is 63.2, while that for MV is 64.3, but Poly must remain higher because of that win.
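That floor can be sketched as a simple post-processing step. This is purely my guess at how the rule might be enforced; the 0.2 minimum gap is the figure from this post:

```python
def apply_undefeated_floor(avg_rating, beaten_opponent_ratings, min_gap=0.2):
    """An undefeated team's rating may not fall below any defeated
    opponent's rating plus a minimum gap."""
    if not beaten_opponent_ratings:
        return avg_rating
    floor = max(beaten_opponent_ratings) + min_gap
    return max(avg_rating, floor)

# The Poly case: a 63.2 average, but they beat a team now rated 64.0,
# so their rating is lifted to 64.2
print(apply_undefeated_floor(63.2, [64.0]))  # -> 64.2
```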
Now for the last specific game result scenario... if Team A is rated more than 30 points higher than Team B but does not blow them out by 30+, then that game becomes significant and gets tallied as such in the ratings. This exact scenario played out twice in NorCal last week, and I would argue it takes place more commonly there in general than in SoCal. I think NorCal teams just tend not to run up scores nearly as much, particularly in tight communities. In SoCal, you typically have one continuous mass of humanity, so it's not like there is a huge cultural difference between the cities of Whittier and Santa Fe Springs when it comes to football.
Serra defeated Valley Christian 36-7. The Padres' current rating (post-game) is 61.0, while the Warriors' is 10.2. This 29-point result is now considered significant simply because Serra chose not to run up the score. Granted, the 30-point limit is supposed to take this into consideration, but the Padres were actually up 36-0 at the half. Could they have dropped 60 or even 70 on VC if they wanted and made the game insignificant? Yeah, probably, but that didn't happen. They clearly called off the dogs very early.
In Serra's case, this result is not really going to impact anything other than their placement on the overall state and national lists. Could they have been higher than LBP or MV this week without this result being a part of their rating average? Maybe. If they win their next two games, though, they will enter the CCS D-I playoffs as the #1 seed, and pretty handily so.
For Salinas, last week's result could end up costing them a place or two during the seeding process. The Cowboys' current rating is 30.5, and they defeated Alisal (also located in Salinas, rating -12.4) by the score of 42-17. While this game was not close and was never in doubt, the 25-point margin makes the result significant. In this game, the starting Salinas QB went down with an injury, so the coaches seem to have become a little gun-shy about keeping their starters in longer than necessary. Their last TD appears to have been scored by a reserve who hadn't had even six carries for the entire year entering that game.
Now, I'm not suggesting that teams purposely run up the score late in games to get past that 30-point barrier, but because of the system the CIF sections are adopting, perhaps holding a 31-point lead may have to be considered. I do think it's very important to get your young players game-time experience to help them grow for subsequent years. This is just a potential side effect that has been introduced.
I just think this should be kept in mind when looking at the state rating list. There is a perception that the Calpreps system tends to artificially place NorCal teams lower; I think this is one of the factors behind that.
One final note: the 15- and 30-point limits referenced here also tend to be fluid. The floor can be 14, or the cap perhaps 28; Calpreps adjusts these depending on the results it sees throughout the year.