
Calpreps' shortcomings for NorCal teams (a little long)

Cal 14

Use of the Calpreps.com system by the CIF sections has grown significantly over the last few years. While it initially garnered interest with a separate rating list that did not include margin of victory (MoV), the full system in all its glory is now being utilized. In the SS and LACS, as examples, the ratings are used exclusively to determine divisional separations and seeding placements, while in the CCS, they are incorporated into the seeding process. While I do agree that the system with MoV is a much more accurate algorithm than the one without, it's not without issues, and I believe those issues tend to detract from the ratings of NorCal teams more than those of SoCal teams.

For those who have not spent a great deal of time trying to understand how it works, the ratings are based on the results of games. The system awards points based on MoV within a certain window, with some base minimum credit for the win itself. The average of those results (for the most part) ends up being a team's rating. There are a couple of extra facets that I will cover shortly, one of which is the focus of this post.

Specifics:

Teams get a base rating value for a win or loss based on the opponent's rating. This is to say that if Team A beats Team B and B's rating was 20, Team A's game rating value is the MoV + 20. If Team A's rating was 25, then Team B's game rating would be 25 - MoV. There are limits to this, and the actual values remain fluid until all of the results for the week get entered into the system.

Let's say in the example above Team A wins 24-20. There is a base 15-point minimum rating boost just for winning the game, so the game rating for A would be 35, while B would get a 10. This would boost A's rating a bit and lower B's, so the numbers would not be exact, but generally this is how it would work. These game ratings would remain the same even if the score was 35-20 (thus, the 15-point minimum boost/detraction). Had A won by 17, then their rating would be 37, while B's would be 8, etc.

But let's say Team A wins in a blowout; there is a cap (30 points) after which the margin no longer matters. Had the score been 56-10, Team A's game rating would be 50, while B's would be -5. It could have been 100-0 and these numbers would remain the same.

A key exception to this blowout scenario is if A's rating was more than 30 points higher than B's. In this case, a blowout would have been generally expected, so the system greatly lowers the impact of the result. This protects the integrity of the ratings. If B was a program that was really struggling and had a rating of -60, it wouldn't be fair for A to receive a game rating of 0 for beating them 60-0, nor realistic for B to get a -35. These results are not considered to be significant.
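For those who prefer code, here is a rough sketch of the per-game logic described above, in Python. To be clear, this is my own reconstruction from watching the ratings behave; the function name, the exact clamp behavior, and the significance test are assumptions, not Calpreps' actual implementation:

def game_ratings(rating_a, rating_b, points_a, points_b,
                 min_credit=15, mov_cap=30, gap_limit=30):
    """Return (game_rating_a, game_rating_b, significant) for a game Team A wins."""
    mov = points_a - points_b                      # Team A's margin of victory
    credited = min(max(mov, min_credit), mov_cap)  # clamp the MoV credit to [15, 30]
    game_a = rating_b + credited                   # winner: loser's rating + credited MoV
    game_b = rating_a - credited                   # loser: winner's rating - credited MoV
    # If A was already rated 30+ points higher AND delivered the expected
    # 30+ point blowout, the result is treated as insignificant.
    significant = not (rating_a - rating_b > gap_limit and mov >= mov_cap)
    return game_a, game_b, significant

print(game_ratings(25, 20, 24, 20))   # (35, 10, True)   the 24-20 example above
print(game_ratings(25, 20, 56, 10))   # (50, -5, True)   margin capped at 30
print(game_ratings(25, -60, 60, 0))   # (-30, -5, False) expected blowout, excluded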

There is one more specific game result scenario, but I will get to that in just a second...

Overall, a team's rating eventually becomes the general average of the year's results.

The last exception to the average result is the undefeated team rule. An undefeated team must remain rated higher than any team it has defeated. The size of that gap depends on how the rest of each team's season plays out. It could be a lot of points, but it typically has a minimum of 0.2. A perfect example of this exists with Long Beach Poly (current rating of 64.2) and Mission Viejo (64.0), due to the fact that the Jackrabbits defeated the Diablos on September 2nd. LBP's average for significant games is 63.2, while MV's is 64.3, but Poly must remain higher because of that win.
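A minimal sketch of that floor, using the numbers above (again, the name and exact mechanics are my assumption; the real adjustment is likely more involved):

def apply_undefeated_floor(avg_rating, beaten_opponent_ratings, gap=0.2):
    """An undefeated team must sit at least `gap` above every team it has beaten."""
    return max(avg_rating, max(beaten_opponent_ratings) + gap)

# Poly's significant-game average is 63.2, but they beat Mission Viejo (64.0),
# so their published rating is pushed up to 64.0 + 0.2.
print(round(apply_undefeated_floor(63.2, [64.0]), 1))   # 64.2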

Now for the last specific game result scenario... if Team A is rated more than 30 points higher than Team B, but does not blow them out by 30+, then that game becomes significant and gets tallied as such in the ratings. This exact scenario played out twice in NorCal last week, and I would argue it takes place more commonly here than in SoCal. I think NorCal teams just tend not to run up scores nearly as much, particularly in tight communities. In SoCal, you typically have one continuous mass of humanity, so it's not like there is a huge cultural difference between the cities of Whittier and Santa Fe Springs when it comes to football.

Serra defeated Valley Christian 36-7. The Padres' current rating (post-game) is 61.0, while the Warriors' is 10.2. This 29-point result is now considered significant because Serra simply chose not to run up the score. Granted, the 30-point limit is supposed to take this into consideration, but the Padres were up 36-0 at the half. Could they have dropped 60 or even 70 on VC if they wanted and made the game insignificant? Yeah, probably, but that didn't happen. They clearly called off the dogs very early.

In Serra's case, this result is not really going to impact anything other than their placement on the overall state and national lists. Could they have been higher than LBP or MV this week without this result being a part of their rating average? Maybe. If they win their next two games, though, they will enter the CCS D-I playoffs as the #1 seed, and pretty handily so.

For Salinas, last week's result could end up costing them a place or two during the seeding process. The Cowboys' current rating is 30.5, and they defeated Alisal (also located in Salinas, rating -12.4) by a score of 42-17. While this game was not close and was never in doubt, the 25-point margin makes the result significant. In this game, the starting Salinas QB went down with an injury, so the coaches seem to have become a little gun-shy about keeping their starters in longer than necessary. Their last TD appears to have been scored by a reserve who had fewer than six carries on the year entering that game.

Now, I'm not suggesting that teams should purposely run up the score late in games to get past that 30-point barrier, but because of the system the CIF sections are adopting, perhaps holding a 31-point lead may have to be considered. I do think it's very important to get your young players game-time experience to help them grow for subsequent years. This is just a potential side effect that has been introduced.

I just think this should be kept in mind when looking at the state rating list. There is a perception that the Calpreps system tends to artificially place NorCal teams lower, and I believe this is one of the factors behind it.

One final note: the 15- and 30-point limits referenced above also tend to be fluid. The low end can be 14, and the upper end perhaps 28. Calpreps adjusts these depending on the results it sees throughout the year.
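(In the toy sketch above, those fluid limits would just be different clamp bounds, e.g. game_ratings(25, 20, 56, 10, min_credit=14, mov_cap=28), which yields (48, -3, True).)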
 
A cogent explanation.

I just wanted you to know that I read the whole thing so that you are aware that at least one person found it interesting. I truly appreciate Cal Preps, but I am also aware of some of its inherent foibles. Thanks for taking the time to delve into them with examples.
 
Calpreps starts out with a bias. It uses last year's ratings to compile next year's teams and adjusts accordingly.
I'm not sure what you are saying regarding the new-year data and starting point from last year's ratings. Mainly I don't understand what you mean by "...and adjusts accordingly."
 
Love Calpreps, hate sections using it exclusively to create brackets. In, say, multiple brackets of 16, #16 gets screwed and #17 is luckier than a leprechaun.

Divisions need to be preset by enrollment, past success, and program strength over time. Last year in CCS, Sacred Heart was gifted a playoff berth. Great for them for winning, but teams that beat them, such as Riordan, sat at home. Same league, and Riordan has a smaller enrollment!
Don't get me started on SS/LACS.
 
Riordan also didn't beat the same teams Sacred Heart did in the WCAL league.
 
Thank you for the Outstanding Explanation!…. 🍻
 
Thanks for posting this. It was really good and informative.
 
Riordan also didn't beat the same teams Sacred Heart did in the WCAL league.
But did defeat SHC head to head rather handily. Will be a challenge for sure this Saturday. Could be a great matchup of the Irish skill players vs the Crusader skill players if all are healthy.
 
Calpreps starts out with a bias. It uses last year's ratings to compile next year's teams and adjusts accordingly.
That's not bias. That's preceding data. Bias would be "although this team had a really bad year, they are typically really good, so we're going to ignore the 20 rating they had last year and start them with a 40". Calpreps does not do that.
 
Love Calpreps, hate sections using it exclusively to create brackets. In, say, multiple brackets of 16, #16 gets screwed and #17 is luckier than a leprechaun.

Divisions need to be preset by enrollment, past success, and program strength over time. Last year in CCS, Sacred Heart was gifted a playoff berth. Great for them for winning, but teams that beat them, such as Riordan, sat at home. Same league, and Riordan has a smaller enrollment!
Don't get me started on SS/LACS.
We're going to continue to disagree about how playoff divisions should be set up. What you're describing is the NCS, which has a complete crap system.

The East Bay Times and Mercury News call the CCS system much better than the NCS's.
 
That's not bias. That's preceding data. Bias would be "although this team had a really bad year, they are typically really good, so we're going to ignore the 20 rating they had last year and start them with a 40". Calpreps does not do that.

Not the best word choice, maybe, but his point is valid. Using data from the previous season is flawed to say the least.

You could say the “bias” he mentions is prejudice in favor of a team that was very good the season before versus one that wasn't. Looking at it that way, it's not exactly an incorrect word to use.
 
So, the system should just ignore the fact that Mater Dei, St. John Bosco, De La Salle, and Serra tend to be really good from year to year? It should ignore the fact that Davis, Long Beach Cabrillo, and Santa Cruz Harbor tend to not be nearly as strong? What system does that?

People who have paid attention to this service from the beginning will remember a time when everyone started at 0. Many would laugh at and ridicule the early ratings, which would often have DLS #12 in the state during its dominant years. The goal was to establish something that made more sense from beginning to end. Eventually, if a team plays enough significant games, the preseason rating is supposed to disappear. Based on my observations, it generally does. There is an issue when a team doesn't play enough good competition, but SoCal people made that argument for years about DLS, didn't they?

Are there hits? Yes. Are there misses? Yes. But are the misses any more egregious than Cal-Hi starting Servite at #23 in the state this year?

Being really good one year does not guarantee a high preseason rating the following year. If a playoff team had 15 senior starters, the system (provided the coach replies to the questionnaire) will know that the team will be replacing a bunch. That team will get dropped. It will also know if a team is returning 15 starters. That team will get boosted.

To me, complaining about a team's preseason rating is like complaining that they have new uniforms. So what, go out and beat them.
 
Is there anything in the system that gives more weight to more recent games? We often see teams with surprising wins or losses early in the season, then a gradual shift in that team's trajectory. I have seen early-season scores that drag teams down or prop them up. I think Turlock and Rocklin may be experiencing a bit of this right now.

Does the system just average it all out and figure that is enough or is there any indication that more recent games are given a more significant impact?
 
Cal14 provided a great explanation of how Calpreps works on a game-to-game basis and how it builds throughout the year. I thought there were two other elements at play as well.

First, it takes all the data it has for the season and applies it retroactively to determine what rankings would yield the highest predictive value, or closest proximity to what has actually occurred, and makes adjustments on that basis. It is a linear program that optimizes for predicting the results of games and ranking teams.

Second, in the past few years (to MT's question above) it has added a trending factor, placing more weight on the past 4-5 weeks as a predictive model for what has occurred during that period, and lessening the weight on earlier games unless they make the model more accurate rather than less accurate.
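For what it's worth, here is a toy illustration in Python of that "fit the ratings to best predict actual results" idea, with heavier weights on recent weeks. This is purely my own construction to show the concept; Calpreps' actual model, weights, and solver are not public:

import numpy as np

# Each game contributes one equation: rating[winner] - rating[loser] ~ margin.
games = [  # (winner, loser, margin, week) -- made-up data
    (0, 1, 14, 1),
    (1, 2, 7, 2),
    (0, 2, 24, 3),
]
n_teams = 3
A = np.zeros((len(games) + 1, n_teams))
b = np.zeros(len(games) + 1)
w = np.ones(len(games) + 1)
for i, (winner, loser, margin, week) in enumerate(games):
    A[i, winner], A[i, loser] = 1.0, -1.0
    b[i] = margin
    w[i] = 1.0 + 0.25 * week        # hypothetical recency weight
A[-1, :] = 1.0                       # extra row pins the ratings' sum to 0
sqrt_w = np.sqrt(w)                  # weighted least squares via row scaling
ratings, *_ = np.linalg.lstsq(A * sqrt_w[:, None], b * sqrt_w, rcond=None)
print(ratings)                       # higher number = stronger team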

Finally, I think Calpreps gets highly accurate within a league (everyone has common opponents) and pretty good within a section, where there are a lot of intersection points to calibrate leagues and teams. Once you start applying it on a broader regional basis, there just aren't that many data points, so each one gets magnified in importance and is less likely to be representative than a larger sample would be. I think there was only one CCS game vs. a Southern Section team (Corona Del Mar at Los Gatos) and one against a San Diego Section team (Sequoia played Bonita), so you end up with minimal intersecting data to draw from.

For all its limitations, it is highly objective and, in my opinion, the best alternative. I like the CCS method of basing half of the seeding rank on the Calpreps standings and half on its more traditional point system. It makes things more complex, and the CCS point system is itself influenced by Calpreps, since it provides bonus points for being, or playing, a top-100 or top-150 team.
 
2018-2019.
Preseason, Calpreps had us at about 35.9 and Central Catholic at 43.something to start the season. By the end of the year we gained ground, but in Calpreps' algorithm we couldn't pass them. CC played DLS, we didn't, and the section cited that to us, along with Calpreps being so close, as why we got the 3 seed and CC got the 2 seed. They beat us on a rainy grass field by 1 point after we failed to convert our 2-point conversion with 1 minute left. We lost. I'm saying the argument is we couldn't have done anything to pass CC in Calpreps. If CC handled their business, there was nothing we could have done to pass them all year long. I will always feel that if that game was played on turf, our speed was a huge advantage, but on a rain-soaked field our speed advantage was nullified. Again, we lost and CC won, but IMO we could do nothing to get that game at Indy; we needed CC to misstep, otherwise we were just playing out the year till we got to that game on grass in Salida... Calpreps already had that game in Salida 13 weeks prior, I guess is what I'm saying.
 
I'm glad this has sparked some interest.

I don't believe there is a weight added to the later games vs earlier. Regardless of when the game is played, the undefeated rule remains in place, as an example. Yes, there is a "trend" feature, but that is mostly to assist in demonstrating how the more recent results are impacting the current rating. E.g., if Team A has a rating of 30, but has back-to-back game ratings of 15, the trend will be downwards.

What I did not add was that playoff games are weighted more heavily. There used to be a statement floating around that they were worth 2.1x a regular-season game. While some may view this as odd, keep in mind that pretty much every human pollster does the same thing. If a team wins a rematch in the playoffs, just about everyone will move the playoff winner higher, regardless of what happened in the regular season.
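To make the weighting concrete, here's a small sketch of how a 2.1x playoff multiplier would enter the season average (the 2.1 figure is from that old statement; the function itself is hypothetical):

def season_rating(rated_games, playoff_weight=2.1):
    """rated_games: list of (game_rating, is_playoff) tuples."""
    total = weight_sum = 0.0
    for rating, is_playoff in rated_games:
        w = playoff_weight if is_playoff else 1.0
        total += w * rating
        weight_sum += w
    return total / weight_sum

# Ten regular-season games averaging 30, plus one strong playoff win:
games = [(30, False)] * 10 + [(45, True)]
print(round(season_rating(games), 1))   # 32.6 -- vs. 31.4 if all games counted equally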
 
2018-2019.
Preseason, Calpreps had us at about 35.9 and Central Catholic at 43.something to start the season. By the end of the year we gained ground, but in Calpreps' algorithm we couldn't pass them. CC played DLS, we didn't, and the section cited that to us, along with Calpreps being so close, as why we got the 3 seed and CC got the 2 seed. They beat us on a rainy grass field by 1 point after we failed to convert our 2-point conversion with 1 minute left. We lost. I'm saying the argument is we couldn't have done anything to pass CC in Calpreps. If CC handled their business, there was nothing we could have done to pass them all year long. I will always feel that if that game was played on turf, our speed was a huge advantage, but on a rain-soaked field our speed advantage was nullified. Again, we lost and CC won, but IMO we could do nothing to get that game at Indy; we needed CC to misstep, otherwise we were just playing out the year till we got to that game on grass in Salida... Calpreps already had that game in Salida 13 weeks prior, I guess is what I'm saying.
Your numbers are not accurate. Here is the preseason information for both teams:

2018 Central Catholic: 28.2

2018 Inderkum: 26.4

Both teams started right around the same spot. It's impossible to tell what the ratings were at the end of the regular season, but if the Tigers remained just behind the Crusaders, then that's what the data would have demonstrated up to that point.

You say that there was nothing you could do to pass them, but that's not true. The system rewards teams that schedule well and perform well against that good schedule. It wasn't just the De La Salle game. Central Catholic also beat St. Mary's. Inderkum didn't have anyone anywhere near either of those teams on their schedule. American Canyon? Sacramento? Bella Vista? Really?
 
Not the best word choice, maybe, but his point is valid. Using data from the previous season is flawed to say the least.

You could say the “bias” he mentions is prejudice in favor of a team that was very good the season before versus one that wasn't. Looking at it that way, it's not exactly an incorrect word to use.
In his example, Inderkum's 2018 preseason rating had dropped 9.5 points from their 2017 final.

Central Catholic was dropped 15.2.

They started 2018 only 1.8 points away from each other.

Sorry, but no. The reason why CC was rated higher at the end of the regular season is that they scheduled a much better non-league slate.
 
I think CalPreps is a cool site for projected matchups and gathering data... I'm not a huge fan of CalPreps having input and control of seedings, rankings, and ratings. I'll take the sportswriters and observers that watch the games over the computers any day. I understand that sportswriters can be biased, but computers and programs are typically created by biased people (even if unintentionally). Computers don't get a real good feel for the teams that actually compete.

There is so much that the computer can't account for....

How do you account for how teams matchup: scheme, physicality, speed, size, etc.

How do you account for teams that struggled early but figured it out week 6 or 7? Or teams that had people sitting out due to grades, injury, etc.

Why are teams that have a difficult time scheduling quality opponents penalized? Some coaches get turned down for games all the time....
What about the teams that can't afford to travel out of area for tough games?

What about the teams that get forced into mediocre or lesser leagues? Why are they penalized? Those teams get penalized for their strength of schedule….

The Inderkum example is a great one.... I can also think of the 2009 CIF SJS Div 2 Playoffs.... There is no way Grant should not have been the #1 seed... They were penalized for playing in a horrible football league and had a difficult time scheduling quality opponents. I'm sure they would have loved to play Rocklin, Folsom, St. Mary's, or Del Oro in the regular season.

A Friday night game at Grant is a completely different environment than a Saturday night neutral-site game at Folsom... St. Mary's and Rocklin both had great teams... But Grant had just won the Open state championship, was a top-10 nationally ranked team, and had the bulk of its talent coming back from the year before...

CalPreps can be easily manipulated... by coaches, leagues, affiliations, and the CIF.
 
"We're the best team in the section! We can shut down anyone... unless they have a good, big running back... 'cause, you know... they're big... and hard to tackle."

"We have the best offense around! We can score 50 on all those teams... unless they have really fast defensive backs... 'cause, you know... it's hard to outrun them when they're really fast."

At the end of the day, you have to beat the teams in front of you. For those who do that more regularly, their rating typically goes up. For those that don't, their rating typically goes down.

Long Beach Poly's Moore League is not very good at all... yet, they have the 5th highest rating in the state. If your league sucks, you schedule up in non-league. If you read what I wrote above, you might see that if a team is really that much better than the rest of their league mates, there isn't much of a penalty at all... unless they can't prove that they're really that much better.

But, it doesn't appear you read anything that I stated.
 
My misunderstanding of the #s. Still want the game on turf.
Right, but "We should be seeded higher even though we didn't play tougher teams like they did because it will give us an advantage" isn't a sound argument to make to your local section selection committee.
 
I think we were rated higher at time of selection but I can't fully remember. One of those things where they made an adjustment in seeding for 1 division but then a complete opposite decision in another.
 
Cal 14, you pick your schedule and hope it challenges you and is the best for that team and that team alone. The year or two prior, American Canyon beat our doors off; the next year we did the same to them. We scheduled the Sac High contract after they knocked off Folsom. BV is a league game. I hear ya on the schedule, none of them are DLS, but you get a schedule and all you can control is your team; you can't control a normally really good opponent being drastically down.
 
In 2018, wasn't there a rule that the seeds were determined by your W-L record in conjunction with the number of wins of your regular-season opponents? I don't think the SJS has had a subjective seeding process in any of the time I've been really following HS football (close to 30 years). They used to have that goofy pre-determined system where the league automatic-bid winners were placed in brackets with no seeding at all. This Calpreps thing is only fairly recent (the last couple of years).

When it comes to things like this, coaches typically know the rules before the season starts and it's relatively objective. It's when it's subjective like in the NCS where things can get really messy and stupid.
 
The change might have been between the 2017 and 2018 seasons.

Anyway, that old system led to some really strange seeding assignments.

That older system you are talking about (Pre-2009) was really bad. No at-large bids. A rotating number of bids per league. So many good teams were left out because their league might have only had two bids that season, while other teams in weak leagues got in with that extra bid. It really was just guesswork everywhere but at the top.

The last year of that system (2008) was the first time that Monterey Trail made the playoffs. There was a three-way tie for 3rd place in the Delta River League between Jesuit, Sheldon, and MT. Each was 2-3 in league and none of them had a winning record overall. MT won the tiebreaker (13-pt rule) and made the playoffs at 3-7. Over in the sister league, the Delta Valley, they only had two playoff bids that year. Elk Grove, at 6-4 and 2-3 in league, stayed home.

BTW, if that system had stayed in place for 2009, the Delta River would have only had two bids. Folsom, Pleasant Grove, and MT ended in a three-way tie for first place. If the 13-pt rule tie-breaker was applied, Pleasant Grove (7-3) and MT (8-2) would have gone to the playoffs, and Folsom (9-1 with a one-point loss in overtime) would have stayed home. PG and Folsom made the semi-finals and MT made it to the finals, so all three were legit teams. Even 4th place Sheldon made it at 7-3 and won their first-round playoff game.
 
"We're the best team in the section! We can shut down anyone... unless they have a good, big running back... 'cause, you know... they're big... and hard to tackle."

"We have the best offense around! We can score 50 on all those teams... unless they have really fast defensive backs... 'cause, you know... it's hard to outrun them when they're really fast."

At the end of the day, you have to beat the teams in front of you. For those who do that more regularly, their rating typically goes up. For those that don't, their rating typically goes down.

Long Beach Poly's Moore League is not very good at all... yet, they have the 5th highest rating in the state. If your league sucks, you schedule up in non-league. If you read what I wrote above, you might see that if a team is really that much better than the rest of their league mates, there isn't much of a penalty at all... unless they can't prove that they're really that much better.

But, it doesn't appear you read anything that I stated.
I'm not a huge fan of CalPreps having input and/or control of seedings, rankings, and ratings.

Regarding "Scheduling UP": I don't know you so I won't assume. Have you ever scheduled games for a top flight HS football program? It can be a difficult task..

A lot of good programs (both in weak and strong leagues) deal with scheduling woes. This goes on around the country. I've seen it in the inner-city, private, suburban, and rural areas. Scheduling is not as easy as you make it out to be.

Are the NorCal teams (Folsom, Serra, St. Mary's) complaining about their state rankings, ratings? I highly doubt it.

If so, maybe they need to "Schedule Up"... Maybe they should put Long Beach Poly and Mission Viejo on the schedule lol... Or maybe they should schedule MD, SJB, or CC10... I'm sure that loss would elevate them above LBP and MV.
 
Until the likes of Serra and Bellarmine schedule and beat SoCal teams like LBP and MV, the ratings are not going to give them the benefit of the doubt.

The #1 hindrance for decent NorCal teams is that the section as a whole is terrible, and little effort is made to schedule outside the NorCal bubble.
 