I'm a little late to the party with this, but the way I think about farm system rankings requires having top 100 lists, most of which don't come out until this time of year. With John's list out yesterday, all the lists I care to include as data points in my farm rankings are out.
When I think about ranking farm systems, I think along two dimensions - depth and upside. By depth, I mean the sheer number of players in a system with a reasonable chance of providing a team value, with some differentiation on the basis of likelihood to contribute. By upside, I mean having potential impact players, or solid-average regulars with relatively higher chances of contributing. My rankings are an attempt to objectively analyze farm systems on both of these factors, and then to create an overall ranking by weighting both factors equally. I accomplish this by creating measurements for each factor, adding them up for each team, and then standardizing the scores using Z scores. This means that a team's score on one factor can be directly compared with its score on the other factor, and the two can be averaged to create an overall score and ranking. With Z scores, a score of 0 is average, a score of +1 represents one standard deviation (SD) above average, a score of -1 likewise one SD below average, and so on. Below my overall rankings, I explain in detail how I measured each factor, and show how each team scored on it. Oh, and I've used the standard rookie criteria for eligibility, but Yu Darvish and Yoenis Cespedes are not included - they're in no way part of the farm system.
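To make the standardization step concrete, here is a minimal sketch of how raw factor totals become Z scores that can be compared and averaged across factors. The team names and totals below are made up for illustration, not my actual data:

```python
from statistics import mean, pstdev

# Hypothetical raw depth totals for a few teams (illustrative only).
depth_totals = {"Rays": 18.5, "Royals": 17.0, "Blue Jays": 16.5, "Astros": 9.0}

def z_scores(totals):
    """Convert raw totals to Z scores: 0 = average, +1 = one SD above, etc."""
    mu = mean(totals.values())
    sd = pstdev(totals.values())  # population SD across the included teams
    return {team: (raw - mu) / sd for team, raw in totals.items()}

for team, z in sorted(z_scores(depth_totals).items(), key=lambda kv: -kv[1]):
    print(f"{team}: {z:+.2f}")
```

Because both factors end up on the same standardized scale, averaging a team's depth Z score with its upside Z score is a straightforward way to weight the two equally.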
Before moving on, a cautionary note. As I have stressed when others have put out rankings using similar methodologies, the key is in how things are weighted. The same applies here. First off, this is meant to be fun, and I don't pretend to claim that this is the definitive word. I've made judgments on the factors I used (and how they're weighted), how to measure them, and how to value players in relation to each other when measuring them. In terms of those latter two criteria, I've tried to include some objective research, but there's also a good dose of personal intuition. At the bottom, I present some information on how my system values prospects vis-à-vis other prospects, as I think this provides a good basis for understanding how my system ultimately works. Questions and comments on the measurements are welcome, and if there are suggestions for tweaking the measurements/valuations that are easy to incorporate, I'm happy to run them through my data and show how they would affect the rankings. Finally, a disclosure: I'm a Jays fan, and while I don't think that's coloured (I also use the Queen's English) my analysis, a reader is entitled to make their own inference on that point.
PRE-2012 FARM SYSTEM RANKINGS
DEPTH

As I summarized above, this criterion is meant to count the number of players in a system with a reasonable chance of providing MLB value, not to capture differences in quality among those players (that will be considered in the upside factor). At the same time, some differentiation is still needed. To measure this, I'm using John's grades, as reported in the updated team Top 20s. My cutoff is players of Grade C+ and higher, since straight Grade C players are prevalent, less likely to even make it to the majors in a significant capacity, and, most significantly, are not reported for all teams (in fact, weaker teams will have more of them ranked in the top 20). I divided players into 3 groups - those graded B or higher, those graded B-, and those graded C+.
I awarded teams 2 points for every B or higher player, 1.5 points for every B- player, and 1 point for every C+ player. Of course, a player graded "A" is worth well more than a player graded "B", but those differences will be taken care of in the upside rankings, where I consider the Top 125 prospects and award value. The back end of the Top 125 will generally be B prospects, which is why I've lumped all B-level and higher prospects together. Why those values? As a starting point, I looked at values reported by Sky Kalkman at Beyond the Box Score (these are from 2009; I've seen slightly different ones) that a Sickels B-grade prospect was worth $5.5-7.3 million, whereas C prospects were worth $0.5-2.1 million. This doesn't help with the values for levels in between, but gives a sense. Ultimately, I settled on those numbers intuitively - it's the relative valuations that matter, and they basically say 4 B- prospects = 3 B prospects and 2 C+ prospects = 1 B prospect. That feels about right to me. Below are the team totals and depth rankings.
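The depth scoring above can be sketched in a few lines. The grade list in the example is hypothetical, not any real team's Top 20:

```python
# Points per grade tier: B or higher = 2, B- = 1.5, C+ = 1, below C+ = 0.
POINTS = {"B+": 2.0, "B": 2.0, "B-": 1.5, "C+": 1.0}

def depth_score(grades):
    """Sum the depth points for a team's graded prospects."""
    # Any A-level grade also counts as B-or-higher (2 points); straight C
    # and below contribute nothing, per the C+ cutoff.
    return sum(POINTS.get(g, 2.0 if g.startswith("A") else 0.0) for g in grades)

# Hypothetical team: one A, one B+, two B-, one C+, two C.
print(depth_score(["A", "B+", "B-", "B-", "C+", "C", "C"]))  # -> 8.0
```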
UPSIDE

As summarized above, this criterion is meant to capture potential impact players, or solid-average regulars with relatively higher chances of contributing. The way I measured this was to build a Top 125 prospect list by aggregating 8 individual Top 100 lists (some went a little longer) - Baseball America, Bullpen Banter, Kevin Goldstein (Baseball Prospectus), Keith Law (ESPN), Jonathan Mayo (MLB.com), Frankie Piliere (Scout.com), Project Prospect and, of course, John Sickels. A total of 165 players were named on at least one list, and 62 were named on all 8 lists. For each player, I averaged their ranking across all lists on which they were named. To account for players not named on all lists, I created an adjusted rank by multiplying the average ranking by 8/X, where X is the number of lists the player made. So, a player who averaged 75th on 4 of 8 lists would have an adjusted rank of 150. I also created an override to ensure that missing a list couldn't improve the adjusted ranking (for example, Mike Montgomery missed Goldstein's list, and the (8/7) adjustment factor left him higher than if he had been 102). In general, players on more lists rank higher than players on fewer lists, but there are a few exceptions where being high on one list boosts a player above another player on more lists (for example, David Holmberg is rated #47 on Project Prospect's list, which puts him higher than Joe Benson, named in the 90s on two lists).
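Here is a sketch of the adjusted-rank calculation as I've described it. The exact mechanics of the override are my interpretation of the Montgomery example (treating each missed list as a rank of 102, and taking whichever result is worse for the player), and the ranks used are illustrative:

```python
N_LISTS = 8

def adjusted_rank(ranks, missing_fill=102):
    """Average a player's ranks, scaled by 8 / (lists made), with an override
    so that missing a list can never improve (lower) the adjusted rank."""
    avg = sum(ranks) / len(ranks)
    adj = avg * N_LISTS / len(ranks)
    # Override (my interpretation): pretend each missed list ranked the
    # player 102nd, and keep whichever adjusted rank is worse (larger).
    as_if_filled = (sum(ranks) + missing_fill * (N_LISTS - len(ranks))) / N_LISTS
    return max(adj, as_if_filled)

# A player averaging 75th on 4 of 8 lists gets an adjusted rank of 150.
print(adjusted_rank([75, 75, 75, 75]))  # -> 150.0
```

For a player ranked very high on 7 of 8 lists, the override is what bites: the 8/7 multiplier alone would leave him better off than if the eighth list had ranked him 102nd.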
After creating the consensus Top-125 list, I had to determine how to value each spot. In a seminal study on age and the draft, Rany Jazayerli tried to relate the value produced by a prospect to his position in the draft:
I looked at a number of different formulas to determine which would best fit the data, and the most accurate correlation I came up with was a linear relationship between "expected value" and 1/SQRT(PK). That is to say, the value of a draft pick correlates with the reciprocal of the square root of the pick number.
This type of reciprocal square-root relationship accurately describes the nature of prospect value - the guys at the top are worth a ton, and as you go down, the differences in value shrink. I have adopted this method as well, using the ranking number in place of the pick number. I made one tweak, however - since these consensus lists can have multiple players sharing a top spot, the differences might not be as large as you go down the list, especially at the top where the value dropoffs are biggest. So before taking the reciprocal, I averaged the square root of the rank number with the square roots of the rank above and the rank below. This has little effect beyond the top 20, but slightly smooths the relative values of the highest-ranked prospects. Below, I present the Top 125 list, the adjusted rank, and the value attributed to each player (really the spot they occupy):
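A sketch of the per-spot value function. How rank 1 handles its missing neighbour above is my assumption (I reuse rank 1 itself); everything else follows the smoothed reciprocal-square-root description:

```python
import math

def spot_value(rank):
    """Value of a list spot: reciprocal of the smoothed square root of the
    rank, averaging in the square roots of the neighbouring ranks."""
    above = max(rank - 1, 1)  # rank 1 has no rank above it (my assumption)
    smoothed_sqrt = (math.sqrt(above) + math.sqrt(rank) + math.sqrt(rank + 1)) / 3
    return 1 / smoothed_sqrt

# The top spot is worth several times a back-of-the-list spot.
print(round(spot_value(1) / spot_value(100), 1))
```

Without the smoothing, the #1-to-#100 ratio would be a flat sqrt(100) = 10; the smoothing pulls the top spots slightly closer together.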
Interestingly, Mike Trout and Bryce Harper ended up with the exact same average ranking across the 8 lists, so they share the #1 prospect title. Now, you might look at this and say Harper or Trout is not worth 33% more than Matt Moore, but keep in mind this is about spots, not players. Moreover, it's only half of the overall ranking, and in the other part they are considered equally valuable. Therefore, overall, Harper/Trout is worth about 16% more in my analysis. Likewise, this method says the #1 prospect (when not tied) is worth 9x the #100 prospect - when combined with the other half, it's closer to 5x, which is reasonably in line with Sky Kalkman's tables referred to above.
From there, I added up the players' values for each team to get totals, and standardized them to get the following rankings:
PUTTING THE TWO PIECES TOGETHER
As I said above, the key to doing these rankings is how you weight players relative to each other. Because of the disparate nature of my two components, where in one the #1 prospect has the same value as #100 and in the other the #1 would be worth closer to 10x the value of #100, I'm providing the following table to show the value attributed to the #1 rank in relation to other positions:
Finally, to show how the system values positions relative to one another, here is the following table (the way to read it: the rank # on the vertical axis is valued X times as much as the corresponding number on the horizontal axis):