Prior to the 2012 season, I did a post based on a methodology I developed to measure farm systems across two dimensions: depth and upside. With all the Top 20s done, having seen some discussion in other posts of different ways of valuing farm systems, and with John's intention to release his rankings soon, I thought it was a good time to update mine. For reasons I'll discuss further below, I consider these rankings preliminary, because at this point not all of the information I used last year has been published. Therefore, whereas my exercise last year was based on an aggregation of multiple rankings, this ranking is based solely on John's individual rankings in the Top 20 lists, along with some necessary interpolation on my part.
I laid out the methodology in great detail last year, so if you're interested in the nitty-gritty details, just follow the above link. My method is a little counterintuitive, so I suggest in particular looking at the chart showing how overall player ranks (i.e., #1 vs. #100) match up relative to each other value-wise, as it provides a more integrated view of how the system values players. Otherwise, I'm just going to excerpt a small portion of what I wrote last year (updated slightly) to explain my approach:
When I think about ranking farm systems, I think along two dimensions: depth and upside. By depth, I mean the sheer number of players in a system with a reasonable chance of providing MLB value, with some differentiation on the basis of likelihood to contribute. By upside, I mean having potential impact players, or solid-average regulars with relatively higher chances of contributing. Essentially, this is a measure of having elite talent.
My rankings are an attempt to objectively analyze farm systems on both of these factors independently, and then to create an overall ranking by weighting both factors equally. I accomplish this by creating measurements for each factor, adding them up for each team, and then standardizing the scores using Z scores. This means that a team's score on one factor can be directly compared with its score on the other, and the two can be averaged to create an overall score and ranking. With Z scores, a score of 0 is average, a score of +1 represents one standard deviation (SD) above average, a score of -1 one SD below average, and so on.
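As a concrete illustration, the standardization step might look like the sketch below. The team names and raw scores are invented for the example; only the Z-score mechanics reflect the approach described above:

```python
import statistics

# Hypothetical raw depth scores for four teams (illustrative values only;
# in practice there would be one raw score per team for each factor).
raw_scores = {"Team A": 14.5, "Team B": 9.0, "Team C": 11.5, "Team D": 7.0}

mean = statistics.mean(raw_scores.values())
sd = statistics.pstdev(raw_scores.values())  # population standard deviation

# Z score: 0 = league average, +1 = one SD above average, -1 = one SD below.
z_scores = {team: (score - mean) / sd for team, score in raw_scores.items()}
```

Because both factors end up on the same standardized scale, averaging the two Z scores per team gives the overall score directly.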
PRE-2013 PRELIMINARY RANKINGS:
One thing that's really interesting to note is that after the Mets at #12 overall, there's a huge step down to Colorado at #13, a difference of half a standard deviation. That's a pretty huge gap, and there's also a pretty big gap from Washington at #26 down to the truly awful systems in CWS, LAA and DET. Broadly speaking, things are pretty much in line with what I expected in terms of how teams stack up. Below, I've charted the upside value against the depth value for each of the teams:
Broadly speaking, the better farm systems are in the upper right quadrant, with good upside and good depth. Likewise, in the lower left are the poorer systems on both dimensions. Toward the middle and along the axes are some of the more interesting cases, the split decisions. There are teams like Houston, Chicago and San Diego with tremendous depth, but lacking in elite upside at this point. In a similar vein is a system like Philadelphia, which has decent depth but is largely lacking in elite talent. On the opposite end are systems like Baltimore and Cincinnati, with poor depth but anchored by some elite talent and therefore high upside. Below are some brief details on how the calculations were done, as well as rankings for each dimension separately.
To measure depth (and depth only), I take John's rankings and assign a value of 2 for each player graded B or above, 1.5 for each B-, and 1 for each C+. The idea is that once you get to the level of straight C prospects, it's a pretty ubiquitous commodity; essentially, I consider them replacement-level prospects. On the flip side, there's no distinction here above B-level prospects, since that is really a matter of difference in upside. Based on John's grade distribution, the back end of the Top 100 is generally made up of straight B-level prospects, and that's loosely where I draw the line. With all the Top 20 lists published (as always, yeoman's work by John), this is just a matter of totaling things up.
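The depth tally can be sketched as a simple function over a system's letter grades. The example grade list is invented; the point values (2 for B or above, 1.5 for B-, 1 for C+, nothing below) are the ones described above:

```python
def depth_score(grades):
    """Total a system's depth points from a list of prospect letter grades.

    Grades of B or above are worth 2, B- is worth 1.5, C+ is worth 1, and
    straight C or below counts for nothing (replacement-level prospects).
    """
    total = 0.0
    for grade in grades:
        if grade in ("A+", "A", "A-", "B+", "B"):
            total += 2.0
        elif grade == "B-":
            total += 1.5
        elif grade == "C+":
            total += 1.0
        # C and below: no credit
    return total

# Invented example system: one A, two B, one B-, three C+, four C prospects.
print(depth_score(["A", "B", "B", "B-", "C+", "C+", "C+", "C", "C", "C", "C"]))
# 2 + 2 + 2 + 1.5 + 1 + 1 + 1 = 10.5
```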
Measuring upside is a little more difficult at this point. Last year, I used a consensus Top 125 based on 8 Top 125 lists, assigning values to each spot based on the inverse square root of overall ranking, with a smoothing factor. At this point, a number of those lists aren't out yet; moreover, neither is John's list. So essentially, my upside measurement is based on an interpolated ranking derived from John's grades. I did this by separating players according to grades, ordinality within team rankings, anecdotes that I recalled from John's writing and comments, and previous rankings (as a last factor). This obviously isn't going to be completely accurate, but it should be reasonably close, and given the inherent error in this type of exercise, the difference is immaterial. For 2012 draftees, I simply used the average value for their grade level.