Baseball Statistic

November 2024


Overrated

Batting average. Batting average arrived in baseball in 1872, when a fan from Washington proposed that hitters be ranked not by hits per game, the custom of the day, but by hits per at-bat. When the National League formed in 1876 and adopted this “batting average” as the standard by which to award the annual batting championship, the statistic became the primary method for rating hitters forevermore. (Hitting .300 confirmed a player as one of the league’s best, while a .400 average punched his ticket to baseball immortality.) The problem was that the batting average was patently and painfully simplistic: Singles counted the same as doubles and home runs, and walks were altogether ignored. Complaints about this started earlier than many realize. “Would a system that placed nickels, dimes, quarters and fifty-cent pieces on the same basis be much of a system whereby to compute a man’s financial resources?” wrote F. C. Lane in Baseball Magazine in 1916. “And yet it is precisely such a loose, inaccurate system which obtains in baseball….” Fans, baseball executives, and Nobel laureates have spent more than a century devising more sophisticated and useful methods, from slugging percentage (total bases per at-bat) to on-base percentage (times reaching base per plate appearance) to equations that take computers hours to crunch. But batting average remains the most common way to decide whether Johnny can hit.
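
For readers who like to see the arithmetic, here is a minimal sketch of the three statistics just described. The stat line is invented for illustration, and the on-base calculation assumes the modern official denominator (at-bats plus walks, hit-by-pitches, and sacrifice flies), a detail the parenthetical above folds into “plate appearances.”

```python
# A minimal sketch of the three statistics described above, using an
# invented stat line (all numbers here are illustrative, not real data).
singles, doubles, triples, home_runs = 100, 30, 5, 15
hits = singles + doubles + triples + home_runs  # 150
at_bats = 500
walks = 60            # ignored entirely by batting average
hit_by_pitch = 5
sacrifice_flies = 4

# Batting average: hits per at-bat; a single counts the same as a homer.
batting_average = hits / at_bats

# Slugging percentage: total bases per at-bat, so extra-base hits weigh more.
total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
slugging = total_bases / at_bats

# On-base percentage: times reaching base per plate appearance. The modern
# official denominator (AB + BB + HBP + SF) is assumed here.
on_base = (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sacrifice_flies)

print(f"AVG {batting_average:.3f}  SLG {slugging:.3f}  OBP {on_base:.3f}")
# -> AVG 0.300  SLG 0.470  OBP 0.378
```

Note how the same hitter looks very different through each lens: a .300 batting average tells you nothing about the 30 doubles or the 60 walks that the other two measures capture.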

Underrated

Range factor. While hitters and pitchers have inspired hundreds of different statistics since the days of Alexander Cartwright, numbers for fielders have remained virtually unchanged since the 1860s: Glovemen are rated either by their total errors (chances they’ve flubbed) or by their fielding percentages (the rate at which they do not make those errors). Both measures are horribly faulty, however, because a fielder’s job is not to avoid errors but to reach as many balls as he can and then convert them into outs. With this in mind, in 1975 the aspiring baseball writer Bill James devised a new system of rating fielders simply by their successful plays (putouts plus assists) per game; he called his statistic the range factor. The metric did more than just quantify fielding skill far more accurately; it helped launch James’s career, turning him into a best-selling author and the pied piper of baseball’s modern statistics movement. As for the range factor itself, perhaps the most amazing aspect is that its roots reach deep into baseball’s primordial past. As far back as 1872, the club scorekeeper of the National Association’s Philadelphia Athletics rated fielders not by errors but by putouts and assists per game. It took a century for James’s approach to be brought into the mainstream.
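
James’s formula is simple enough to state in a single line of code. Here is a minimal sketch, with invented numbers for two hypothetical shortstops:

```python
# A minimal sketch of range factor as the article defines it:
# successful plays (putouts plus assists) per game. Numbers are invented.
def range_factor(putouts: int, assists: int, games: int) -> float:
    """Successful plays per game: (PO + A) / G."""
    return (putouts + assists) / games

# Two hypothetical shortstops with similar fielding percentages can post
# very different range factors if one simply reaches more balls.
print(f"{range_factor(250, 420, 150):.2f}")  # 4.47 plays per game
print(f"{range_factor(200, 340, 150):.2f}")  # 3.60 plays per game
```

The second shortstop may well have the prettier fielding percentage, since balls he never reaches can never be errors; the range factor is what exposes the difference.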
