Thunder Pseudo-Stats and Jeff Green

It was interesting to read through jksnake99's FanPost regarding Jeff Green, particularly given how many APBRmetrics were thrown out there. Given the discussion that ensued, I figured we could use a frontpage post to go over some of this again. The title of the post mentions Jeff Green, but most of this has nothing specifically to do with him -- it's just that all of this writing happened (if not on a whim, then) as a direct result of reading the commentary that followed that Jeff Green-centric FanPost.


Anyhow, moving on. Roland Beech, he of plus/minus and 82games.com fame, does a great job of talking about APBR stats. Here are a few quotes from an interview SLAM conducted with him (emphasis mine):

RB: Well I am not after the one number rating, neither is Dean Oliver and a number of other people. Yes the stats folks often tend to be people using box score data only since that’s what they have at hand. Similarly there are a number of folks who are ‘true believers’ in regression based +/- type metrics. I simply feel we need to go out and collect more data on the specifics of games and that when we have this data things will be much more self evident. This is already happening. For example, a lot of people point to defense as one of the missing ingredients in the box score, but by tracking who is guarding who on plays and what transpires you can actually create very detailed defensive stats, and then even adjust them by the quality of player being guarded, etc. There’s no need once you have the data to try and deduce things, it’s right in front of you.

RB: Right I kind of intentionally present most data in really a raw, unadjusted form. Anytime you make adjustments you are using some kind of assumptions, which may or may not be true. I do think the site is ‘advanced stats’ but I’m much more of a ‘let’s collect more data’ type of analyst rather than delving into trying to infer things through regressions, etc.

RB: I am not a fan of one number, overall type player ratings since I don’t think players have constant value. Their contributions depend heavily on who they play with, the coaching schemes, the role they are asked to play, whether they are happy, healthy, etc. The Roland Rating used to just be straight on/off but then people started to think I was advocating that as a stand alone player rating, so I added in a few more simple elements, intending maybe one day to publish a more comprehensive rating system, but that hasn’t been a priority since I don’t really look at players in that way. On the other hand something like ‘clutch stats’ is a pretty straightforward look at some specific numbers and so yes, I’m happy to say that a player is a good clutch scorer or something by stats.

RB: I like to include team influence numbers in any kind of player evaluation and that can be on/off, a simple adjusted plus/minus (not regression based) and so on. Yet I don’t think you ever want to fully rely on only those kinds of things—it’s just part of the puzzle. Oddly while I have published a lot of regression based ‘adjusted +/-’ articles on 82games, I am not actually a fan of that approach. I think again, with more data on hand you can really understand a player’s strengths, weaknesses and traits very clearly without having to resort to mathematical techniques to try and extract info that you think is ‘missing.’

The important thing to note here is that, at the moment, there simply aren't statistical measures for basketball that are as all-encompassing as the ones available for, say, baseball. PER and adjusted plus/minus don't provide as much "useful" information as the myriad of baseball stats like FIP, CHONE, tRA, etc. Even comparable stats just aren't as accurate or reliable, owing to the greater amount of noise present in any given basketball game. The outcome of any single possession depends on the players on the floor, their positioning, and any number of other factors at that moment in the game. It can all be quantified and analyzed, but extracting meaningful information from it is a much harder problem. Furthermore, as Roland touches on above, the more "advanced" a stat becomes, the more assumptions come into play.
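As a rough illustration of how much that noise matters, here's a minimal simulation sketch (the per-possession spread, the possession count, and the +2 "true" edge are all invented for illustration, not real figures): even when a lineup's underlying edge never changes, the margin you actually observe over a season-sized sample of possessions swings over a range several times larger than the edge itself.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EDGE = 0.02      # assumed true edge: +2 points per 100 possessions
POSSESSION_SD = 1.3   # rough spread of single-possession point margins (assumption)
POSSESSIONS = 3000    # roughly a season's worth of on-court possessions (assumption)
TRIALS = 2000         # number of simulated "seasons"

# For each simulated season, average the per-possession margin and scale to per-100.
observed = rng.normal(TRUE_EDGE, POSSESSION_SD,
                      size=(TRIALS, POSSESSIONS)).mean(axis=1) * 100

print(f"true edge: +{TRUE_EDGE * 100:.1f} per 100 possessions")
print(f"observed over one season: middle 90% runs from "
      f"{np.percentile(observed, 5):+.1f} to {np.percentile(observed, 95):+.1f}")
```

With those made-up inputs, a lineup that is "truly" +2 can show up anywhere from roughly -2 to +6 over a single season, which is the sort of noise the baseball stats above largely avoid thanks to cleaner, per-plate-appearance accounting.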


In his response to jksnake99's post, Zorgon wrote:

I don’t really put too much stock on those PER ratings, because according to those, Troy Murphy should have contended for the MVP last season, and Corey Maggette is the Best player on the Warriors, even though he’s essentially a black hole with legs and is much maligned by Warriors fans. They’re useful sometimes, but they never tell the whole story, at least, not as much as you get by watching the person play. Think of it this way. If we take away the scoring, who’s the better player? Durant could probably dish out more assists, as is the virtue of his position, and he’s a better rebounder, but only marginally. But, Green has far less turnovers, and is a much better defender. There are nights where KDs scoring is almost pointless, because he gets scored on equally by the guy he’s defending.

Why is the team better when he’s off the floor? Because there are times when he’s on the floor when the team is without Durant, and the team, in general, does worse without him. At least, that’s my guess. I’m not sure if I want to put too much stock in that site, because according to it, we’d be better off starting Harden and Collison, and we should have kept Livingston, because he’s better than Maynor. Eh….
Address the PER ratings! I can’t, I’m not an insider.

And that pretty much illustrates the issue here. PER, adjusted +/-, and the rest are not "holy grail" statistics. They are all limited in varying ways. Some of these issues we've already covered above (see: data noise, assumptions, etc.). Others include the amount of data available, sample skew (e.g., certain players are nearly always rotated in alongside the same teammates), and the failure to separate out things players largely control (rebounding, TS%) from things they partially control (teammate FG%) and things they have almost no control over (teammate FT%). You can see how quickly this can devolve into a mess. Defensive stats are equally problematic when used independently and/or out of context. If a good defensive player (and we also need to separate out defensive scheme, on-ball versus off-ball defense, opponent offensive ability, pace, etc.) on a terrible team moves to a good team, how much will his defensive stats change? And then how are we using his defensive stats to analyze his productivity? Are we trying to compare his year-to-year changes in skill (accounting for changes in teammates, teams, etc.), his in-season changes (with increasing amounts of data), or something altogether different?
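To make the sample-skew point a bit more concrete, here's a toy sketch of the kind of lineup regression that sits behind regression-based adjusted +/- (the players, impacts, and noise level are all made up). Each row is a stint, a 0/1 flag marks who was on the floor, and the target is the scoring margin for that stint. When two players sub in and out together, their columns are identical, so the regression can only pin down their combined impact; how it splits that impact between them is essentially arbitrary, which is one reason these ratings need regularization and large samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "stint" data: one column per player, 1 if on the floor for that stint.
n_stints, n_players = 200, 6
X = rng.integers(0, 2, size=(n_stints, n_players)).astype(float)
X[:, 1] = X[:, 0]   # players 0 and 1 always share the floor -> identical columns

true_impact = np.array([3.0, -1.0, 2.0, 0.0, -2.0, 1.0])  # hypothetical per-stint impacts
y = X @ true_impact + rng.normal(0, 4.0, n_stints)         # observed stint margins

# Least squares (via pseudo-inverse, since X'X is singular here): the collinear
# pair's combined impact is roughly recovered, but the split between them is not.
ols = np.linalg.pinv(X) @ y

# Ridge regression, the usual fix: shrink everyone toward zero for stability.
lam = 10.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(n_players), X.T @ y)

print("true :", np.round(true_impact, 1))
print("ols  :", np.round(ols, 1))
print("ridge:", np.round(ridge, 1))
print("players 0+1 combined (true / ols / ridge):",
      true_impact[:2].sum(), round(ols[:2].sum(), 1), round(ridge[:2].sum(), 1))
```

In this toy setup, the regression hands players 0 and 1 roughly equal ratings even though their hypothetical "true" impacts are very different; only their sum is identified by the data.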

Tom Tango, aka "TangoTiger," the man behind the FIP stat (and a stats consultant to various NHL teams, formerly with the Seattle Mariners), noted that:

I agree that you need to look at the adjusted plus/minus in conjunction with player-events. If, for example, a player happens to have a high plus/minus, but there is no indication in any of his individual stats that he could have been directly responsible, then you have to “regress” that plus/minus heavily.

And that's really what needs to happen here. But how does one determine which parts of which events are directly responsible for each action? All in all, these stats are definitely interesting and can be useful ... but when they are just thrown out there as straight comparisons, they start to lose a lot of their value. If nothing else, this post serves as a nice spot for you to post your thoughts, and it doubles as a great way for me to jot down some of my mindless ramblings.
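To make Tango's "regress it heavily" suggestion a little more concrete, here's a hypothetical sketch (the numbers and the weighting constant are invented for illustration, and the box-score estimate is assumed to come from somewhere else, such as a statistical plus/minus): blend the observed plus/minus with what the player's individual stats would predict, and trust the observed number more only as the possessions pile up.

```python
def regressed_plus_minus(observed_pm: float,
                         box_score_estimate: float,
                         possessions: int,
                         prior_strength: int = 3000) -> float:
    """Shrink a noisy observed plus/minus toward a box-score-based estimate.

    prior_strength is the possession count at which the two inputs get equal
    weight -- an illustrative assumption, not an established constant.
    """
    w = possessions / (possessions + prior_strength)
    return w * observed_pm + (1 - w) * box_score_estimate


# Hypothetical player: a gaudy +8.0 observed plus/minus, but box-score stats suggest only +1.0.
for poss in (500, 2000, 6000):
    print(poss, "possessions ->", round(regressed_plus_minus(8.0, 1.0, poss), 1))
# 500 -> 2.0, 2000 -> 3.8, 6000 -> 5.7: the +8 only gets believed with a big sample.
```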