Every year when the NBA announces the full rosters for the All-Star Game, the fans, like clockwork, get mad about the selections. Many see their favorite players snubbed, left off for inferior choices. But some, a righteous contingent to say the least, take offense on principle at the inclusion of certain players, irrespective of who the alternative selections could have been. This tweet from Nate Duncan of the Dunc’d On Podcast was a particularly salty version of this sentiment:
As bad as some of the West selections were, Sabonis is the worst actual player selected to the All-Star game in a long time.
— Nate Duncan (@NateDuncanNBA) January 31, 2020
Unfortunately for those of us trying to keep up with the takes and arguments of #NBATwitter, Duncan provides little evidence to support his claim. In the replies to this tweet, he argues that Sabonis is not one of the 12 best players in the Eastern Conference, which might be the case, but that is a different claim altogether from the original.
Luckily, though, I had been in an argument with a friend at the beginning of the year over who were some bad All-Star selections and had begun to put together a framework for answering, once and for all, who has been the worst NBA All-Star of recent history.
Importantly, by recent history I mean since the 2000-01 season, when the All-Star Game was played in Washington D.C. and featured the likes of Allan Houston, Jerry Stackhouse, Anthony Mason, Antonio Davis, Michael Finley, Antonio McDyess, and Vlade Divac. There have been 18 more All-Star Games since the 2001 iteration, giving us a 19-game sample.
Furthermore, when we think about NBA All-Stars, we may not always remember those who were named to a team and did not play due to injury or those whom the commissioner’s office named as replacements. But they were still All-Stars according to the history books, so we need to take them and their performances into consideration.
Finally, what does it mean to be the worst All-Star? We could take this question as being somewhat existential, a prompt that could take us down the philosophical rabbit hole of trying to define an abstract concept. In most cases, we would probably settle for “we’ll know it when we see it.” However, we could instead turn to the metrics. By looking at a player’s win shares and how many they accrued per 48 minutes, as well as their offensive, defensive, and net ratings and VORP (Value Over Replacement Player), we can start to tell who had good seasons and who had disappointing seasons. All of these metrics attempt to describe the season-long contributions of a player. Each of these stats, on its own, comes with some significant limitations, but we can use them in conjunction to look for variation among those selected to the All-Star Game.
Finding the worst All-Star selection begins with collecting two sets of information about every player voted into or selected for the All-Star Game. First, we need data about how the players were selected (e.g., voted in as a starter or selected as a reserve), whether they were injured for the game, whether or not they ultimately started in the game, the number of selections they had received up to that point, and whether or not this season would be their last in the NBA. Second, we need statistics about the number of games and minutes they played in that season, as well as their end-of-season win shares, WS/48, offensive rating, defensive rating, net rating, and VORP. We will use these last six stats as the units of our comparison. All of these stats are available from Basketball Reference.
Unfortunately, though, these stats all have different units and scales, making it hard to generate a single measure of season quality from all of them. To solve this problem, we can calculate the standard score (z-score) for each statistic. The standard score tells us how many standard deviations above or below the mean value an observation appears. In this case, we will compare across all 19 games for each statistic. Since all six sets of standard scores are in standard deviation units, we ultimately can add them together to get a single measure of just how far above or below average a player appears relative to the other All-Stars over this period.
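As a concrete sketch of this calculation, here is a minimal pandas example. The column names and the toy data are hypothetical, not the actual dataset; note too that defensive rating is better when lower, so this sketch flips its sign before summing (a convention the prose leaves implicit):

```python
import pandas as pd

# Hypothetical column names for the six season-long metrics.
STATS = ["ws", "ws_per_48", "ortg", "drtg", "net_rtg", "vorp"]

def total_z(df: pd.DataFrame) -> pd.Series:
    """Sum of z-scores across all six metrics, pooled over every appearance."""
    z = (df[STATS] - df[STATS].mean()) / df[STATS].std()
    z["drtg"] *= -1  # lower defensive rating is better, so flip its sign
    return z.sum(axis=1)

# Toy data: three hypothetical All-Star appearances.
df = pd.DataFrame({
    "player": ["A", "B", "C"],
    "ws": [10.2, 3.1, 6.5],
    "ws_per_48": [0.21, 0.09, 0.15],
    "ortg": [118, 104, 110],
    "drtg": [104, 110, 107],
    "net_rtg": [9.0, -2.0, 3.0],
    "vorp": [5.5, 0.8, 2.9],
})
df["quality"] = total_z(df)
```

Because each column's z-scores are centered on the pooled mean, the quality scores sum to zero across all appearances, which is exactly what makes them comparable across the whole sample.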
Crucially, remember that I am treating each All-Star appearance for a player as a unique observation. (For example, LeBron appears 15 different times.) This condition means that a player could have multiple good or bad seasons according to our measure. (Or a mix of both!)
Before we get to who exactly has had the worst All-Star season in recent history, let’s run through some insights about the worst All-Star players: differences in selection methods, selection counts, and time.
First off, we might be curious about whether players selected via one mechanism are worse than players selected via the others. In the NBA, players are named All-Stars in three different ways. First, the five starters are voted upon by the fans, media, and players. (Up until recently, only the fans voted for the starters.) Second, the league’s head coaches vote to select the reserve players (those coming off the bench). Finally, the league commissioner selects any other injury replacements or honorary selections. Given the disparate groups making each type of selection, it is entirely plausible that one method produces worse All-Stars than the others.
We can see in the above figure that the three distributions appear to be relatively similar. Each group has a median close to zero (Coaches’ Vote = -0.363, Commissioner’s Replacement = -0.969, and Voted Starter = 0.709). Obviously, though, the set of players named by the commissioner is much smaller (N = 35) than either the set of reserves voted in by the coaches (N = 270) or the starters (N = 190). Furthermore, the standard deviation of the starter distribution is about one unit greater than either of the other distributions (Voted Starter = 4.95, Coaches’ Vote = 3.80, and Commissioner’s Replacement = 4.00).
But, we know that each group considers different factors when making its selections. Notably, the fans and the commissioner’s office are strongly motivated by nostalgia, as they want to recognize legendary players and see them participate in All-Star Weekend, even if those players are not having their best seasons. If such an influence is meaningful, we should see the quality of selections from these groups decrease as a player’s All-Star selection count increases.
This pattern is what we see when we plot each All-Star’s total displacement over the number of All-Star selections they reached that season, broken out by selection method. In the first panel, we see a downward trend in season quality as the selection number increases. Yet, this downward trend is slight compared to those for players selected by the commissioner and those voted as starters. Both of these groups experience dramatic drops in season quality as a player’s selection number increases.
Of course, we might also wonder if there are any important temporal variations in the quality of selections. At first glance, it does not appear that there is a consistent trend in quality over time.
Even so, the effect of yearly progress may appear for some modes of selection and not others. This potential moderation is especially plausible given the recent changes to the rules governing how starting players are selected. The introduction of media and player votes, in addition to the fan vote, could easily improve (both of these new groups take the voting more seriously) or worsen (both groups are even more deeply attached to and respectful of the history of the game) the quality of selections.
What we see when we plot the quality of each set of selections over time is that there do appear to be some interesting interactions between time and selection process, but not in the ways we might expect. First, the players selected as reserves via the coaches’ vote generally improved in quality from the 2000-01 season through the 2007-08 season. But, beginning in the 2008-09 season, the quality of the reserve selections dropped until the 2012-13 season, when it started to improve once again. The voted-in starters reflect a similar temporal dynamic. The data show a long stretch of steady improvement in their quality, followed by a short period of decline, before improvement begins once again. Notably, the biggest jump occurs in the 2016-17 season, the very one in which the league changed the voting process to include media members and players. The selections made by the commissioner, though, show no similar improvements. Recent selections by the commissioner’s office have generally had some of the worst All-Star campaigns of any selections in those years.
But, regardless of all these trends and patterns, we all just want to know who has been the worst All-Star in recent history. Let’s get to work answering that question.
The most straightforward answer is to be found by just looking at who had the worst-performing seasons, regardless of any other criteria.
The top half of this figure, where I highlight the ten best All-Star seasons, may not surprise us. We see a set of familiar and expected names: LeBron, KD, Steph, and Chris Paul. But, the bottom half may shock us. Kobe shows up three times! AI is in there twice! Hell, if we don’t stop to think about it, there’s a Dirk season in there.
Of course, we should probably stop to think about it. Dirk’s highlighted season was his final one when the commissioner made him a special selection. The three Kobe seasons mark his last three seasons in the league, in which he totaled only 107 games as he dealt with injuries. The second Iverson campaign highlighted in the bottom half? That was his final season as well.
To be fair and account for the fact that the fans, as well as the league itself, suffer from some nostalgia, we should remove players’ final seasons.
We can see that removing players’ final seasons did not change the top half of the figure, but it has led to some interesting insights in the bottom half. Chief among these might be the appearance of Michael Jordan’s 2002 All-Star season, his first with the Washington Wizards after taking three years off following his sixth NBA title. Jermaine O’Neal’s 2007 season now falls into the bottom ten as well, highlighting both that the Eastern Conference used to lack a full slate of quality big men and that being named an All-Star is streaky. Finally, Jrue Holiday’s last season in Philadelphia, his fourth in the league, also enters the bottom ten. We still have those two remaining Kobe seasons, though, in which he was hurt but the fans voted him in as a starter anyway.
If we want to further account for the power of nostalgia, we need to remove seasons in which players got hurt and missed large chunks of time. Let’s set the threshold at 60 games, or roughly three-quarters of the NBA season. Using a fixed number of games will likely remove players selected during the lockout-shortened 2011-12 season, but we can see that none of those seasons look to be candidates for the worst season anyway.
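In code, the two filters amount to a single boolean mask over the appearance-level data. A minimal sketch, assuming hypothetical columns `final_season` (whether this was the player's last NBA season) and `games` (games played that season):

```python
import pandas as pd

# Hypothetical appearance-level data: one row per All-Star selection.
df = pd.DataFrame({
    "player": ["A", "B", "C"],
    "final_season": [True, False, False],
    "games": [70, 41, 66],
})

# Drop final-season (nostalgia) picks and injury-shortened campaigns.
filtered = df[~df["final_season"] & (df["games"] >= 60)]
```

Player A is excluded as a final-season pick and player B for missing too many games, leaving only player C in this toy example.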
Now we have some interesting results. Gone are those Kobe seasons, but going with them are the 2001 selection of Alonzo Mourning and the 2004 Iverson selection. In their places, we have Zydrunas Ilgauskas’s 2003 selection, Kenyon Martin’s 2004 selection, Roy Hibbert’s 2014 selection, and Dwyane Wade’s 2015 selection. That Wade season would be his penultimate non-honorary selection and came in a season where he did miss 20 games, just keeping him inside the threshold for inclusion.
So, these are our final ten candidates for the worst All-Star in recent history. Let’s start evaluating.
First, we can throw out the Wade and MJ selections. These selections were made with the weight of NBA history behind them. Whether or not you believe Jordan is the GOAT, you cannot deny that he was one of the NBA’s best players ever and a true icon. There was no way he would not make that All-Star team. Wade, similarly, was one of the best players of his generation (even if he was nowhere near as good as Jordan) and won multiple titles on one of the most-watched and discussed teams ever, the Heatles. If he had never left Miami, he probably could have added two more All-Star selections to his resume.
Second, we can toss the Walker, Martin, and Hibbert seasons out as well. These players were all high-impact players on successful teams. Walker’s Celtics made the second round, losing to the eventual conference champions, the New Jersey Nets. The year before, Walker and the Celtics had lost to the Nets in the Eastern Conference Finals. He had even been the cover athlete for NBA Live ’99. Martin’s sole All-Star appearance came in his fourth season in the NBA, after he and the Nets had appeared in back-to-back NBA Finals. That year he (barely) led the Nets in scoring with 14.9 points per game, while playing standout defense (at least by the counting stats), averaging 1.2 steals and 1.7 blocks per game. He was also the cover athlete for NCAA Basketball 2001. Finally, Hibbert’s 2014 selection also came after playoff success for his Pacers. 2013 saw Indiana take Miami to seven games in the Eastern Conference Finals, a series in which Hibbert dominated. He had also appeared multiple times on the popular NBC comedy Parks and Recreation by the 2013-14 season. All three of these players were selections that could be easily rationalized based on their contributions to winning playoff teams.
Third, Ilgauskas’s 2003 and O’Neal’s 2007 selection are both easily explained by the lack of quality big men playing in the Eastern Conference at that time. In 2003, the big men considered for the Eastern Conference All-Star team were Ben Wallace, Dikembe Mutombo, Ilgauskas, Antonio Davis, Brad Miller, Jermaine O’Neal, Grant Hill, Kenyon Martin, Shareef Abdur-Rahim, Richard Jefferson, and Jamal Mashburn. O’Neal and Wallace ended up starting, while Miller, Mashburn, and Ilgauskas joined as reserves. We could split hairs and try to figure out which other legitimate centers should have made the roster, but chances are we’d be having the same conversation if Mutombo or Davis were selected over Big Z. O’Neal’s 2007 selection follows a similar story. That year, Chris Bosh and Shaq were the East’s starting power forward and center. O’Neal and Dwight Howard were the only bigs selected as reserves, while potential alternatives included Ben Wallace, Alonzo Mourning, Andrew Bogut, Rasheed Wallace, and Emeka Okafor. Except for Wallace, none of those alternatives jumps out as a potentially better selection.
These explanations leave Latrell Sprewell’s 2001 selection, Chris Kaman’s 2010 selection, and Jrue Holiday’s 2013 selection from which to choose the worst All-Star campaign.
The 2000-01 season was Sprewell’s ninth, and he had already been an All-Star three different times while playing for the Warriors. This season was also his third playing for the Knicks, having just led them to a 50-win season the year before. While not voted a starter, Sprewell was the third-highest vote-getter among Eastern-Conference forwards. Behind him were Brian Grant, Mashburn, Glen Rice, Walker, Anthony Mason (who was named an All-Star reserve and ended up starting that year), Elton Brand, and Marcus Camby. Glen Robinson was the other small-forward reserve for the East that year. Allan Houston, another Knick, was also selected that year.
The 2009-10 season was Kaman’s seventh in the NBA and his best up to that point. But that makes the selection all the more confusing. The Clippers had been pretty bad, with their best win total of Kaman’s career being 47, in the 2005-06 season. The Clippers, for all of Kaman’s career, had been secondary viewing in Los Angeles and a team of little consequence around the NBA. Kaman was not even in the top five in fan voting at his position! He wasn’t selected as a reserve by the coaches either. Instead, he was chosen by the commissioner’s office to replace Brandon Roy. The West All-Stars were conceivably short of big men, but they did start Dirk, Amar’e Stoudemire, and Tim Duncan, with Pau Gasol and Zach Randolph on the bench. Kaman was not a necessary big body the way Ilgauskas and O’Neal were.
Jrue Holiday was one of the bright spots for the pre-Process Sixers, and his 2012-13 season, his fourth in the league, showed signs of continued improvement. He did crack the top ten in voting in the Eastern Conference backcourt, but he had about a tenth of the votes of the top guard, Dwyane Wade. However, the East was relatively thin at the guard position that year. After Wade, the next four highest vote-getters were Rajon Rondo, Deron Williams, Kyrie Irving, and 37-year-old backup Ray Allen. With Rondo and Wade selected as starters, Kyrie was an obvious choice for reserve, but another guard would be necessary. Not selecting Deron Williams seems odd, but the Nets had finished a disappointing 22-44 the year before. Holiday’s Sixers, on the other hand, made the playoffs and had an unexpected upset over the Bulls in the first round before losing a back-and-forth seven-game series to the Celtics in the Conference Semifinals. In the pursuit of logic, one can only assume that this playoff run boosted Holiday’s candidacy.
Left to choose among these three candidates, I think we have to recognize Chris Kaman’s 2009-10 selection as the worst of recent history. Compared to Sprewell’s 2000-01 selection and Holiday’s 2012-13 selection, Kaman’s lone appearance at the All-Star Game lacks any logical explanation. Sprewell played on a successful big-market team, and Holiday was the best player on what appeared to be an exciting upstart team. Kaman, when he was selected, was midway through a good, but not great, career playing for a historically awful franchise that was consistently overshadowed by its in-town alternative. Nor was Kaman filling an essential positional need on the roster. Looking back, the best explanation we have for Kaman’s selection is: 🤷.
But how does identifying Chris Kaman as the worst All-Star in recent history help us solve the original Sabonis problem raised by Nate Duncan? Well, while we lack full-season data for this year’s All-Stars, we do have their numbers going into the break. With a small tweak, we can change our statistical comparison from being across all years to within years. By doing so, we ask not how far a player deviates from every other All-Star selected since the 2000-01 season, but rather how far a player deviates from the average of those selected in the same year. Making this change produces similar results, as the resulting rank orders are highly correlated (Kendall’s \(\tau = 0.68\)).
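The within-year variant amounts to standardizing each metric inside its own season before summing, and the rank correlation between the two rankings can be checked with `scipy.stats.kendalltau`. A minimal sketch, with hypothetical column names and a two-metric toy dataset rather than the real six-metric data:

```python
import pandas as pd
from scipy.stats import kendalltau

STATS = ["ws", "vorp"]  # abbreviated; the full analysis uses six metrics

def pooled_z(df: pd.DataFrame) -> pd.Series:
    """Z-scores computed across all seasons at once."""
    z = (df[STATS] - df[STATS].mean()) / df[STATS].std()
    return z.sum(axis=1)

def within_year_z(df: pd.DataFrame) -> pd.Series:
    """Z-scores computed separately within each season."""
    z = df.groupby("year")[STATS].transform(lambda s: (s - s.mean()) / s.std())
    return z.sum(axis=1)

# Toy data: two seasons of three hypothetical All-Stars each.
df = pd.DataFrame({
    "year": [2019, 2019, 2019, 2020, 2020, 2020],
    "ws":   [8.0, 4.0, 6.0, 9.0, 3.0, 5.0],
    "vorp": [4.0, 1.0, 2.5, 5.0, 0.5, 2.0],
})
tau, _ = kendalltau(pooled_z(df), within_year_z(df))
```

The within-year scores are centered inside each season, so each year's scores sum to zero, which is what makes "worst this year" comparisons like the Sabonis question meaningful.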
We can see in the figure that we have a relatively similar Top- and Bottom-10 from the initial analysis with no other conditions or filters applied. But, what about if we set the same injury and final season restrictions as before?
We end up, again, with a very similar set of seasons highlighted as being among the worst since the 2000-01 season. Notably, nowhere to be seen is Sabonis’s selection this year. In fact, Sabonis is nowhere near the bottom when compared to just his fellow All-Stars selected this year. The statistical approach I’ve used actually ranks Westbrook as the worst selection this year by quite a bit, followed by a tight pack of Donovan Mitchell, Pascal Siakam, Trae Young, and Brandon Ingram.
There’s still plenty of time left in the season, but it seems unlikely that we will be looking back in five, ten, or even twenty years and asking how Sabonis made it. Right now, Sabonis looks and is playing like an All-Star on a team that has been successful for multiple years in a row. For young players, that is pretty much the definition of an All-Star.
If you would like to decide for yourself who the worst All-Star in recent history is, check out the Worst All-Stars app!