EDIT: It turns out that Lawro’s predictions have been subject to analysis before by the excellent @We_R_PL, which can be found here. His is a more even-handed analysis, which also features a nice summary of the benefits of statistically-based analytics.
The BBC football department has long stretched the definition of the word ‘expert’. From regularly publishing Garth Crooks’ (“football analyst”) team of the week to being responsible for the reintroduction of Robbie Savage to our TV screens, the BBC’s punditry and analysis has often felt more Jamie Redknapp than Gary Neville. One regular BBC column is Mark Lawrenson’s (“football expert”) weekly predictions, in which he predicts the score for each of the weekend’s Premier League matches. Once the games are played, he (and a celebrity guest who has also submitted a set of predictions) receives a score based on the accuracy of his predictions: 3 points for the correct scoreline, 1 point for just the correct outcome (win/lose/draw) and no points if he is wrong. So how expert is Lawrenson’s opinion? To find out, I pitted some simple models against Lawro’s predictions for this season. In the first round of testing, I used 3 ‘dumb’ models (that is, models that use no information about the teams aside from whether they are playing at home or away):
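The scoring system above is simple enough to express in a few lines of code. Here is a minimal sketch of how I score each prediction (the function name and tuple representation are my own conventions, not the BBC's):

```python
def score_prediction(pred, actual):
    """Score a (home_goals, away_goals) prediction under the BBC system:
    3 points for the exact scoreline, 1 for the correct outcome, 0 otherwise."""
    if pred == actual:
        return 3
    # The sign of the goal difference encodes the outcome:
    # +1 home win, 0 draw, -1 away win.
    pred_outcome = (pred[0] > pred[1]) - (pred[0] < pred[1])
    actual_outcome = (actual[0] > actual[1]) - (actual[0] < actual[1])
    return 1 if pred_outcome == actual_outcome else 0
```

So a predicted 2-0 against an actual 1-0 scores 1 point (right outcome, wrong scoreline), while predicting 2-1 when the away side wins scores nothing.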
- (Red) – This model predicted the home and away goals as random numbers from zero to eight. These numbers were not weighted, so this model was just as likely to pick 8-8 as a more reasonable scoreline like 0-0 or 2-1.
- (Green) – This model predicted the home and away goals as random numbers from zero to eight. However, this time, the probability of picking each number was weighted using historical averages, so 0, 1 or 2 goals are by far the most likely results.
- (Blue) – This model predicted the home and away goals as random numbers from zero to two, weighted by whether a team was playing at home or away. In short, this meant that there was a 4% chance of an away win (0-1), a 14% chance of a draw (0-0, 1-1) and an 82% chance of a home win (2-1, 2-0, 1-0).
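For the curious, the three dumb models can be sketched as follows. The goal weights in the green model are illustrative placeholders shaped like the historical distribution, not the exact figures I used; the blue model's outcome probabilities match those quoted above, with the home-win probability split evenly across its three scorelines:

```python
import random

# Illustrative P(0..8 goals), loosely shaped like historical scoring averages.
GOAL_WEIGHTS = [0.25, 0.33, 0.23, 0.12, 0.05, 0.015, 0.004, 0.001, 0.0005]

def model_red():
    """Uniform random scoreline: 8-8 is as likely as 2-1."""
    return random.randint(0, 8), random.randint(0, 8)

def model_green():
    """Home and away goals drawn independently from the weighted distribution."""
    return (random.choices(range(9), weights=GOAL_WEIGHTS)[0],
            random.choices(range(9), weights=GOAL_WEIGHTS)[0])

def model_blue():
    """Home-biased: 4% away win, 14% draw, 82% home win."""
    outcomes = [(0, 1), (0, 0), (1, 1), (2, 1), (2, 0), (1, 0)]
    weights = [0.04, 0.07, 0.07, 0.82 / 3, 0.82 / 3, 0.82 / 3]
    return random.choices(outcomes, weights=weights)[0]
```

Scoring 380 such predictions against the season's actual results, 10,000 times over, gives the distributions plotted below.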
By simulating the predictions 10,000 times, we can get an idea of how well each model is likely to do and compare that with Lawro’s score for the season, which at the time of writing is 217. As we can see, Lawro beats these models pretty comfortably, although he has a roughly 0.8% chance of being beaten by the 3rd model (blue).

So what happens when we use a slightly more sophisticated model? The 4th model (purple) is very similar to the 3rd one (blue), except that instead of biasing towards the home team, it favours the team with the higher Shots on Target Ratio (shots on target taken divided by the sum of shots on target taken and shots on target conceded). This new model beats Lawro’s score for the season 55% of the time. In other words, the average score for this model is very similar to Lawro’s.

So what is it about this model that makes it similar to Lawro? Well, if we think about what the model is doing, it is essentially picking the better team (i.e. the favourite) to win the vast majority of the time. In fact, if we reduce the luck element and alter the model to simply pick whichever team has the higher Shots on Target Ratio to win 2-0, the model scores marginally higher than Lawrenson’s own score. The same model comes out very similarly to Lawro for previous seasons, too. For instance, here is the same plot as above for 2013/14:

In other words, we can get a very good estimate of Lawro’s total score for the season simply by backing the favourite to win each game. And herein lies the true nature of Mark Lawrenson’s expertise. What good is an expert who fails to give more insight than simply “the best team will win”? This may be a good strategy to take if you knew little about football, but one would expect better from someone whose employment is based upon their access to a higher level of understanding of the game. Perhaps this is harsh; after all, football is a low-scoring sport and, as we are so often reminded, underdogs can nab a goal and come out victorious.
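The luck-free variant of the model is almost trivially small, which is rather the point. A sketch, where each team is represented by its shots-on-target totals (the tuple format and function names are my own):

```python
def sot_ratio(sot_for, sot_against):
    """Shots on Target Ratio: shots on target taken divided by the
    sum of shots on target taken and conceded."""
    return sot_for / (sot_for + sot_against)

def predict(home, away):
    """Deterministic model: back whichever side has the higher SoTR
    to win 2-0. `home` and `away` are (sot_for, sot_against) tuples;
    ties go to the home team."""
    if sot_ratio(*home) >= sot_ratio(*away):
        return (2, 0)  # home win
    return (0, 2)      # away win
```

A side with 90 shots on target for and 60 against (SoTR 0.6) would be backed to beat one with 70 for and 80 against (SoTR ≈ 0.47), regardless of venue. That one `if` statement is, in effect, doing Lawro's job.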
All of which makes football inherently difficult to predict. And I will concede that there is an argument that the predictions are meant to be a bit of fun and not to be taken too seriously, which is a fair point. However, all of this is part of a broader malaise in football coverage, especially within the BBC. As other media outlets like Sky invest more time and energy into providing increasingly sophisticated analysis, the BBC, content with Alan Hansen’s declarations of “schoolboy defending” and endless discussions on refereeing mistakes, runs the risk of falling further behind.