Stats 101 and beyond mailbag!

From how the sausage is made to the grim details of xG


Mark Thompson

Jan 14 2020



Hello everyone! This week’s Get Goalside! is a bit of a mailbag. I know that there are a lot of questions people might have about football stats and so… I’m answering them!

I haven’t been able to get to all of the questions, so apologies if I haven’t answered yours here, but there’s a lot to get your teeth into, and it’s broadly split into three sections: how the sausage gets made; the ‘it depends’ section; expected goals.

What systems are used to collect data?

People sat in front of screens, tapping away at keyboards mostly. This is a very old video, but Opta has given a glimpse into their data collection centres. To my knowledge, other collectors do it in a similar way: one collector per team, watching the game and noting that team’s actions with a mixture of mouse clicks and keyboard hotkeys.

How do you record line-breaking passes?

One company, Impect, collect line-breaking passes manually (i.e., using a similar method to how Opta et al collect their stats).

However, if you have ‘tracking data’ (cameras used to track each player on the pitch, and the ball, during the match — see this video) then calculating line-breaking passes with an algorithm is not too difficult.
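To give a flavour of why it’s “not too difficult”, here’s a minimal sketch of one way a line-breaking pass could be flagged from tracking data. Everything here is an assumption for illustration: the function names, the idea of averaging the x-positions of the deepest few defenders to locate “the line”, and the convention that play moves left to right along the x-axis.

```python
# Minimal sketch: flag a line-breaking pass from tracking data.
# Assumes play moves left to right along the x-axis; all names and
# thresholds are illustrative, not from any real provider's algorithm.

def defensive_line_x(defender_xs, n=4):
    """Approximate the deepest defensive line as the mean x-position
    of the n defenders closest to their own goal (highest x here)."""
    deepest = sorted(defender_xs, reverse=True)[:n]
    return sum(deepest) / len(deepest)

def is_line_breaking(pass_start_x, pass_end_x, defender_xs):
    """A pass 'breaks the line' if it starts behind the defensive
    line and ends beyond it."""
    line_x = defensive_line_x(defender_xs)
    return pass_start_x < line_x < pass_end_x

# A pass from x=40 to x=75, against a back line sitting around x=60,
# would be flagged; a pass that starts beyond the line would not.
```

A real implementation would also need to handle multiple defensive lines (midfield as well as defence), passes that travel backwards, and noisy player positions, but the core check really is this simple.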

How do you differentiate between player and team?

How much of the stat is a player being good/bad vs a coach’s tactic or a player’s assigned role?

It’s really difficult, and I personally don’t think it’s something that’s (publicly) been discussed enough. So much of a player’s statistical output is dictated by the role they’ve been told to play.

That said, the same can be said for scouts’ and pundits’ opinions of players from watching them. Quite often, we get pundits expressing surprise at a player’s performance when perhaps they were always capable of those heights but had previously been played in a more restrictive role.

How many minutes are enough before we can make a good conclusion on their skills?

I think friend of the newsletter Danny was hoping for a more scientific answer than I’m willing to put in the time to give, but here we go.

One technical answer I’m able to give is to point to David Sumpter, who reckons that it takes about three matches for team expected goals to settle down and stop being too subject to the whims of luck and noise.

That’s teams, though. This is going to need to be longer for players, who make fewer actions. I’ve been caught out in the past by players who racked up a lot of [insert stat of choice here] in one match early in the season, and it boosted their average for a good couple of months.

Depending on your use, and depending on the stat, and depending how patient you are, somewhere between six and ten matches should start giving a decent signal.

What’s the easiest way to find teams or players of similar styles?

The pithy answer is that if I had a good answer to this I wouldn’t be writing a free newsletter.

There are a few ways that you can tackle this. You can create some kind of statistical model that takes in all the data, categorises it, and spits out a similarity score between any two entities.
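As a sketch of that statistical-model route, one simple version is to standardise each stat across players and use cosine similarity as the score. The players, stats, and numbers below are all invented for illustration; a real version would use many more stats and probably a fancier model.

```python
# Sketch of a similarity score: z-score each stat across players,
# then compare players with cosine similarity (1 = very similar style,
# -1 = opposite). All data here is made up for illustration.
import math

players = {
    "Striker A":    [4.1, 0.45, 12.0],  # e.g. shots/90, xG/90, box touches/90
    "Striker B":    [3.8, 0.41, 11.5],
    "Midfielder C": [1.2, 0.10, 3.0],
}

def z_scores(table):
    """Standardise each stat column so no single stat dominates."""
    cols = list(zip(*table.values()))
    means = [sum(c) / len(c) for c in cols]
    sds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c))
           for c, m in zip(cols, means)]
    return {name: [(v - m) / s for v, m, s in zip(row, means, sds)]
            for name, row in table.items()}

def cosine(a, b):
    """Similarity score between two stat vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

z = z_scores(players)
# The two strikers should come out far more similar to each other
# than either is to the midfielder.
```

The design choice worth noting: standardising first matters, because otherwise a stat measured in big numbers (touches) swamps one measured in small numbers (xG per 90).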

Or you could go for a bit more of a manual approach, choosing stats that you think are particular stylistic markers. For strikers, for instance, that could be where and how they get their shots; for teams it could be devising a metric about how they build up out of the back, the way they create their shots, or the way they press.

It’s basically one of those awkward ‘it depends’ answers. It depends what kind of players you’re interested in, or, to put it more broadly, what aspect of play you’re interested in. If you look at teams, their level of possession might be the biggest factor in separating different ‘classes’, but it might be what they do in the final third that you care most about.

The best way to do it, I think, is to have some idea of the aspect of play you’re looking for, devise a metric for it, and go from there. But fancy statistical models have their place too. Shrug.

A summary

I figure it’s worthwhile summarising this section, because there are a number of questions about football stats that could be legitimately answered with ‘it depends’.

I don’t think that it’d be wrong to call football stats a language. Sometimes ‘stats’ sounds like something overly sophisticated that relies on statistical models learnt in university tutorials, but at its heart it’s just information. When a scout or coach watches the game, that’s information too. It all needs to be interpreted.

I don’t think that there’s any one single aspect of a player’s technical game that determines who is good and who is not. Work rate maybe? But even then, you can have all the work rate in the world and still not be a good football player. And what’s needed for a midfielder is different to what’s needed for a striker, and depending on the type of midfielder it all changes again.

Anyway. It depends.

Is xG related to specific players?

(e.g. is it calculated differently for someone like Messi compared to everyone else)

Generally, no.

Expected goals models are an average of everything that’s happened in the past which can basically be boiled down to: shots from X location, set up in Y way. How much detail goes into the ‘Y’ can vary model to model, and different data providers collect different amounts of information about how many defenders are around and/or pressuring the shooter.
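That ‘average of everything that’s happened in the past’ idea can be shown in miniature: group historical shots by location and how they were set up, and use each bucket’s conversion rate as its xG. The shot data below is invented, and real models use far more detail (distance, angle, defender positions) and smoother estimators than raw rates.

```python
# Toy illustration of xG as a historical average: bucket past shots by
# (location zone, setup type) and use each bucket's conversion rate.
# The shot history is invented for illustration.
from collections import defaultdict

history = [
    # (zone, setup, scored)
    ("six_yard_box", "cross", 1), ("six_yard_box", "cross", 0),
    ("six_yard_box", "cross", 1), ("penalty_area", "through_ball", 1),
    ("penalty_area", "through_ball", 0), ("penalty_area", "through_ball", 0),
    ("outside_box", "open_play", 0), ("outside_box", "open_play", 0),
]

buckets = defaultdict(lambda: [0, 0])  # (zone, setup) -> [goals, shots]
for zone, setup, scored in history:
    buckets[(zone, setup)][0] += scored
    buckets[(zone, setup)][1] += 1

def xg(zone, setup):
    """xG of a new shot = historical conversion rate of its bucket."""
    goals, shots = buckets[(zone, setup)]
    return goals / shots if shots else None
```

Notice that nowhere in this does the shooter’s identity appear, which is exactly the point made above: the model describes the chance, not the player taking it.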

Players don’t usually shoot that much, and when they do shoot it’s often from quite different situations, so it’s difficult to get a sense of certainty for how much extra ‘finishing skill’ they might have. Someone like Messi is an unusual example, both because he’s the best player in the world and because he’s had such a long career that’s included so many shots.

Maybe one day we’ll be able to factor in player finishing skill (more on that below), but it’s not usually in there at the moment.

There’s also a reason why you might choose not to include player skill. If you’re interested in expected goals from the team-as-a-whole’s perspective, then you just care about how good the chance is regardless of who was taking it.

Best way to judge a player’s finishing?

The answer above alludes to the difficulty in knowing how good a finisher a player is (i.e. sample sizes), and I’m going to mention another: what is ‘finishing’.

In football parlance, ‘finishing’ is a definable subset of shots which, in my mind at least, is more or less the archetypal Thierry Henry goal. You wouldn’t call someone smacking in a 30-yarder a good ‘finish’.

When stattos talk about ‘finishing skill’, though, they’re just talking about how many more goals than their expected goals figure a player scores.

On this, as much as you can rely on the sample sizes, the difference between expected goals and post-shot expected goals is interesting. As I mention above, regular expected goals models are based on location of the shot and how it was set up, and the calculation stops there. ‘Post-shot expected goals’ adds into the equation the part of the goal that the shot is going towards: top bins = good, off-target or blocked = 0.

Often, a player who’s scoring more than their expected goals will also have a higher post-shot expected goals figure than their regular (or pre-shot) expected goals figure. So it could look something like 10 goals, 8.1 expected goals, but 9.8 post-shot expected goals. The player would probably be finishing pretty well, although there is still, as ever, the spectre of sample sizes haunting you. How you deal with that haunting is up to you.
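Using the figures from that example, the way you’d read the three numbers together can be boiled down to two differences. This is just a sketch of the interpretation, not anyone’s official metric; the function name is my own.

```python
# Sketch of reading goals, xG and post-shot xG together, using the
# example figures above (10 goals, 8.1 xG, 9.8 post-shot xG).

def finishing_summary(goals, xg, psxg):
    return {
        # Goals above what an average shooter gets from those chances.
        "over_performance": goals - xg,
        # Value added purely by where in the goal the shots went.
        "placement_added": psxg - xg,
    }

summary = finishing_summary(goals=10, xg=8.1, psxg=9.8)
# over_performance comes out around 1.9 and placement_added around 1.7:
# most of the over-performance is explained by good shot placement,
# which is the pattern you'd hope to see from a genuinely good finisher.
```

The sample-size spectre still applies to both differences, of course; a couple of lucky deflections can move either number a long way over a handful of shots.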

What are the error bars/margin of error on xG values?

A good question, and one I don’t really know the answer to. I know that, even over the course of a season, a few expected goals here or there is nothing to read too much into. If you were ranking the league table by expected goals difference, for example, anyone within, say plus- or minus-[arbitrary number, like 3] would be in broadly the same band of ‘midtable’ teams.
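One rough-and-ready way to put error bars on an xG total, which I should stress is my own sketch rather than anything providers publish: treat each shot as a coin flip with probability equal to its xG, and simulate the season thousands of times. The shot list below is invented.

```python
# Rough error bars on a season xG total: treat each shot as a
# Bernoulli trial with p = its xG value, simulate many seasons, and
# look at the spread of simulated goal totals. Shot list is invented.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
shot_xgs = [0.08, 0.12, 0.31, 0.05, 0.76, 0.22] * 20  # ~120 shots

def simulate_goals(xgs):
    """One simulated season: each shot scores with probability = its xG."""
    return sum(1 for p in xgs if random.random() < p)

sims = sorted(simulate_goals(shot_xgs) for _ in range(10_000))
low, high = sims[250], sims[9750]  # middle 95% of simulated totals

# The 'expected' total is sum(shot_xgs), about 30.8 goals; low and high
# show how far plain luck can drag the actual goal count around that.
```

For a shot diet like this one the 95% band spans well over ten goals, which is a decent intuition pump for why a few xG here or there over a season is nothing to read too much into.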

On reflection, does this make me wonder why we pay attention to player xG totals at all? Maybe!

Non-shot xG — threat or menace?

(Or, is ‘non-shot xG’ a misleading name)

In the beginning there was expected goals. But the Lord looked upon it and thought that it missed out on some key contributions from players who didn’t take shots.

And so people started building ‘non-shot’ expected goals models backwards. If a pass starts at X location and goes to Y location, what was the likelihood of a shot from X going in, and what was the likelihood of a shot from Y going in? Those were the first models.
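That ‘backwards’ construction is almost a one-liner. The zone values below are invented round numbers; real models estimate shot values for many small grid cells from historical data.

```python
# Sketch of the original non-shot xG idea: value a pass as the shot
# value at its end location minus the shot value at its start location.
# Zone values are invented; real models estimate them from shot data.

ZONE_SHOT_VALUE = {
    "own_half": 0.01,
    "middle_third": 0.03,
    "final_third": 0.08,
    "penalty_area": 0.25,
}

def pass_value(start_zone, end_zone):
    """Threat added (or removed) by moving the ball between zones."""
    return ZONE_SHOT_VALUE[end_zone] - ZONE_SHOT_VALUE[start_zone]

# A pass from the middle third into the penalty area adds 0.22 of
# 'expected goals' worth of threat; a backwards pass scores negative.
```

The obvious weakness, and part of why the models evolved, is that it values the ball’s location but not what happens next: a hopeful punt into the box and an incisive through-ball score the same.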

Then, like Ultron, non-shot xG started evolving.

Nowadays, you might hear about xThreat or Possession Value models, and they’re both roughly similar to the non-shot xG concept. All of them want to work out how much a player’s actions add to the likelihood of their team scoring, generally by looking at passes and/or dribbles.

I think that friend of the newsletter @unfitforpurpose, who asked this question, is right to be concerned that the name ‘non-shot expected goals’ might confuse people about what the model is doing. It’s kinda weird. But I also think that enough people have stopped using it that we shouldn’t worry too much — we should just avoid using it in future.

FWIW, I think Opta made the right decision in calling their version of this kind of thing a Possession Value framework, and the actual output is the Possession Value added. I could go on for longer about the value of all these names, but I’ll spare you that. If you’ve read this far, you’ve suffered enough already.

Thanks for your time, and thanks for reading.
