Methodology

How FPL Tactics Works

Every projection on this site comes from one model with three moving parts: a Fantasy Rating, an xGI ensemble, and a multi-gameweek MILP transfer planner. The Fantasy Rating is built from five Elo components and a Game Time availability scaler. This page walks through what each part does, how we validate it, and why we made the calls we made.

The projection model, end to end

For each (player, gameweek) pair the system yields one number: expected fantasy points. That number is a blend of two predictors that work independently, each with its own model and its own data.

The Elo predictor keeps a fantasy rating per player and updates it after every match. The update is proportional to the gap between the points the player actually scored and the points his pre-match rating implied he’d score. Position-aware caps keep ratings on the same scale across goalkeepers, defenders, midfielders, and forwards.
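In rough Python, one update step looks something like the sketch below; the K factor and cap value are illustrative, not the production numbers.

```python
def elo_update(rating: float, actual_points: float, expected_points: float,
               k: float = 20.0, cap: float = 8.0) -> float:
    """One post-match step: move the rating in proportion to the gap between
    realised and rating-implied fantasy points. k and cap are illustrative;
    the real model uses position-aware caps so GK/DEF/MID/FWD ratings stay
    on a comparable scale."""
    surprise = actual_points - expected_points
    # Cap the per-match surprise so a single haul or blank can't swing the rating too far.
    surprise = max(-cap, min(cap, surprise))
    return rating + k * surprise
```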

The xGI predictor tracks each player’s expected goals and assists per 90 minutes — the underlying chance quality he’s generating, regardless of whether the shots went in. A new signing with three good matches would otherwise look like a world-beater, so we pull every player’s number partway toward the positional average until he has enough minutes to stand on his own.

We don’t blend the two predictors with one global weight, because the right mix of xGI and Elo depends on context — xGI tells you a lot about a premium forward at home, much less about a goalkeeper. So we slice every player-gameweek into one of 60 buckets, based on position group × home/away × premium tier × fixture difficulty, and fit a separate weight α for each bucket. The fit is ridge regression against realised points, anchored to a 50/50 blend so buckets with thin data don’t overfit to a handful of noisy examples.
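Per bucket, the blend itself is a single convex weight. A minimal sketch, with hypothetical names:

```python
def blended_projection(alpha: float, xgi_points: float, elo_points: float) -> float:
    """Per-cell ensemble: alpha is the fitted xGI weight for this bucket.
    0.5 is an even split, 1.0 is pure xGI, 0.0 is pure Elo."""
    return alpha * xgi_points + (1.0 - alpha) * elo_points
```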

What “premium tier” means

A two-bucket flag based on the player’s Elo rating: a player is premium if his Elo sits at or above the 75th percentile (Q3) for his position group, and non-premium otherwise. The thresholds are computed on the training fold and frozen — for this round’s fit they’re 1205 for attackers, 1231 for defenders, and 1227 for keepers. Haaland / Salah / KDB sit in the premium bucket; the average rotation player sits in the non-premium bucket. We split on this because the optimal xGI-vs-Elo mix really is different for the two groups — premium attackers in good fixtures lean on xGI; non-premium players and goalkeepers lean on Elo.
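Sketched in code, the bucket assignment looks roughly like this; the helper names and the 1–5 fixture-difficulty coding are illustrative, not taken from the production code.

```python
import numpy as np

def premium_thresholds(train_elos: dict[str, np.ndarray]) -> dict[str, float]:
    """Q3 (75th-percentile) Elo per position group, computed on the training
    fold and then frozen for the season."""
    return {group: float(np.percentile(elos, 75)) for group, elos in train_elos.items()}

def bucket_key(pos_group: str, is_home: bool, elo: float, fdr: int,
               thresholds: dict[str, float]) -> tuple:
    """Map a player-gameweek to one of 3 x 2 x 2 x 5 = 60 cells."""
    is_premium = elo >= thresholds[pos_group]
    return (pos_group, is_home, is_premium, fdr)  # fdr assumed coded 1..5
```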

Why two predictors and not one

Elo captures what xG alone misses: clean sheets, bonus points, set-piece duties, defensive contributions. xGI catches attacking returns that Elo is slow to react to after a hot streak. The ensemble’s held-out RMSE is 5–9% lower than either component alone, and its bias is 30–50% smaller. The numbers are on the accuracy page.

The Fantasy Rating components

The Fantasy Rating you see on the player page is built from six numbers. Five of them are independent Elo ratings — each one updated on its own scoring outcome, each one earned by a specific kind of contribution. The sixth, Game Time, isn’t an Elo at all; it’s a direct read of how reliably the player is actually getting on the pitch.

  1. Finisher — Elo trained on FPL goal points.
  2. Playmaker — Elo trained on FPL assist points.
  3. Clean Sheet — Elo trained on clean sheets and goals conceded; relevant for defenders, goalkeepers, and midfielders.
  4. Saves — Elo trained on goalkeeper saves; goalkeepers only.
  5. Defender — Elo trained on the 2025/26 defensive-contribution (CBIT) rule, where a defender earns 2 FPL points for hitting the 10-event defensive threshold (midfielders qualify at a higher, 12-event threshold that also counts ball recoveries).
  6. Game Time — not an Elo. The mean of the player’s last five matches’ minutes, mapped to a 0–100 scale where 90-minute regulars sit near 100 and bench cameos sit near 30. A rotation-prone striker averaging 45 minutes will read 50 here, regardless of how good his Finisher or Playmaker numbers are.
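As a sketch, assuming a straight linear mapping of mean minutes onto 0–100 (the helper name is illustrative):

```python
def game_time_score(recent_minutes: list[int]) -> float:
    """Mean of the last five matches' minutes, mapped linearly onto 0-100.
    A 90-minute regular lands near 100, a 45-minute rotation risk near 50,
    a ~27-minute bench-cameo pattern near 30."""
    last_five = recent_minutes[-5:]
    if not last_five:
        return 0.0
    mean_minutes = sum(last_five) / len(last_five)
    return min(100.0, 100.0 * mean_minutes / 90.0)
```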

The five Elo components update independently. The projection combines them according to what matters for that position — Finisher and Playmaker weigh heavily on a forward, Clean Sheet and Defender weigh heavily on a defender. We then scale the position-weighted total by fixture difficulty, using a multiplier worked out from how players have historically scored against opponents of each strength.

Game Time then multiplies the whole thing. It’s the most consequential lever in the projection: two players with identical Finisher and Playmaker numbers can end up projected very differently because one is locked in for 90 minutes a week and the other is splitting time with a competitor. We use realised rolling minutes rather than FPL’s chance_of_playing flag because the rolling signal is continuous and catches rotation honestly — a player averaging 60 minutes for the last five weeks lands at 67%, not “100% available” or “75% doubtful”.
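Putting the last two steps together, the per-gameweek rating projection is roughly the following; the position weights shown are made-up illustrative numbers, not the fitted ones.

```python
# Illustrative position weights -- not the fitted production values.
POSITION_WEIGHTS = {
    "FWD": {"finisher": 0.55, "playmaker": 0.30, "clean_sheet": 0.05, "saves": 0.00, "defender": 0.10},
    "DEF": {"finisher": 0.15, "playmaker": 0.15, "clean_sheet": 0.45, "saves": 0.00, "defender": 0.25},
}

def rating_projection(components: dict[str, float], position: str,
                      fixture_multiplier: float, game_time: float) -> float:
    """Weight the five Elo components by position, scale by fixture
    difficulty, then scale by Game Time (0-100 read as a fraction)."""
    weights = POSITION_WEIGHTS[position]
    weighted = sum(weights[name] * value for name, value in components.items())
    return weighted * fixture_multiplier * (game_time / 100.0)
```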

The practical effect: a defender heading into a soft run of fixtures can have his Clean Sheet and Defender numbers both climb on the back of clean sheets, his Finisher and Playmaker move (or not) on completely separate evidence, and his Game Time hold steady so long as he’s still nailed in the starting XI. Three independent stories, one number on the page.

A streak-boost K-multiplier speeds up Elo updates for low-seed players who are outperforming their cohort. If a player started below his cohort’s median seed and shows sustained positive deviation, his update rate accelerates. This is what stops breakout players — Igor Thiago in 2025/26 is the textbook example — from being underrated for half a season.
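A sketch of the mechanism, with the boost factor and streak window as assumptions rather than the production values:

```python
def streak_k_multiplier(seed_rating: float, cohort_median_seed: float,
                        point_deviations: list[float],
                        boost: float = 1.5, window: int = 6) -> float:
    """Speed up Elo updates for a low-seed player who keeps outscoring what
    his rating implies. The boost factor and window length are assumptions,
    not the production values."""
    started_low = seed_rating < cohort_median_seed
    recent = point_deviations[-window:]
    sustained = len(recent) == window and all(d > 0 for d in recent)
    return boost if (started_low and sustained) else 1.0
```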

The composite Player.Rating you see in the UI uses a log-normal CDF anchored to a fixed (mu, sigma), so a player with an unusually high underlying Elo doesn’t blow out into the 90s. Per-match component bars use historical-pool anchors that get refreshed annually. The two normalisations are deliberately different.
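A sketch of the UI normalisation, treating (mu, sigma) as illustrative anchors rather than the fitted ones:

```python
from math import erf, log, sqrt

def composite_rating(elo: float, mu: float = 7.0, sigma: float = 0.2) -> float:
    """Map an underlying Elo onto 0-100 through a log-normal CDF with fixed
    anchors. mu and sigma here are illustrative, not the fitted anchors;
    the CDF saturates smoothly, so an outlier Elo compresses near the top
    of the scale instead of blowing past it."""
    z = (log(elo) - mu) / sigma
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))
```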

The xGI ensemble

The xGI predictor turns one input — shrunk expected-goal-involvement per 90 — into an expected fantasy points number. The shrinkage prior is the positional median, which keeps a new signing with three matches under his belt from being projected off a freak underlying run.
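A sketch of the shrinkage, assuming a minutes-weighted pull toward the positional median; the 900-minute prior strength is an assumption, not the production value.

```python
def shrunk_xgi_per90(raw_xgi_per90: float, minutes_played: float,
                     positional_median: float, prior_minutes: float = 900.0) -> float:
    """Pull a player's xGI/90 toward the positional median until he has
    enough minutes to stand on his own. The 900-minute prior strength is an
    assumption, not the production value."""
    weight = minutes_played / (minutes_played + prior_minutes)
    return weight * raw_xgi_per90 + (1.0 - weight) * positional_median
```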

We fit the per-cell coefficient by ridge regression against three full Premier League seasons, starting in 2022/23 (the earliest year for which per-match xG data is inlined in the FPL feed). 60 cells, regularised. An α of 0.5 means the cell’s output is split evenly between xGI and Elo; in practice the fitted weights vary cell to cell — higher α (more xGI) is typical for top-tier forwards in good home fixtures, lower α (more Elo) is typical for goalkeepers and defenders.
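Anchoring to 0.5 gives the per-cell fit a simple closed form. A sketch, with lam as an assumed regularisation strength:

```python
import numpy as np

def fit_cell_alpha(elo_pred: np.ndarray, xgi_pred: np.ndarray,
                   realised_points: np.ndarray, lam: float = 10.0) -> float:
    """Fit one cell's xGI weight by ridge regression anchored to 0.5:
    minimise sum((y - [(1-a)*elo + a*xgi])^2) + lam*(a - 0.5)^2, which has
    the closed form below. lam is an assumed regularisation strength; cells
    with little data stay close to the 50/50 anchor."""
    d = xgi_pred - elo_pred          # how far the two predictors disagree
    r = realised_points - elo_pred   # residual left over for xGI to explain
    alpha = (float(d @ r) + 0.5 * lam) / (float(d @ d) + lam)
    return float(np.clip(alpha, 0.0, 1.0))  # clamp for safety
```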

The horizon planner

The transfer planner is a mixed-integer linear program (MILP), solved by CBC. It maximises projected points across a planning window — currently six gameweeks — subject to every FPL rule that actually matters:

  • £100m salary cap, three-per-club limit, position quotas (2 GK / 5 DEF / 5 MID / 3 FWD).
  • Free transfers (1/week, capped) and the −4 hit cost for extras.
  • The half-rise sell mechanic: selling a player whose price has risen since you bought him only realises half the gain.
  • Per-gameweek decision variables: who to buy, who to sell, whether to take a hit, captain, starting XI, bench order.
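To show the shape of the program, here is a deliberately tiny single-gameweek sketch using PuLP with the CBC backend. The real planner adds per-week transfer, hit, captaincy, and bench-order variables across the horizon, and the input layout below is hypothetical.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

def pick_squad(players: list[dict]) -> list[dict]:
    """Toy single-gameweek XV selection. Each player dict is assumed to carry
    id, price, club, position ('GK'/'DEF'/'MID'/'FWD') and projected_points."""
    prob = LpProblem("fpl_squad", LpMaximize)
    pick = {p["id"]: LpVariable(f"pick_{p['id']}", cat=LpBinary) for p in players}

    # Objective: maximise projected points of the selected XV.
    prob += lpSum(pick[p["id"]] * p["projected_points"] for p in players)

    # 100.0m salary cap.
    prob += lpSum(pick[p["id"]] * p["price"] for p in players) <= 100.0

    # Position quotas: 2 GK / 5 DEF / 5 MID / 3 FWD.
    for pos, quota in [("GK", 2), ("DEF", 5), ("MID", 5), ("FWD", 3)]:
        prob += lpSum(pick[p["id"]] for p in players if p["position"] == pos) == quota

    # No more than three players from any one club.
    for club in {p["club"] for p in players}:
        prob += lpSum(pick[p["id"]] for p in players if p["club"] == club) <= 3

    prob.solve(PULP_CBC_CMD(msg=False))
    return [p for p in players if pick[p["id"]].value() == 1]
```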

We re-plan every gameweek. A plan that looked optimal at GW10 with a six-week view gets re-evaluated at GW11 against the new state — ratings have updated, prices have shifted, lineups are confirmed. The planner doesn’t bank decisions for the future; it commits to this week’s transfer and re-asks the same question next week.

Chips (Wildcard, Free Hit, Bench Boost, Triple Captain) are not optimised inside the MILP yet. A separate chip-aware extension is on the roadmap. The accuracy page covers what this means for the validated results.

The conviction threshold (T)

Every recommended transfer has to clear a conviction bar set in projected points: a swap only goes through if it gains at least T points across the planning window. Right now T = 2.5.
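In code terms the gate is a one-liner; applying the bar to the gain net of any hit cost is an assumption of this sketch, not a quoted rule.

```python
T = 2.5  # conviction threshold, in projected points over the planning window

def swap_clears_bar(projected_gain_over_window: float, hit_cost: float = 0.0) -> bool:
    """A recommended swap must gain at least T projected points across the
    window. Netting the -4 hit cost off the gain first is an assumption of
    this sketch, not a quoted rule."""
    return (projected_gain_over_window - hit_cost) >= T
```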

The threshold and the projection model have to move together — change one and you have to revisit the other. A stricter threshold filters out marginal transfers that are really just noise, but it also passes up genuine small upgrades; a looser threshold catches those upgrades at the cost of letting more noise through. The current value comes out of a walk-forward sweep across the full strategy grid — we tried thresholds of 2.0, 2.5, 3.0, and 3.5 across every cell, and 2.5 landed in the top tier for both starting conditions we tested (a strong squad and a deliberately weak one).

At T = 2.5, the planner makes roughly one transfer every gameweek and ends up sitting on a free transfer about one week in five.

The two strategy archetypes

The planner ships with two presets. Both look six gameweeks ahead, use the same projection model, and run the same conviction threshold (T = 2.5). The only thing that differs is whether the planner is allowed to take a −4 hit.

Climber (default) — h = 6, T = 2.5, hits allowed (up to 2 per gameweek)

For the manager whose squad has work to do. Starting from a deliberately weak GW1 squad — bottom 15 by GW1–3 form — Climber averaged +82 ± 78 points per season above Steady across three test seasons in walk-forward cross-validation.

The hit budget is self-regulating. Climber is allowed up to two −4 hits per gameweek, but the MILP only takes one when the projected gain across the planning window clears the cost. On a strong squad those high-conviction hits don’t exist — nothing on an elite XV is worth a −4 — so Climber barely uses the budget. On a weak squad they’re everywhere, and Climber spends them. The same archetype adapts to the situation gracefully. From a strong start, Climber and Steady finish within noise of each other (2378 vs 2380); from a weak start, the hit budget is what separates them.

Pick this if your starting squad has work to do — which describes most managers’ GW6 squad after a couple of bad early picks.

Steady — h = 6, T = 2.5, no hits

For the manager maintaining an established squad. Same horizon, same threshold, same projection model as Climber. The only difference: Steady is never allowed to take a −4. Every transfer it makes is funded by a free transfer.

From a top-15-by-form starting squad, Steady averaged 2380 ± 30 points per season across three test seasons — the most stable cell among no-hits strategies in the 48-cell grid. From the same starting squad, Climber averaged 2378 ± 31 — effectively identical mean, effectively identical risk. The two archetypes converge from a strong start because Climber’s hit budget collapses to “no hits taken” anyway.

From any starting position weaker than top-15, Steady loses to Climber by margins that grow as the starting squad gets worse — because Steady can’t dig out of a hole with −4s when there’s a real hole to fill. Pick Steady if your XV is already top of the field, or if you simply want the explicit no-hits guarantee.

A previous “Balanced” archetype was retired earlier in the 2025/26 season — every differential mechanism we tested lost across all three seasons. A shorter-window (h=4) variant went with it. The current pair (Climber, Steady) was set after a walk-forward 48-cell strategy sweep — see the accuracy page for the details.

Data sources and refresh cadence

  • FPL official API — player stats, fixtures, prices, lineups, ownership. Ingested hourly.
  • Per-match xG / xA — inlined from the FPL detail endpoint from 2022/23 onward.
  • Elo state — updated after every match. Per-match parameters (rating_mu_match, rating_sigma_match) are calibrated separately from the composite Player.rating shown in the UI.
  • Projections — recomputed after every Elo update.
  • Sitemap & player pages — lastmod stamped from the underlying refresh.

Six warm-up Premier League seasons (2019/20 through 2021/22 for the Elo half, then xGI joining from 2022/23) walk the model state forward before any season is treated as a test. A test-season projection therefore sees a model that’s already been running for three full seasons.

What this model cannot do

Five things, kept honest:

  1. Chips. The MILP doesn’t optimise Wildcard / Free Hit / Bench Boost / Triple Captain. The planner runs as if chips don’t exist.
  2. Rank-relative EV. The planner maximises raw expected points, not rank against a mini-league. Differentials — low-ownership picks that move you up the rankings when they haul — need rank-equivalent EV as the objective, and we haven’t run that study. Naively boosting low-ownership picks lost points in backtest.
  3. News-based xMins. Expected minutes come from rotation patterns, not press conferences. A rotation confirmed 2 hours before deadline doesn’t enter the projection until the next refresh.
  4. Set-piece role changes. When a taker changes mid-season, the rating absorbs the move via realised points, not via a dedicated set-piece-takers feed. A proper set-piece surface is on the roadmap.
  5. Sub-21min DNP weighting. Players coming back from injury who play under 21 minutes get a partial-decay Elo update instead of a full one. Backtests showed a gain, but it does add lag to genuine breakout cases.