Finding Value in SuperCoach: The Simple Preseason Formula I Use (and Why It Works)

Our newest member of the SCT Writing team, Derek, has opened up here with an absolute banger! Today Derek goes through the preseason formula he employs to find the best value in SuperCoach!

 

Every SuperCoach pre-season starts the same way. We all open the team picker and convince ourselves this year will be different. This year we’ll be calm. Disciplined. We’ll pick value, not names. We’ll “trust the process”.

And then, somehow, by the end of January we’ve already talked ourselves into three midpricers, two injury comeback stories, and a bloke we haven’t seen play a full game for two seasons.

That’s SuperCoach.

So instead of pretending this is an exact science, I’m going to share what I do. I’m a simple man. I’m not trying to build a PhD-level model, and I’m not pretending I can predict the future. There are plenty of people on YouTube, podcasts and social media who will try to predict exact averages and exact point outcomes, but the reality is everyone has bias. Even the good ones. Everyone falls in love with certain players, certain clubs, certain roles, certain narratives.

I’m not immune either. Otherwise I wouldn’t have spent half my SuperCoach career finding a reason to put Cyril Rioli in my team.

So instead of trying to “predict scores”, I use a simple formula that gives me a benchmark, not a prophecy. It’s just a way of working out what a player needs to score to justify their price and actually be a value selection.

The simple value formula
Here’s the formula I use:
Expected Score (3-week benchmark) = (Starting Price + $300,000) ÷ 8,000
That’s it. No wizardry. No spreadsheets required (although spreadsheets are still elite). It’s simple enough to do in your head, which is important because most SuperCoach decisions happen under pressure.

What the formula gives you is a number that tells you, roughly, what a player needs to score early to be “value” at that price. It’s not saying what they will score. It’s saying what they need to score to justify their price, a value benchmark, not a prediction.

Why the $300k and why 8,000?
This part is not science. It’s habit and practicality.
I’ve used $300k and 8,000 for years because it works nicely around common SuperCoach price points and it’s easy maths. It also doesn’t feel like the usual straight-line method of dividing everyone’s price by the same Champion Data magic number.
Two quick examples (the ones that matter)
If you’re paying $500,000 for a player, the benchmark becomes:
(500,000 + 300,000) ÷ 8,000 = 100
So a $500k player, by this method, needs to score around 100 to justify being selected as value. Not 88. Not 91. Around 100. Otherwise you’re paying for a name, not value.
If you’re paying $400,000, the benchmark is:
(400,000 + 300,000) ÷ 8,000 = 87.5

So at $400k you’re basically expecting around 88. That’s the land of midpricers and breakout candidates, which is also the land where SuperCoach seasons go to die if you get this wrong.
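
If you ever want to check it outside your head, the whole thing is one line of arithmetic. Here’s a minimal sketch in Python (the function name is just mine, nothing official):

```python
# Minimal sketch of the benchmark: price in dollars -> score needed to be "value".
def value_benchmark(price: float) -> float:
    return (price + 300_000) / 8_000

print(value_benchmark(500_000))  # 100.0 -> a $500k player needs roughly 100
print(value_benchmark(400_000))  # 87.5  -> a $400k player needs roughly 88
```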

The real trick: expected score vs your projected score
This is the part where the formula becomes genuinely useful.

Once you apply the benchmark to players, you now have an “expected” score for each player. Step one is done: you have a value target based purely on price.

Now comes the part that matters: you need to decide what you think they’ll actually score.

But here’s the key. Most people try to predict 23 rounds, which is impossible. You can’t even predict the next 23 minutes in this game.

So I focus only on the short term, the next three rounds. It’s less daunting, more realistic, and fixtures and roles create the sharpest edges early in the season. Some players open with soft matchups and can go bang immediately. Others might be great players but start with a brutal run, underperform early and become value later.

In simple terms, the method becomes: calculate what they need to score to be value, estimate what you think they’ll score over the next three games, and compare the gap. If you think they’ll beat the benchmark, you’ve found value. If you think they’ll fall short, you’re either avoiding them or waiting for them to get cheaper.
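
As a rough sketch, that compare step is just a subtraction. The $400k price and the estimates below are made up for illustration; the three-round estimate is whatever you pencil in yourself, not a real data feed:

```python
# Sketch of the compare step: your own 3-round estimate minus the price benchmark.
# Positive gap = you think he beats the benchmark (value); negative = avoid or wait.
def value_gap(price: float, my_three_round_estimate: float) -> float:
    benchmark = (price + 300_000) / 8_000
    return my_three_round_estimate - benchmark

print(value_gap(400_000, 95))  # +7.5 -> value, if you genuinely believe the 95
print(value_gap(400_000, 80))  # -7.5 -> paying for hope, or waiting for a price drop
```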

Using Champion Data without getting fooled by it

There’s an obvious problem: we can’t predict scoring. We can make educated guesses, but bias creeps in for all of us. We all love certain players. We all talk ourselves into things. We all get influenced by hype and highlight clips.

And yes, that includes the loudest bloke on YouTube who insists he’s “just being objective”, while picking eight players from his own club.

This is where Champion Data projections are useful. Not because they’re perfect, but because they give a neutral baseline to start from. The smart way to use CD is not to worship it; it’s to let it set the baseline and then apply common sense.
Example: Zac Butters.
Priced at $654,800, Champion Data has him projected to average 138 over the first three rounds. That’s not a typo, 138. That’s basically CD saying Butters is about to start the season like he’s playing against Auskick kids. Is it possible? Sure. Is it likely? Probably not.
I might peg him closer to 125, while still acknowledging his ceiling, role and fixture.

That’s how I use it. Let CD set the baseline, then make the obvious tweaks.
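
Using the Butters numbers above, a quick sketch of “baseline, then tweak” looks like this (the 125 is my haircut, not a data source):

```python
# Zac Butters, figures from above: price $654,800, CD projects 138, my tweak is 125.
price, cd_projection, my_tweak = 654_800, 138, 125

benchmark = (price + 300_000) / 8_000   # ~119.4 needed to be "value" at that price
print(cd_projection - benchmark)        # ~+18.7 -> CD has him miles over the bar
print(my_tweak - benchmark)             # ~+5.7  -> even the tweaked 125 still clears it
```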

Examples: what value actually looks like
Let’s run this through some players that are common talking points.
Nasiah Wanganeen-Milera is priced at $622,300.
The benchmark says he needs to score 115. Champion Data has him around 110. That doesn’t mean he’s a bad pick; he’s an excellent player and could easily be a keeper, but by this method you’re paying pretty close to full freight. You’re not selecting him because he’s “value”; you’re selecting him because he’s good and reliable.
Hayden Young is priced at $389,000.
The benchmark says he needs to score 86, while Champion Data projects him at 73. And here’s where the fun begins. Young’s price is cheap because he missed a heap of footy last year. So the real question isn’t “is Hayden Young value?” The question is “is Hayden Young over the injury, and is the role there?” Because personally… I’m going to do what every SuperCoach coach does at some point in January: I’m going to look at the Champion Data number, gently cross it out with a red pen, and write in my own.
I honestly think Hayden Young can go 100 early if he’s fit and gets his role back.
Is that bias? Maybe. Probably. But this is exactly why we need a framework like the benchmark: it forces you to admit when you’re making an opinion-based call, rather than pretending it’s “data”.

Keidean Coleman is priced at $233,800.

The benchmark says he needs to score 67. Champion Data says 52, and my own expectation sits around 60. Coleman is the classic SuperCoach temptation: cheap, exciting, and everyone remembers the 2023 Grand Final (and the commentators will remind you every time he kicks it). But the benchmark is a big reality check. Starting him is basically saying you expect genuine defender scoring straight away, and that’s a big call for someone coming off long-term injury, especially with the new five-man bench rule potentially encouraging clubs to manage loads and minutes even more aggressively.

At 60 he’s not a disaster pick, but he’s still under the benchmark. Which means you’re not really picking “value”, you’re picking “cheap and hopeful”. Sometimes that works. Sometimes it turns into a forced trade.
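
For what it’s worth, here are those three examples run through the same three-way comparison, with every figure taken from above (the “mine” column is the opinion-based call, bias and all):

```python
# Benchmark vs Champion Data vs my own call, using the figures from the examples above.
examples = [
    # name, price, CD projection, my call
    ("Wanganeen-Milera", 622_300, 110, 110),
    ("Hayden Young",     389_000,  73, 100),
    ("Keidean Coleman",  233_800,  52,  60),
]

for name, price, cd, mine in examples:
    benchmark = (price + 300_000) / 8_000
    print(f"{name:17} needs {benchmark:5.1f} | CD {cd:3} | mine {mine:3} ({mine - benchmark:+.1f})")
```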
A quick note on rookies (before we compare numbers)

Rookies are a special case. Champion Data basically projects rookies by price because they have no AFL data. That means the basement rookies get projected around 22 points, which is hilarious, but also understandable. CD isn’t trying to split Jagga Smith from Cody Anderson. At that price point, a rookie is a rookie until proven otherwise.

That means this formula isn’t designed to select rookies. But it is useful for one thing: showing how much extra output you need from a more expensive rookie.

A basement rookie like Liam Riedy ($119,900) has a benchmark of 52. A slightly more expensive rookie like Dyson Sharp ($149,500) has a benchmark of 56. The difference isn’t huge, but the concept matters: if you’re paying more for a rookie, you’re expecting more.
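
As a quick sketch, that “expensive rookie tax” falls straight out of the formula:

```python
# Rookie price points from above: a basement rookie vs a slightly pricier one.
basement = (119_900 + 300_000) / 8_000   # ~52.5 -> the ~52 benchmark quoted above
pricier  = (149_500 + 300_000) / 8_000   # ~56.2 -> the ~56 benchmark quoted above
print(pricier - basement)                # ~3.7 extra points needed to justify the extra $29,600
```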

It’s also worth pointing out that Champion Data will sometimes give a more realistic projection for rookie-priced players who have played AFL before (the recycled types). In those cases CD isn’t using the generic rookie projection; they’ll use historical data. But in practice those numbers are still heavily sub-affected and often meaningless in the real SuperCoach world. If a rookie-priced player scores 22–35, they’re not a cash cow, they’re a red dot waiting to happen. You only pick them if you believe they can score 50–65 and become playable.

That’s where this simple benchmark actually helps. It doesn’t magically predict rookies, but it does highlight that if you’re paying extra for an “expensive rookie”, you need them to lift their game and justify it.

Derek Tweak: running the numbers on popular defenders

To make this practical, I grabbed a simple sample: the most popular defenders in the game right now. No cherry picking. No “this is my secret smokey.” Just the guys everyone is clicking into their teams.

Then I ran the numbers three ways: the formula benchmark, Champion Data projection, and my own “Derek tweaked projection”.
| Player | Price | Formula expected score | CD projection | Derek projected | Difference |
| --- | --- | --- | --- | --- | --- |
| Zeke Uwland | $199,000 | 62.4 | 25 | 60 | -2.4 |
| Connor Rozee | $568,500 | 108.6 | 114 | 114 | +5.4 |
| Nas Wanganeen-Milera | $622,300 | 115.3 | 110 | 110 | -5.3 |
| Jai Serong | $119,900 | 52.5 | 19 | 50 | -2.5 |
| Josh Lindsay | $122,500 | 52.8 | 23 | 60 | +7.2 |
| Keidean Coleman | $233,800 | 66.7 | 34 | 60 | -6.7 |
| Sam Grlj | $172,000 | 59.0 | 32 | 60 | +1.0 |
| Colby McKercher | $449,600 | 93.7 | 87 | 90 | -3.7 |
| Miles Bergman | $447,400 | 93.4 | 77 | 85 | -8.4 |
| Nic Newman | $439,300 | 92.4 | 93 | 93 | +0.6 |
| Jordon Clark | $568,000 | 108.5 | 96 | 96 | -12.5 |
| Lachie Whitfield | $599,200 | 112.4 | 135 | 120 | +7.6 |
The key thing this table does is force you to separate “good player” from “good value.”
On my numbers, the standout value green flags are Whitfield (+7.6), Josh Lindsay (+7.2) and Rozee (+5.4). On the flip side, the big warning sign is Jordon Clark (−12.5). Not because he’s a bad player. Not because he’s a bad scorer. But because at that price, this method wants him scoring 108+, and both Champion Data and my own projection have him mid-90s. That’s the classic SuperCoach trap: you’ll enjoy watching him play, but you won’t enjoy watching your rank.
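
If you want to rebuild that table, or run it over your own watch list, it’s only a few lines. The names and numbers below are lifted straight from the table above; nothing new is being predicted here:

```python
# A few rows of the defender table: price -> formula benchmark -> gap vs my projection.
defenders = [
    # name, price, my projection
    ("Connor Rozee",     568_500, 114),
    ("Josh Lindsay",     122_500,  60),
    ("Jordon Clark",     568_000,  96),
    ("Lachie Whitfield", 599_200, 120),
]

for name, price, projected in defenders:
    benchmark = (price + 300_000) / 8_000
    print(f"{name:16} benchmark {benchmark:6.1f}  gap {projected - benchmark:+.1f}")
# Whitfield (+7.6), Lindsay (+7.2) and Rozee (+5.4) clear the bar; Clark (-12.5) is the warning sign.
```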
This isn’t just a preseason tool (it works during the season too)
One other thing that makes this formula handy is it’s not just for picking your Round 1 team.
You can use the exact same approach during the season when players’ prices start moving.

We all know SuperCoach prices rise and fall based on performance (and Champion Data gives us the break-evens), so what happens is simple: a player who starts the season overpriced might drop $60k–$100k and suddenly become interesting. Another bloke might be flying early, go up $120k, and no longer be value even though he’s still scoring well.

That’s where the benchmark helps. Every time a player’s price changes, your “value score” changes too. You can rerun the formula quickly and ask the same question: at this new price, what does he need to average over the next three weeks to actually be value?

It’s especially useful for premiums who have a couple of bad games, get cheaper, then hit a soft patch of fixtures. That’s often the best time to jump, when other coaches are rage-trading them out and you’re calmly buying a discounted premium as value.
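
The in-season check is the same one-liner with the new price plugged in. The prices below are a made-up example, not a real player:

```python
# Hypothetical premium: started at $560k, drops $60k after a couple of quiet weeks.
old_price, new_price = 560_000, 500_000

print((old_price + 300_000) / 8_000)   # 107.5 needed at the old price
print((new_price + 300_000) / 8_000)   # 100.0 needed now -> the bar just dropped 7.5 points
```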

So yes, I use this to pick a starting team… but I also use it as a simple “is he cheap enough yet?” tool throughout the year.

Final thoughts
This is just what I do. I’m not claiming to be a SuperCoach genius, but most years I finish inside the top 1,000 overall, and I’ve found that simple tools like this keep you grounded.
The formula won’t pick your team for you, and it won’t stop you from doing something silly. SuperCoach always finds a way. But it does give you a benchmark, and it forces you to compare players properly. It stops you falling in love with the story and forgetting the score.