Suppose a team starts the season 3-0, but you had forecast them entering the season to finish 81-81. What is your new forecast? If that 3-0 record means nothing at all, then you'd assume they play .500 the rest of the way (in 159 games); add the 3 wins in hand to that, and you get 82.5 wins. In other words, they gain +1.5 wins in the final season forecast, based strictly on the fact that 3-0 is +1.5 ahead of 1.5-1.5.
But the pre-season forecast can't carry the weight of an infinite number of prior games, such that adding 3-0 to it means nothing at all. Suppose the pre-season forecast carries the weight of three full seasons. Here's what happens to the final season forecast after streaks of 1 to 20 games, and how much information those extra wins give us.
This three-season weight is an ILLUSTRATION. We need to figure out what that weight actually is. My expectation is that it will land somewhere between 1 and 3 seasons of weight. Aspiring Saberists, Assemble.
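The update described above can be sketched as a simple pooled estimate: treat the pre-season forecast as some number of games of .500 ball (the three-season weight is the article's illustration, not an established value), add in the observed record, and project the blended talent over the remaining games.

```python
def final_forecast(wins, losses, prior_win_pct=0.500, prior_weight_games=3 * 162,
                   season_games=162):
    """Blend an early-season record with a pre-season forecast that is
    treated as prior_weight_games worth of games at prior_win_pct."""
    played = wins + losses
    # Pooled talent estimate: prior "games" plus observed games.
    talent = (prior_weight_games * prior_win_pct + wins) / (prior_weight_games + played)
    # Final forecast: wins in hand plus expected wins the rest of the way.
    return wins + talent * (season_games - played)

# 3-0 start against an 81-81 (.500) forecast carrying a 3-season weight:
print(round(final_forecast(3, 0), 1))  # about 83.0 wins

# With an infinitely heavy prior, 3-0 would add exactly +1.5 over .500 ball:
print(round(3 + 0.500 * 159, 1))  # 82.5 wins
```

Shrinking the prior weight toward one season makes the 3-0 start worth correspondingly more in the final forecast, which is the knob the article is asking Aspiring Saberists to pin down.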
So, this was a pretty odd play. It's the top of the 9th, tie game. The nominal win% for the home team is .500. (It should be .520, all other things equal, and naturally we need to consider the identity of the players, but let's make it easy, and use the base charts available here.)
The first batter walks. That's normally worth .030 wins, but in this high-leverage scenario, it's worth .082 wins. Now the home team's chance of winning drops to .418. The next batter hits what should be an easy DP ball, but the 2B misplays it for an error, putting runners at the corners. Terrible fielding by the home team drops their chances of winning all the way down to .194.
Having a runner on 3B with less than two outs is a very powerful thing for the batting team. They are in control. So what do they try? A bunt. And not a safety-squeeze, no. But the other kind. Let's see what the batting team might have been thinking.
Remember, the chance of the fielding/home team winning is down to .194. If they pull off this bunt, we have a runner scoring, and runners on 1B and 2B. That puts the winning % for the home team down to .103. But if the worst happens (as it did), the home team chance of winning goes skyrocketing to .498. In other words, that walk that happened, that error that happened? All of that vanishes, and we are back to where we were when we started the inning.
So, you have a chance to gain +.09 wins with the perfect bunt, and you have a chance to lose .30 wins with a missed bunt. You have to be 77% sure that you will get that bunt.
And in fact, it should be higher. Because one of the outcomes is to score the run, but give up the out, and that sets the win% for the home team to .142, so only +.05 wins for the batting team. It's also possible you don't get the worst-case scenario, but just a really bad scenario: you lose the runner, but get the batter on base for a home win% of .421, and so a cost of .23 wins.
All in all, it's probably a play you need to make 85% of the time. Which is why you don't see this play performed. Ever, I'd like to say? An Aspiring Saberist out there can do the legwork.
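The 77% breakeven quoted above falls out directly from the win probabilities in the text; here is the two-outcome version as a sketch (the fuller version with the in-between outcomes is what pushes it toward 85%):

```python
# Home (fielding) team win probabilities quoted in the text.
before = 0.194    # before the bunt attempt: runners at the corners
perfect = 0.103   # after a perfect bunt: run scores, runners on 1B and 2B
disaster = 0.498  # worst case: back to where the inning started

gain = before - perfect    # batting team gains about +0.091 wins
loss = disaster - before   # batting team loses about -0.304 wins

# Attempt the play only if p * gain > (1 - p) * loss,
# i.e. p > loss / (gain + loss).
breakeven = loss / (gain + loss)
print(round(breakeven, 2))  # 0.77
```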
See, there are places where guts and instincts have a role. When there's enough uncertainty in the math, if there are enough variables to consider, you can push up an extra .01 or .02 wins here, and push down an extra .02 or .03 wins there. There's uncertainty. But in this particular case, when the breakeven point is already ridiculously high, there's not enough uncertainty in the numbers to allow for guts and instincts to possibly play a role.
There's probably three dozen choices a manager can make throughout the game that could go contrary to the numbers, but be justified based on uncertainty of the numbers, and the guts/instincts of the manager. This play was nowhere close to being one of those that required the manager to step in.
Home win% in regulation games is 54%, but falls to 52% in extra innings for totally normal reasons, having nothing at all to do with who bats first or last.
This chart shows the win% season by season since 1969. I've included the Random Variation lines, which I have nominally set at 2 standard deviations. This implies we should expect about 5% of these 55 data points (about 3) to land outside these two lines. We see a lot more than that. Why, I don't know. I used a flat 52% expected win%, and maybe it should be 51.5% one year and 52.5% another year. I'll leave that to the aspiring saberists. (click to embiggen)
Of course, the most striking thing in the chart is what's happened since 2020, with the extra inning placed runner (XIPR). Though inconveniently, the pattern started the season prior to that. Anyway, here's how it looks when we group in chunks of five seasons. The data point for "2010" you see at the bottom refers to seasons "2010 - 2014". And "2015" is "2015 - 2019".
We should only see, maybe, one point outside the 2SD lines, but we see three, with the one from 2020-present way outside even this standard. Something is definitely going on with how teams are approaching playing with the XIPR. I'm sure an aspiring saberist can look into this. Are there any teams that have figured it out? I'll leave that up to y'all to show.
Let's create a simple basketball game, where regulation is made up of 100 free throws by each team. The free throw line is several feet further out than the current rules, such that the average free throw percentage is 50%. Home teams average 51%, while away teams average 49%.
In regulation play, after 100 throws from each, the home team will win 58.4% of the time, and tie 5.4% of the time.
Let's create an Overtime game that is 10 free throws by each team, to act as the tie-breaker. With far fewer confrontations, the chance that the home team will win outright is much lower than during regulation. It will actually be 44.7% to win outright, 37.7% to lose outright, and another 17.5% to get into another Overtime period. The OT win% for such a home team will end up being 54.3%.
And that means that the overall win% for home teams will be 58.4% plus 5.4% times 54.3%, or 61.4%.
This is how you can take a team that has a seemingly small home-site advantage (making 51% of free throws instead of 50% at the individual play level), balloon all the way to 61.4% at the game-level. And that it's only 54.3% in Overtime. Scoring Confrontations. That's the reason.
And this explains why in MLB the home team wins 54% of its games, and yet wins only 52% of its extra-inning games. In baseball, instead of having 100 scoring confrontations each, they have 9 innings each. And the home team scores 52% of the runs in each inning. When you do that, you end up winning 54% of games. But still only 52% of any single inning, like an extra inning.
You can apply this concept to hockey and soccer and you will see you can create a simple model to explain it. Football is different because of the possession rule.
Whether you use Pythag wins from Bill James or Patriot or the more basic one from Pete Palmer (0.1 wins per run), they all treat runs the same: you add it up at the seasonal level, and proceed. None of them do it game by game.
When we use the Palmer method, it works like this, if you try to apply it at the game-level: Win the game by 1 run, and that's worth +.10 wins above average or 0.60 wins. Similarly, win by outscoring your opponent by 2 runs means that game is worth 0.70 wins. Winning by 3 means it's worth 0.80 wins, and so on. Winning by 5 is worth 1.00 wins. Winning by 10 is worth 1.50 wins.
Losses operate the same way. Losing by 1 run earns you 0.40 wins, losing by 2 earns you 0.30 wins. Losing by 5 is 0 wins. Losing by 10 is negative 0.5 wins.
Add all of these up, and over 162 games, you get your equivalent wins. Of course, there's no point to doing this game by game, since doing it at the seasonal level as runDiff*0.10 + Games*0.500 gives you the identical answer.
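The game-by-game and seasonal versions of the Palmer method really are identical, which a couple of lines make plain, using the illustrative team from the next paragraph (nine 1-run wins, one 9-run loss):

```python
# Palmer: each run of differential is worth 0.1 wins, centered at .500.
def palmer_game(run_diff):
    return 0.500 + 0.10 * run_diff   # win by 1 -> 0.60, lose by 5 -> 0.00

# Nine 1-run wins and one 9-run loss: total run differential is zero.
diffs = [1] * 9 + [-9]
game_by_game = sum(palmer_game(d) for d in diffs)
shortcut = 0.10 * sum(diffs) + 0.500 * len(diffs)
print(round(game_by_game, 3), round(shortcut, 3))  # 5.0 5.0 -- identical
```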
We are of course missing the context of each individual game. In the Palmer method, winning 9 games by 1 run, and losing one game by 9 runs gives you the same answer: total run differential is 0, and so, converts to the equivalent 5 wins and 5 losses.
But what if winning by one run is not worth 0.60 wins, as Palmer implies, but 0.83 wins? And winning by two runs doesn't add 0.10 wins (for 0.70 total), but 0.09 wins, to give us 0.92 wins. And winning by 3 runs adds 0.08 wins to give us 1.00 wins. So, winning by THREE is a full win, while Palmer implies winning by FIVE is a full win.
Here are the gradient wins each team gets, game by game:
When we do that, how does 2023 look? The Orioles you may remember had a modest run differential of +129 runs implying +13 wins above average or 94 wins. The Gradient Wins approach gives them 100 wins. They actually won 101 games. What we are doing is giving them more credit for their close games, and not giving full credit to every run in a blowout win or loss.
The Marlins allowed 57 more runs than they scored, implying 75 equivalent wins. The Gradient Wins approach gives them 80 wins. They actually won 84.
That illustrative team of 9 games winning by 1, and 1 game losing by 9 is 9 actual wins and 1 actual loss, while Palmer said it's 5 wins and 5 losses. In the Gradient approach, it comes out to 7.2 wins.
Is this necessarily better than what Palmer or the Pythag methods suggest? No. Or at least, I don't know yet. But it opens the door for better handling blowouts and close games. We know winning by one run has to be more than 0.6 wins. We can't treat each run the same. We also know that winning by one and by ten runs can't be the same, even though you get one actual win. Does it make more sense to give a one-run win 0.83 wins and a ten-run win 1.28 wins? I know I like that more than giving it 0.60 and 1.50 wins as Palmer suggests. And I know I like it more than giving 1 win and 1 win as actual wins says.
We made an update in process, with a big payoff at the pitch-level, with an overall modest impact to the catcher framing. The current method broke up the regions over the plate into 5 regions, with the prominent one being the Shadow-In (80% called strike rate) and Shadow-Out (80% called ball rate), with adjustments for pitcher and venue. The new method updates the Shadow Zone process so it is a continuous probability from 0 to 100%, using the specific plate location, with adjustments for bat-side and pitch-hand. Statcast Data Whiz Taylor did the bulk of the work here.
At the aggregated seasonal level, you won't see much difference. Current Savant and Steamer at Fangraphs, for 2023, have a correlation of r=0.94. This will increase to r=0.98 with the new model. The current Savant process would apply adjustments at the aggregated level. We did this because we never thought that we'd need to show the strike probability on a pitch by pitch basis. And since Catcher Framing was one of the very first metrics we created, it languished in this regard. But thanks to Taylor and their team, a process was built to apply adjustments at each pitch. By doing that, it will allow us to slice/dice the data the way we do with other data, like Catcher Blocking and Throwing, etc.
Here is how the binned data (100 bins) looks, comparing the predicted strike rate with the actual called rate. (click to embiggen)
As my side-project into NaiveWAR continues, I'd like to also highlight the work of Sean Smith, the progenitor of WAR at Baseball Reference.
I currently have two versions of NaiveWAR. The first based solely on a pitcher's Won-Loss record. And the second based solely on the pitcher's Runs Allowed and IP. Whether in my version, or from Sean Smith, we present it in the form of Individualized Won-Loss Records (aka The Indys). My biggest failing in presenting WAR was not including The Indys. And based on what Sean is doing, he seems to perhaps agree as well.
There's a good reason this is needed: the debate over the replacement level was mostly noise relative to what WAR actually is. That is my fault, as that conversation got away from me, and I didn't have a way to control it.
Anyway, you can see my two versions on the left (and since this is deGrom, you'll be able to guess which version is which). And Sean's version is on the right. Sean of course is doing a lot more than what my Naive approach is doing. And, you can see a tremendous amount of overlap. Which really means that all that tremendous extra work, necessary work, is ALSO noise to the main discussion point of WAR. Make no mistake about it: not only is Sean right for doing what he is doing, but I will also be doing an enhanced version (eventually, whenever I have the time).
But more importantly: the Naive approach is necessary to bring everyone to the wading pool, before we jump into the deep end. WAR has taken on a life of its own, too easy to dismiss because it's too easy not to learn what it is. That's why the Naive approach is necessary. We need folks to get into the wading pool, and then into the shallow end, before we get into the deep end. And what we see with deGrom above is that the difference between the shallow end (Version 2) and the deep end (Sean's version) may not be that big of a dive.
Just the first step in looking at this. The left column is the height of the left knee, the top row is the right knee.
The most common position for the catcher is for the left knee (the glove knee) to be 3-5 inches off the ground, while the right knee is 17 to 20 inches off the ground.
We do see the catcher often enough with their right knee 3-5 inches off the ground, with the left knee 11-20 inches off the ground. I should probably split this by bat-side (and maybe pitch-hand).
Having both knees up, 14-19 inches off the ground is the least popular of the setups.
I'll be looking as well to see how the called strike rate is affected based on the catcher stance.
In my spare time, I'm working on an open-source WAR, that I call NaiveWAR. Those of you who have been following me know some of the background on NaiveWAR, notably that it is tied (indirectly to start with) to Win/Losses of teams (aka The Individualized Won/Loss Records). My biggest failing in developing the WAR framework was not also providing the mechanism for W/L at the same time. That will be rectified.
The most important part of all this is that it's all based on Retrosheet data, and everyone would be able to recreate what I do. And it would be totally transparent, with plenty of step by step discussion, so everyone can follow along. I was also thinking of potentially using this as a way to teach coders SQL. That's way out in the distance, still have to work things out, but just something I've been thinking about as I'm coding this. I even have the perfect name for this course, which I'll divulge if/when this comes to fruition.
Interestingly, RallyMonkey, who is the progenitor of the WAR you see on Baseball Reference, seems to be embarking on a somewhat similar campaign. You can see a lot of the overlap, with tying things to W/L records, with the emphasis on Retrosheet. The important part of doing that is we'd be able to do it EACH way, with/without tying it to W/L, so you can see the impact at the seasonal, and career, level. In some respects, he'll go further than I will with regards to fielding, mostly because I have so little interest in trying to make sense of that historical data, given the level of access Statcast provides me. But also partly because by me not doing it, it opens the doors for the Aspiring Saberists to make their mark: somewhere between my presentation and Rally's presentation, they'll find that inspiration.
The concept of Replacement Level (though I prefer the term Readily Available Talent, which you will see makes more sense) is pretty straightforward. What kind of contribution can you get for the minimal cost? If you have no farm system at all, that level is roughly a .300 win% level. That's the Readily Available Talent. By spending the absolute minimum on the free agent market, you will field a .300 team. At least theoretically.
In reality, all clubs have a minor league system. And they spend millions of dollars on players and player development and player acquisition. Because those players are now Readily Available Talent for no ADDITIONAL cost (the money spent is already sunk), suddenly, the baseline level player is not a .300 win% talent, but probably closer to .350 win% talent. While this player would cost you just the league minimum, it did cost you in terms of your minor league setup.
This is why it gets tricky when you try to decide what the baseline level is. Furthermore, if you decide to field an entire team only from the minor leagues, well, not all the players will be .350 win% talent. After your very top prospects, you will start to go below that .350 win% level quite quickly. So a team of your best 40 minor league players is likely going to win you fewer than .300 win%, probably even down to .250 or .200 win%.
Therefore, while the concept of Readily Available Talent is real, as it's where all the decision-making happens, the actual level really requires different baselines for different uses. Sorry to make unclear something that lacked clarity to begin with.
Young plays the 1st half of every game, completing 75-100% of his passes in each, averaging 12 yards per completion. Yet he never throws a TD, and his RBs never score a TD.
Montana plays the 2nd half of every game, completing 20-40% of his passes, averaging 9 yards per completion. Season total: 32 TD.
Four years ago, in a series of tweets, I introduced NaiveWAR, essentially the simplest uber-metric possible. I finally coded it up last night. Here are the results for 2023 (click to embiggen).
True to its name, I used the absolute most minimum information I could, and still give plausible, naive, results. That data is exclusively limited to players who participated in: Runs, Outs, Plate Appearances. And that is it. I couldn't have made it any more naive and still give plausible results. Cashmere Ohtani is that red dot.
On his 30% weakest swings, LHH Luis Garcia (Nationals) generated 2 runs per 100 swings above average. On his 30% hardest swings, he generated 7 runs per 100 swings below average. He led MLB in terms of that gap in performance. Can we say he overswings? I don't know, we'd have to look at each of his swings to see why the results came out as they did. But he clearly performed better when his swings were the weakest.
On the flip side are batters who far far exceeded their performance on their hardest swings compared to their weakest swings. Among this group are Ohtani and Yordan Alvarez, who are each around 13 runs above average on their hardest swings and 4 runs below average on their weakest swings. (League average is +0.5 and -5.0 runs per 100 swings, respectively.)
Of course, you have to be careful here, since a batter is going to potentially check his swing (unsuccessfully), and so the swing speed is not necessarily some sort of independent variable to his approach.
Click to embiggen.
UPDATE: Here is the distribution in speed, as well as the run values, for Garcia and Ohtani. Obviously, Ohtani is in blue. At 81+ is when Ohtani is doing the damage. Garcia you can see had some success at under 68. However, given the combo of 67+68 is a net negative, it may very well be that that is just before-the-fact cherry-picking. That said, Garcia at 74+ or 76+ is a net negative, and it may very well be that he overswings.
Suppose the league OBP is .300, and the number of runs scored per game is 4.0
If a team's OBP is .330, that is 10% higher than the .300 league, or 110 in OBP+ parlance. So, 10% more runners roughly means 10% more runs. And so, 10% more runs than 4.0 is 4.4. (Assume team SLG matches league SLG.)
Suppose the league SLG is .400, and the number of runs scored per game is 4.0
If a team's SLG is .440, that is 10% higher than .400, or 110 in SLG+ parlance. So, 10% more total bases roughly means 10% more runs. And so, 10% more runs than 4.0 is 4.4. (Assume team OBP matches league OBP.)
Now, if you have BOTH 10% more runners AND 10% more total bases, we'll actually end up with roughly 20% more runs.
If you do OBP+ plus SLG+ minus 100, you get 110 + 110 - 100 = 120 in OPS+ parlance
If you did OPS/lgOPS, you'll get .770/.700 = 1.1 or 110 in OPS+ parlance
What's better/right?
To the extent you want to be pedantic, OPS+ in this illustration should be 110.
To the extent what you care about is associating OPS to runs in a 1:1 manner, then OPS+ should be 120.
wRC+ uses the same process in terms of converting wOBA into runs: 200*wOBA/lgWOBA - 100. The only difference is in the name, with wRC+ being clearer as to its intent, and not directly being linked to wOBA by name (even if it is under the hood), other than that lowercase w.
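The two competing definitions from the example above come down to a few lines of arithmetic:

```python
# Numbers from the illustration in the text.
obp, lg_obp = 0.330, 0.300
slg, lg_slg = 0.440, 0.400

obp_plus = 100 * obp / lg_obp          # 110
slg_plus = 100 * slg / lg_slg          # 110

# Additive version: tracks runs roughly 1:1 (10% more runners AND 10% more
# total bases is roughly 20% more runs).
additive = obp_plus + slg_plus - 100

# Ratio version: the "pedantic" OPS+, literally OPS over league OPS.
ratio = 100 * (obp + slg) / (lg_obp + lg_slg)

print(round(additive), round(ratio))  # 120 110
```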
I Cut, You Choose? It's not exactly that, but it's close to that.
I'm going to come up with some random numbers. I don't follow football enough to give you good numbers, so I'll just try some random numbers.
In this iteration, I'll assume the chance of NOT scoring is 60%. And when you score, it's just as likely you will TD as FG.
So, let's start. Team 1 has the ball: 60% of the time they come up scoreless, 20% of the time they put 3 points on the board, and 20% of the time they put up 7. Now, let's follow each of those three branches, starting with the scoreless one.
If Team 2 is also scoreless, it goes into sudden death. We'll assume Team 2 is more likely to score, so let's make it scoreless 55%, and scoring 45%.
With the FG branch: we'll assume here Team 2 is more likely to try for the TD. So, scoreless 65% of the time, FG 10%, TD 25%.
Finally the TD branch: Team 2 has to be more aggressive, so chance of scoreless is 70%, with 0% for FG, 15% for 6 points (and a loss) and 15% for 8 points (and a win).
The sudden death calculation is simple: at a 60% scoreless chance for both teams, Team 1, possessing first, has a 62.5% chance to win its sudden death.
All of this now becomes a straightforward probability distribution calculation. And in this illustration, the win% is 52% for Team 1.
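The whole tree fits in a few lines. A sketch using the article's hypothetical numbers; one assumption on my part is that the same sudden-death edge applies whenever the teams stay tied after both possessions:

```python
def sudden_death(p_score=0.40):
    # Alternating possessions, first score wins, Team 1 goes first:
    # w = p + (1 - p) * (1 - w)  =>  w = 1 / (2 - p)
    return 1 / (2 - p_score)

sd = sudden_death()  # 0.625, as in the text

win = 0.0
# Team 1 scoreless (60%): Team 2 scoreless 55% -> sudden death; scores 45% -> loss.
win += 0.60 * (0.55 * sd)
# Team 1 FG (20%): Team 2 scoreless 65% -> win; FG 10% -> sudden death; TD 25% -> loss.
win += 0.20 * (0.65 + 0.10 * sd)
# Team 1 TD (20%): Team 2 scoreless 70% -> win; TD + failed 2-pt (6) 15% -> win;
# TD + 2-pt (8) 15% -> loss.
win += 0.20 * (0.70 + 0.15)

print(round(win, 3))  # 0.519, i.e. about 52% for Team 1
```

Re-running with the scoreless rate dropped to 50% or 40% (and the branch mixes adjusted accordingly) is how the 51% and 49% figures below come about.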
Now, what happens if I change the chance of scoreless down to 50%, and adjust everything off that? Now the chance of Team 1 winning is 51%.
If the chance of scoreless is down to 40% for any drive, then Team 1 wins 49% of the time.
Indeed, this is how it looks based on the scoreless rate, from 10% to 90%:
So, it is easy enough to see that when you have to input two specific teams, things can change from this baseline, and so what may show here as 47% can in reality be 52%.
That's the baseline. Now, all we need is for someone to come up with something a bit more intricate, and we'll see... probably the same thing.
So, whoever over at NFL ops came up with this scheme likely proposed this setup because it's around 50/50, all depending on the actual teams involved.
Everyone has their own VOZ method, the Value over Zero. The zero-point is the point at which that thing has no value. This is most clearly demonstrated with Fantasy Leagues. If you play Fantasy sports games, congratulations, you have a VOZ method. In a world where you have several hundred players, but only a few hundred will get selected, all the unselected players have a value of zero. You are only going to spend money on players who have value above the zero-baseline.
That zero-baseline is different for every position. A below league-average batter at catcher has value, while the same batting line for a 1B has almost no value. This concept is quite clear in Fantasy sports. It's a little murkier with real baseball players, but it's real nonetheless. All we need to do is establish what that zero-baseline is.
On Twitter, I asked what a 200 IP, 11-11 pitcher was equal to in value, and the most popular response was a 100 IP 8-3 pitcher. Now, follow me here, this is the important part. 11 wins and 11 losses has the exact same value, according to the voters, as someone with 8 wins and 3 losses. (In this illustration, the W/L record is a proxy for a pitcher's overall performance.) Again 11-11 = 8-3. If the two pitchers are equal, then the difference between the two pitchers is zero. In other words, this is what the voters are saying:
11-11 = 8-3 + 3-8
This is obvious, right? 8 wins and 3 losses, plus 3 wins and 8 losses, is 11 wins and 11 losses. And since 11-11 = 8-3, that implies that 3-8 = 0.
In other words, a pitcher who has 3 wins and 8 losses, or a win% of 3/11, or .273, is worth zero. That is the zero-baseline: .273, at least in this illustration.
A fairly high number actually chose 7-4 as being equal to 11-11. This implies the zero-baseline for this group of folks was 4-7, or a .364 win%.
The smallest group chose 9-2 as being equal to 11-11, which implies a .182 win%.
To summarize: 51% implied .273, 34.5% implied .364, and 14.5% implied .182. Collectively that comes out to .291 win%. In other words, the zero-baseline level, the point at which a player has no value, is a win% of .291. This is what is commonly called the replacement level, but my preferred term is the Readily Available Talent level. And so, value over zero, or in this case Wins Over Zero (WOZ) is set so that we subtract .291 wins per game for every player.
An 11-11 pitcher is compared to a .291 pitcher given 22 decisions. And .291 x 22 is 6.4 wins and 15.6 losses. So, subtracting 11 wins by 6.4 wins is +4.6 wins, or 4.6 WOZ.
And that 8-3 pitcher? Well, .291 given 11 decisions is 3.2 wins and 7.8 losses. And 8 wins minus 3.2 wins is 4.8 wins, or 4.8 WOZ. The 7-4 pitcher has 3.8 WOZ. So, somewhere between 8-3 and 7-4, but closer to 8-3, is where you find your pitcher equivalent to 11-11.
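The whole chain, from the poll responses to the WOZ figures, is a few lines of arithmetic:

```python
# Poll shares and the zero-baselines each response implies:
# 51% said 8-3 (implies 3-8, or .273), 34.5% said 7-4 (implies .364),
# 14.5% said 9-2 (implies .182).
baseline = 0.51 * (3 / 11) + 0.345 * (4 / 11) + 0.145 * (2 / 11)
print(round(baseline, 3))  # 0.291

def woz(wins, losses, baseline=0.291):
    """Wins Over Zero: wins above a .291 pitcher given the same decisions."""
    decisions = wins + losses
    return wins - baseline * decisions

for w, l in [(11, 11), (8, 3), (7, 4)]:
    print(f"{w}-{l}: {woz(w, l):.1f} WOZ")  # 4.6, 4.8, 3.8
```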
So Ben Clemens did terrific research on something that comes up every now and then. And everyone that looks at it comes away with the same conclusion. So, it's good that Ben does this work, but after I comment on this, I'll show you something that is even more important.
The issue is: can't we include the Spray Direction with xwOBA, and not just rely on Launch Speed and Angle? The issue comes down to whether we want to explain the PLAY or the PLAYER. If you want to explain the PLAY, then naturally you need to know the spray direction, since 370 feet pop fly down the line is a HR while 370 feet pop fly straightaway is an easy out.
But do you know why we remove BABIP from a pitcher, and use only FIP? Right, because by and large we care about PLAYERS, not individual PLAYS. BABIP contains far more noise than signal, which is why in an all-or-nothing situation, you want to use none of BABIP. If you want to weight it, you'd want maybe 20% of BABIP, but that removes the cleanliness of FIP. This is why FIP exists, to provide that clean break. If someone wanted to merge FIP and BABIP, they could do so, giving full weight to FIP and say 20% weight to BABIP.
So, about that spray angle: obviously we have pull hitters and spray hitters. They must have different value right? An xwOBA metric that totally ignores the spray direction must have some bias?
Well, sorta, kinda, if you look at it myopically, and not at all if you look at it holistically.
In Ben's article, he did something very smart, which is break up players into 4 groups based on their spray tendency, from heavy pull to heavy spray. And he did it even smarter by focusing on airballs. A pulled groundball, for example, is not what we are talking about in terms of xwOBA missing out on a HR down the line.
I asked him for two pieces of information. The first is a summary chart of his last chart for all batters, not just the group he noted. And, you can see a bias here. The pull hitters, when we look at their Air balls, have a .487 wOBA, while the xwOBA was only .473. That's a 15 point shortfall. And we see a larger effect for spray hitters, who, on air balls have a .474 wOBA, while their xwOBA was .492.
So, yes, he did find something. Myopically. Remember, we focused on airballs here. What we care about however is ALL batted balls. Are pull-hitters being biased against by xwOBA because we ignore their spray pattern?
I asked Ben for a chart for ALL batted balls as well. Well, here you go (looks like this is all their plate appearances, but no matter, since the K and BB values are equal in both). That bias shrinks all the way down to 2 or 3 points of wOBA, which is 1 or 2 runs. In other words, this is the FIP/BABIP story. BABIP has a ton of noise, so that in an all-or-nothing choice, you want none of it. And if you want to use some of it, it deserves only a really small weight. The same applies here: the spray direction has far more noise than signal, and so you do not want to use it to evaluate players, unless you severely underweight that data. And that's why xwOBA doesn't need the spray direction to evaluate PLAYERS.