What's interesting is that they've added a bunch of subjective inputs to this year's model to add uncertainty. It's not clear at all that they're warranted, but they do have the impact of making the race look a lot closer than it would be otherwise. 538 added 20% to the normal uncertainty on election day by itself, for example.
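Mechanically, widening the error distribution is what drags the headline number toward 50/50. A toy sketch of that effect, assuming a simple normal model of the polling margin (the margin, sigma, and the way the 20% is applied here are illustrative, not 538's actual parameters):

```python
import math

def win_prob(margin: float, sigma: float) -> float:
    """P(candidate wins) if the final margin is Normal(margin, sigma)."""
    # Phi(margin / sigma), via the error function
    return 0.5 * (1.0 + math.erf(margin / (sigma * math.sqrt(2.0))))

# Hypothetical: an 8-point polling lead with 5 points of error
base = win_prob(8.0, 5.0)
# Same lead, but with the error inflated by 20%
inflated = win_prob(8.0, 5.0 * 1.2)

print(f"base: {base:.3f}, inflated: {inflated:.3f}")
```

Nothing about the lead changed; only the assumed uncertainty did, and the win probability moves toward a coin flip.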
Nate Cohn's critiques of the model seem very fair -- especially the last point.
They've also removed (or I can't find) the view that shows what the model would say on election day if the polling margin was the same as it was today.
It feels a lot like Silver doesn't want to put out something that shows roughly 95% for Biden with three months to go (which is about what the model would have shown four years ago), and
really, really doesn't want to put out something that shows 99% for Biden if the election were held today.
Given the "2020, man" circumstances we're in, that's not entirely crazy, but there's also no chance that all of his finger-on-the-scale stuff isn't also about avoiding being "wrong". A 29% shot -- roughly 7:3 against -- hits sometimes. A 1% shot hitting discredits you (as Sam Wang found out in 2016). The changes are pretty arbitrary and (as Cohn points out above) ALL move in the direction of more uncertainty, when there are also reasons to think there may actually be less uncertainty this year.