In reply to Bob Kemp:
My take is the same as it was for modelling in the early days: it's not predictive in any trustworthy capacity, but it gives a good way to interpret and understand data.
There are so many unknowns in our situation that any model is full of "free parameters" (in modelling parlance). Free parameters are of two sorts:
- Explicit ones - values that aren't known but are used by the model (e.g. what fraction of people have cross-immunity pre-dating the emergence of the virus).
- Implicit ones - human decisions of what mathematical terms to include in the model and what to exclude.
In producing a model, the free parameters are adjusted until the model fits the data. With too many free parameters, a "wrong" model can be adjusted until it fits the existing data perfectly, yet its predictions then fail to match reality as it pans out. This was clearly the case with the Ferguson model of the early days, where the social mixing matrix alone represented more free parameters than the number of days of data the model was being fitted to. Friston's models still have critical free parameters in them, and whilst there is more data to fit to now, I'm not convinced the fitting explores the state space of the model well enough (state space meaning, loosely speaking, all the different sorts of things that could be going on) for it to be usefully predictive.
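The free-parameters-versus-data-points point can be seen in a toy fit, nothing to do with the actual epidemic models: fit the same noisy linear data with a straight line (few free parameters) and with a degree-7 polynomial (as many free parameters as data points). The wiggly model matches the existing data essentially perfectly, but its predictions beyond the data are far worse. The data here is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "truth": a linear process observed with noise (entirely made up).
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, x_train.size)

# Honest model: a straight line, two free parameters.
line = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

# Over-parameterised model: degree 7, i.e. eight free parameters for
# eight data points -- it can pass through every training point.
wiggle = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

# On the existing data, the over-parameterised model fits far better...
train_err_line = np.max(np.abs(line(x_train) - y_train))
train_err_wiggle = np.max(np.abs(wiggle(x_train) - y_train))

# ...but extrapolating beyond the data, it is far worse than the line.
x_new = 1.5
pred_err_line = abs(line(x_new) - 2.0 * x_new)
pred_err_wiggle = abs(wiggle(x_new) - 2.0 * x_new)

print(f"train error:   line={train_err_line:.3f}  wiggle={train_err_wiggle:.3f}")
print(f"predict error: line={pred_err_line:.3f}  wiggle={pred_err_wiggle:.3f}")
```

The same headline number ("fits the data") can therefore hide a model with no predictive value at all; what matters is how many free parameters were available relative to the data they were tuned on.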
His forecast for deaths from a second wave seems optimistic, depending on what happens with policy and public compliance.
Ultimately, I maintain that if we knew enough to make a trustworthy model prediction, we would know enough to know exactly which risk control measures would stop the virus dead in its tracks with surgical precision rather than broad lockdown, and we wouldn't need the model - the same unknowns apply to both.
Good old-fashioned best practice in infection control, drawn from a century of epidemiology, is what we need now.