The standard model is less a theory than a particle toolbox that can be tailored to accommodate results after the fact. When antimatter was discovered, it simply added new columns; when family generations came along, it added new rows. When mesons were found and someone asked “Who ordered that?”, the standard model made them bosons that carry no force! When new facts arrive, the standard model accommodates them in an existing structure or builds a new wing.
It is hard to fault a model that absorbs knowledge rather than generates it. It includes gravitons that a long search hasn’t found, so was that a fail? It predicted proton decay, but decades of study have only pushed the bound on the proton’s lifetime far beyond the age of the universe, so was that a fail? It sees matter and antimatter as symmetric, so does the fact that our universe is made only of matter constitute a fail? It expected massless neutrinos until experiments gave them mass, and penta-quarks and strange quarks until a two-decade search found neither, and the list goes on.
Today it predicts that weakly interacting massive particles (WIMPs) will explain dark matter, but again a long search has found nothing. The standard model is like a hydra: when the facts cut off one “head”, it just grows another. Indeed, it is unclear what exactly it would take to falsify a model whose failures are called “unsolved problems in physics”. All this in the name of mere equations.
The standard model’s claim to fame is that it accurately calculates results to many decimal places, but in science, accuracy doesn’t define a theory’s validity. An equation that accurately interpolates between a known set of data points is not a theory that can extrapolate to new points. This is why an equation is not a theory, but generations of physicists raised on equations rather than science (Kuhn, 1970) now take equations to be theory. Yet as Georgi says:
“Students should learn the difference between physics and mathematics from the start” (Woit, 2007, p. 85).
Theories are expected to predict new situations, not just accurately calculate known ones. If a theoretical construct isn’t valid, i.e. doesn’t represent what it is supposed to, it doesn’t matter how reliable it is. The virtual particles of the standard model aren’t valid because, ultimately, they don’t represent anything that can be verified to exist at all.
When it comes to prediction, the standard model’s validity is dubious. It claims to have predicted the top and charm quarks before they were found, but after finding three generations of leptons and two of quarks, “predicting” a third quark generation is like predicting the last move in a tic-tac-toe game. It also claims to have predicted gluons, W bosons and the Higgs, but inventing magical agents based on data-fitted equations isn’t prediction. Fitting equations to data, then matching their terms to ephemeral resonances in billions of accelerator collisions, is the research version of tea-leaf reading: look hard enough and you’ll find something. The standard model illustrates Wyszkowski’s Second Law, that anything can be made to work if you fiddle with it long enough.
The standard model reflects the data we know; it doesn’t generate knowledge. Hence its answer to why a top quark is over 300,000 times heavier than an electron is “because it is”. What baffled physics fifty years ago still baffles it today, because equations can’t go beyond the data set that created them; only valid theories can. The last time such a barren and invalid model dominated thought so completely was before Newton.
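The mass ratio cited above can be checked directly from the measured values; a minimal sketch, assuming the commonly quoted figures of roughly 172.7 GeV for the top quark and 0.511 MeV for the electron:

```python
# Approximate measured particle masses in MeV/c^2 (assumed round figures).
top_mass_mev = 172_700.0   # top quark, ~172.7 GeV
electron_mass_mev = 0.511  # electron

# The ratio the standard model offers no explanation for.
ratio = top_mass_mev / electron_mass_mev
print(f"top/electron mass ratio ≈ {ratio:,.0f}")  # roughly 338,000
```

The point stands either way: the ratio is an input to the model, fitted from experiment, not an output derived from it.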