The standard model is a particle toolbox that generates new particles to explain results after the fact. For example, when anti-matter was discovered, it just added new columns, and when family generations came along, it added new rows. When the muon turned up, Rabi famously asked “Who ordered that?”, and the model simply filed it away in a new lepton row. When new facts arrive, the standard model accommodates them in its existing structure or builds a new wing.
It is hard to fault a model that absorbs rather than generates knowledge. It includes gravitons that a long search hasn’t found, so was that a fail? It predicted proton decay, but decades of study have pushed the proton’s lower-bound lifetime far beyond the age of the universe, so was that a fail? It sees matter and anti-matter as symmetric, so is the fact that our universe is almost entirely matter a fail? It expected massless neutrinos until experiments gave them mass, and it predicted penta-quarks and strange matter until a two-decade search found neither, and the list goes on. Today it “predicts” that weakly interacting massive particles (WIMPs) will explain dark matter, but again a long search has found nothing. The standard model is like a hydra: when the facts cut off one “head”, it just grows another. Indeed, it is unclear what exactly it would take to falsify a model whose failures are called “unsolved problems in physics”.
The standard model’s claim to fame is that its associated equations calculate results to many decimal places, but in science, accuracy isn’t validity. An equation that accurately interpolates between a known set of data points isn’t the same as a theory that extrapolates to new points. Equations are judged on accuracy; theories are judged on their ability to predict. An equation isn’t a theory, but today generations of physicists, fed on equations rather than science (Kuhn, 1970), think they are the same, so as Georgi says:
“Students should learn the difference between physics and mathematics from the start” (Woit, 2007, p. 85).
Equations aren’t theories because theories should predict new things, not just accurately calculate known situations. If a theory isn’t valid, i.e. doesn’t represent what is true, it doesn’t matter how reliable it is. The virtual particles of the standard model aren’t valid because ultimately they don’t represent anything that can be verified at all. If the standard model isn’t valid, it doesn’t matter how accurate it is.
When it comes to prediction, the standard model’s success is dubious. It claims to have predicted the top and charm quarks before they were found, but to “predict” a third quark generation after finding three generations of leptons and two of quarks is like predicting the last move in a tic-tac-toe game. It also claims to have predicted gluons, W bosons and the Higgs, but inventing agents based on data-fitted equations isn’t prediction. Fitting equations to data, then matching their terms to ephemeral resonances in billions of accelerator collisions, is the research version of tea-leaf reading: look hard enough and you’ll find something. The standard model illustrates Wyszkowski’s Second Law, that anything can be made to work if you fiddle with it long enough.
The standard model describes the data we know but doesn’t create new knowledge. Its answer to why a top quark is over 300,000 times heavier than an electron is “because it is”. What baffled physics fifty years ago still baffles it today, because equations can’t go beyond the data set that created them; only theories can. The last time such a barren model dominated thought so completely was before Newton.