The standard model is essentially a theoretical device that invents virtual particles based on fields to explain results after the fact, like a particle toolbox. For example, when anti-matter was discovered, it just added a new column, and when family generations came along, it added new rows. When the muon was found (at first mistaken for a meson), it was so unexpected that the Nobel laureate Isidor Rabi quipped “Who ordered that?”, but the standard model simply filed it as a heavier electron and carried on. When new facts arrive, the standard model fits them into its existing structure, or adds a new room.
Science is based on falsifiability, but how does one falsify a model that absorbs rather than generates knowledge? It proposed gravitons that a long search hasn’t found, so was that a fail? It predicted proton decay, but decades of study have pushed the proton’s lower lifetime bound far beyond the age of the universe, so was that a fail? It expected matter and anti-matter to be created in equal amounts, so is a universe made almost entirely of matter a fail? It also expected massless neutrinos, until experiments found they had mass, and penta-quarks and strange matter, which decades of searches failed to find, and the list goes on. It predicted that weakly interacting massive particles (WIMPs) would explain dark matter, but again a long search found nothing. The standard model is like a hydra: when the facts cut off one head, it just grows another. Indeed, it is unclear what it would take to falsify a model whose failures are called “unsolved problems in physics”.
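For scale, the proton-decay point can be made quantitative. Taking the Super-Kamiokande bound on the partial lifetime for p → e⁺π⁰, which is of order 10³⁴ years, against a universe roughly 1.4 × 10¹⁰ years old (both are rounded order-of-magnitude figures, not precise values):

```latex
\frac{\tau_p}{t_{\text{universe}}} \gtrsim
  \frac{10^{34}\ \text{yr}}{1.4\times 10^{10}\ \text{yr}}
  \sim 10^{24}
```

That is, the experimental limit now exceeds the age of the universe by some twenty-four orders of magnitude.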
The standard model’s claim to fame is that its equations can calculate results to many decimal places, but in science, accuracy isn’t validity. That an equation can accurately interpolate between known data points isn’t the same as a theory that extrapolates to new ones. Equations are judged by accuracy, but theories are judged by their ability to predict. Today’s physicists, fed on equations rather than science (Kuhn, 1970), think the two are the same, so as Georgi says:
“Students should learn the difference between physics and mathematics from the start” (Woit, 2007, p. 85).
The difference is that theories aren’t equations, because they are judged by validity, not accuracy. A theory is valid if it represents what is true, and if it isn’t valid, it doesn’t matter how reliable it is. Hence, if the standard model isn’t valid because it can’t predict, it doesn’t matter how accurate it is.
When it comes to prediction, the standard model’s success is dubious. It claims to have predicted the top and charm quarks before they were found, but predicting three quark generations after finding three generations of leptons is like predicting the last move in a tic-tac-toe game: inevitable. It also claims to have predicted gluons, W bosons, and the Higgs, but predicting invented agents isn’t prediction. Fitting equations to data, then matching their terms to ephemeral resonances in billions of accelerator collisions, is the research version of tea-leaf reading: look hard enough and you’ll find something. It illustrates Wyszkowski’s Second Law, that anything can be made to work if you fiddle with it long enough.
The standard model’s answer to why a top quark is about 340,000 times heavier than an electron is “because it is”. What baffled physics fifty years ago still baffles it today, because equations can’t go beyond the data set that created them; only theories can. The last time such a barren model dominated thought so completely was before Newton.
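The mass ratio quoted above is a back-of-envelope calculation from the measured values (PDG central values: m_t ≈ 172.7 GeV, m_e ≈ 0.511 MeV):

```latex
\frac{m_t}{m_e} \approx
  \frac{172.7\ \text{GeV}}{0.511\ \text{MeV}} =
  \frac{1.727\times 10^{11}\ \text{eV}}{5.11\times 10^{5}\ \text{eV}}
  \approx 3.4\times 10^{5}
```

The standard model takes both masses as free parameters fitted to experiment; nothing in it derives the ratio.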