A model that invents virtual particles to explain results after they are found is just a toolbox that produces particle causes: when anti-matter was discovered, it just added a new column, and when family generations were found, it just added new rows. When the muon was discovered, it was so unexpected that Nobel laureate Isidor Rabi quipped “Who ordered that?”, but the standard model just filed it as another lepton and carried on. When new facts arrive, the standard model accommodates them in its existing structure or adds a new room.
Scientific theories should be falsifiable, but how can one falsify a model that absorbs rather than adds knowledge? It proposed gravitons that a long search hasn’t found, so was that a fail? It predicted proton decay, but decades of study have only pushed the lower bound on the proton’s lifetime to around 10^34 years, vastly longer than the age of the universe (see the comparison below), so was that a fail? It expected matter and anti-matter to exist in equal amounts, so is our matter-dominated universe a fail? It expected massless neutrinos, until experiments found they had mass, and penta-quarks and strange matter, until two-decade searches found neither, and the list goes on. It expected weakly interacting massive particles (WIMPs) to explain dark matter, but again a long search found nothing. The standard model is like a hydra: when the facts cut off one head, it just grows another. What will it take to falsify a model whose failures are called unsolved problems in physics?
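To put that proton result in numbers, a rough comparison of the experimental lower bound on the proton lifetime (of order 10^34 years, the Super-Kamiokande figure, quoted here for illustration) with the age of the universe runs:

\[
\frac{\tau_p}{t_{\text{universe}}} \gtrsim \frac{10^{34}\ \text{yr}}{1.4 \times 10^{10}\ \text{yr}} \approx 10^{24},
\]

so if protons decay at all, fewer than about one in 10^24 has done so since the big bang.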
The standard model’s claim to fame is that it can calculate results to many decimal places, but in science, accuracy isn’t validity. An equation that accurately interpolates between known data points isn’t a theory that extrapolates to new ones. Theories are judged by their predictions, not their accuracy, yet today’s physicists, fed on equations rather than science (Kuhn, 1970), confuse the two, for as Georgi said:
“Students should learn the difference between physics and mathematics from the start” (Woit, 2007, p. 85).
The difference is that physics needs valid theories, not accurate equations. A theory is valid if it is true, so if a model can’t predict, it doesn’t matter how accurate it is.
The standard model claims to have predicted the top and charm quarks before they were found, but predicting quark generations after finding lepton generations is like predicting the last move in a game of tic-tac-toe: inevitable. After all, it didn’t predict family generations in the first place. It also claims to have predicted gluons, the weak bosons, and the Higgs, but a theory that predicts what it invents isn’t predicting. Fitting equations to data and then matching their terms to ephemeral flashes in accelerator events is like reading tea-leaves: if you look hard enough, you will find something. According to Wyszkowski’s Second Law, anything can be made to work if you fiddle with it long enough.
For example, why is a top quark over 300,000 times heavier than an electron (a ratio checked below)? The standard model’s answer is that it just is, so no wonder what baffled physics fifty years ago still baffles it today. Equations summarize the data that made them, but theories should do more, so where are they? Currently, only the standard model exists, and it isn’t producing any new knowledge. The last time such a barren model dominated thought so completely was before Newton.
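For the record, that mass ratio follows directly from the measured values (about 172.7 GeV for the top quark and 0.511 MeV for the electron, quoted here for illustration):

\[
\frac{m_t}{m_e} \approx \frac{172.7\ \text{GeV}}{0.511\ \text{MeV}} \approx 3.4 \times 10^{5}.
\]

The model reproduces this number only by setting the top quark’s coupling by hand; nothing in it says why the ratio is 340,000 rather than anything else.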