It's not that old. It was only about 150 years ago that scientists adopted the hypothesis that:
Nature obeys fixed laws, exactly, no exceptions, and the laws are the same everywhere and for all time.
Within a few decades, this went from a bold land-grab by the scientists, to a litmus test for whether you really believe in science, to an assumption that everyone made, a kind of synthetic a priori that "must" be true for science to "work". (Feynman put this particular bogeyman to bed in his typically succinct and quotable style*.)
I call it the Zeroth Law of Science, but once it is stated explicitly, it becomes obvious that it is a statement about the way the world works, testable, as a good scientific hypothesis should be. We can ask, "Is it true?", and we can design experiments to try to falsify it. (Yes, "falsification" is fundamental to the epistemology of experimental science; you can never prove a hypothesis, but you can try your darndest to prove it wrong, and if you fail repeatedly, the hypothesis starts to look pretty good, and we call it a "theory".)
Well, the Zeroth Law only lasted a few decades before it was blatantly and shockingly falsified by quantum mechanics. The quantum world does not obey fixed laws, but behaves unpredictably. Place a piece of uranium next to a Geiger counter, and the timing of the clicks (each of which tells us that, somewhere inside, an atom has decayed, one step along uranium's long chain toward lead) appears not fixed, but completely random.
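"Completely random" here has a precise statistical form: decay clicks are conventionally modeled as a Poisson process, in which the waiting time between clicks is exponentially distributed and memoryless. A minimal Python sketch (the click rate is an invented illustrative value, not a property of any real sample) shows both features:

```python
import random

random.seed(1)  # reproducible demo

# Hypothetical source clicking at an average rate of 2 per second,
# modeled as a Poisson process -- the standard model for decay timing.
rate = 2.0          # mean clicks per second (illustrative value)
n_clicks = 100_000

# In a Poisson process, gaps between clicks are exponentially distributed.
intervals = [random.expovariate(rate) for _ in range(n_clicks)]

mean_gap = sum(intervals) / n_clicks
print(f"mean gap between clicks: {mean_gap:.3f} s (expected {1/rate:.3f} s)")

# Memorylessness: among gaps that have already lasted 0.5 s, the *extra*
# wait beyond 0.5 s is distributed like a fresh gap -- no click is ever
# "due". This is what makes the timing unpredictable in principle.
long_gaps = [g - 0.5 for g in intervals if g > 0.5]
print(f"mean extra wait after 0.5 s: {sum(long_gaps)/len(long_gaps):.3f} s")
```

Both printed means come out near 1/rate, which is exactly the signature of a process with no hidden clock inside.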
So the Zeroth Law was amended by the quantum gurus, Planck, Bohr, Schrödinger, Heisenberg, and Dirac:
The laws of physics at the most fundamental level are half completely fixed and determined, and half pure randomness. The fixed part is the same everywhere and for all time. The random part passes every mathematical test for randomness, and is in principle unpredictable, unrelated to anything, anywhere in the universe, at any time.
Einstein protested that the universe couldn't be this ornery. "God doesn't play dice." Einstein wanted to restore the original Zeroth Law from the 19th Century. The common wisdom in science was that Einstein was wrong, and that remains the standard paradigm to this day.
If we dared to challenge the Zeroth Law with empirical tests, how would we do it? The Law as it now stands has two parts, and we might test each of them separately. For the first part, we would work with macroscopic systems where the quantum randomness is predicted to average itself out of the picture. We would arrange to repeat a simple experiment and see if we can fully account for the quantitative differences in results from one experiment to the next. For the second part, we would do the opposite--measure microscopic events at the level of the single quantum, trying to create patterns in experimental results that are predicted to be purely random.
Part I -- Are the fixed laws really fixed?
In biology, the fixed-law picture is very far from being true in practice. I worked in a worm laboratory last year, participating in statistical analysis of thousands of protein abundances. The first question I asked was about repeatability. The experiment was done twice as a 'biological replicate': one week apart, same lab, same person performing the experiment, same equipment, averaging over hundreds of worms, all of them genetically identical. But the results were far from identical. The correlation between Week 1 and Week 2 was only R=0.4; the results were more different than they were alike. Colleagues with more experience told me this is just the way it is with data from a bio lab. It is routine procedure to perform the experiment several times and then average the results, even though the individual runs differ widely.
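As a toy illustration of that replicate comparison (the data below are simulated, not the lab's actual measurements), here is how a week-to-week correlation is computed, and how a modest amount of independent noise on top of a shared biological signal is enough to drag R down to the neighborhood of 0.4:

```python
import math
import random

random.seed(0)

# Invented stand-in for the replicate experiment: two "weeks" of
# protein-abundance measurements for the same set of proteins.
# Each week sees the same underlying signal plus its own noise.
n_proteins = 2000
signal = [random.gauss(0, 1) for _ in range(n_proteins)]
week1 = [s + random.gauss(0, 1.1) for s in signal]
week2 = [s + random.gauss(0, 1.1) for s in signal]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(week1, week2)
print(f"replicate correlation R = {r:.2f}")  # near the 0.4 seen in the lab
```

When the noise in each replicate is comparable in size to the shared signal, R falls to signal-variance divided by total variance, which is why replicate correlations this low are routine rather than scandalous.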
This is commonly explained by the fact that no two living things are the same, so it's not really the same experimental condition, not at the level of atoms and molecules. Biology is a derived science; a better test would be to repeat a physics experiment.

Everyone who does experiments in any science knows that the equipment is touchy, and it commonly takes several tries to "get it right". It is routine to throw away many experimental trials for each one that we keep. This is explained as human error, and undoubtedly a great deal of it is human error, in too many diverse forms to catalogue. But if there were some real issue with repeatability, it would be camouflaged by the human error all around, and we might never know.

Measurement of fundamental constants is an area where physicists are motivated to repeat experiments in labs around the world, and to identify and quantify every source of experimental error. I believe it is routine for more discrepancies to appear than can be accounted for with the catalogued uncertainties. Below is an example where things work pretty well. The bars represent 7 independent measurements of a fundamental constant of nature called the Fine Structure Constant, α ~ 1/137. The error bars are constructed so that 68% of the time the right answer should lie within one error bar, and 95% of the time within a span of two error bars. These graphs don't defy that prediction.
(The illustration is from Parker et al, 2018)
Here, in contrast, are measures of the gravitational constant.
In the second diagram, the discrepancies are clearly not within expected limits. There are 14 measurements, and we would expect 10 of them to include the accepted value within their error bars, but only 2 actually do. We would expect 13 of 14 to include the accepted value within two error bar lengths, but only 8 of 14 do. Clearly, there are sources of error here that are unaccounted for, but in the culture of today's science, no one would adduce this as evidence against the Zeroth Law.
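The bookkeeping in the paragraph above can be sketched directly. The measurement values and uncertainties below are invented for illustration (only the accepted value of G is real); the point is the coverage count, not the data:

```python
# Count how many measurements' error bars cover an accepted value.
# With honest 1-sigma error bars, ~68% should cover it within one
# bar and ~95% within two; a shortfall signals unaccounted error.

accepted = 6.674e-11  # accepted value of G, m^3 kg^-1 s^-2

# (value, one_sigma) pairs -- hypothetical, mimicking over-dispersed results
measurements = [
    (6.6726e-11, 3e-15), (6.6740e-11, 5e-15), (6.6754e-11, 4e-15),
    (6.6719e-11, 6e-15), (6.6743e-11, 2e-15), (6.6735e-11, 4e-15),
]

def covered(value, sigma, truth, k=1):
    """True if truth lies within k error bars of the measurement."""
    return abs(value - truth) <= k * sigma

within_1 = sum(covered(v, s, accepted, 1) for v, s in measurements)
within_2 = sum(covered(v, s, accepted, 2) for v, s in measurements)
print(f"{within_1}/{len(measurements)} within one error bar (expect ~68%), "
      f"{within_2}/{len(measurements)} within two (expect ~95%)")
```

With these invented numbers the coverage falls far short of 68%/95%, which is the same shape of discrepancy the G measurements show: error bars that are individually plausible but collectively too small.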