He talks about the way theoretical physics challenges the relentlessly positivist assumption of Popper's philosophy, in particular the "falsifiability" criterion for scientific claims: the idea that for a claim to count as scientific, it has to be formulated in such a way that it could conceivably be proven false by observation or experiment.
Pigliucci discusses the argument Popper made in his essay "Science as Falsification" (1963).
Stephen Thornton describes Popper's outlook this way ("Karl Popper," Stanford Encyclopedia of Philosophy, 2013):
As Popper represents it, the central problem in the philosophy of science is that of demarcation, i.e., of distinguishing between science and what he terms ‘non-science’, under which heading he ranks, amongst others, logic, metaphysics, psychoanalysis, and Adler's individual psychology. Popper is unusual amongst contemporary philosophers in that he accepts the validity of the Humean critique of induction, and indeed, goes beyond it in arguing that induction is never actually used in science. However, he does not concede that this entails the scepticism which is associated with Hume, and argues that the Baconian/Newtonian insistence on the primacy of ‘pure’ observation, as the initial step in the formation of theories, is completely misguided: all observation is selective and theory-laden—there are no pure or theory-free observations. In this way he destabilises the traditional view that science can be distinguished from non-science on the basis of its inductive methodology; in contradistinction to this, Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.

Popper is also known for his positivist and conservative political work, The Open Society and Its Enemies (1945). Among others, the Frankfurt School philosopher Herbert Marcuse analyzed the problems of that work in his Studies in Critical Philosophy (1972), "Karl Popper and the Problem of Historical Laws," which originally appeared in Partisan Review 26:1 (1959).
Popper accordingly repudiates induction and rejects the view that it is the characteristic method of scientific investigation and inference, substituting falsifiability in its place. It is easy, he argues, to obtain evidence in favour of virtually any theory, and he consequently holds that such ‘corroboration’, as he terms it, should count scientifically only if it is the positive result of a genuinely ‘risky’ prediction, which might conceivably have been false. For Popper, a theory is scientific only if it is refutable by a conceivable event. Every genuine test of a scientific theory, then, is logically an attempt to refute or to falsify it, and one genuine counter-instance falsifies the whole theory. In a critical sense, Popper's theory of demarcation is based upon his perception of the logical asymmetry which holds between verification and falsification: it is logically impossible to conclusively verify a universal proposition by reference to experience (as Hume saw clearly), but a single counter-instance conclusively falsifies the corresponding universal law. In a word, an exception, far from ‘proving’ a rule, conclusively refutes it.
But Popper's falsifiability criterion is a much more solid concept and is widely cited in the philosophy of science, as well as in popular treatments of pseudoscience like Skeptical Inquirer, to which Massimo Pigliucci frequently contributes. For example, Keay Davidson in "The Universe and Carl Sagan," SI 23:6 (1999):
How does one distinguish a bona fide scientific hypothesis from a pseudoscientific one? The classic response is that of philosopher Karl Popper: that no hypothesis can be considered "scientific" (which is not necessarily the same thing as saying it is "true") unless it generates predictions that are conceivably disprovable ("falsifiable," in Popper's term).

Mary Frances McKenna observes, "Academic work that does not utilize the scientific method, as Karl Popper noted, is not 'insignificant' or 'meaningless,' but it is not based on empirical evidence even if it is the result of 'observation.'" ("The Role of the Judeo-Christian Tradition in the Development and Continuing Evolution of the Western Synthesis," Telos 168, 2014)
It's that latter issue that Pigliucci addresses in the context of string theory in physics. When established theories come into conflict, as has been the case for decades with quantum mechanics and Einstein's relativity, scientists elaborate alternative theories that might resolve the discrepancies, and they seek out observations or set up experiments to validate or invalidate those theories.
But some kinds of science are more amenable to controlled experiments than others. Investigating a particular reaction among a small number of chemicals can be done with controlled experiments in which the quantities involved and the conditions under which they are combined are precisely defined. Different sets of scientists can then replicate the experiment and see whether they get the same results.
The more variables involved, though, the more difficult it is to interpret the results of such testing. The protocols for testing new medicines to be used on people involve testing and confirmation based on the general principle of Popper's falsifiability criterion. But treating diseases in the human body involves a huge number of variables. There are well-established methods for determining levels of probability of a new medicine's effectiveness. But even the best medicines may be ineffective for some patients. And even the most effective ones can involve major side effects. And the exact reasons a medicine is effective may also not be conclusively established.
For sciences like paleontology or astronomy, falsifiable experiments are more problematic. Paleontologists can make detailed observations and comparisons of physical evidence on the development of various species. But controlled trials on natural selection are more difficult. Setting up multiple parallel trials of how a species develops over millions of years is obviously not feasible, much less setting up such experiments on the development of galaxies. These sciences have to rely much more heavily on observation.
And this is a hazard: a too narrow and dogmatic application of the falsifiability principle can also play into the hands of pseudoscience. Creationists, for instance, have been known to cite the lack of experimental viability as a reason to reject the Darwinian theory of evolution by natural selection.
Pigliucci argues that Popper himself was not quite so dogmatically Popperian in this sense:
Popper himself changed his mind throughout his career about a number of issues related to falsification and demarcation, as any thoughtful thinker would do when exposed to criticisms and counterexamples from his colleagues. For instance, he initially rejected any role for verification in establishing scientific theories, thinking that it was far too easy to ‘verify’ a notion if one were actively looking for confirmatory evidence. Sure enough, modern psychologists have a name for this tendency, common to laypeople as well as scientists: confirmation bias.
Nonetheless, later on Popper conceded that verification – especially of very daring and novel predictions – is part of a sound scientific approach. After all, the reason Einstein became a scientific celebrity overnight after the 1919 total eclipse is precisely because astronomers had verified the predictions of his theory all over the planet and found them in satisfactory agreement with the empirical data. For Popper this did not mean that the theory of general relativity was ‘true,’ but only that it survived to fight another day. Indeed, nowadays we don’t think the theory is true, because of the above mentioned conflicts, in certain domains, with quantum mechanics. But it has withstood a very good number of high stakes challenges over the intervening century, and its most recent confirmation came just a few months ago, with the first detection of gravitational waves.
Popper also changed his mind about the potential, at the least, for a viable Marxist theory of history (and about the status of the Darwinian theory of evolution, concerning which he was initially skeptical, thinking – erroneously – that the idea was based on a tautology). He conceded that even the best scientific theories are often somewhat shielded from falsification because of their connection to ancillary hypotheses and background assumptions. When one tests Einstein’s theory using telescopes and photographic plates directed at the Sun, one is really simultaneously putting to the test the focal theory, plus the theory of optics that goes into designing the telescopes, plus the assumptions behind the mathematical calculations needed to analyse the data, plus a lot of other things that scientists simply take for granted and assume to be true in the background, while their attention is trained on the main theory. But if something goes wrong and there is a mismatch between the theory of interest and the pertinent observations, this isn’t enough to immediately rule out the theory, since a failure in one of the ancillary assumptions might be to blame instead. That is why scientific hypotheses need to be tested repeatedly and under a variety of conditions before we can be reasonably confident of the results. [my emphasis in bold]