In philosophy of science and in epistemology, instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting phenomena.
According to instrumentalists, a successful scientific theory reveals nothing, either true or false, about nature's unobservable objects, properties, or processes. Scientific theory is merely a tool by which humans predict observations in a particular domain of nature, formulating laws that state or summarize regularities, while the theories themselves do not reveal supposedly hidden aspects of nature that somehow explain those laws. Instrumentalism as a perspective was originally introduced by Pierre Duhem in 1906.
Rejecting scientific realism's ambitions to uncover metaphysical truth about nature, instrumentalism is usually categorized as an antirealism, although its mere lack of commitment to scientific theory's realism can be termed nonrealism. Instrumentalism merely bypasses debate concerning whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, or is an excitation mode of a region of a field, or is something else altogether. Instrumentalism holds that theoretical terms need only be useful to predict the phenomena, the observed outcomes.
There are multiple versions of instrumentalism.
Main article: British empiricism
Newton's theory of motion, whereby any object instantly interacts with every other object across the universe, motivated the founder of British empiricism, John Locke, to speculate that matter is capable of thought. The next leading British empiricist, George Berkeley, argued that an object's putative primary qualities as recognized by scientists, such as shape, extension, and impenetrability, are inconceivable without the putative secondary qualities of color, hardness, warmth, and so on. He also questioned how or why an object could properly be conceived to exist independently of any perception of it. Berkeley did not object to everyday talk about the reality of objects, but took issue with philosophers, who spoke as if they knew something beyond sensory impressions that ordinary folk did not.
For Berkeley, a scientific theory does not state causes or explanations, but simply identifies perceived types of objects and traces their typical regularities. Berkeley thus anticipated the basis of what Auguste Comte in the 1830s called positivism, although Comtean positivism added other principles concerning the scope, method, and uses of science that Berkeley would have disavowed. Berkeley also noted the usefulness of a scientific theory having terms that merely aid calculations without referring to anything in particular, so long as they prove useful in practice. Berkeley thus foreshadowed the insight that logical positivists—who originated in the late 1920s, but who, by the 1950s, had softened into logical empiricists—would be compelled to accept: theoretical terms in science do not always translate into observational terms.
The last great British empiricist, David Hume, posed a number of challenges to Francis Bacon's inductivism, which had been the prevailing, or at least the professed view concerning the attainment of scientific knowledge. Regarding himself as having placed his own theory of knowledge on par with Newton's theory of motion, Hume supposed that he had championed inductivism over scientific realism. Upon reading Hume's work, Immanuel Kant was "awakened from dogmatic slumber", and thus sought to neutralise any threat to science posed by Humean empiricism. Kant would develop the first stark philosophy of physics.
Main article: German idealism
To save Newton's law of universal gravitation, Immanuel Kant reasoned that the mind is the precondition of experience, serving as the bridge from the noumena, which are how the world's things exist in themselves, to the phenomena, which are humans' recognized experiences. Mind itself thus contains the structure that determines space, time, and substance: the mind's own categorization of noumena renders space Euclidean, time constant, and objects' motions deterministic, exhibiting the very determinism predicted by Newtonian physics. Kant apparently presumed that the human mind, rather than being a phenomenon that had itself evolved, had been predetermined and set forth upon the formation of humankind. In any event, the mind was also the veil of appearance that scientific methods could never lift. And yet the mind could ponder itself and discover such truths, although not on a theoretical level but only by means of ethics. Kant's metaphysics, transcendental idealism, thus secured science from doubt, in that scientific knowledge was a case of "synthetic a priori" knowledge ("universal, necessary and informative"), and yet discarded hope of scientific realism. Meanwhile, it was a watershed for idealist metaphysics, launching German idealism, most influentially Hegel's absolute idealism or objective idealism, or at least interpretations, often misinterpretations, of it.
Main article: Logical empiricism
Holding that the mind has virtually no power to know anything beyond direct sensory experience, Ernst Mach's early version of positivism (empirio-criticism) verged on idealism, and was even alleged to be a surreptitious solipsism, whereby all that exists is one's own mind. Mach's positivism also strongly asserted the ultimate unity of the empirical sciences. It asserted phenomenalism as the new basis of scientific theory: all scientific terms were to refer to either actual or potential sensations, thus eliminating hypotheses while permitting such seemingly disparate scientific theories as the physical and the psychological to share terms and forms. Phenomenalism proved insuperably difficult to implement, yet it heavily influenced a new generation of philosophers of science, who emerged in the 1920s terming themselves logical positivists and pursuing a program termed verificationism. Logical positivists aimed not to instruct or restrict scientists, but to enlighten and structure philosophical discourse: to render a scientific philosophy that would verify philosophical statements as well as scientific theories, and to align all human knowledge into a scientific worldview, freeing humankind from so many of its problems due to confused or unclear language.
The verificationists expected a strict gap between theory and observation, mirrored by the gap between a theory's theoretical terms and its observational terms. Believing a theory's posited unobservables always to correspond to observations, the verificationists viewed a scientific theory's theoretical terms, such as electron, as metaphorical or elliptical for observations, such as white streak in a cloud chamber. They believed that scientific terms lacked meanings unto themselves, but acquired meaning from the logical structure that was the entire theory, which in turn matched patterns of experience. So by translating theoretical terms into observational terms and then decoding the theory's mathematical and logical structure, one could check whether the statement indeed matched patterns of experience, and thereby verify the scientific theory as false or true. Such verification would be possible, as never before in science, since translation of theoretical terms into observational terms would make the scientific theory purely empirical, not at all metaphysical. Yet the logical positivists ran into insuperable difficulties. Moritz Schlick debated with Otto Neurath over foundationalism, the traditional view traced to Descartes as founder of modern Western philosophy, whereupon only nonfoundationalism was found tenable. Science, then, could not find a secure foundation of indubitable truth.
And since science aims to reveal not private but public truths, verificationists switched from phenomenalism to physicalism, whereby scientific theory refers to objects observable in space and, at least in principle, recognizable by physicists. Finding strict empiricism untenable, verificationism underwent a "liberalization of empiricism". Rudolf Carnap even suggested that empiricism's basis was pragmatic. Recognizing that verification, proving a theory false or true, was unattainable, they discarded that demand and focused on confirmation theory. Carnap sought simply to quantify a universal law's degree of confirmation, its probable truth, but, despite his great mathematical and logical skill, found no equations that could yield a degree of confirmation greater than zero. Carl Hempel found the paradox of confirmation. By the 1950s, the verificationists had established philosophy of science as a subdiscipline within academia's philosophy departments. By 1962, they had asked, and endeavored to answer, seemingly all the great questions about scientific theory. Their discoveries showed that the idealized scientific worldview was naively mistaken. By then, Hempel, the leader of the legendary venture, raised the white flag that signaled verificationism's demise. Suddenly striking Western society, then, was Kuhn's landmark thesis, introduced by none other than Carnap, verificationism's greatest firebrand. The instrumentalism exhibited by scientists often does not even discern unobservable from observable entities.
Main article: Historical turn
From the 1930s until Thomas Kuhn's 1962 The Structure of Scientific Revolutions, there were roughly two prevailing views of the nature of science. The popular view was scientific realism, which usually involved a belief that science was progressively unveiling a truer view, and building a better understanding, of nature. The professional view was logical empiricism, wherein a scientific theory was held to be a logical structure whose terms all ultimately refer to some form of observation, and wherein an objective process neutrally arbitrates theory choice, compelling scientists to decide which scientific theory is superior. Physicists knew better, but, busy developing the Standard Model, were so steeped in quantum field theory that their talk, largely metaphorical, perhaps even metaphysical, was unintelligible to the public, while the steep mathematics warded off philosophers of physics. By the 1980s, physicists regarded not particles but fields as the more fundamental, and no longer even hoped to discover what entities and processes might be truly fundamental to nature, perhaps not even fields. Kuhn had not claimed to develop a novel thesis, but hoped instead to synthesize more usefully recent developments in the philosophy of science.
In 1906, Duhem had introduced the problem of the underdetermination of theory by data: since any dataset can be consistent with several different explanations, the success of a prediction cannot logically confirm the truth of the theory in question without committing the deductive fallacy of affirming the consequent. In the 1930s, Ludwik Fleck had explained the role of perspectivism (logology) in science, whereby scientists are trained in thought collectives to adopt particular thought styles that set expectations for a proper scientific question, scientific experiment, and scientific data. Scientists manipulate experimental conditions to obtain results cohering with their own expectations, with what they presuppose is realistic, and so might be tempted to invoke the experimenter's regress in order to reject unexpected results, redoing the experiments under supposedly better and more conducive conditions. By the 1960s, physicists recognized two differing roles of physical theory: formalism and interpretation. Formalism involved mathematical equations and axioms that, upon input of physical data, yielded certain predictions; interpretation sought to explain why the formalism succeeded.
Widely read, Kuhn's 1962 thesis seemed to shatter logical empiricism, whose paradigmatic science was physics and which championed instrumentalism. Yet scientific realists, who were far more tenacious, responded by attacking Kuhn's thesis, which was thereafter perennially depicted as either illuminating or infamous. Kuhn later indicated that his thesis had been so widely misunderstood that he himself was not a Kuhnian. With logical empiricism's demise, Karl Popper's falsificationism was in the ascendancy, and Popper was knighted in 1965. Yet in 1961, the molecular biology research program had made its first major empirical breakthrough in cracking the genetic code. By the 1970s, molecular genetics' research tools could also be used for genetic engineering. In 1975, philosopher of science Hilary Putnam famously resurrected scientific realism with his no miracles argument, whereby the best scientific theories' predictive successes would appear miraculous if those theories were not at least approximately true about reality as it exists in and of itself, beyond human perception. Antirealist arguments were formulated in response.
Main article: Scientific realism
Rejecting all variants of positivism for their focus on sensations rather than on realism, Karl Popper asserted his commitment to scientific realism, tempered merely by the necessary uncertainty of his own falsificationism. Popper alleged that instrumentalism reduces basic science to merely applied science. In his book The Fabric of Reality, the British physicist David Deutsch followed Popper's critique of instrumentalism and argued that a scientific theory stripped of its explanatory content would be of strictly limited utility.
Main article: Constructive empiricism
Bas van Fraassen's (1980) project of constructive empiricism confines belief to the domain of the observable, and for this reason it is described as a form of instrumentalism.
In the philosophy of mind, instrumentalism is the view that propositional attitudes such as beliefs are not actually concepts on which we can base scientific investigations of mind and brain, but that acting as if other beings have beliefs is nonetheless a successful strategy.
Instrumentalism is closely related to pragmatism, the position that practical consequences are an essential basis for determining meaning, truth, or value.
§4 "Antirealism: Foils for scientific realism", §4.1 "Empiricism", in Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy, Summer 2013 edn: "Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, 'electron' might mean 'white streak in a cloud chamber'), or with demonstrable laboratory procedures (a view called 'operationalism'). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions 'external' to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950)".