Recent studies suggest that the methods used to design, maintain and compare realizations have a direct bearing on the practical application of concepts of quantity, unit and scale, no less than the definitions of those concepts themselves (Riordan; Tal). The relationship between the definition and realizations of a unit becomes especially complex when the definition is stated in theoretical terms.
Several of the base units of the International System (SI) — including the meter, kilogram, ampere, kelvin and mole — are no longer defined by reference to any specific kind of physical system, but by fixing the numerical value of a fundamental physical constant. The kilogram, for example, was redefined in 2019 as the unit of mass such that the numerical value of the Planck constant is exactly 6.62607015 × 10⁻³⁴ when expressed in the unit J·s. Realizing the kilogram under this definition is a highly theory-laden task. The study of the practical realization of such units has shed new light on the evolving relationships between measurement and theory (Tal; de Courtenay et al.; Wolff).
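The arithmetic implicit in the new definition can be illustrated with a short calculation. The sketch below is illustrative only — it is not a realization procedure, which in practice involves devices such as Kibble balances — and simply computes the frequency of radiation whose photon energy, via E = hν and E = mc², corresponds to a mass of one kilogram:

```python
# Toy arithmetic linking the fixed Planck constant to the kilogram.
# Illustrative sketch only; actual realizations of the kilogram use,
# e.g., Kibble balances or silicon-sphere counting, not this formula.

h = 6.62607015e-34   # Planck constant, J*s (exact by definition since 2019)
c = 299_792_458      # speed of light in vacuum, m/s (exact by definition)

m = 1.0              # one kilogram
nu = m * c**2 / h    # frequency whose photon energy equals m*c^2, in Hz

print(f"Equivalent photon frequency: {nu:.4e} Hz")
```

Because both h and c are now exact by stipulation, the frequency above is itself exact, which is precisely what makes the definition theoretical rather than artifact-based.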
As already discussed above (Sections 7 and 8), measurement and theory are interdependent in both historical and conceptual respects. On the historical side, the development of theory and measurement proceeds through iterative and mutual refinements. On the conceptual side, the specification of measurement procedures shapes the empirical content of theoretical concepts, while theory provides a systematic interpretation for the indications of measuring instruments.
This interdependence of measurement and theory may seem like a threat to the evidential role that measurement is supposed to play in the scientific enterprise. After all, measurement outcomes are thought to be able to test theoretical hypotheses, and this seems to require some degree of independence of measurement from theory.
This threat is especially clear when the theoretical hypothesis being tested is already presupposed as part of the model of the measuring instrument. Consider an example discussed by Franklin et al.:
There would seem to be, at first glance, a vicious circularity if one were to use a mercury thermometer to measure the temperature of objects as part of an experiment to test whether or not objects expand as their temperature increases. Nonetheless, Franklin et al. argue that the circularity is not vicious: the mercury thermometer could be calibrated against another thermometer whose principle of operation does not presuppose the law of thermal expansion, such as a constant-volume gas thermometer, thereby establishing the reliability of the mercury thermometer on independent grounds.
To put the point more generally, in the context of local hypothesis-testing the threat of circularity can usually be avoided by appealing to other kinds of instruments and other parts of theory.
A different sort of worry about the evidential function of measurement arises on the global scale, when the testing of entire theories is concerned. As Thomas Kuhn argues, scientific theories are usually accepted long before quantitative methods for testing them become available. The reliability of newly introduced measurement methods is typically tested against the predictions of the theory rather than the other way around. Hence, Kuhn argues, the function of measurement in the physical sciences is not to test the theory but to apply it with increasing scope and precision, and eventually to allow persistent anomalies to surface that would precipitate the next crisis and scientific revolution.
Note that Kuhn is not claiming that measurement has no evidential role to play in science. For earlier philosophers who insisted on a strict distinction between theoretical and observational language, the theory-ladenness of measurement was correctly perceived as a threat to the possibility of a clear demarcation between the two languages.
Contemporary discussions, by contrast, no longer present theory-ladenness as an epistemological threat but take for granted that some level of theory-ladenness is a prerequisite for measurements to have any evidential power.
Without some minimal substantive assumptions about the quantity being measured, such as its amenability to manipulation and its relations to other quantities, it would be impossible to interpret the indications of measuring instruments and hence impossible to ascertain the evidential relevance of those indications. This point was already made by Pierre Duhem (see also Carrier). Moreover, contemporary authors emphasize that theoretical assumptions play crucial roles in correcting for measurement errors and evaluating measurement uncertainties.
Indeed, physical measurement procedures become more accurate when the model underlying them is de-idealized, a process which involves increasing the theoretical richness of the model (Tal). The entanglement of measurement with theory is especially clear when one attempts to account for the increasing use of computational methods for performing tasks that were traditionally accomplished by measuring instruments.
As Margaret Morrison and Wendy Parker argue, there are cases where reliable quantitative information is gathered about a target system with the aid of a computer simulation, but in a manner that satisfies some of the central desiderata for measurement, such as being empirically grounded and backward-looking (see also Lusk). Such information does not rely on signals transmitted from the particular object of interest to the instrument, but on the use of theoretical and statistical models to process empirical data about related objects.
For example, data assimilation methods are customarily used to estimate past atmospheric temperatures in regions where thermometer readings are not available.
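The core of such data assimilation methods can be sketched, under strong simplifying assumptions (a single scalar temperature, Gaussian errors, and invented numbers), as an inverse-variance-weighted blend of a model forecast with a nearby observation:

```python
# Minimal sketch of a data-assimilation update (scalar Kalman-style analysis).
# All numbers are invented for illustration; real systems blend full model
# fields with many heterogeneous observations.

def assimilate(forecast, var_f, obs, var_o):
    """Combine a model forecast and an observation, weighting each by
    the inverse of its error variance (the standard 'analysis' step)."""
    gain = var_f / (var_f + var_o)            # Kalman gain
    analysis = forecast + gain * (obs - forecast)
    var_a = (1 - gain) * var_f                # analysis error variance
    return analysis, var_a

# Model says 14.0 C (uncertain); a nearby station measured 15.2 C (more precise).
temp, var = assimilate(forecast=14.0, var_f=4.0, obs=15.2, var_o=1.0)
print(f"analysis: {temp:.2f} C, variance: {var:.2f}")
```

The analysis lands between model and observation, closer to whichever has the smaller error variance — which is why the resulting estimate counts as empirically grounded despite not being a direct reading.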
These estimations are then used in various ways, including as data for evaluating forward-looking climate models.

Two key aspects of the reliability of measurement outcomes are accuracy and precision. Consider a series of repeated weight measurements performed on a particular object with an equal-arms balance. On the error-based way of carving the distinction, the accuracy of each measurement is the closeness of its outcome to the true weight of the object, while the precision of the series is the closeness of the indications to one another (JCGM). Though intuitive, this error-based way of carving the distinction raises an epistemological difficulty.
It is commonly thought that the exact true values of most quantities of interest to science are unknowable, at least when those quantities are measured on continuous scales.
If this assumption is granted, the accuracy with which such quantities are measured cannot be known with exactitude, but only estimated by comparing inaccurate measurements to each other. And yet it is unclear why convergence among inaccurate measurements should be taken as an indication of truth. After all, the measurements could be plagued by a common bias that prevents their individual inaccuracies from cancelling each other out when averaged.
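The worry can be made concrete with a small simulation (all values invented for illustration): averaging many noisy readings shrinks their spread, but a bias shared by every reading survives the averaging untouched.

```python
# Sketch: precision improves with averaging, but a common bias does not cancel.
# The true value, bias, and noise level are invented for illustration.
import random, statistics

random.seed(0)
TRUE_VALUE = 10.0
SHARED_BIAS = 0.5          # e.g., a miscalibrated balance affecting every trial

readings = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, 0.2)
            for _ in range(10_000)]

mean = statistics.fmean(readings)
spread = statistics.stdev(readings)

print(f"mean of readings: {mean:.3f}")   # converges near 10.5, not 10.0
print(f"spread: {spread:.3f}")           # small: the readings agree with each other
```

The readings are highly convergent, yet their average sits a full half-unit from the true value — convergence alone cannot certify accuracy.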
In the absence of cognitive access to true values, how is the evaluation of measurement accuracy possible? Answering this question requires distinguishing different senses of the term "accuracy"; at least five have been identified: metaphysical, epistemic, operational, comparative and pragmatic (Tal). Under the uncertainty-based conception, accuracy is not defined as closeness to an unknowable true value. Instead, the accuracy of a measurement outcome is taken to be the closeness of agreement among values reasonably attributed to a quantity given available empirical data and background knowledge.
Thus construed, measurement accuracy can be evaluated by establishing robustness among the consequences of models representing different measurement processes (Basso; Tal; Bokulich; Staley). Under the uncertainty-based conception, imprecision is a special type of inaccuracy.
The imprecision of these measurements is the component of inaccuracy arising from uncontrolled variations in the indications of the balance over repeated trials. Other sources of inaccuracy besides imprecision include imperfect corrections to systematic errors, inaccurately known physical constants, and vague measurand definitions, among others (see Section 7).
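Under the uncertainty-based approach, such components are commonly combined in quadrature, in the spirit of the GUM; the sketch below uses invented component values.

```python
# Sketch of combining uncertainty components in quadrature
# (root-sum-of-squares), following the general GUM approach.
# The component values below are invented for illustration.
import math

components = {
    "imprecision (statistical, from repeated indications)": 0.012,
    "residual of systematic-error correction":              0.008,
    "calibration of the reference standard":                0.005,
}

combined = math.sqrt(sum(u**2 for u in components.values()))
expanded = 2 * combined   # coverage factor k = 2 (roughly 95% coverage)

print(f"combined standard uncertainty: {combined:.4f} g")
print(f"expanded uncertainty (k=2):    {expanded:.4f} g")
```

Note that the largest component dominates the quadrature sum, which is why reducing imprecision alone yields diminishing returns once systematic components take over.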
Paul Teller raises a different objection to the error-based conception of measurement accuracy, which presupposes that the quantities being measured have definite true values. Teller argues that this presupposition is false insofar as it concerns the quantities habitually measured in physics, because any specification of definite values or value ranges for such quantities involves idealization and hence cannot refer to anything in reality.
Removing these idealizations completely would require adding an infinite amount of detail to each specification. As Teller argues, measurement accuracy should itself be understood as a useful idealization, namely as a concept that allows scientists to assess coherence and consistency among measurement outcomes as if the linguistic expressions of these outcomes latched onto definite features of the world.
The author is also indebted to Joel Michell and Oliver Schliemann for useful bibliographical advice, and to John Wiley and Sons Publishers for permission to reproduce an excerpt from Tal.

Contents:
- Overview
- Quantity and Magnitude: A Brief History
- Operationalism and Conventionalism
- Realist Accounts of Measurement
- Information-Theoretic Accounts of Measurement
- Model-Based Accounts of Measurement
- The Epistemology of Measurement

Overview

Modern philosophical discussions about measurement—spanning from the late nineteenth century to the present day—may be divided into several strands of scholarship. The following is a very rough overview of these perspectives: Mathematical theories of measurement view measurement as the mapping of qualitative empirical relations to relations among numbers or other mathematical entities.
Information-theoretic accounts view measurement as the gathering and interpretation of information about a system.

Quantity and Magnitude: A Brief History

Although the philosophy of measurement formed as a distinct area of inquiry only during the second half of the nineteenth century, fundamental concepts of measurement such as magnitude and quantity have been discussed since antiquity.
Bertrand Russell similarly stated that measurement is any method by which a unique and reciprocal correspondence is established between all or some of the magnitudes of a kind and all or some of the numbers, integral, rational or real.

Operationalism and Conventionalism

Above we saw that mathematical theories of measurement are primarily concerned with the mathematical properties of measurement scales and the conditions of their application. The strongest expression of operationalism appears in the early work of Percy Bridgman, who argued that "we mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations."
Realist Accounts of Measurement

Realists about measurement maintain that measurement is best understood as the empirical estimation of an objective property or relation.
Information-Theoretic Accounts of Measurement

Information-theoretic accounts of measurement are based on an analogy between measuring systems and communication systems.
Model-Based Accounts of Measurement

Since the early 2000s a new wave of philosophical scholarship has emerged that emphasizes the relationships between measurement and theoretical and statistical modeling (Morgan; Boumans; Mari; Mari and Giordani; Tal; Parker; Miyake). On this view, an instrument's indications may be represented by numbers, but such numbers describe states of the instrument and should not be confused with measurement outcomes, which concern states of the object being measured.
As Luca Mari puts it, any measurement result reports information that is meaningful only in the context of a metrological model, such a model being required to include a specification for all the entities that explicitly or implicitly appear in the expression of the measurement result.
How is the international cooperation required to be forged and sustained? As Evelyn Fox Keller and I argue in The Seasons Alter, all these questions need to be posed, distinguished, and answered if the human population is to extricate itself from the mess some of its members have made (often unwittingly, though today in full consciousness).
It would surely be easier to tackle them, though, if we stopped bickering about the causes and effects of climate change—the science that has been settled by consensus.
It might also deliver, as a bonus, happily vaccinated children, shoppers who do not automatically flinch at the thought of food containing GMOs, and citizens who appreciate the Darwinian view of life. For Oreskes, the objectivity of science is consensual: it depends on critical debate within a diverse community of investigators.
She reviews that history in her first chapter. Historical studies of scientific practice, including early writings by the microbiologist Ludwik Fleck and the physicist Pierre Duhem as well as later work by the historically oriented thinkers Thomas Kuhn and Paul Feyerabend, revealed the inadequacy of those attempts.
Those studies, in turn, paved the way for a sociology of scientific knowledge, whose initial thrust—in the work of what came to be known as the Edinburgh school, by Barry Barnes, David Bloor, and Steven Shapin—was often read as suggesting that the beliefs advanced by scientists were no more credible than those maintained by anyone else.
Eventually, however, more subtle proposals emerged, fueled by feminist scholars such as Helen Longino. They restored the objectivity of science, Oreskes writes, by viewing it as a collective achievement. She praises feminist work on science, in particular, for showing that the objectivity of scientific research depends on critical debate and exchange within a diverse community of investigators.
Oreskes is right to celebrate these social studies of science, and she is guarded in her acceptance of what they have offered. This work has indeed pushed back against oversimplified claims that evidence and reasoning suffice, on their own, to bring about scientific consensus. Yet rejecting this view of the scientific method has sometimes led scholars to replace one bad picture with another.
Yet to my mind, she adopts a diluted version of what feminist thinkers such as Longino have to offer. Unless collective investigation uses reliable methods for gathering evidence and for analyzing the findings, it can easily end in an impasse.
About this, Oreskes says too little. Or, more exactly, she says too little in those passages in which her discussions operate at the general level—where she is trying to say how science, in all its healthy forms, works to earn public trust. Excellent historian that she is, she provides lucid and convincing case studies of how particular pieces of scientific research—from critiques of continental drift to eugenics and the relation between hormonal birth control and depression—go awry and how, precisely, they diverge from those that go well.
Still, her general account, including all the caveats and codicils, leaves it quite unclear how she thinks the mental life of scientists actually goes. Unless more is said to explain how the cognitive and the social fit together, a precise answer to the title question proves elusive. In recognizing the inadequacies of earlier attempts to address methodological questions, without considering in full detail how the new perspective might supply better answers, Oreskes deprives herself of the resources to complete her central project.
The commentaries on her chapters offer helpful clues for developing it more precisely. Some would argue that science is trustworthy just because it works. The history of science, she points out, is full of claims that particular hypotheses and theories are successful, and thus on the track of truth. The vast majority of those hypotheses and theories are, by current lights, false—we might even say, radically false.
Hence, we should be loath to suppose that our own scientific beliefs, successful as they appear to be, will endure. Very probably, our successors will regard them as error-ridden. Many ways of responding to this argument have emerged in recent decades.
For the purposes of evaluating the trustworthiness of science, the best response is to take a pragmatic approach to the historical record. Whether or not the hypotheses of the past turned out to be correct, those who adopted them on the basis of their successes—the problems they helped to solve and the predictions they helped to make—were entirely warranted in doing so. As are we in our similar situation today. Consider the well-established hypotheses of contemporary science: I treat them as true. My adoption of them is provisional.
But taking them seriously and using them as I do—treating them as true—is a good strategy. For I see two possible outcomes. Maybe I shall be lucky and the hypotheses will endure as part of science in the indefinite future of human inquiry.
Or maybe they will be replaced by something superior. Yet the history of science also shows that superior theories tend overwhelmingly to emerge from attempts to push apparently successful hypotheses as far as possible.
Precisely because people take them seriously—treat them as true—they serve as stepping stones to better science. You can now see why pursuing this strategy is a good one: heads I win, tails I win. Sometimes applying a particular claim might have considerable impact on human welfare.
Success consists in solving particular problems. But any attempt to defend the trustworthiness of science along these lines has further work to do.
In particular, it has to explore what counts as success. Here, the vision of science as a collective venture—meaning not just one that involves critical exchange, but also one that is integrated into policies that affect human lives—must enter. Ultimately, as I argue in Science in a Democratic Society, problems and solutions make for success if they advance the wider interests of humanity.

Following the detection of the Higgs boson at the Large Hadron Collider, journalists around the world struggled to explain one of the most complex discoveries in all of science.
Sharma and his colleagues had every reason to believe that they were closing in on the Great White Whale of modern science: the Higgs boson, a particle whose existence would explain all the others then known and how they fit together into the jigsaw puzzle of reality. In reporting the discovery, the science writer Dennis Overbye did not reach for forbidding technical vocabulary. Rather, he chose words that were touchpoints for a lay audience. He made complexity approachable rather than daunting. One way to visualize the relationship between precision and understanding is to consider the concentric rings of a dartboard.
The outer ring encompasses all potential readers, from experts to the science-interested lay public. The next innermost ring speaks only to college graduates with some familiarity with the topic. As precision increases, the audience shrinks until you are speaking only to psychological scientists.
But you still want to land squarely on the dart board—making an accurate, just not overly precise, throw.
After an extensive search, the oceanographer finds the buoy 50 meters (m) from the boat. While on the way back to shore, the oceanographer throws in a fishing line to see if she can catch anything for dinner.
She is lucky enough to catch a mahi-mahi. When she pulls it out of the water, her colleagues each estimate the weight of the fish. When they weigh the fish upon returning to shore, they compare its actual weight with those estimates.

Write your own scenario illustrating the difference between accuracy and precision.
Swap your scenario with a classmate. A dart player can see how accurate his or her dart throws are by comparing the location of the thrown darts to the target, the bulls-eye of the dartboard.
How is this model different from scientists who are measuring a natural phenomenon?
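The dartboard picture can be turned into two numbers: accuracy as the distance of the throws' centroid from the bulls-eye, and precision as the mean scatter of the throws around that centroid. The coordinates below are invented for illustration; this particular set of throws comes out precise but not accurate.

```python
# Dartboard sketch: accuracy = distance of the centroid from the bulls-eye,
# precision = mean scatter of throws around the centroid.
# Throw coordinates are invented for illustration; bulls-eye at the origin.
import math

throws = [(2.1, 1.9), (2.3, 2.2), (1.8, 2.0), (2.2, 1.8)]  # (x, y) positions

cx = sum(x for x, _ in throws) / len(throws)
cy = sum(y for _, y in throws) / len(throws)

accuracy_error = math.hypot(cx, cy)  # smaller = more accurate
precision_spread = sum(math.hypot(x - cx, y - cy)
                       for x, y in throws) / len(throws)  # smaller = more precise

print(f"centroid distance from bulls-eye: {accuracy_error:.2f}")
print(f"mean scatter around centroid:     {precision_spread:.2f}")
```

Here the throws cluster tightly (small scatter) yet the cluster sits far from the bulls-eye, mirroring the case of convergent but biased measurements.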