This is a history of the use of Bayes' theorem from its discovery by Thomas Bayes to the rise of its statistical competitors in the first part of the twentieth century. The book focuses particularly on the development of one of the fundamental aspects of Bayesian statistics, and in this new edition readers will find new sections on contributors to the theory. In addition, this edition includes an amplified discussion of relevant work.
This book offers a detailed history of parametric statistical inference. Covering the period between James Bernoulli and R.A. Fisher, it examines: binomial statistical inference; statistical inference by inverse probability; the central limit theorem and linear minimum variance estimation by Laplace and Gauss; error theory, skew distributions, correlation, and sampling distributions; and the Fisherian Revolution. Lively biographical sketches of many of the main characters are featured throughout, including Laplace, Gauss, Edgeworth, Fisher, and Karl Pearson. Also examined are the roles played by De Moivre, James Bernoulli, and Lagrange.
This book provides a selection of pioneering papers or extracts ranging from Pascal (1654) to R.A. Fisher (1930). The editors' annotations put the articles in perspective for the modern reader. A special feature of the book is the large number of translations, nearly all made by the authors. There are several reasons for studying the history of statistics: intrinsic interest in how the field of statistics developed, learning from often brilliant ideas rather than reinventing the wheel, and livening up general courses in statistics by reference to important contributors.
Publisher description: This book is a reference for librarians, mathematicians, and statisticians involved in college and research level mathematics and statistics in the 21st century. Part I is a historical survey of the past 15 years tracking the huge transition in scholarly communications in mathematics. Part II of the book is the bibliography of resources recommended to support the disciplines of mathematics and statistics. These resources are grouped by material type. Publication dates range from the 1800s onwards. Hundreds of electronic resources (some online, both dynamic and static, some in fixed media) are listed alongside the paper resources. A majority of the listed electronic resources are free.
"This account of how a once reviled theory, Baye’s rule, came to underpin modern life is both approachable and engrossing" (Sunday Times). A New York Times Book Review Editors’ Choice Bayes' rule appears to be a straightforward, one-line theorem: by updating our initial beliefs with objective new information, we get a new and improved belief. To its adherents, it is an elegant statement about learning from experience. To its opponents, it is subjectivity run amok. In the first-ever account of Bayes' rule for general readers, Sharon Bertsch McGrayne explores this controversial theorem and the generations-long human drama surrounding it. McGrayne traces the rule’s discovery by an 18th century amateur mathematician through its development by French scientist Pierre Simon Laplace. She reveals why respected statisticians rendered it professionally taboo for 150 years—while practitioners relied on it to solve crises involving great uncertainty and scanty information, such as Alan Turing's work breaking Germany's Enigma code during World War II. McGrayne also explains how the advent of computer technology in the 1980s proved to be a game-changer. Today, Bayes' rule is used everywhere from DNA de-coding to Homeland Security. Drawing on primary source material and interviews with statisticians and other scientists, The Theory That Would Not Die is the riveting account of how a seemingly simple theorem ignited one of the greatest controversies of all time.
The term probability can be used in two main senses. In the frequency interpretation it is a limiting ratio in a sequence of repeatable events. In the Bayesian view, probability is a mental construct representing uncertainty. This 2002 book is about these two types of probability and investigates how, despite being adopted by scientists and statisticians in the eighteenth and nineteenth centuries, Bayesianism was discredited as a theory of scientific inference during the 1920s and 1930s. Through the examination of a dispute between two British scientists, the author argues that a choice between the two interpretations is not forced by pure logic or the mathematics of the situation, but depends on the experiences and aims of the individuals involved. The book should be of interest to students and scientists interested in statistics and probability theories and to general readers with an interest in the history, sociology and philosophy of science.
Pierre-Simon Laplace (1749-1827) is remembered among probabilists today particularly for his "Théorie analytique des probabilités", published in 1812. The "Essai philosophique sur les probabilités" is his introduction to the second edition of this work. Here Laplace provided a popular exposition of his "Théorie". The "Essai", based on a lecture on probability given by Laplace in 1794, underwent sweeping changes, almost doubling in size, in the various editions published during Laplace's lifetime. Translations of various editions into different languages have appeared over the years. The only previous English translation, from 1902, reads awkwardly today. This is a thorough and modern translation based on the recent re-issue, with its voluminous notes, of the fifth edition of 1826, with a preface by René Thom and a postscript by Bernard Bru. In the second part of the book, the reader is provided with an extensive commentary by the translator, including valuable historiographical and mathematical remarks and various proofs.
The aim of this book is to provide an introduction to the probability-logic-based formalization of uncertain reasoning. The authors' primary interest is in mathematical techniques for infinitary probability logics, used to obtain results about proof-theoretical and model-theoretical issues such as axiomatizations, completeness, compactness, and decidability, including solutions of some problems from the literature. An extensive bibliography is provided to point to related work, and this book may serve as a basis for further research projects, as a reference for researchers using probability logic, and also as a textbook for graduate courses in logic.
Can established humanities methods coexist with computational thinking? It is one of the major questions in humanities research today, as scholars increasingly adopt sophisticated data science for their work. James E. Dobson explores the opportunities and complications faced by humanists in this new era. Though the study and interpretation of texts alongside sophisticated computational tools can serve scholarship, these methods cannot replace existing frameworks. As Dobson shows, ideas of scientific validity cannot easily, and should not, be adapted for humanities research, because digital humanities, unlike science, lacks a leading-edge horizon charting the frontiers of inquiry. Instead, the methods of digital humanities require a constant rereading. At the same time, suspicious and critical readings of digital methodologies make it unwise for scholars to defer to computational methods. Humanists must examine the tools--including the assumptions that went into the codes and algorithms--and the questions surrounding their own use of digital technology in research. Insightful and forward thinking, Critical Digital Humanities lays out a new path of humanistic inquiry that merges critical theory and computational science.
This book analyzes the origins of statistical thinking as well as its related philosophical questions, such as causality, determinism, and chance. Bayesian and frequentist approaches are subjected to a historical, cognitive, and epistemological analysis, making it possible not only to compare the two competing theories but also to find a potential solution. The work pursues a naturalistic approach, proceeding from the existence of numerosity in natural environments, through contemporary formulas and methodologies, to heuristic pragmatism, a concept introduced in the book's final section. This monograph will be of interest to philosophers and historians of science and to students in related fields. Despite the mathematical nature of the topic, no statistical background is required, making the book a valuable read for anyone interested in the history of statistics and human cognition.