Understanding causal structure is a central task of human cognition. Causal learning underpins the development of our concepts and categories, our intuitive theories, and our capacities for planning, imagination and inference. During the last few years, there has been an interdisciplinary revolution in our understanding of learning and reasoning: Researchers in philosophy, psychology, and computation have discovered new mechanisms for learning the causal structure of the world. This new work provides a rigorous, formal basis for theory theories of concepts and cognitive development, and moreover, the causal learning mechanisms it has uncovered go dramatically beyond the traditional mechanisms of both nativist theories, such as modularity theories, and empiricist ones, such as association or connectionism.
The Psychology of Learning and Motivation publishes empirical and theoretical contributions in cognitive and experimental psychology, ranging from classical and instrumental conditioning to complex learning and problem solving. This guest-edited special volume is devoted to current research and discussion on associative versus cognitive accounts of learning. Written by major investigators in the field, the chapters cover all aspects of causal learning in an open forum in which different approaches are brought together.
- Up-to-date review of the literature
- Discusses recent controversies
- Presents major advances in understanding causal learning
- Synthesizes contrasting approaches
- Includes important empirical contributions
- Written by leading researchers in the field
Causal reasoning is one of our most central cognitive competencies, enabling us to adapt to our world. Causal knowledge allows us to predict future events and diagnose the causes of observed facts. We plan actions and solve problems using knowledge about cause-effect relations. Although causal reasoning is a component of most of our cognitive functions, it was neglected in cognitive psychology for many decades. The Oxford Handbook of Causal Reasoning offers a state-of-the-art review of this growing field and its contribution to the world of cognitive science. The Handbook begins with an introduction to competing theories of causal learning and reasoning. In the next section, it presents research about basic cognitive functions involved in causal cognition, such as perception, categorization, argumentation, decision-making, and induction. The following section examines research on domains that embody causal relations, including intuitive physics, legal and moral reasoning, psychopathology, language, social cognition, and the roles of space and time. The final section presents research from neighboring fields that study developmental, phylogenetic, and cultural differences in causal cognition. The chapters, each written by renowned researchers in their field, fill in the gaps of many cognitive psychology textbooks, emphasizing the crucial role of causal structures in our everyday lives. This Handbook is an essential read for students and researchers of the cognitive sciences, including cognitive, developmental, social, comparative, and cross-cultural psychology; philosophy; methodology; statistics; artificial intelligence; and machine learning.
Professor Judea Pearl won the 2011 Turing Award “for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.” This book contains the original articles that led to the award, as well as other seminal works, divided into four parts: heuristic search; probabilistic reasoning; causality, first period (1988–2001); and causality, recent period (2002–2020). Each of these parts starts with an introduction written by Judea Pearl. The volume also contains original, contributed articles by leading researchers that analyze, extend, or assess the influence of Pearl’s work in different fields: from AI, Machine Learning, and Statistics to Cognitive Science, Philosophy, and the Social Sciences. The first part of the volume includes a biography, a transcript of his Turing Award Lecture, two interviews, and a selected bibliography annotated by him.
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning. The mathematization of causality is a relatively recent development that has become increasingly important in data science and machine learning. This book offers a self-contained and concise introduction to causal models and how to learn them from data. After explaining the need for causal models and discussing some of the principles underlying causal inference, the book teaches readers how to use causal models: how to compute intervention distributions, how to infer causal models from observational and interventional data, and how causal ideas can be exploited for classical machine learning problems. All of these topics are discussed first in terms of two variables and then in the more general multivariate case. The bivariate case turns out to be a particularly hard problem for causal learning because there are no conditional independences of the kind classical methods exploit in the multivariate case. The authors consider analyzing statistical asymmetries between cause and effect to be highly instructive, and they report on their decade of intensive research into this problem. The book is accessible to readers with a background in machine learning or statistics, and can be used in graduate courses or as a reference for researchers. The text includes code snippets that can be copied and pasted, exercises, and an appendix with a summary of the most important technical concepts.
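A minimal sketch (my illustration, not code from the book) of the statistical-asymmetry idea the blurb alludes to: with only two variables there are no conditional independences to test, so additive-noise-model methods instead compare the two regression directions. In the true causal direction the residuals are just the independent noise; in the anticausal direction they typically remain dependent on the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-2, 2, n)          # cause
y = x ** 3 + rng.normal(0, 1, n)   # effect: nonlinear function plus independent noise

def dependence_score(inp, out, degree=5):
    """Fit a polynomial regression out ~ inp and return a crude measure of
    how strongly the residual magnitude still depends on the input.
    (A real method would use a proper independence test such as HSIC.)"""
    coeffs = np.polyfit(inp, out, degree)
    residuals = out - np.polyval(coeffs, inp)
    return abs(np.corrcoef(np.abs(residuals), np.abs(inp))[0, 1])

forward = dependence_score(x, y)   # regress effect on cause
backward = dependence_score(y, x)  # regress cause on effect

print(f"x->y residual dependence: {forward:.3f}")
print(f"y->x residual dependence: {backward:.3f}")
# The causal direction (x->y) typically yields the smaller score.
```

The crude correlation-of-magnitudes score stands in for the kernel independence tests used in practice, but it already exposes the asymmetry on this toy example.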
Artificial Intelligence and Causal Inference addresses the recent development of the relationship between artificial intelligence (AI) and causal inference. Despite significant progress in AI, a great challenge we still face is understanding the mechanisms underlying intelligence, including reasoning, planning, and imagination. Understanding, transfer, and generalization are major principles that give rise to intelligence, and a key component of understanding is causal inference. Causal inference comprises intervention, domain-shift learning, temporal structure, and counterfactual thinking as major concepts for understanding causation and reasoning. Unfortunately, these essential components of causality are often overlooked by machine learning, which leads to failures of deep learning. AI and causal inference involve (1) using AI techniques as major tools for causal analysis and (2) applying causal concepts and causal analysis methods to solving AI problems. The purpose of this book is to fill the gap between AI and modern causal analysis to further facilitate the AI revolution. The book is ideal for graduate students and researchers in AI, data science, causal inference, statistics, genomics, bioinformatics, and precision medicine.
Key Features:
- Covers three types of neural networks, formulates deep learning as an optimal control problem, and uses Pontryagin’s Maximum Principle for network training.
- Deep learning for nonlinear mediation and instrumental-variable causal analysis.
- Construction of causal networks is formulated as a continuous optimization problem.
- Transformers and attention are used to encode and decode graphs.
- Reinforcement learning (RL) is used to infer large causal networks.
- VAEs, GANs, neural differential equations, recurrent neural networks (RNNs), and RL are used to estimate counterfactual outcomes.
- AI-based methods for estimating individualized treatment effects in the presence of network interference.
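To make the "continuous optimization" formulation concrete, here is a small sketch (my illustration, not the book's code) of the NOTEARS-style acyclicity function h(W) = tr(e^{W∘W}) − d, which is zero exactly when the weighted adjacency matrix W describes a DAG; a continuous optimizer can then minimize a fit loss subject to h(W) = 0.

```python
import numpy as np
from math import factorial

def acyclicity(W, terms=20):
    """h(W) = tr(exp(W * W)) - d, with the matrix exponential computed
    via a truncated power series; W * W is the elementwise square."""
    d = W.shape[0]
    A = W * W                      # nonnegative edge weights
    expA = np.eye(d)
    P = np.eye(d)
    for k in range(1, terms):
        P = P @ A
        expA += P / factorial(k)
    return np.trace(expA) - d

dag = np.array([[0., 1., 1.],
                [0., 0., 1.],
                [0., 0., 0.]])     # 1 -> 2 -> 3 and 1 -> 3: acyclic
cyclic = np.array([[0., 1., 0.],
                   [0., 0., 1.],
                   [1., 0., 0.]])  # 1 -> 2 -> 3 -> 1: a cycle

print(acyclicity(dag))     # 0: no cycles contribute to the trace
print(acyclicity(cyclic))  # > 0: cycles add diagonal mass the optimizer can penalize
```

The trace of the matrix exponential counts weighted closed walks of all lengths, which is why it vanishes precisely on acyclic graphs.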
In the past decade, the field of comparative cognition has grown and thrived. No less rigorous than purely behavioristic investigations, examinations of animal intelligence are useful for scientists and psychologists alike in their quest to understand the nature and mechanisms of intelligence. Extensive field research of various species has yielded exciting new areas of research, integrating findings from psychology, behavioral ecology, and ethology in a unique and wide-ranging synthesis of theory and research on animal cognition. The Oxford Handbook of Comparative Cognition contains sections on perception and illusion, attention and search, memory processes, spatial cognition, conceptualization and categorization, problem solving and behavioral flexibility, and social cognition processes including findings in primate tool usage, pattern learning, and counting. The authors have incorporated findings and theoretical approaches that reflect the current state of the field. This comprehensive volume will be a must-read for students and scientists who want to know about the state of the art of the modern science of comparative cognition.
This book constitutes the refereed proceedings of the Second International Symposium on Benchmarking, Measuring, and Optimization, Bench 2019, held in Denver, CO, USA, in November 2019. The 20 full papers and 11 short papers presented were carefully reviewed and selected from 79 submissions. The papers are organized in topical sections named: Best Paper Session; AI Challenges on Cambricon using AIBench; AI Challenges on RISC-V using AIBench; AI Challenges on X86 using AIBench; AI Challenges on 3D Face Recognition using AIBench; Benchmark; AI and Edge; Big Data; Datacenter; Performance Analysis; Scientific Computing.
Studies of tool use have been used to examine an exceptionally wide range of aspects of cognition, such as planning, problem-solving and insight, naive physics, social cognition, and the relationship between action and perception.
It's hard to conceive of a topic of more broad and personal interest than the study of the mind. In addition to its traditional investigation by the disciplines of psychology, psychiatry, and neuroscience, the mind has also been a focus of study in the fields of philosophy, economics, anthropology, linguistics, computer science, molecular biology, education, and literature. In all these approaches, there is an almost universal fascination with how the mind works and how it affects our lives and our behavior. Studies of the mind and brain have crossed many exciting thresholds in recent years, and the study of mind now represents a thoroughly cross-disciplinary effort. Researchers from a wide range of disciplines seek answers to such questions as: What is mind? How does it operate? What is consciousness? This encyclopedia brings together scholars from the entire range of mind-related academic disciplines from across the arts and humanities, social sciences, life sciences, and computer science and engineering to explore the multidimensional nature of the human mind.
Many of our thoughts and decisions occur without us being conscious of them taking place; connectionism attempts to reveal the internal hidden dynamics that drive the thoughts and actions of both individuals and groups. Connectionist modeling is a radically innovative approach to theorising in psychology, and more recently in the field of social psychology. The connectionist perspective interprets human cognition as a dynamic and adaptive system that learns from its own direct experiences or through indirect communication from others. Social Connectionism offers an overview of the most recent theoretical developments of connectionist models in social psychology. The volume is divided into four sections, beginning with an introduction and overview of social connectionism. This is followed by chapters on causal attribution, person and group impression formation, and attitudes. Each chapter is followed by simulation exercises that can be carried out using the FIT simulation program; these guided exercises allow the reader to reproduce published results. Social Connectionism will be invaluable to graduate students and researchers primarily in the field of social psychology, but also in cognitive psychology and connectionist modeling.
The 39-volume set, comprising the LNCS books 13661 through 13699, constitutes the refereed proceedings of the 17th European Conference on Computer Vision, ECCV 2022, held in Tel Aviv, Israel, during October 23–27, 2022. The 1645 papers presented in these proceedings were carefully reviewed and selected from a total of 5804 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; image coding; motion estimation.