Accurate sample size calculation ensures that clinical studies have adequate power to detect clinically meaningful effects. It also makes efficient use of resources and avoids the overpowered studies that expose an unnecessarily large number of patients to experimental treatments. Sample Size Calculations for Clustered and Longitudinal Outcomes in Clinical Research explains how to determine sample size for studies with correlated outcomes, which are widely used in medical, epidemiological, and behavioral studies. The book focuses on issues specific to the two types of correlated outcomes: longitudinal and clustered. For clustered studies, the authors provide sample size formulas that accommodate variable cluster sizes and within-cluster correlation. For longitudinal studies, they present sample size formulas that account for within-subject correlation among repeated measurements and various missing data patterns. When there are multiple levels of clustering, the level at which randomization is performed itself becomes a design parameter, and the authors show how this choice can greatly affect trial administration, analysis, and sample size requirements. Addressing the overarching theme of sample size determination for correlated outcomes, this book is a useful resource for biostatisticians, clinical investigators, epidemiologists, and social scientists whose research involves trials with correlated outcomes. Each chapter is self-contained, so readers can explore topics relevant to their research projects without having to refer to other chapters.
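As an illustration of the kind of adjustment such books formalize, the standard design effect 1 + (m - 1) x ICC inflates an individually randomized sample size to account for within-cluster correlation. The sketch below applies it to a two-arm comparison of means; the function name, defaults, and numbers are illustrative, not taken from the book.

```python
import math
from scipy.stats import norm


def cluster_sample_size(delta, sd, m, icc, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm comparison of means,
    inflated by the design effect 1 + (m - 1) * ICC for equal
    clusters of size m. Illustrative sketch, not the book's formulas."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_b = norm.ppf(power)           # desired power
    n_individual = 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2
    deff = 1 + (m - 1) * icc        # variance inflation from clustering
    n = n_individual * deff
    return math.ceil(n), math.ceil(n / m)  # subjects per arm, clusters per arm


# Example: detect a 0.5 SD difference with clusters of 20 and ICC = 0.05
subjects, clusters = cluster_sample_size(delta=0.5, sd=1.0, m=20, icc=0.05)
```

With clusters of 20 and an ICC of 0.05, the design effect is 1.95, nearly doubling the roughly 63 subjects per arm an individually randomized trial would need.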
There is an increasing need for educational resources for statisticians and investigators. Reflecting this, the goal of this book is to provide readers with a sound foundation in the statistical design, conduct, and analysis of clinical trials. It is also intended as a guide for statisticians and investigators with minimal clinical trial experience who are interested in pursuing a career in this area. Advances in genetic and molecular technologies have revolutionized drug development. In recent years, clinical trials have become increasingly sophisticated as they incorporate genomic studies, and efficient designs (such as basket and umbrella trials) have permeated the field. This book offers the requisite background and expert guidance for the innovative statistical design and analysis of clinical trials in oncology.

Key Features:
- Cutting-edge topics with appropriate technical background
- Built around case studies, which give the work a "hands-on" approach
- Real examples of flaws in previously reported clinical trials and how to avoid them
- Access to statistical code on the book’s website
- Chapters written by internationally recognized statisticians from academia and pharmaceutical companies
- Carefully edited to ensure consistency in style, level, and approach

Topics covered include innovative phase I and II designs and trials in immuno-oncology and rare diseases, among many others.
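One of the classic phase I designs covered in texts of this kind is the 3+3 dose-escalation rule. A simplified sketch of its decision logic follows; the function and its names are illustrative, and real protocols add de-escalation rules and a formal maximum-tolerated-dose (MTD) declaration.

```python
def three_plus_three(dlt_count, patients_at_dose):
    """Standard 3+3 dose-escalation decision based on the number of
    dose-limiting toxicities (DLTs) observed among the evaluable
    patients at the current dose. Simplified illustrative sketch."""
    if patients_at_dose == 3:
        if dlt_count == 0:
            return "escalate"      # 0/3 DLTs: move to the next dose
        if dlt_count == 1:
            return "expand"        # 1/3 DLTs: treat 3 more at this dose
        return "stop"              # >= 2/3 DLTs: dose exceeds the MTD
    if patients_at_dose == 6:
        return "escalate" if dlt_count <= 1 else "stop"
    raise ValueError("3+3 cohorts have 3 or 6 evaluable patients")
```

For example, one toxicity in the first cohort of three triggers an expansion cohort, and the dose is escalated only if no further toxicity is seen.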
Cluster Randomised Trials, Second Edition discusses the design, conduct, and analysis of trials that randomise groups of individuals to different treatments. It explores the advantages of cluster randomisation, with special attention given to evaluating the effects of interventions against infectious diseases. Avoiding unnecessary mathematical detail, the book covers basic concepts underlying the use of cluster randomisation, such as direct, indirect, and total effects. In the time since the publication of the first edition, the use of cluster randomised trials (CRTs) has increased substantially, which is reflected in the updates to this edition. There are greatly expanded sections on randomisation, sample size estimation, and alternative designs, including new material on stepped wedge designs. There is a new section on handling ordinal outcome data, and an appendix with descriptions and/or generating code of the example data sets. Although the book mainly focuses on medical and public health applications, it shows that the rigorous evidence of intervention effects provided by CRTs has the potential to inform public policy in a wide range of other areas. The book encourages readers to apply the methods to their own trials, reproduce the analyses presented, and explore alternative approaches.
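A quantity central to cluster randomised trials is the intracluster correlation coefficient (ICC), which drives both analysis and sample size. For balanced clusters it can be estimated from a one-way ANOVA; the sketch below is a generic illustration, not code from the book.

```python
import numpy as np


def anova_icc(clusters):
    """One-way ANOVA estimator of the intracluster correlation for
    balanced clusters (a list of equal-length sequences of outcomes).
    ICC = (MSB - MSW) / (MSB + (m - 1) * MSW). Illustrative sketch."""
    k = len(clusters)                     # number of clusters
    m = len(clusters[0])                  # common cluster size
    grand = np.mean(np.concatenate(clusters))
    means = np.array([np.mean(c) for c in clusters])
    # between-cluster and within-cluster mean squares
    msb = m * np.sum((means - grand) ** 2) / (k - 1)
    msw = sum(np.sum((np.asarray(c) - mu) ** 2)
              for c, mu in zip(clusters, means)) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```

When all variation is between clusters the estimate is 1; when outcomes vary only within clusters it can even be negative, a well-known property of the ANOVA estimator.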
In a global clinical development strategy, multiregional clinical trials (MRCTs) are vital in the development of innovative medicines. Multiregional Clinical Trials for Simultaneous Global New Drug Development presents a comprehensive overview on the current status of conducting MRCTs in clinical development. International experts from academia, in
Comparative effectiveness research (CER) is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care (IOM 2009). CER is conducted to develop evidence that will aid patients, clinicians, purchasers, and health policy makers in making informed decisions at both the individual and population levels. CER encompasses a very broad range of types of studies—experimental, observational, prospective, retrospective, and research synthesis. This volume covers the main areas of quantitative methodology for the design and analysis of CER studies. The volume has four major sections—causal inference; clinical trials; research synthesis; and specialized topics. The audience includes CER methodologists, quantitative-trained researchers interested in CER, and graduate students in statistics, epidemiology, and health services and outcomes research. The book assumes a masters-level course in regression analysis and familiarity with clinical research.
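As a small taste of the causal-inference methods such a volume covers, the Horvitz-Thompson inverse-probability-weighted (IPW) estimator of the average treatment effect can be written in a few lines. This is an illustrative sketch assuming known propensity scores, not code from the book.

```python
import numpy as np


def ipw_ate(y, t, ps):
    """Inverse-probability-weighted (Horvitz-Thompson) estimate of the
    average treatment effect from outcomes y, treatment indicators t,
    and propensity scores ps. Illustrative sketch, not a production
    estimator (no trimming or stabilized weights)."""
    y, t, ps = (np.asarray(a, dtype=float) for a in (y, t, ps))
    treated = np.mean(t * y / ps)               # weighted treated mean
    control = np.mean((1 - t) * y / (1 - ps))   # weighted control mean
    return treated - control
```

Weighting each subject by the inverse probability of the treatment actually received balances the two groups on the covariates that entered the propensity model.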
Statistical Testing Strategies in the Health Sciences provides a compendium of statistical approaches for decision making, ranging from graphical methods and classical procedures through computationally intensive bootstrap strategies to advanced empirical likelihood techniques. It bridges the gap between theoretical statistical methods and practical procedures applied to the planning and analysis of health-related experiments. The book is organized primarily based on the type of questions to be answered by inference procedures or according to the general type of mathematical derivation. It establishes the theoretical framework for each method, with a substantial amount of chapter notes included for additional reference. It then focuses on the practical application for each concept, providing real-world examples that can be easily implemented using corresponding statistical software code in R and SAS. The book also explains the basic elements and methods for constructing correct and powerful statistical decision-making processes to be adapted for complex statistical applications. With techniques spanning robust statistical methods to more computationally intensive approaches, this book shows how to apply correct and efficient testing mechanisms to various problems encountered in medical and epidemiological studies, including clinical trials. Theoretical statisticians, medical researchers, and other practitioners in epidemiology and clinical research will appreciate the book’s novel theoretical and applied results. The book is also suitable for graduate students in biostatistics, epidemiology, health-related sciences, and areas pertaining to formal decision-making mechanisms.
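As an example of the computationally intensive resampling strategies the book surveys, a two-sample permutation test for a difference in means can be sketched as follows; the function and its parameters are illustrative, not taken from the book.

```python
import numpy as np


def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test for a difference in means.
    Returns a p-value with the standard add-one correction.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # reshuffle group labels
        diff = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)


p_clear = permutation_test([10, 11, 12], [0, 1, 2])   # clearly separated samples
```

For the clearly separated samples above, only the two perfect splits of the pooled data reproduce the observed difference, so the p-value is close to 2/20 = 0.1.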
Proven Methods for Big Data Analysis

As big data has become standard in many application areas, challenges have arisen related to methodology and software development, including how to discover meaningful patterns in the vast amounts of data. Addressing these problems, Applied Biclustering Methods for Big and High-Dimensional Data Using R shows how to apply biclustering methods to find local patterns in a big data matrix. The book presents an overview of data analysis using biclustering methods from a practical point of view. Real case studies in drug discovery, genetics, marketing research, biology, toxicity, and sports illustrate the use of several biclustering methods. References to technical details of the methods are provided for readers who wish to investigate the full theoretical background. All the methods are accompanied with R examples that show how to conduct the analyses. The examples, software, and other materials are available on a supplementary website.
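Many biclustering methods, notably Cheng and Church's algorithm, score a candidate bicluster by its mean squared residue, which is near zero when the selected submatrix follows a coherent additive pattern. A minimal illustration of that score (in Python here for self-containment; the book's own examples are in R):

```python
import numpy as np


def mean_squared_residue(A, rows, cols):
    """Cheng-Church mean squared residue of the bicluster A[rows, cols]:
    low values indicate a coherent local pattern. Illustrative sketch."""
    sub = np.asarray(A, dtype=float)[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    # residue of each cell after removing row, column, and grand effects
    residue = sub - row_means - col_means + sub.mean()
    return float(np.mean(residue ** 2))
```

A perfectly additive submatrix such as [[1, 2], [3, 4]] has residue exactly zero, while a checkerboard pattern does not, which is what lets the score separate genuine local structure from noise.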
Healthcare is important to everyone, yet large variations in its quality have been well documented both between and within many countries. With demand and expenditure rising, it’s more crucial than ever to know how well the healthcare system and all its components – from staff member to regional network – are performing. This requires data, which inevitably differ in form and quality. It also requires statistical methods, the output of which needs to be presented so that it can be understood by whoever needs it to make decisions. Statistical Methods for Healthcare Performance Monitoring covers measuring quality, types of data, risk adjustment, defining good and bad performance, statistical monitoring, presenting the results to different audiences and evaluating the monitoring system itself. Using examples from around the world, it brings all the issues and perspectives together in a largely non-technical way for clinicians, managers and methodologists. Statistical Methods for Healthcare Performance Monitoring is aimed at statisticians and researchers who need to know how to measure and compare performance, health service regulators, health service managers with responsibilities for monitoring performance, and quality improvement scientists, including those involved in clinical audits.
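One standard tool in performance monitoring is the funnel plot, which flags providers whose indicator falls outside control limits that narrow as the denominator grows. A minimal sketch using the normal approximation (illustrative only; exact limits would use the binomial distribution):

```python
import math


def funnel_limits(p, n, z=1.96):
    """Approximate 95% funnel-plot control limits for a proportion
    indicator with target rate p and denominator n, using the normal
    approximation. Illustrative sketch, clipped to [0, 1]."""
    se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
    return max(0.0, p - z * se), min(1.0, p + z * se)


# Example: a 10% target rate with 100 cases per provider
lo, hi = funnel_limits(0.1, 100)
```

Providers above the upper limit are flagged for investigation rather than automatically labelled poor performers, since the limits only bound the variation expected by chance.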
The aim of this book is to equip biostatisticians and other quantitative scientists with the necessary skills, knowledge, and habits to collaborate effectively with clinicians in the healthcare field. The book provides valuable insight on where to look for information and material on sample size and statistical techniques commonly used in clinical research, and on how best to communicate with clinicians. It also covers best practices for project, time, and data management and for working with collaborators.
Translational Research in Coronary Artery Disease: Pathophysiology to Treatment covers the entire spectrum of basic science, genetics, drug treatment, and interventions for coronary artery disease. With an emphasis on vascular biology, this reference fully explains the fundamental aspects of coronary artery disease pathophysiology. Key topics include endothelial function, injury, and repair in various disease states; vascular smooth muscle function and its interaction with the endothelium; and the interrelationship between inflammatory biology and vascular function. By synthesizing the current research literature, this reference gives cardiovascular scientists and practitioners everything they need in one source.
- Provides a concise summary of recent developments in coronary and vascular research, including previously unpublished data
- Offers in-depth discussions of the pathobiology and novel treatment strategies for coronary artery disease
- Provides access to an accompanying website that contains photos and videos of noninvasive diagnostic modalities for evaluation of coronary artery disease
In this important new Handbook, the editors have gathered together a range of leading contributors to introduce the theory and practice of multilevel modeling. The Handbook establishes the connections in multilevel modeling, bringing together leading experts from around the world to provide a roadmap for applied researchers linking theory and practice, as well as a unique arsenal of state-of-the-art tools. It forges vital connections that cross traditional disciplinary divides and introduces best practice in the field. Part I establishes the framework for estimation and inference, including chapters dedicated to notation, model selection, fixed and random effects, and causal inference. Part II develops variations and extensions, such as nonlinear, semiparametric and latent class models. Part III includes discussion of missing data and robust methods, assessment of fit and software. Part IV consists of exemplary modeling and data analyses written by methodologists working in specific disciplines. Combining practical pieces with overviews of the field, this Handbook is essential reading for any student or researcher looking to apply multilevel techniques in their own research.
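The basic two-level structure underlying multilevel models is y_ij = u_j + e_ij: a group-level random intercept plus individual-level noise, so total variance splits into between-group and within-group components. A small simulation (illustrative, not from the Handbook) recovers both components with simple moment estimators:

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per = 200, 50

# group-level random intercepts (sd 2) and individual-level noise (sd 1)
u = rng.normal(0.0, 2.0, n_groups)
y = u[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

# moment estimates of the two variance components
between = y.mean(axis=1).var(ddof=1)   # ~ var(u) + var(e) / n_per
within = y.var(axis=1, ddof=1).mean()  # ~ var(e)
```

The estimates land near the true values of 4 and 1; full multilevel software generalizes this decomposition to covariates, unbalanced groups, and more than two levels.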