2011 Biostat Clinics


Dr. Shari Messinger

Biostatistical Collaboration in Clinical and Translational Research

February 10, 2011

Dr. Shari Messinger Associate Professor and Director of the Biostatistics Collaboration and Consulting Core, Division of Biostatistics, Department of Public Health Sciences

This talk describes the roles and responsibilities of biostatisticians collaborating in clinical and translational research. We will describe how effective biostatistical collaboration throughout all stages of an investigation can facilitate and improve the quality of research. We will also address the specific expectations that investigators should have of biostatisticians, as well as the expectations biostatisticians have of investigators, in order to make research collaborations most effective.


Dr. Hua Li

Statistics 101

March 10, 2011

Dr. Hua Li, Assistant Scientist and Biostatistician, Biostatistics Collaboration and Consulting Core, Division of Biostatistics, Department of Public Health Sciences

This talk will review basic statistics for continuous and categorical variables. Participants will acquire knowledge in the following topics: collecting, organizing, and analyzing data and presenting results; measures of central tendency and dispersion; confidence interval estimation; hypothesis testing; non-parametric tests; and sample size calculation for medical data. The focus is on statistical practice rather than on theory and methods as they appear in standard textbooks.
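To make two of the listed topics concrete, here is a minimal sketch of a measure of central tendency and dispersion together with a confidence interval for a mean. The blood pressure readings are hypothetical, and the interval uses the normal approximation (for a sample this small, a t critical value would be more appropriate):

```python
import math
from statistics import mean, stdev

# Hypothetical systolic blood pressure readings (mmHg) from a small sample.
data = [118, 125, 132, 121, 140, 128, 135, 122, 130, 127]

n = len(data)
m = mean(data)   # measure of central tendency
s = stdev(data)  # measure of dispersion (sample standard deviation)

# 95% confidence interval for the mean (normal approximation).
z = 1.96
half_width = z * s / math.sqrt(n)
ci = (m - half_width, m + half_width)

print(f"mean = {m:.1f}, sd = {s:.1f}")
print(f"95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")
```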


Kaming Lo

Sample Size And Power Considerations In Clinical And Translational Research

April 14, 2011

Kaming Lo, Biostatistician, Biostatistics Collaboration and Consulting Core, Division of Biostatistics, Department of Public Health Sciences

Sample size issues are the most common reason why clinical and translational investigators initially request statistical support. This presentation will address why sample size calculation is important in research and how sample size, variability, and effect size affect power (the probability that a study will detect a true effect of a given size). We will discuss the approaches used for some common designs in clinical and translational studies. In addition, we will describe the benefits of collaborating with a statistician on sample size determination during the design phase of an investigation, as well as what an investigator should prepare before meeting with a statistician to obtain the most reliable results.
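The interplay of effect size, variability, and power described above can be sketched with the standard normal-approximation formula for comparing two group means, n = 2(z_{1-α/2} + z_{power})² σ² / Δ² per group. The numbers below are hypothetical, chosen only to illustrate the calculation:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the target power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Example: detect a 5 mmHg mean difference, assuming SD = 10 mmHg.
print(n_per_group(delta=5, sigma=10))  # 63 per group (normal approximation)
```

Halving the detectable difference quadruples the required n, which is why pinning down the effect size with a statistician before the study starts matters so much.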


Dr. Robert Duncan

Experimental Design

May 11, 2011

Dr. Robert Duncan, Professor, Division of Biostatistics, Department of Public Health Sciences

This presentation addresses the different types of designs, including the completely randomized design, the N-way cross-classification design, and nested designs. Key points will be discussed in reference to unbalanced data. The difference between experiments and designs will be discussed with respect to factorial and repeated-measures experiments and longitudinal studies. Randomization will be presented in terms of the population of inference, and in terms of unconstrained and constrained randomization (stratification, matching, etc.). Furthermore, this presentation will discuss statistical analysis plans, including the analysis of design variables only, the inclusion of concomitant variables, and the analysis of covariance.
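Constrained randomization of the kind mentioned above can be sketched briefly: within each stratum, treatment labels are drawn from a shuffled balanced block so that group sizes stay equal per stratum. The subjects and strata below are hypothetical, and this is only a minimal illustration, not a production randomization scheme:

```python
import random

def stratified_randomization(subjects, strata, treatments=("A", "B"), seed=2011):
    """Constrained (stratified) randomization: within each stratum, assign
    treatments from a shuffled balanced block so arms stay balanced per stratum."""
    rng = random.Random(seed)
    by_stratum = {}
    for subj, stratum in zip(subjects, strata):
        by_stratum.setdefault(stratum, []).append(subj)

    assignment = {}
    for stratum, members in by_stratum.items():
        # Balanced block of labels sized to this stratum (exact balance
        # when the stratum size is a multiple of the number of arms).
        reps = (len(members) + len(treatments) - 1) // len(treatments)
        block = (list(treatments) * reps)[: len(members)]
        rng.shuffle(block)
        for subj, arm in zip(members, block):
            assignment[subj] = arm
    return assignment

# Hypothetical example: 8 subjects stratified by sex.
subjects = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
strata = ["M", "M", "F", "F", "M", "F", "M", "F"]
print(stratified_randomization(subjects, strata))
```

Unconstrained randomization would instead draw each arm independently, which can leave strata badly imbalanced by chance in small studies.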


Dr. Tulay Koru-Sengul

Out of Sight, Not Out of Mind: Missing Data

September 13, 2011

Dr. Tulay Koru-Sengul, Assistant Professor, Division of Biostatistics, Department of Public Health Sciences

Researchers are frequently faced with the problem of analyzing data with missing values. Missing values are practically unavoidable, especially in medicine and the biomedical sciences, and incomplete datasets make statistical analyses considerably more difficult. In this talk, I will discuss the missing-data problem: different patterns of missingness, missing-data mechanisms, and the implications of missing values for data analysis and interpretation. Various simple and advanced statistical methodologies for handling missing data will be reviewed, focusing on their advantages and disadvantages.
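Two of the simple methods the talk contrasts can be sketched side by side: complete-case analysis (discard records with missing values) versus single mean imputation (fill gaps with the observed mean). The lab values below are hypothetical:

```python
from statistics import mean

# Hypothetical lab values; None marks a missing observation.
values = [4.1, None, 5.0, 4.7, None, 5.3, 4.9]

# Complete-case analysis: drop records with missing values.
observed = [v for v in values if v is not None]
cc_mean = mean(observed)

# Single mean imputation: fill each gap with the observed mean.
# This understates variability; more advanced approaches such as
# multiple imputation are designed to address that shortcoming.
imputed = [v if v is not None else cc_mean for v in values]

print(f"complete-case mean: {cc_mean:.2f} (n = {len(observed)})")
print(f"imputed series: {imputed}")
```

Both shortcuts can bias results unless the missing-data mechanism is benign, which is exactly why the pattern and mechanism of missingness matter before choosing a method.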


C. Hendricks Brown

Conducting Implementation Research with Rigorous Randomized Trials

October 18, 2011

C. Hendricks Brown, Professor, Epidemiology and Public Health; Director, Center for Prevention Implementation Methodology for Drug Abuse and Sexual Risk Behavior; Director, Social Systems Informatics, Center for Computational Science

Implementation research involves “the use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings” (Chambers, 2008). These implementation strategies are major elements in the translation of research findings to practice. The field of implementation science is just beginning to form, and we are now beginning to frame the research questions and methodologies that will lay its foundation. The goal of the newly funded Center for Prevention Implementation Methodology (Ce-PIM) is to provide methodology for measuring, modeling, and testing implementation strategies, concentrating on evidence-based programs that have affected drug abuse or HIV sexual risk behavior. We present a framework for conducting implementation research and discuss how randomized “roll-out” trials can be conducted to evaluate implementation strategies. These methods are illustrated using a 53-county randomized implementation trial involving an evidence-based program in foster care. Distinctions between implementation trials and efficacy or effectiveness trials are provided as well.


Dr. Maria Llabre