Past Seminars

The seminars listed below were sponsored or selected by the BECS for their relevance to good practices in health research. Links to slides and/or webinar recordings are included where available.

The Joint GW-CNHS Biostatistics, Epidemiology, and Research Design (BERD) Tutorial Series 

  • October 18, 2019: Sensitivity Analyses in Observational Research, Scott Quinlan, PhD

In this 50-minute tutorial, Dr. Quinlan provides an introduction to the uses of sensitivity analyses in public health research. View slides or watch and listen below.

VIEW SLIDES

  • March 29, 2019: The Use of Instrumental Variables in Observational Research, Scott Quinlan, PhD

Instrumental variables offer a unique approach to addressing both known and unknown confounders. In this 50-minute tutorial, Dr. Quinlan introduces the basic approach with examples and discusses the opportunities and challenges of this method for observational research. View slides or watch and listen below.

VIEW SLIDES
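For readers new to the topic, the simplest instrumental-variable estimator is the Wald estimator: the effect of the instrument on the outcome divided by its effect on the treatment. The sketch below uses made-up toy data (all values and names are illustrative, not from the tutorial) with a binary instrument Z, binary treatment T, and outcome Y.

```python
# Hypothetical toy data: (z, t, y) rows for a binary instrument Z,
# a binary treatment T, and a continuous outcome Y.
rows = [
    (1, 1, 12.0), (1, 1, 11.0), (1, 0, 6.0), (1, 1, 10.0),
    (0, 0, 5.0), (0, 1, 9.0), (0, 0, 6.0), (0, 0, 7.0),
]

def wald_iv_estimate(rows):
    """Wald estimator: (E[Y|Z=1] - E[Y|Z=0]) / (E[T|Z=1] - E[T|Z=0]).
    Valid only if Z shifts T and affects Y solely through T."""
    def mean(vals):
        vals = list(vals)
        return sum(vals) / len(vals)
    y1 = mean(y for z, t, y in rows if z == 1)
    y0 = mean(y for z, t, y in rows if z == 0)
    t1 = mean(t for z, t, y in rows if z == 1)
    t0 = mean(t for z, t, y in rows if z == 0)
    return (y1 - y0) / (t1 - t0)

print(wald_iv_estimate(rows))  # 6.0 for this toy data
```

The key assumptions (instrument relevance and the exclusion restriction) are exactly the "challenges" the tutorial discusses: they cannot be fully verified from the data.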

  • October 19, 2018: An Introduction to the Use of Propensity Score Methods, Scott Quinlan, PhD

This 50-minute tutorial introduces propensity score methods, including how to create, assess, use, and report on propensity scores, how they can strengthen the validity of research, and the limits of their appropriate use.

VIEW SLIDES
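One common way to use an estimated propensity score is inverse-probability-of-treatment weighting (IPTW). The sketch below (illustrative only; the data and propensity values are made up, and the scores are assumed to be already estimated) shows the weighting step: treated subjects are weighted by 1/e and controls by 1/(1-e).

```python
# Hypothetical toy data: (treated, outcome, propensity) rows, where
# the propensity e = P(T=1 | X) is assumed already estimated.
data = [
    (1, 10.0, 0.8), (1, 8.0, 0.6), (1, 9.0, 0.7),
    (0, 7.0, 0.8), (0, 5.0, 0.3), (0, 6.0, 0.4),
]

def iptw_ate(rows):
    """IPTW estimate of the average treatment effect: weighted mean
    outcome among the treated minus weighted mean among controls."""
    treated_num = sum(y / e for t, y, e in rows if t == 1)
    treated_den = sum(1 / e for t, y, e in rows if t == 1)
    control_num = sum(y / (1 - e) for t, y, e in rows if t == 0)
    control_den = sum(1 / (1 - e) for t, y, e in rows if t == 0)
    return treated_num / treated_den - control_num / control_den

print(round(iptw_ate(data), 2))
```

In practice the propensity model would be fit (e.g., by logistic regression), balance would be checked, and extreme weights inspected before reporting an effect, which is the workflow the tutorial covers.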

  • March 20, 2018: An Introduction to the Challenges and Opportunities of Studying Drug and Vaccine Safety in the Real World, Scott Quinlan, PhD

This 50-minute tutorial provides an introductory review of how drug and vaccine safety issues are identified and studied after FDA approval and initial market entry. The intended audience includes students and researchers with knowledge of basic statistical methods who are interested in learning more about this growing area of drug and vaccine safety research. Watch below.

  • March 10, 2017: Reproducible Analyses, Naji Younes, PhD

Data analyses often stretch over time as results are reviewed and updated. Along the way, the data may change as errors are corrected and additional information is obtained. The process of running analyses and repeatedly updating manuscripts or reports can be cumbersome. We'll discuss simple tools that greatly simplify the analyst's work, reduce errors, and produce analyses that are easily reproduced and extended by others. This talk focuses on the R language, which provides a collection of tools for that purpose, but the ideas can be extended to other languages as well.

  • October 3, 2014: REDCap: A Focus on Good Data Management Practices, Adrienne Arrieta, MS

Good data management practices should be planned before data collection begins in any research study. Establishing how you will conduct, document, organize, manage, preserve, and analyze your data at the start of a research project has many benefits. This talk focuses on several good data management techniques and practices and on how REDCap (Research Electronic Data Capture), a free and secure web application for building and managing online surveys and databases, can assist researchers in implementing these techniques to improve the overall quality of their research.

 

Other Seminars

  • March 26, 2018: Evaluation of Public Health Interventions: Recent Developments in Cluster Randomized Trials and Related Designs, Liz Turner, PhD

VIEW SLIDES

  • October 2017: The Month of R Series, Naji Younes, PhD

An introduction to using R for the analysis of health data.

  • October 2, 2015: Pilot Studies and Statistics: A Murky Mix, Sam Simmens, PhD

Pilot studies are nearly universally recommended or required before proceeding with larger confirmatory studies, but the role and value of statistical analyses in these studies are often unclear. In this session, Dr. Simmens will review some of the contrasting definitions and recommendations in the literature and lead a discussion of how statistics in pilot or feasibility studies can be useful (or not) for specific kinds of studies.

  • June 5, 2015: Strategies for Dealing with Multiple Comparisons, Naji Younes, PhD

Statistical analyses and reports often include many tests of hypotheses. Since each individual test has a chance of yielding a false positive, conducting multiple tests increases the chance of spurious results. This talk explores several simple strategies for managing multiple tests, including p-value adjustments, methods that control the false discovery rate, and graphical techniques for deriving testing strategies.
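As a brief illustration of two of the strategies mentioned above (a sketch, not material from the seminar), the Bonferroni adjustment controls the family-wise error rate by comparing each p-value to alpha/m, while the Benjamini-Hochberg step-up procedure controls the false discovery rate and is typically less conservative. The p-values below are made up for demonstration.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m (controls family-wise error rate)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up procedure: find the largest rank k with
    p_(k) <= (k/m) * alpha and reject the k smallest p-values
    (controls the false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# Illustrative p-values from eight hypothetical tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(sum(bonferroni(pvals)))          # 1 rejection: more conservative
print(sum(benjamini_hochberg(pvals)))  # 2 rejections at the same alpha
```

The example shows the usual trade-off: Bonferroni guards against any false positive, while BH tolerates a controlled proportion of false discoveries in exchange for more power.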