Workshops 2019/2020

In MAE4051 Selected Topics in Educational Measurement, you choose two of the workshops scheduled in the academic year. Send an e-mail to the administration to let us know which workshops you plan to attend.

Analysis of Large-Scale Assessment Data

26, 27, and 29 August and 12 September 2019

Prof. Dr. Andreas Frey from the University of Jena, Germany

This workshop offers the opportunity to deepen Educational Measurement knowledge in the field of International Large-Scale Assessments (ILSAs) such as PISA, PIRLS, or TIMSS. The focus of the course is on the methodological aspects of this special kind of study. These aspects are introduced, discussed, and applied on the students' own computers. The following content areas are covered:

  1. Foundations of International Large-Scale Assessments
  2. Large-Scale Assessment Methodology (sampling and weighting, resampling methods, scaling, plausible values)
  3. Analysis of Large-Scale Assessment Data: Basic Statistics (data preparation, computation of standard errors, analyses with plausible values, trend analysis); a short sketch of the plausible-value approach follows this list
  4. Analysis of Large-Scale Assessment Data: Advanced Statistical Modeling (path analysis, structural equation modeling, two-level hierarchical regression)
  5. Current Methodological Challenges of International Large-Scale Assessments
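
As a small preview of the plausible-value methodology covered in content areas 2 and 3, the sketch below combines a mean estimate across plausible values using Rubin's combination rules in base R. The data are simulated and the variable names invented for illustration; operational ILSA analyses would additionally apply the studies' sampling weights and use replicate weights for standard errors.

  # Minimal sketch of combining analyses across plausible values (Rubin's rules).
  # The data are simulated for illustration; real ILSA analyses would also
  # apply sampling weights and replicate weights when estimating standard errors.

  set.seed(42)
  n <- 500
  M <- 5  # ILSAs such as PISA typically provide 5 (or 10) plausible values

  # Simulate a data set with M plausible values per student
  true_ability <- rnorm(n, mean = 500, sd = 100)
  pv <- sapply(1:M, function(m) true_ability + rnorm(n, sd = 30))
  colnames(pv) <- paste0("PV", 1:M)

  # Estimate the mean achievement separately with each plausible value
  est <- colMeans(pv)                                  # M point estimates
  se2 <- apply(pv, 2, function(x) var(x) / length(x))  # M sampling variances

  # Combine with Rubin's rules
  combined_est <- mean(est)                  # pooled point estimate
  within_var <- mean(se2)                    # average sampling variance
  between_var <- var(est)                    # variance across plausible values
  total_var <- within_var + (1 + 1 / M) * between_var
  combined_se <- sqrt(total_var)

  cat("Estimate:", round(combined_est, 2), " SE:", round(combined_se, 2), "\n")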

Students will learn about the structure and methodology of international large-scale assessments and about approaches to analyzing their data statistically.

After successful participation in the course, students:

  • can replicate reported ILSA results,
  • can handle the data structure of ILSAs,
  • can perform their own valid analyses based on ILSA data and interpret them appropriately.

This course is a condensed seminar spanning four days. After the first three days, students are required to hand in an outline of a written assignment. Students will receive feedback on their outlines, which must be approved before the final versions of the papers can be handed in.

The course combines lectures and hands-on data analysis exercises.

Apart from the formal prerequisite knowledge described on the course page, solid knowledge of basic statistical methods as typically acquired in a B.Sc. study program in the social sciences (descriptive statistics, statistical inference, correlation, regression analysis, factor analysis, path analysis) is expected. Additionally, a basic command of the statistical software R is advantageous. Students are expected to bring a laptop with administrator rights and the latest version of R installed.

Reading list

Program Evaluation

21, 22, 24, 25, 28, 29, and 31 October 2019

Associate Professor David Rutkowski from Indiana University Bloomington, USA

This introductory course presents and discusses the modern field of formal program evaluation. More specifically, we will look at models and methods for evaluating programs, processes, and products in a variety of settings, with specific emphasis on education. The course is designed to give you a basic understanding of the field of evaluation. Additionally, you will gain practical knowledge of how to begin a formal evaluation by designing a small evaluation.

The course will cover the difference between evaluation and research, differing theories of evaluation, and the reasons why evaluations are carried out. Students will also learn how an evaluation is designed, techniques for collecting information, techniques for data analysis and interpretation, and how to report results. In addition, we will look into who the audiences for an evaluation are and how the use of evaluations can be encouraged.

Reading list

Methods for Causal Inference in Educational Research

Eight full workshop days from late January through February 2020. The workshops will consist of lectures and hands-on applications, generally in R. A detailed schedule will be available on the Spring 2020 semester page in late November.

Professor Jan-Eric Gustafsson, University of Gothenburg, Sweden and Professor II, University of Oslo

The main purpose of the workshop is to give an introduction to techniques for making credible causal inferences from observational data and to how such techniques can be used in educational research. While the randomized experiment is recognized as a superior design for determining the causal effect of a treatment on an outcome, ethical, practical, and economic reasons often prevent such designs from being used to answer causal questions. However, within different disciplinary fields, alternative techniques have been developed which, under certain assumptions, allow causal inferences to be made from observational data. Some examples of such methods are instrumental variables (IV) regression, regression discontinuity (RD) designs, difference-in-differences (DD), and propensity score matching; a minimal sketch of the DD approach follows this paragraph. The workshop aims to develop participants' skills in choosing and applying appropriate techniques for answering causal questions on the basis of observational data, as well as in critically reviewing educational research that aims to make causal inferences.
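
As a foretaste of one of these techniques, the following minimal R sketch estimates a difference-in-differences effect from simulated two-period data. The numbers and variable names are invented for illustration and are not course material; the workshop treats the underlying assumptions (notably parallel trends) in detail.

  # Minimal difference-in-differences (DD) sketch on simulated two-period data.
  # All numbers are invented for illustration only.

  set.seed(1)
  n <- 1000
  group <- rbinom(n, 1, 0.5)   # 1 = treated group, 0 = comparison group
  post  <- rbinom(n, 1, 0.5)   # 1 = after the intervention, 0 = before

  # Outcome: group difference of 5, common time trend of 2, true effect of 3
  y <- 50 + 5 * group + 2 * post + 3 * group * post + rnorm(n, sd = 4)

  # The coefficient on group:post is the DD estimate of the causal effect
  summary(lm(y ~ group * post))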

In the first part of the workshop, the distinction between causal and non-causal research questions is introduced, and the reasons why it is generally impossible to answer causal questions by analyzing associations among observed variables are made explicit. The so-called potential outcomes framework is introduced as a set of tools for understanding causal inference in terms of counterfactual comparisons. Issues in causal inference are demonstrated by simulating and analyzing data with traditional regression techniques; a minimal simulation of this kind is sketched after this paragraph. In the second part of the course, three frequently used approaches for answering causal questions are treated in some detail, namely RD designs, IV regression analysis, and DD. The logic upon which these approaches are based is presented, and the use and interpretation of the techniques are illustrated with both simulated and real data in the R system. In the third part of the course, structural equation modeling and propensity score matching techniques are presented as methods that use conditioning on observed and latent variables to prevent bias in estimates of causal effects.
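
A simulation of the kind used in the first part can be sketched in a few lines of R: a confounder drives both treatment assignment and the outcome, so a naive regression of the outcome on the treatment is biased, while conditioning on the confounder recovers the true effect. The setup below is an invented illustration, not the course's actual exercise material.

  # Simulated confounding: a naive regression overstates the treatment effect,
  # while conditioning on the confounder recovers it. Invented numbers only.

  set.seed(7)
  n <- 10000
  ses <- rnorm(n)                                # confounder (e.g., SES)
  treatment <- rbinom(n, 1, plogis(ses))         # higher SES -> more likely treated
  outcome <- 2 * treatment + 3 * ses + rnorm(n)  # true treatment effect is 2

  coef(lm(outcome ~ treatment))        # biased: confounded by ses
  coef(lm(outcome ~ treatment + ses))  # close to 2: confounder held constant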

The workshop will start in late January 2020 and will run for four weeks. It will involve eight days of teaching (9:00–16:00) in the form of lectures and hands-on applications, generally in R.

Literature

Angrist, J. D., & Pischke, J.-S. (2015). Mastering 'metrics: The path from cause to effect. Princeton, NJ: Princeton University Press. (282 p.)

Murnane, R. J., & Willett, J. B. (2011). Methods matter: Improving causal inference in educational and social science research. New York, NY: Oxford University Press. (397 p.)

Electronic articles will also be supplied when the course begins. Students will read different articles depending on the subject of the paper they will be writing.

 
