Workshops 2022/23

In MAE4051 Selected Topics in Educational Measurement, you choose two of the workshops scheduled in the academic year. Fill out the webform to let us know which workshops you want to attend (the webform will be sent out in June).

1. Analysis of Large-Scale Assessment Data

Autumn 2022 

Jelena Veletić

This workshop aims to communicate both knowledge and hands-on analytical skills in the field of international large-scale assessment (ILSA) data. ILSAs like PIRLS (Progress in International Reading Literacy Study), TIMSS (Trends in International Mathematics and Science Study), or PISA (Programme for International Student Assessment) provide unique opportunities to investigate both substantive and methodological research questions. ILSAs are conducted in recurring cycles and in numerous countries. They collect data at different levels, often including education systems (e.g., curriculum information, quantity of schooling), schools (e.g., school resources, composition), teachers (e.g., qualification for teaching, teaching practices), students (e.g., achievement in standardized tests, learning motivation), and family backgrounds (e.g., socio-economic status, home support for learning). The international design of the tests and questionnaires aims to make educational inputs, frameworks, and outputs comparable across countries and over time. By implication, these studies face numerous methodological challenges and provide a plethora of interesting research opportunities.
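
As a first taste of the hands-on part, the minimal R sketch below shows how data from two ILSA levels might be combined and summarised. The file names, variable names, and weights are hypothetical placeholders for illustration only, not the actual workshop materials.

    # Hypothetical example: merge student- and school-level files and summarise
    # achievement using the student sampling weights (placeholder names throughout).
    students <- read.csv("students.csv")   # e.g., achievement, motivation, student weight
    schools  <- read.csv("schools.csv")    # e.g., school resources, composition

    # Combine the two levels via a shared school identifier
    dat <- merge(students, schools, by = "school_id")

    # ILSA samples are usually not self-weighting, so estimates should use the survey weights
    weighted.mean(dat$reading_score, w = dat$student_weight, na.rm = TRUE)

    # A simple weighted regression of achievement on a school-level predictor
    summary(lm(reading_score ~ school_resources, data = dat, weights = student_weight))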

This workshop addresses the following ILSA-related topics:

  • Central aims
  • Sampling
    • Target populations
    • Sampling procedures and representativeness of the data
  • Data collection
    • Instrument development
    • Data collection procedures
    • Quality assurance measures
  • ILSA data
    • Achievement tests
    • Questionnaire data
  • Comparison of educational systems
  • Central limitations and challenges

The workshop combines lectures, interactive formats, group work, and practical exercises in the R environment. The required literature must be read before the respective sessions; further optional literature is recommended.

After successful completion of the course, the students

  • have broad knowledge of ILSAs’ backgrounds, methods, and scope,
  • can critically read and interpret the results of ILSA reports and ILSA-based studies,
  • can handle the methodological peculiarities of ILSA data in their own analyses, and
  • can develop their own ILSA-based research questions and approaches.

The workshop covers 13 sessions. After the course, students are required to hand in an outline of a written assignment, on which they receive individual feedback. These outlines must be approved before the students submit the final versions of their assignment papers.

Participation in the course requires meeting the formal criteria described on the course page as well as good knowledge of basic statistical methods. Participants are expected to bring laptops with administrator rights and the latest version of R installed.

Click here to read about the requirements for assignments in this workshop

2. Methods for Causal Inference in Educational Research

Autumn 2022  

José Manuel Arencibia Alemán

This workshop aims to communicate theoretical knowledge about causal and non-causal inference, illustrate methods for causal inference from experimental and observational data, and exemplify hands-on applications of the methods in R.

Causal research questions aim to isolate the effect of a treatment on those who received it, independent of other effects and of differences between the treated and the untreated. The most straightforward way to answer causal research questions is to conduct randomized trials, as in pharmaceutical studies, for instance. In many fields, including education, randomized trials are however very difficult to conduct for practical, ethical, or financial reasons, among others. Therefore, educational research often has to rely on observational data, i.e., information that stems from observing educational processes and outcomes without any intervention by the researchers. Under specific circumstances, it is nevertheless possible to isolate causal effects. This course will cover some of these methods from both a theoretical and a practical perspective. Real example studies from the field of education will help participants understand the assumptions and prerequisites behind these methods and critically evaluate their scope.
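
To illustrate the kind of practical exercise this involves, the short R simulation below (a hypothetical sketch, not official course material) demonstrates the core idea of the potential outcome framework: a naive comparison of treated and untreated units is biased when the treatment is self-selected, whereas random assignment recovers the true effect.

    # Hypothetical simulation: self-selected vs. randomized treatment assignment
    set.seed(1)
    n <- 5000
    ability <- rnorm(n)                 # unobserved confounder

    # Potential outcomes: the treatment truly raises the outcome by 0.3 for everyone
    y0 <- ability + rnorm(n)            # outcome without treatment
    y1 <- y0 + 0.3                      # outcome with treatment

    # Observational setting: high-ability students choose the treatment more often
    d_obs <- rbinom(n, 1, plogis(ability))
    y_obs <- ifelse(d_obs == 1, y1, y0)
    mean(y_obs[d_obs == 1]) - mean(y_obs[d_obs == 0])   # biased, well above 0.3

    # Randomized trial: assignment by coin flip, independent of ability
    d_rct <- rbinom(n, 1, 0.5)
    y_rct <- ifelse(d_rct == 1, y1, y0)
    mean(y_rct[d_rct == 1]) - mean(y_rct[d_rct == 0])   # close to the true effect of 0.3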

The workshop will cover the following topics:

  1. The potential outcome framework
  2. Randomized trials
  3. Regression models
  4. Instrumental variable approaches
  5. Propensity score matching
  6. Regression discontinuity designs
  7. Differences-in-differences approaches

The workshop combines lectures, interactive formats, group work, and practical exercises in the R environment. After successful completion of the course, the students

  • have a deep understanding of the potential outcome framework,
  • have broad knowledge of the assumptions, prerequisites, and procedures of randomized trials, regression models, propensity score matching, regression discontinuity designs, differences-in-differences approaches, and instrumental variable approaches,
  • can critically read and interpret the results of causal inference studies,
  • can handle the methodological peculiarities of these causal inference methods in their own analyses, and
  • can develop their own causal inference research questions and approaches.

Click here to read about the requirements for assignments in this workshop

3. The Uses of Process Data in Educational Assessment

Spring 2023

Dr Bryan Maddox, Professor of Educational Assessment at the University of East Anglia, England.

This workshop will explore the contemporary uses of process data across the educational assessment cycle, from test design and user experience studies to validation. It will introduce the theory, methods, and techniques for the collection and use of process data. We will examine how process data – including data on response times and clickstreams – can be used as ‘extensions of the test’ (i.e., as performance data) and to provide information on test-taker engagement. We will also consider how the uses of process data are validated. Participants will gain practical skills in the capture and analysis of process data and an understanding of how it can complement and support conventional ‘product’ data.
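
As a rough illustration of what working with such data can look like, the R sketch below derives item response times and a simple engagement flag from a small, made-up event log. The column names and the five-second rapid-guessing threshold are illustrative assumptions, not the procedures used in the workshop.

    # Hypothetical event log (placeholder columns): one row per logged event
    events <- data.frame(
      person    = c(1, 1, 1, 1),
      item      = c("A", "A", "B", "B"),
      event     = c("item_start", "item_end", "item_start", "item_end"),
      timestamp = c(0.0, 42.5, 42.5, 51.0)   # seconds since the test started
    )

    # Response time per item: time elapsed between the start and end events
    starts <- events[events$event == "item_start", c("person", "item", "timestamp")]
    ends   <- events[events$event == "item_end",   c("person", "item", "timestamp")]
    rt <- merge(starts, ends, by = c("person", "item"), suffixes = c("_start", "_end"))
    rt$response_time <- rt$timestamp_end - rt$timestamp_start

    # Very short response times can flag disengaged, rapid-guessing behaviour
    rt$rapid_guess <- rt$response_time < 5
    rt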

Information about the assignment will be given in Canvas.
