MacKenzie, I. S., Read, J. C., & Horton, M. (2024). Empirical research methods for human-computer interaction. Extended Abstracts of the ACM SIGCHI Conference on Human Factors in Computing Systems - CHI 2024, Article No. 596, pp. 1-3. New York: ACM. doi:10.1145/3613905.3636267.

Empirical Research Methods for Human-Computer Interaction

I. Scott MacKenzie1, Janet C. Read2, & Matthew Horton2

1York University, Toronto, Canada

2University of Central Lancashire, Preston, UK

Figure 1: Empirical research methods. From left: observational, correlational, experimental.

Most attendees at CHI conferences will agree that an experiment (user study) is the hallmark of good research in human-computer interaction. But what constitutes an experiment? And how does one go from an experiment to a CHI paper?

This course will teach how to pose testable research questions, how to make and measure observations, and how to design and conduct an experiment. Specifically, attendees will participate in a real experiment to gain experience as both an investigator and as a participant. The second session covers the statistical tools typically used to analyze data. Most notably, attendees will learn how to organize experiment results and write a CHI paper.

Human-centered computing;

Empirical research; user study; experiment design; quantitative methods; writing a CHI paper.


In this two-session course, attendees will learn how to conduct empirical research in human-computer interaction (HCI). The course delivers an A-to-Z tutorial on designing and running a user study and demonstrates how to write a successful CHI paper. It would benefit anyone interested in conducting a user study or writing a CHI paper. Only general HCI knowledge is required.


This course caters to attendees who are motivated to learn about, and use, empirical research methods in HCI research. Specifically, it is for those in academia or industry who evaluate interaction techniques using quantitative methods, who make decisions based on usability tests, and, in particular, who run user studies following an experimental methodology.

Approximately 75 attendees is the maximum practical size for this course. If the number of registrations is large, the instructors may consider teaching the course multiple times.


No specific background is required other than a general knowledge of human-computer interaction as conveyed, for example, through an undergraduate HCI course or attendance at CHI conferences. Knowing how to enter formulae in a Microsoft Excel spreadsheet to compute means, standard deviations, etc., would be an asset. Knowledge of advanced statistics, such as the analysis of variance, is NOT required. Additionally, there is no linkage between this and any other CHI course.


This course was offered at CHI 2007 (San Jose), CHI 2008 (Florence), CHI 2009 (Boston), CHI 2010 (Atlanta), CHI 2011 (Vancouver), CHI 2012 (Austin), CHI 2013 (Paris), CHI 2014 (Toronto), CHI 2016 (San Jose), CHI 2017 (Denver), CHI 2018 (Montreal), CHI 2019 (Glasgow), and CHI 2023 (Hamburg). In addition, extended versions of this course have been given at the University of Tampere (Finland), the University of Central Lancashire (UK), the University of Oslo (Norway), ETH Zürich (Switzerland), the University of the Balearic Islands (Spain), the IT University (Copenhagen, Denmark), the Technical University of Denmark (Lyngby, Denmark), and the University of Aalborg (Denmark).1


This course presents selected topics from Chapter 4 (Scientific Foundations), Chapter 5 (Designing HCI Experiments), Chapter 6 (Hypothesis Testing), and Chapter 8 (Writing and Publishing a Research Paper) in Human-Computer Interaction: An Empirical Research Perspective [1].

Session 1 topics:

Session 2 topics:


Early in session 1, participants are divided into groups of two and participate in an experiment. A hand-out is distributed for the in-class experiment. See Fig. 2.

Following brief instructions, the in-class experiment proceeds. During the experiment, participants take turns acting as a "participant" and as an "investigator." The participant does an experimental task – entering a text phrase five times with a non-marking stylus on the image of a soft keyboard – while the investigator measures the time to enter each phrase. This is done twice, once for keyboard layout "A" and once for keyboard layout "B". See Fig. 3. The data are entered in a log sheet. When finished, the participant and investigator switch roles and the process is repeated. This time the order of using the keyboard layouts is reversed, "B" first, then "A". This is an example of counterbalancing, as explained during the course.
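The AB/BA ordering just described can be sketched as a simple assignment rule. The snippet below is a minimal illustration only; the function name, participant IDs, and group size are hypothetical and not part of the course materials:

```python
# Minimal sketch of two-condition (AB/BA) counterbalancing:
# even-numbered participants test layout A first, odd-numbered
# participants test layout B first, so each order occurs equally often.
def assign_orders(n_participants):
    orders = []
    for pid in range(n_participants):
        if pid % 2 == 0:
            orders.append((pid, ["A", "B"]))
        else:
            orders.append((pid, ["B", "A"]))
    return orders

for pid, order in assign_orders(4):
    print(f"Participant {pid}: {' then '.join(order)}")
```

With more than two conditions, the same idea generalizes to a balanced Latin square, a topic covered in the course.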

As well as performance data, demographic information is entered on the log sheet. The in-class experiment takes about 20 minutes.

Student volunteers (SVs) collect the hand-out sheets, leave the room, and transcribe the data from the hand-out sheets into a boilerplate spreadsheet provided by the instructors. This is done as the course continues. Transcribing the data takes about 20-30 minutes with two SVs; i.e., one reads out the data while the other inputs it. This procedure has proved successful in previous offerings of this course.

During session 2, the course continues but now uses the methodology and results of the in-class user study to reinforce topics in the course. Examples of the results are shown in Fig. 4. The particular results are not important here. However, it is extremely useful from a pedagogical perspective that the results discussed are from an experiment in which the course attendees have just participated.

Results of an analysis of variance are also presented.
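For readers unfamiliar with the calculation, the F statistic for a one-way ANOVA can be computed from between- and within-groups sums of squares. The sketch below uses made-up entry times and, for simplicity, treats the two layouts as independent groups; the entry times, variable names, and function are illustrative assumptions, not course data (the in-class analysis is presented during the course itself):

```python
from statistics import mean

# Hypothetical phrase-entry times (seconds) for two keyboard layouts.
# These numbers are invented for illustration only.
layout_a = [12.1, 10.8, 11.5, 13.0, 12.4]
layout_b = [14.2, 13.5, 15.1, 13.9, 14.8]

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way (between-groups) ANOVA."""
    grand = mean(x for g in groups for x in g)
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    # Between-groups sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = k - 1
    df_within = n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

f = one_way_anova_f(layout_a, layout_b)
print(f"F(1, {len(layout_a) + len(layout_b) - 2}) = {f:.2f}")
```

A within-subjects design like the in-class experiment would ordinarily call for a repeated-measures ANOVA, which additionally partitions out participant-to-participant variability.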

Figure 2: Two-page handout for the in-class experiment.

Figure 3: In-class experiment for this course at a previous CHI conference.

Figure 4: Results from this course at a previous CHI conference. See text for discussion.


Scott MacKenzie's research is in human-computer interaction with an emphasis on human performance, experimental methods and evaluation, interaction devices and techniques, etc. He has more than 200 peer-reviewed publications in the field of Human-Computer Interaction (including more than 50 from the ACM's annual SIGCHI conference). In 2015, he was elected into the ACM SIGCHI Academy. Full details:

Janet Read and Matt Horton have previously delivered CHI courses on child-computer interaction. For the last 15 years, Janet has taught a research methods course that draws on material delivered in this tutorial, and Matt has taught an advanced course on user studies in HCI in which students are expected to plan experimental user studies. Full details:


Attendees need not bring any resources. Hand-outs will be distributed during the course.


Attendees in need of accessibility arrangements are encouraged to contact the course organizers. Appropriate assistance will be provided in consultation with the conference organizers.


[1]    I. Scott MacKenzie. 2024. Human-Computer Interaction: An Empirical Research Perspective (2nd ed.). Morgan Kaufmann (an imprint of Elsevier), Amsterdam.


1Please contact Scott MacKenzie, mack@yorku.ca, to discuss possibilities for your lab or institute.