Director: Dr. Chris Hammerly
Author: Anna Stacey
This repo contains the necessary pieces for visual world eye-tracking experiments. The experimental code is written in Python and run in PsychoPy. We use a Tobii eyetracker (Tobii Pro Fusion), recording the gaze data in Tobii Pro Lab (TPL) via the Titta package.
The experiment that this code was created for involves a number of pictures being displayed on the screen, audio being played, and the participant selecting one of the images. This process repeats for the desired number of trials. However, the code was designed to be reusable and can be adapted to create different experimental flows. For this purpose, the repo is structured into modules (display_resources.py, eye_tracking_resources.py) containing reusable functions that can be pieced together as you wish. wiigwaas.py then provides an example of how to construct an experiment from these pieces. You may wish to use wiigwaas.py as a starting point and modify it to create the specific experimental flow you're after.
Currently, the code is designed to work with input from either a mouse or a touch screen (tested using X monitor). The code could be modified to work with other input sources, and indeed we plan to add Cedrus support in the future.
This code requires the Titta package. At present (because some of our changes have not yet been submitted upstream as pull requests), you need to install our fork of the package, cloning it directly into your local VisualWorldTools directory (e.g., just using git clone).
You may also need to set configuration variables to suit your needs (e.g., specifying your screen dimensions).
This is the code that manages the overall experimental flow. It is specific to our example experiment, but demonstrates how to make use of the functionality provided by the other modules.
It's the control centre from which each trial is structured, but it doesn't contain any reusable functions.
If you're including the eye-tracking component of wiigwaas.py (you can turn this off via the EYE_TRACKING_ON setting in config.py), you'll need to use Tobii Pro Lab for recording.
As noted in the Titta docs, if you want to record in Tobii Pro Lab, there are a few steps involved to get the software ready.
- Open Tobii Pro Lab.
- Create New Project > External Presenter Project
- You can name the project whatever you like; we just stick with the default numbered names.
- Switch to the Record tab.
- Now you can run the experiment from PsychoPy.
- When the experiment is done, switch to the Analyze tab to view the recording.
Note that if you do not change the participant number each time you run the experiment, you will need to create a new project each time, or Tobii Pro Lab will complain that the participant already exists in the current project.
You may need to edit some of the settings defined in config.py. For example, you should specify your monitor size to make the visuals fit your screen properly.
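For reference, a minimal config.py might look something like the following sketch. The constant names are those documented in the config.py section below; the values shown are just illustrative placeholders to adapt to your own set-up.

```python
# config.py -- a minimal sketch; adapt the values to your own set-up.
EYE_TRACKING_ON = True        # set to False to test without the eye-tracking steps
WINDOW_WIDTH = 1920           # screen width in pixels (example value)
WINDOW_HEIGHT = 1080          # screen height in pixels (example value)
USER_INPUT_DEVICE = 'mouse'   # 'mouse' or 'touch'
```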
Now, with TPL open and the configuration variables set, you are ready to run the experiment! Open PsychoPy, and then in the Coder window, open the file wiigwaas.py. Hit the play button to run the experiment.
This example experiment involves a brief set-up stage (including calibration), followed by repeated calls to trial(). The flow of each trial, from the participant's perspective, is as follows (a code sketch appears after this list):
- Display a blank screen for a brief period.
- Display a buffer screen until the user clicks/taps.
- Display a fixation cross and perform a drift check.
  - If the check fails, recalibrate.
- Show the stimuli and play the audio. At this point, there are a few different options for user input:
  - A click/tap on a stimulus image draws a box around the selected image and makes an associated checkmark confirmation button appear. Only one stimulus can be selected at a time, so a click/tap on a different image moves the box and checkmark.
  - A click/tap on the repeat icon replays the audio.
  - A click/tap on the checkmark button confirms the selection and ends the trial.
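As a rough illustration, here is how a trial might be wired together from the module functions documented below. This is only a sketch: the function names are real, but every signature and argument is a simplified assumption, and confirm_selected() is a hypothetical stand-in; see wiigwaas.py for the actual usage.

```python
# A simplified, hypothetical trial skeleton -- signatures are assumptions.
import display_resources as dr
import eye_tracking_resources as etr

def trial(window, images, checkmarks, audio):
    dr.display_blank_screen(window)              # blank screen for a brief period
    dr.display_buffer_screen(window)             # shown until the user clicks/taps
    dr.display_fixation_cross_screen(window)
    if not etr.drift_check():                    # gaze should be on the cross
        etr.calibrate_recorder()                 # recalibrate on a failed check
    dr.display_stimuli_screen(window, images)
    dr.play_sound(audio)
    dr.clear_clicks_and_events()                 # recommended at stimulus onset
    selected = None
    confirmed = False
    while not confirmed:
        dr.listen_for_quit()                     # allow the quit button at any time
        if dr.listen_for_repeat():               # repeat icon: replay the audio
            dr.play_sound(audio)
        choice = dr.check_for_input_on_images(images)
        if choice is not None:                   # box + checkmark around the choice
            selected = choice
            dr.handle_input_on_stimulus(selected, images, checkmarks)
        # confirm_selected() is a hypothetical stand-in for detecting a
        # click/tap on the checkmark confirmation button
        confirmed = selected is not None and confirm_selected(selected)
    return selected
```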
This module contains functions related to what shows up on the screen.
| Function | Inputs | Outputs |
|---|---|---|
| check_for_input_on_images: a general function (works for any input type) to check whether the user has selected one of the images. | | |
| _check_for_click_on_images: a private function (called by the more general check_for_input_on_images) to check whether the user has selected one of the images via mouse input. | | |
| _check_for_tap_on_images: a private function (called by the more general check_for_input_on_images) to check whether the user has selected one of the images via touch input. | | |
| check_for_input_anywhere: a general function (works for any input type) to check whether the user has clicked/tapped anywhere on the screen. | | |
| _check_for_click_anywhere: a private function (called by the more general check_for_input_anywhere) to check whether the user has clicked anywhere on the screen via mouse input. | | |
| _check_for_tap_anywhere: a private function (called by the more general check_for_input_anywhere) to check whether the user has tapped anywhere on the screen via touch input. | | |
| clear_clicks_and_events: a function that resets PsychoPy's tracking of clicks and other events. PsychoPy recommends doing this at stimulus onset. | | |
| handle_input_on_stimulus: a function that deals with the user's selection of a stimulus. Given the predetermined selected ImageStim, it draws the selection box around that image and adds the checkmark button to that image. This function has a lot of input parameters because it involves re-drawing the whole stimuli screen. | | |
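The public/private split documented above suggests a dispatch pattern along the following lines. This is a hypothetical sketch, not the actual implementation in display_resources.py, and the parameters are assumptions.

```python
# Hypothetical sketch of the dispatch pattern implied by the table above.
from config import USER_INPUT_DEVICE

def check_for_input_on_images(images):
    """Route to the input-specific private helper based on the configured device."""
    if USER_INPUT_DEVICE == 'mouse':
        return _check_for_click_on_images(images)
    if USER_INPUT_DEVICE == 'touch':
        return _check_for_tap_on_images(images)
    raise ValueError(f"Unsupported USER_INPUT_DEVICE: {USER_INPUT_DEVICE}")
```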
| Function | Inputs | Outputs |
|---|---|---|
| display_blank_screen: a function that changes the display to just a blank screen. | | |
| display_buffer_screen: a function that changes the display to just a buffer screen, where the experiment will remain until user input is received. The buffer screen is used as a kind of pause in between trials. | | |
| display_fixation_cross_screen: a function that changes the display to a fixation cross. Note that any drift check is not controlled from here; this function simply changes the display. | | |
| display_stimuli_screen: a function that changes the display to show the stimuli objects. | | |
| display_text_screen: a function that changes the display to show some text. | | |
| Function | Inputs | Outputs |
|---|---|---|
| display_subj_ID_dialog: a function that brings up a GUI input box for entering the participant's ID number. | | |
| get_images: a function that, given the image file names, retrieves them and gets them into a PsychoPy-appropriate format. | | checkmarks: a list of the checkmarks as ImageStim objects. Though they appear identical, we need one checkmark ImageStim for each stimulus image. |
| get_random_image_order: a function that gives a random ordering for the given number of stimuli images. For example, if there are three stimuli to be displayed, it will randomly return one of [0,1,2], [0,2,1], [1,0,2], [1,2,0], [2,0,1] or [2,1,0]. | | |
| listen_for_quit: a function that checks if the quit button has been pressed. The idea is that you would have a loop that calls this function repeatedly (maybe along with listening for other inputs). | | |
| listen_for_repeat: a function that checks if the repeat button has been pressed. The idea is that you would have a loop that calls this function repeatedly (maybe along with listening for other inputs). | | |
| play_sound: a function for playing audio. | | |
| set_image_positions: a function that assigns the appropriate positions on screen for a set of images. | checkmarks: a list of the checkmarks as ImageStim objects. Though they appear identical, we need one checkmark ImageStim for each stimulus image. | checkmarks: a list of the checkmarks as ImageStim objects, but with their positions now set. |
| switch_displays: a function for changing the current display content. This should only be called if EYE_TRACKING_ON is True. This function changes the module-level variables that track what the current display is and when it started being displayed. We track these so that we can update TPL on what is on-screen. | | |
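Taken together, these functions suggest a stimulus set-up flow like the sketch below. The file names, return shapes, and signatures here are all assumptions for illustration.

```python
# Hypothetical stimulus set-up using the functions above; signatures assumed.
import display_resources as dr

image_files = ['stimulus1.png', 'stimulus2.png', 'stimulus3.png']  # placeholder names
images, checkmarks = dr.get_images(image_files)    # ImageStims (assumed return shape)
order = dr.get_random_image_order(len(images))     # e.g. [2, 0, 1]
images = [images[i] for i in order]                # apply the random ordering
images, checkmarks = dr.set_image_positions(images, checkmarks)
dr.display_stimuli_screen(window, images)          # assumes an existing PsychoPy window
```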
This module contains functions related to the eye-tracking process, including communication with Titta (and by extension, Tobii Pro Lab).
In the Analyze tab of TPL, we can replay the gaze recording and watch how the gaze moved around the screen. To make this more useful, we can tell TPL when different displays were being shown. This is also useful because it adds events to our data export file indicating whenever the display changed. However, TPL only accepts images that occupy the whole display. For this reason, we do not tell TPL about individual stimulus images; instead, we give it images which are essentially screenshots approximating what is shown on screen at different times. In this example experiment, we took screenshots of the fixation cross screen, an example buffer screen, and an example stimuli screen. We then share these screenshot images with TPL and keep it updated whenever the display changes.
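For example, this bookkeeping might look roughly like the following, using functions from the table below (the screenshot paths and argument shapes are assumptions):

```python
# Hypothetical display bookkeeping for TPL; argument shapes are assumptions.
import display_resources as dr
import eye_tracking_resources as etr

# register the approximating screenshots with TPL once, up front
etr.add_image_to_recorder('screenshots/fixation_cross.png')
etr.add_image_to_recorder('screenshots/buffer_screen.png')
etr.add_image_to_recorder('screenshots/stimuli_screen.png')

# whenever the on-screen content changes, close out the old display and
# record which screenshot now approximates the screen
etr.finish_display()
dr.switch_displays('stimuli_screen')   # updates the module-level display state
```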
| Function | Inputs | Outputs |
|---|---|---|
| add_AOI: a function that adds an AOI (Area of Interest) to a particular display in TPL. The purpose of this is that it automatically adds columns to the data export file indicating whether or not the gaze was in this area of the screen at a given time. | | |
| add_image_to_recorder: a function that tells TPL about a display image we are using. | | |
| calibrate_recorder: a function that initiates eyetracker calibration. | | |
| close_recorder: a function that cleanly ends the connection with TPL. | | |
| drift_check: a function that checks whether there has been drift in the gaze calibration, requiring recalibration. Assuming that a fixation point is being displayed and the participant is looking at it, we want to confirm that the eyetracker finds their gaze on that point. We therefore check whether the gaze stays within a certain zone on screen for a sufficient amount of time. If a larger timeframe expires without this happening, the drift check is failed. | | |
| compare_gaze_and_target: a private function that helps with the drift check. It accesses the current gaze data and checks whether it's within the target zone. | | |
| finish_display: a function that tells TPL we are finished with the current display. | | |
| record_event: a function that tells TPL about any kind of event, in order to have it marked in the data export file (plus in the Analyze tab in TPL, if that's helpful). | | |
| set_up_recorder: a function that initializes and saves a connection with TPL. | | |
| start_recording_gaze: a function that starts recording via the established connection to TPL. | | |
| stop_recording_gaze: a function that stops the current recording via the established connection to TPL. | | |
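Putting these together, a recording session might be bracketed like the following sketch (the signatures are assumptions; see wiigwaas.py for actual usage):

```python
# Hypothetical recorder lifecycle, based on the descriptions above.
import eye_tracking_resources as etr

etr.set_up_recorder()          # initialize and save the connection to TPL
etr.calibrate_recorder()       # run eyetracker calibration
etr.start_recording_gaze()     # begin recording

# ... run trials; use etr.drift_check() between trials, and
# etr.record_event(...) to mark events in the data export ...

etr.stop_recording_gaze()      # end the current recording
etr.close_recorder()           # cleanly end the connection with TPL
```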
This is where project-level constants are defined. The other modules expect the following values to be defined:
| Constant name | Description |
|---|---|
| EYE_TRACKING_ON | a boolean indicating whether you want the experiment to run with or without the eye-tracking steps. This essentially allows a test mode where you can check how other parts of the experiment are working without worrying about the eye-tracking side. |
| WINDOW_WIDTH | an integer indicating the width of the screen the display will be shown on, in pixels. |
| WINDOW_HEIGHT | an integer indicating the height of the screen the display will be shown on, in pixels. |
| USER_INPUT_DEVICE | a string indicating the source of user input. Currently supported values are 'mouse' and 'touch'. |