Once the experiment is “running” without error, a thorough process of testing must take place to reduce the threat of problems during data collection that may lead to incomplete or lost data.
Stage 7: Testing the Experiment
1) Run Experiment to Verify Correct and Error Responses
2) Check Scoring and Data Collection
3) Check Timing Accuracy
4) Run Pilot Participants
• Verifying the understanding of instructions
• Checking the data
Stage 7, Step 1: Run experiment to verify correct and error responses
Participants cannot be expected to respond correctly on every trial, or even to respond at all on every trial. The experiment should account for cases in which the participant responds correctly, responds incorrectly, presses a key that is not a valid response key, or fails to respond at all.
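The four cases above can be sketched as a simple classification. This is plain Python for illustration only, not E-Prime script; the key names and function are hypothetical.

```python
# Illustrative sketch (not E-Prime script) of the four response cases an
# experiment should handle. Keys and names are hypothetical assumptions.

VALID_KEYS = {"f", "j"}  # assumed valid response keys for this task

def classify_response(pressed_key, correct_key):
    """Classify a single trial's response into one of four cases."""
    if pressed_key is None:
        return "no_response"   # participant failed to respond in time
    if pressed_key not in VALID_KEYS:
        return "invalid_key"   # a key that is not a valid response key
    if pressed_key == correct_key:
        return "correct"
    return "incorrect"

print(classify_response("f", "f"))   # correct
print(classify_response("j", "f"))   # incorrect
print(classify_response("q", "f"))   # invalid_key
print(classify_response(None, "f"))  # no_response
```

Each case should map to defined feedback and scoring, so that no response pattern leaves the experiment in an unhandled state.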
Stage 7, Step 2: Checking scoring and data collection
During the process of testing correct and error responses, the scoring process should be verified as well. The appropriate feedback should appear for every possible scenario: correct responses, incorrect responses, and non-responses. It is good practice to run through a block of trials, keeping a paper record of what happened during the run, and to verify that the paper record matches the data in E-DataAid. A large number of trials is usually not necessary to confirm that the data is being logged accurately; five to ten stimulus-response trials may suffice.
- Examine the data file in E-DataAid to verify that the data is logging correctly at all levels of the experiment.
- At the lowest level, the responses and reaction times should be logged and scored correctly, and should be within the expected range of values.
- Verify that the correct number of trials and blocks are being run, and that the session-level information (e.g., subject number, session number) is logging correctly.
- Determine that the data file reflects the correct number of observations for each cell in the design. For example, if a 2x3 design runs 5 repetitions of each cell, the data file should contain 30 trials: 5 for each of the 6 conditions.
- Finally, and most importantly, double check that all of the measures necessary for the analysis are being logged.
Because E-DataAid offers filtering capabilities, it is recommended that the user err on the side of logging too much information rather than failing to log some crucial variable.
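The per-cell count check described above can be expressed as a short script run against an exported data file. This is an illustrative sketch using the 2x3 design with 5 repetitions from the example; the factor names and levels are assumptions, not E-DataAid fields.

```python
# Hypothetical check that a logged data file contains the expected number
# of observations per design cell. Factor names/levels are illustrative.
from collections import Counter
from itertools import product

factor_a = ["congruent", "incongruent"]  # 2 levels (assumed)
factor_b = ["short", "medium", "long"]   # 3 levels (assumed)
reps = 5

# Simulated trial log: one (factor_a, factor_b) pair per trial.
# In practice these pairs would be read from the exported data file.
trials = [(a, b) for a, b in product(factor_a, factor_b) for _ in range(reps)]

counts = Counter(trials)
assert len(trials) == 30, "2 x 3 x 5 = 30 trials expected"
for cell in product(factor_a, factor_b):
    assert counts[cell] == reps, f"cell {cell}: {counts[cell]} trials, expected {reps}"
print("All 6 cells have 5 observations each.")
```

A check like this catches dropped or duplicated trials before they contaminate the analysis.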
Stage 7, Step 3: Checking timing accuracy
The logging options within E-Prime allow for detailed monitoring of the timing of the events within an experiment. It is important to log and examine the timing information for experimental events. On the Logging tab for each object, the Time Audit and Time Audit (Extended) variables provide the means to verify the launch time, display time, finish time, and any delay or error associated with each object.
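Once the timing variables are logged, the exported values can be screened for events that launched later than intended. The sketch below is plain Python over hypothetical exported rows; the field names and tolerance are assumptions, not the exact E-Prime column names.

```python
# Hedged sketch of screening exported timing-audit data: flag trials whose
# onset delay exceeds a tolerance. Field names here are illustrative.
TOLERANCE_MS = 2  # assumed acceptable onset delay in milliseconds

audit = [
    {"trial": 1, "intended_onset": 1000, "actual_onset": 1001},
    {"trial": 2, "intended_onset": 2000, "actual_onset": 2005},
    {"trial": 3, "intended_onset": 3000, "actual_onset": 3000},
]

flagged = [
    (row["trial"], row["actual_onset"] - row["intended_onset"])
    for row in audit
    if row["actual_onset"] - row["intended_onset"] > TOLERANCE_MS
]
print(flagged)  # trials whose display launched later than expected
```

Running a screen like this over pilot data makes systematic timing problems (e.g., a display that consistently launches late) visible before real data collection begins.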
Stage 7, Step 4: Running pilot participants
Usually, the person programming the experiment is too familiar with the instructions or the task to be able to pick up errors or inconsistencies in the experiment. It is important to run pilot participants to iron out the wrinkles in the experiment prior to actual data collection.
Verifying the understanding of instructions
The instructions may not be as clear to the participant as they were to the experimenter, who certainly has more information concerning the purpose of the experiment. During pilot runs, patterns of responding with inappropriate keys, low accuracy, or long reaction times may indicate that participants did not understand the task, or that not enough instruction was supplied (e.g., experiments are often run without written instructions, with the experimenter verbally communicating the task to the participant).
It is generally a good idea to present participants with illustrations of the sequence of displays they will see on the computer and to describe the task verbally before running it on the computer. Participants seem more likely to ask questions when presented with the scenario in paper form. Include test questions in the instructions (e.g., “Now if you saw this sequence of stimuli, how would you respond?”).
Poor instructions result in irritated participants and inaccurate responses. Pilot test the instructions, being sure to run someone who does not have experience with the experiment. Clear instructions will not only add to the consistency of the experiment, but will serve as a record of the method used within the experiment when the experiment is passed on to a colleague or a student.
Checking the data
After each pilot participant is run, review the individual data file for accuracy before merging it into a master data file. Monitoring individual data files helps eliminate potential conflicts during merge operations.
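A pre-merge review can be partly automated. The sketch below is a hypothetical check, not an E-DataAid feature: it verifies that each pilot file carries the expected columns and a unique subject number before the files are combined.

```python
# Illustrative pre-merge check (hypothetical file summaries): verify that
# each pilot data file has the expected columns and a unique subject
# number before merging into a master data file.
expected_cols = {"Subject", "Session", "Trial", "Response", "RT"}

# In practice these summaries would be read from the actual data files.
files = {
    "pilot1": {"cols": {"Subject", "Session", "Trial", "Response", "RT"}, "subject": 1},
    "pilot2": {"cols": {"Subject", "Session", "Trial", "Response", "RT"}, "subject": 2},
}

subjects_seen = set()
for name, info in files.items():
    missing = expected_cols - info["cols"]
    assert not missing, f"{name} is missing columns: {missing}"
    assert info["subject"] not in subjects_seen, f"duplicate subject number in {name}"
    subjects_seen.add(info["subject"])
print("All pilot files consistent; safe to merge.")
```

Catching a missing column or a duplicated subject number at this stage is far cheaper than untangling it in the merged master file.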