Participants
Healthy participants (N = 27, 9 males, mean age = 23.7 years, SD = 3.48 years, range = 18–32 years) within a healthy range of BMI (18.9–24.9 kg/m²) were recruited from the Tübingen student population. All participants had normal or corrected-to-normal vision and were right-handed according to Oldfield’s handedness inventory [25]. They reported moderate levels of hunger at the beginning of the experiment [mean = 6.00 (SD = 0.96), range = 5–8 on a scale from 1–10]. Participants provided informed consent and received either course credit or monetary compensation for their participation. Ethical approval for the study was obtained from the Ethical Commission of the University Hospital Tübingen (approval no. 207/2015BO2).
Apparatus
To immerse participants in the VR, they were equipped with an Oculus Rift© DK2 stereoscopic head-mounted display (HMD, Oculus VR LLC, Menlo Park, California). Motion tracking of hand movements was realized with a LeapMotion© near-infrared sensor (LeapMotion Inc., San Francisco, California, USA, SDK version 3.1.3). The LeapMotion© sensor provides positional information on the palm, wrist, and phalanges. These data can be used to render a hand model in VR, as in previous studies [23, 24, 26]. The whole experiment was implemented in the Unity® engine 4.5 using its C# application programming interface. Instructions and feedback were presented on separate text fields aligned at eye height.
Procedure
Prior to the actual experimental testing, participants received verbal instructions on the proper handling of the VR equipment. They were then equipped with the HMD, and the experiment started with practice trials. An experimenter was present throughout the experiment. All participants completed the same virtual experience, and all independent variables were manipulated within subjects.
Supplementary Video 1 shows examples of the trial sequence. Each trial consisted of two parts: (1) Preconditions had to be met for an interaction to begin: Participants had to move their right hand into a predefined and fixed starting position, indicated by red, semi-transparent spheres, at a comfortable height close to the participant. Once the palm was within the indicated position and the hand was open, the spheres turned green. Furthermore, participants had to center their field of view on a fixation cross located at the outer bound of the task space. (2) Once the center of the visual field had been directed towards the fixation cross for at least 500 ms, the spheres and the fixation cross disappeared (stimulus-onset asynchrony: 200 ms), and a colored cue indicated the upcoming position of the target as well as the requested action (e.g., blue cue for pushing and purple cue for grasping movements, 400 ms). This cue was then replaced with the target object, which appeared with slight motion directed towards the participant. Objects always appeared approximately 20 cm in front of the participants, close to the position of the (removed) fixation cross, but exact location, rotation, and speed were jittered to make the task more challenging (see Supplementary Video 1 for a demonstration of these subtle variations). In the case of grasping, participants were requested to close their hand around the virtual object, move it towards themselves, and place it in a box in front of them. In the case of pushing, participants were requested to hit the target object with their hand, upon which it would fly away after the collision. The three most recently collected objects remained in the box adjacent to the participants’ feet, whereas pushed objects were always cleared before the next trial.
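The two-stage trial logic described above can be sketched as a simple state machine. The timing constants (500 ms fixation hold, 200 ms stimulus-onset asynchrony, 400 ms cue duration) are taken from the text; everything else (state names, the per-frame update signature) is illustrative, since the original was implemented in Unity/C#:

```python
from dataclasses import dataclass

# Timing constants from the trial description (ms)
FIXATION_HOLD_MS = 500   # gaze must rest on the fixation cross
SOA_MS = 200             # spheres/cross offset -> cue onset
CUE_DURATION_MS = 400    # colored cue shown before the target appears

@dataclass
class TrialState:
    phase: str = "precondition"
    clock_ms: int = 0

def step(state: TrialState, hand_in_start: bool, hand_open: bool,
         gaze_on_cross: bool, dt_ms: int) -> TrialState:
    """Advance the (hypothetical) trial state machine by one frame of dt_ms."""
    if state.phase == "precondition":
        # Stage 1: hand in the start spheres, hand open, gaze on the cross
        if hand_in_start and hand_open and gaze_on_cross:
            state.clock_ms += dt_ms
            if state.clock_ms >= FIXATION_HOLD_MS:
                state.phase, state.clock_ms = "soa", 0
        else:
            state.clock_ms = 0  # fixation must be held continuously
    elif state.phase == "soa":
        state.clock_ms += dt_ms
        if state.clock_ms >= SOA_MS:
            state.phase, state.clock_ms = "cue", 0
    elif state.phase == "cue":
        state.clock_ms += dt_ms
        if state.clock_ms >= CUE_DURATION_MS:
            state.phase, state.clock_ms = "target", 0  # target object appears
    return state
```

In a game engine, `step` would be called once per rendered frame with the frame's elapsed time, which is why event timestamps are only as precise as the frame rate.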
Trials were cancelled if movement initiation took longer than two seconds. In case of such time-outs, early movements (earlier than 250 ms), or wrong responses, participants received corresponding feedback in the form of a semi-transparent text field. The whole experiment was self-paced, since trials only started once participants had assumed the starting position and fixated the fixation cross. Hence, participants could, and were encouraged to, take breaks between trials at any time. After half of all trials (i.e., approximately 30 minutes of VR), participants were asked to take off the HMD for a longer break of several minutes.
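The cancellation criteria reduce to two latency thresholds on movement onset. A minimal helper capturing them might look as follows (function and label names are hypothetical; only the 2 s time-out and 250 ms early-movement thresholds come from the text):

```python
def classify_onset(movement_onset_ms):
    """Classify a trial by its movement-onset latency.

    Thresholds follow the described procedure: initiation slower than
    two seconds cancels the trial; movements earlier than 250 ms count
    as premature. Returns 'timeout', 'early', or 'valid'.
    """
    if movement_onset_ms is None or movement_onset_ms > 2000:
        return "timeout"  # no (timely) movement initiation
    if movement_onset_ms < 250:
        return "early"    # premature movement
    return "valid"
```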
In addition to the ball and high-calorie virtual food objects that had already been studied in a different sample of participants [23], the two further categories of low-calorie food and complex objects were created with 3D objects from the Unity® asset store. Screenshots of the object categories are shown in Figure 1. Grasp and push interactions were requested in randomized order across the whole experiment for each of the 12 objects. Each object was presented 30 times, resulting in 360 trials in total. The experiment was subdivided into 10 blocks of 36 trials each, for a total duration of approximately 45 minutes. After the first 5 blocks, participants were encouraged to take a break before the second set of 5 blocks started.
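The design (12 objects × 30 presentations = 360 trials in 10 blocks of 36, with grasp and push randomized) can be sketched as a trial-list generator. The object names, and the assumption that grasp and push were balanced at 15 repetitions per object, are illustrative; the text only states that the order was randomized:

```python
import random

# 4 categories x 3 exemplars = 12 objects (names hypothetical)
OBJECTS = [f"{cat}_{i}" for cat in ("hical", "local", "ball", "complex")
           for i in range(3)]
REPETITIONS = 30   # presentations per object (from the text)
BLOCK_SIZE = 36    # trials per block (from the text)

def build_trial_list(seed=0):
    """Build a randomized trial list of (object, action) pairs.

    Yields 12 x 30 = 360 trials split into 10 blocks of 36. Assumes
    grasp/push are balanced per object (15 each), which the text does
    not state explicitly.
    """
    rng = random.Random(seed)
    trials = [(obj, action)
              for obj in OBJECTS
              for action in ["grasp"] * (REPETITIONS // 2)
                          + ["push"] * (REPETITIONS // 2)]
    rng.shuffle(trials)
    return [trials[i:i + BLOCK_SIZE]
            for i in range(0, len(trials), BLOCK_SIZE)]
```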
Questionnaires & Ratings
Before putting on the HMD, participants indicated their hunger on a 1–10 Likert-type scale. After the experiment, all participants rated the VR exposure with respect to experienced presence [27] and possible symptoms of simulator sickness [28]. All participants also disclosed their weight and height for computation of the body-mass index (BMI). The hunger and BMI scores were entered as covariates into the statistical analysis to control for their likely contribution to an approach bias for food [8, 29, 30]. To evaluate the composition of the object categories, participants also rated all objects for valence (negative – positive) and arousal (not at all – very arousing) on a visual analog scale.
Dependent Measures and Data Treatment
Response times were recorded within the VR at three stages of an experimental trial, at the first frame update following the critical event in the scene. Movement onset was defined as the hand leaving the starting position; object contact was triggered by the collision of the virtual hand with the target object, whether grasping it or pushing it away. Finally, in grasp trials only, collection time was recorded once the object had been placed in the box next to the participant.
Data from correct trials were aggregated as median response times for the four stimulus categories, each type of interaction (grasping vs. pushing), and each part of the experiment (first part, second part). The median was used as summary statistic because the raw data were not normally distributed. Trials were excluded because of erroneous responses (14.2%) or early hand movements detected prior to the onset of the actual object (3.3% of all trials). Repeated-measures analyses of variance (ANOVAs) were conducted for the dependent variables with Stimulus (high-calorie food, low-calorie food, balls, complex tools), Direction (grasp, push), and Time (part 1, part 2) as independent variables. For collection time, the factor Direction was not meaningful, as pushed objects were not collected. Greenhouse–Geisser (GG) corrections are reported where sphericity was violated. As errors could reflect either wrong responses or failures to perform the simulated grasp, error rates were not suitable for statistical analysis.
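The aggregation step, correct trials collapsed to cell medians over the Stimulus × Direction × Time design, can be sketched as follows. The trial field names (`category`, `direction`, `part`, `rt_ms`, `correct`) are illustrative placeholders for whatever the logging format actually contained:

```python
from statistics import median
from collections import defaultdict

def aggregate_medians(trials):
    """Aggregate correct trials to median response times per design cell.

    Each trial is a dict with (hypothetical) keys 'category', 'direction',
    'part', 'rt_ms', and 'correct'. Medians are used because the raw
    response times were not normally distributed.
    """
    cells = defaultdict(list)
    for t in trials:
        if t["correct"]:  # incorrect/early trials are excluded beforehand
            cells[(t["category"], t["direction"], t["part"])].append(t["rt_ms"])
    return {cell: median(rts) for cell, rts in cells.items()}
```

Each participant would contribute one median per cell (4 stimulus categories × 2 directions × 2 parts), and these cell medians then enter the repeated-measures ANOVA.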