A recruitment email was sent to the fourth-year medical student class list of 181 medical students, 14 (7.2%) of whom responded. Demographic and specialty characteristics are described in Table 1: 53.8% (n = 7) of respondents self-identified as female, and students applying to eight different specialties were represented. Mean virtual interview participation was 12.5 interviews (Range: 5–28), with 85.7% (n = 12) of students participating in additional on-site activities, as documented in Table 2. Of note, 92.9% (n = 13) of participants initially preferred virtual interviewing over in-person interviewing; however, following the interview season, only 50% (n = 7) preferred virtual over in-person interviewing.
Table 1. Demographics of Study Cohort
Demographics (N = 14) | n (%) |
Self-Identified Gender | |
- | 8 (57.1) |
- | 6 (42.9) |
- | 0 (0) |
Specialty | |
- | 3 (21.4) |
- | 3 (21.4) |
- | 2 (14.3) |
- | 2 (14.3) |
- | 1 (7.1) |
- | 1 (7.1) |
- | 1 (7.1) |
- | 1 (7.1) |
Table 2. Interview Characteristics
Interview Characteristics (N = 14) | n (%) |
In-Person and Virtual Participation | 13 (92.9) |
Initial Preference | |
- Virtual | 13 (92.9) |
- In-Person | 1 (7.1) |
Post-Interview Season Preference | |
- Virtual | 7 (50.0) |
- | 6 (42.9) |
- | 1 (7.1) |
Participants were asked to rate their ability to judge components of program culture fit, detailed in Table 3, on a Likert scale from 1–5, where a score of 1 indicated the metric was very difficult to assess and 5 very easy. Ability to assess how well trainees got along had the greatest range, with an average score of 3.11 (Range: 1–5). Participants, on average, rated their ability to assess how much the program seemed to care about trainees and how satisfied trainees were with their program at 3.38 (Range: 2–4.5) and 3.0 (Range: 1–4.5), respectively. When asked how their scores would have changed had their interviews been in-person rather than virtual, 78.6% (n = 11) of participants said their scores would have increased, 7.1% (n = 1) said scores would have decreased, and 14.3% (n = 2) said scores would have remained the same.
Table 3. Likert Scores of Ability to Assess Fit in a Virtual Environment
Program Culture Fit | Mean (SD), Range |
- Overall ability to assess culture in virtual interviewing | 3.6 (0.62), 2.5–4 |
- How much the program seemed to care | 3.4 (0.68), 2–4.5 |
- How satisfied trainees are with program | 3.0 (1.09), 1–4.5 |
- How well trainees get along | 3.1 (1.11), 1–5 |
Program Logistics/Academics | 3.1 (1.4), 1–5 |
Participant Likes and Dislikes of Virtual Interviewing
Participants were more supportive of virtual interviewing than opposed to it. When asked what they liked about virtual interviewing, participants commonly noted three main benefits. First, all participants cited cost savings: because they saved money on travel accommodations, they could participate in a greater number of interviews. The median amount budgeted for interviews was 200 dollars, with the median amount spent being 700 dollars. One applicant noted that virtual interviewing allowed him to apply to programs he might not otherwise have applied to, given their geographic location. Second, participants cited flexibility in scheduling, as arranging interviews strategically during the fourth year of medical school was easier because students did not have to factor in travel time. Third, participants valued the convenience of virtual interviewing. They did not have to worry about finding parking or navigating complex hospital campuses on interview day, which alleviated stress. One participant also noted that virtual interviewing offered convenience for program residents and faculty, allowing them to "pop in" to interviews throughout the day with less disruption to their daily schedules. A fourth, less commonly noted benefit was the ability to showcase an applicant's personality by displaying items related to the applicant's hobbies, family, or interests outside of medicine in the background of the virtual interview.
Virtual interviewing did, however, limit applicants' ability to observe interpersonal relationships. Many participants noted that body language and simple interpersonal interactions such as "holding the door for people" were crucial in assessing a program's cultural norms and behaviors. One applicant noted that technical difficulties of using Zoom, such as cutting off other speakers due to lag, may contribute to an awkward interview environment and limit natural conversation. Assessment of the program environment also suffered in the virtual setting, limiting applicants' ability to gauge program culture, with one participant citing that "you don't get to see the physical place where you might be working at [and] you don't get a feel for the staff as much [and] kind of catch a passing …feel for the residents, but even then, … being in a zoom room …you don't get to seeing them in the true environment."
Notably, some participants noted that there are inherent limitations in assessing program culture due to the nature of interviewing itself: applicants and programs tend to showcase their strengths and diminish negative characteristics during interview days. Regardless of the setting in which interviews occur, it is difficult to ascertain a true reflection of program culture. Additionally, participants were concerned about possible selection bias among residents participating in interview days, in that residents with more positive experiences might be selected to participate over other trainees: "Very few people [will] tell you bad things, and if there are people that [will] tell you bad things, they probably got screened out before they were asked to come talk to you." This tension is further exacerbated in a virtual environment, where applicants might not have the opportunity to interact with residents and "read in between the lines". Participants stated that in-person interviews might increase opportunities for informal conversations during walks to different locations or when smaller groups can break off from the scrutiny of a larger crowd.
Participant Assessment of Culture Fit
Participants were asked about their ability to assess three critical components of judging program cultural fit: how much the program cares about trainees, how well trainees got along, and how satisfied trainees are with their program. Word clouds of participant responses to each of the three questions are shown in Fig. 1. Participants approached questions regarding their ability to assess program culture through external and internal observations. External observations included interpersonal behavior between faculty and residents and tangible resources implemented for resident and applicant benefit. Internal observations included personal accounts of the program culture by residents or alumni, emphasizing the residents' emotional state. Examples of internal and external observations are listed in Table 4. Most participants would then assess the quality of these observations based on truthfulness. For example, one participant demonstrated how he might interpret the statements of trainees: "[Residents] won't say they're unhappy because they want you to come, and usually the residents that do show up for some of the meetings are the ones that are rested. So, they're either in a lighter rotation or don't have kids." After assessing the quality of these observations, participants would compare their standards with their observations to determine culture fit. The quality of observations substantially impacted how positively the program culture was viewed, with less transparent programs being viewed more negatively or ambivalently. One participant described giving higher scores: "If they included a program overview during the interview session, I would give them a 4 or 5, [because] I feel like these people take care of their residents. If they didn't include it, sort of neutral…2 or 3."
Table 4. Examples of External and Internal Observations Made by Participants
External Observation Examples | Internal Observation Examples |
"how… the attendings treat the residents and medical students, and how the coresidents are with each other." | "I would call [trainees/alumni] or email them or text them to get a more honest perspective about how they felt…" |
"I would emphasize the organization of the interview day itself, which spoke a lot about their culture to me and how much they cared." | "I relied on the [pre-interview] social to, I guess, talk more about what [trainees] really enjoyed about their program and what they would like changed about their program, as well as what changes have been implemented." |
"Most programs talk about wellness and things like benefits… Sick leave vacation things like that and like. Social events that they put on for their residents." | "Through like one-on-one interviews with multiple people, it gives you a sense of culture." |
"You can see them joking. You can see what kind of jokes they make, and then you sort of see how they bounce their humor, like topic, or the question how they answer it." | "It's hard to get a sense for the roots of a program on the interview day, and that's when you kinda ask around the people who've been through that program." |
How well are applicants able to assess how much a program cares about its trainees?
Participants' ability to judge how much a program cares about its trainees was based mainly on external observations of how mentorship, feedback, career development, and academic discipline were approached. Participants who believed that the nature of interviews creates inherent barriers also stated that how the program director presented themselves was an important indicator of how much the program cares about trainees: "You see all of them [program directors] at the interviews. You could see their enthusiasm and how much they cared just visually. I was able to assess that better…" Additionally, participants noted that the structure of the interview day itself impacted how much they perceived the program to care, with well-organized and thoughtful schedules reflecting more positively on that program's culture. In addition to external observations, participants would ask strategic questions to tease out genuine internal observations from residents, such as asking how resident feedback is implemented in the program. Asking neutral questions such as these allowed residents to answer truthfully, yielding critical internal observations for applicants.
How well are applicants able to assess how satisfied trainees are with their programs?
This component was assessed mostly through internal observations reported by trainees. Participants noted difficulty trusting what residents reported, with one participant stating: "I felt like we often had to push people kind of to talk about the negative things or the things they didn't like as much. Usually, people weren't as upfront…" Participants also noted that programs would incentivize residents to participate in interview day, "so it was hard to tell whether the residents were there because they felt supported and wanted to express that, or they wanted the incentives that came along with being there." Participants who felt that trainees were relatively straightforward and honest suggested that increased experience with interviews, and the confidence to ask more direct questions such as "How do you feel in your program? What is a weakness in your program? What is something you wish they would change about your program? Do you feel burnt out?", improved their ability to assess how satisfied trainees are with their program. External observations that participants used to determine how satisfied trainees were included the amount of vacation time, call schedule, childcare, parking, and meal stipends.
How well did trainees get along with each other?
How well trainees got along was primarily assessed through external observations of behavior during social gatherings. Most participants noted that seeing multiple residents join a virtual conference call together indicated positive collegiality. Rather than asking trainees directly what they thought of their peers, most participants preferred to "be quiet and pay attention to [residents]." More informal settings gave applicants greater ability to judge interpersonal relationships, with one participant stating that "seeing residents in a non-judgmental setting like that was helpful." When trainees logged on to the video interview from separate computers, however, it was very difficult for participants to assess how well trainees got along with each other.