This study used a quasi-experimental design (pre/post with comparison group) to examine the relationship between participation in the MT-DIRC training program and subsequent scholarly output, in the form of D&I-related publications and US federal research funding secured. Our main research question was whether fellows were more likely than nonfellows to produce these research outputs after participation (vs. nonparticipation) in the program.
Training program description summary. The MT-DIRC program was an R25 mentored training program funded by NCI (PAR-12-049) to increase capacity for D&I research across the cancer control spectrum (etiology to survivorship) (19). The full description of the program, with preliminary results on skill building, has been published elsewhere (6); here we provide a brief summary.
Eligibility requirements included a doctoral degree and a full-time appointment in a research setting. Recruitment, conducted through listservs and advertisements at D&I-related events and conferences, focused mostly on early-career researchers and on mid- to late-career researchers seeking to shift the focus of their work to D&I research. To apply, researchers submitted an informational cover page, a concept paper for a D&I study to work on during the program, a biosketch, and two letters of reference. Program faculty rated each application on several areas, including overall application quality, commitment to D&I research, experience working in transdisciplinary networks, research support and potential, likelihood of career development, appropriateness of the methods and topic in the concept paper, and potential impact of the proposed work.
Selected applicants participated in two Summer Institutes (5-day trainings) held each June in St. Louis, Missouri, USA. Trainings focused primarily on competencies for D&I research (20) and on in-person mentoring and interactive sessions in which fellows worked on and received feedback about research proposals and/or other projects in progress. Ongoing evidence-informed mentoring (21), often in the form of regularly scheduled calls, continued for two years. Fostering collaboration among the program’s network of fellows and mentors was a concerted focus: all current and past fellows and mentors were invited to participate in quarterly webinars and to attend annual meet-up events at the D&I science conference in Washington, DC.
Data collection and processing. In total, 105 researchers applied to the program between 2014 and 2017. Three who were selected into the program later dropped out for various career or personal reasons and were excluded from this study. The total sample of program applicants included in this study is therefore 102: 55 who were selected and participated in the MT-DIRC program (“fellows”) and 47 unselected applicants (“nonfellows”) who served as the comparison group. Demographic data were collected from each application.
For information on scholarly output, two main sources of publicly available data were used. To gather all published works, we used Scopus (www.scopus.com), a comprehensive citation database with over 75 million records (22). All applicants to the MT-DIRC program were searched in the Scopus database and, after academic affiliation was verified, their citations and accompanying abstracts were extracted through Scopus’s BibTeX export tool in July 2019 and further processed in R using the ‘bibliometrix’ package (2.3.2) (23). A total of 5,189 publication citations were extracted. Initially, 11% (N=565) were missing abstracts. Upon closer examination, certain article types accounted for much of the missingness and were excluded (errata=30, notes=208, letters=129). For the remaining citations with missing abstracts (N=258), a member of the research team searched each citation to confirm that the abstract was truly unavailable (e.g., some journals do not require abstracts) or to extract it if found. A total of 208 additional abstracts were located, and the remaining citations without abstracts (N=50) were excluded. The final dataset contained 4,772 citations and included only citation ID number, applicant ID number, title, and abstract for further de-identified coding.
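The abstract-screening logic described above can be sketched as follows. The study’s processing was done in R with ‘bibliometrix’; this Python/pandas sketch is purely illustrative, and the column names and records are invented.

```python
import pandas as pd

# Hypothetical citation records; column names are assumptions,
# not the actual bibliometrix export fields.
citations = pd.DataFrame({
    "citation_id": [1, 2, 3, 4],
    "doc_type": ["Article", "Erratum", "Note", "Article"],
    "abstract": ["Study of X...", None, None, None],
})

# Exclude article types that rarely carry abstracts.
excluded_types = {"Erratum", "Note", "Letter"}
citations = citations[~citations["doc_type"].isin(excluded_types)]

# Remaining records with missing abstracts would be hand-checked;
# here we simply flag them for review.
needs_review = citations[citations["abstract"].isna()]
```

In the study, flagged records were searched manually, and only those whose abstracts could not be located were dropped.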
To gather grant funding, we used the National Institutes of Health’s (NIH) Research Portfolio Online Reporting Tools (RePORTER Tool Manual, 2018). RePORTER is an electronic tool for searching the US federal repository of intramural and extramural NIH-funded research projects dating back to 1985. The repository also includes projects funded by the Administration for Children and Families (ACF), the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), the Health Resources and Services Administration (HRSA), the Food and Drug Administration (FDA), and the Department of Veterans Affairs (VA). The R package ‘fedreporter’ (0.2.1) (24) was used to extract grant funding information, including abstracts, for all applicants from RePORTER’s application programming interface (API) in September 2019. We included records in which the applicant was either a co-investigator or a principal investigator. A total of N=271 funding records were extracted. After retaining only the first fiscal year of duplicate entries (e.g., multi-year awards), the total sample of unique US federally funded projects was N=97. Funding cases were reduced to grant ID, applicant ID, title, and abstract for further de-identified coding.
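The deduplication of multi-year awards can be sketched in the same spirit. The study used the R ‘fedreporter’ package; this pandas sketch is illustrative only, with invented field names and records.

```python
import pandas as pd

# Hypothetical RePORTER-style records; a multi-year award appears
# once per fiscal year under the same grant ID.
records = pd.DataFrame({
    "grant_id": ["R01-A", "R01-A", "R01-A", "K01-B"],
    "fiscal_year": [2016, 2017, 2018, 2017],
    "applicant_id": [7, 7, 7, 12],
})

# Keep only the earliest fiscal year of each award, collapsing
# multi-year funding into one unique project record.
unique_grants = (records.sort_values("fiscal_year")
                        .drop_duplicates(subset="grant_id", keep="first"))
```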
Coding process
For each publication and grant abstract, D&I focus was coded as “yes” or “no,” following criteria similar to those of Baumann and colleagues (16). A D&I focus of “yes” required an “implementation-centered hypothesis, design, or framework, focused on assessing the implementation climate of an organization, or described the implementation processes for a particular intervention.” For example, abstracts that explicitly examined implementation outcomes were coded as “yes,” and those that focused only on intervention outcomes were coded as “no.” In addition, all publications appearing in the journal Implementation Science were coded as D&I “yes” given the journal’s inherent D&I focus and scope. Two project staff (RRJ, AG) coded publication and grant abstracts. For publications, an initial random selection of citations (N=20) was double coded by both staff members and discussed to reach agreement on inclusion and exclusion criteria before one coder (AG) proceeded with single coding of the remaining citations. A random sample of 10% was selected for double coding and showed 93% agreement. Because the number of grants was considerably smaller than the number of publications, all grant abstracts were double coded, with any discrepancies reconciled to reach 100% consensus.
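As an illustration of the double-coding check, percent agreement is simply the share of records on which the two coders assigned the same code. The codes below are invented for illustration; the study’s observed agreement on the 10% reliability sample was 93%.

```python
# Two coders' yes/no D&I codes on the same set of abstracts
# (invented example data).
coder_a = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
coder_b = ["yes", "no", "yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

# Percent agreement: matching codes divided by total records.
matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)  # 9/10 = 0.9 in this toy example
```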
Data analysis
After coding was complete for each publication and grant, data were aggregated to the applicant level and merged with applicant demographic data for analysis (N=102). Because RePORTER covers only US-based research, we excluded foreign applicants (N=15) from the grant analyses.
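The applicant-level aggregation can be sketched as follows. The study performed this step in R; the frames, column names, and values here are invented for illustration.

```python
import pandas as pd

# Hypothetical coded publication records and applicant roster.
pubs = pd.DataFrame({
    "applicant_id": [1, 1, 2, 3],
    "di_focus": ["yes", "no", "yes", "no"],
})
applicants = pd.DataFrame({
    "applicant_id": [1, 2, 3],
    "fellow": [1, 1, 0],
})

# Aggregate to the applicant level: any D&I publication (yes/no),
# then merge onto the applicant demographic data.
any_di = (pubs.assign(di=pubs["di_focus"].eq("yes"))
              .groupby("applicant_id")["di"].any()
              .rename("any_di_pub").reset_index())
merged = applicants.merge(any_di, on="applicant_id", how="left")
```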
We compared demographic, publication, and funding data by applicant status using independent-samples t-tests (continuous data) and chi-square tests (categorical data). For modeling, two main binary outcomes were examined: any D&I publication after the application year (yes or no) and, because the program supported grant writing in general, any grant funding after the application year (yes or no). Binary logistic regression models examined program status in relation to each outcome. Attenuation, or change in estimate (CIE), of the program-participation coefficient resulting from the inclusion of independent variables was examined (25, 26). Variables that resulted in more than 5% attenuation were included in separate and combined models for further examination. All analyses were completed in R (27) with the alpha level set at 0.05.