I. Data source
In order to conduct the qualitative content analysis of news items, this protocol will rely on two news databases: 1) the ProQuest Canadian Newsstream (formerly the Canadian Newsstand Complete) and 2) Google News. The Canadian Newsstream, accessed via the ProQuest web interface, was chosen because it provides not only the largest newspaper, video, and magazine article databases in Canada (full-text items from nearly 300 unique newspapers and news organizations), but also coverage that extends further back than many other archives (some items date back to the 1970s) [29-30]. Furthermore, the ProQuest Canadian Newsstream provides access to major Canadian news sources (e.g., the Globe & Mail, CBC News, and Maclean's) while also making many regional media sources easily attainable (e.g., the Hamilton Spectator, the Medicine Hat News, and the Vancouver Courier) [29].
Google News was chosen as the second data source because it is a free news portal designed for public use rather than solely for academic purposes [31]. More than a decade ago, it was estimated that more than 9 million people accessed their news from Google News each month [31]. Google News also has broad appeal in today’s society, as more and more people rely on platforms such as YouTube and Google products to access news and up-to-date information via the clustering of news items on topics of interest [32]. This is especially important in the context of this research, as the main objective relates to mass media’s role in shaping public opinion.
In addition, one of the main advantages of Google News over other archival databases, such as ProQuest, Factiva, JSTOR, Periodicals Archive Online, or LexisNexis, is its ability to provide not only text but also graphics (e.g., photos) and background information (e.g., headline size and story placement) that are often eliminated before a story is archived in traditional databases [31]. Finally, Google News is able to capture local news and smaller non-print outlets (e.g., VICE, the Georgia Straight, the Tyee, the North Shore News, and rabble.ca) that are often missed by other databases, in this case the ProQuest Canadian Newsstream.
II. Data collection
The data for this study protocol consist of openly accessible news items available to the public via Google or the membership/subscriber-based academic portal of Canadian Newsstream; therefore, no ethics approval was needed from a university or institution. The search query for both Google News and Canadian Newsstream was the phrase “Vancouver Area Network of Drug Users”. No restriction was placed on the publication date of news items in either database, because some web interfaces and search engines provide no details on how they order their results [33].
News items (e.g., newspapers, magazines, and videos) will be included if they meet the following criteria: 1) items written or spoken in English; 2) items related directly to VANDU’s recent activism, work, or action; 3) items that involve members of VANDU (e.g., action, comments, or activism). News items that meet the initial inclusion criteria will be reviewed in full before being added for qualitative analysis.
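As an illustration only (the protocol applies these criteria through manual review, not programmatically), the screening step above can be sketched as a simple filter; the field names below are hypothetical placeholders, not part of the study’s actual workflow.

```python
# Hypothetical sketch of the initial inclusion screening.
# Field names ("language", "mentions_vandu_activity", "involves_vandu_members")
# are illustrative assumptions, not the study's coding scheme.
def meets_inclusion_criteria(item: dict) -> bool:
    """Return True if a news item passes the initial screening."""
    return (
        item.get("language") == "en"                    # 1) English-language items
        and item.get("mentions_vandu_activity", False)  # 2) VANDU activism/work/action
        and item.get("involves_vandu_members", False)   # 3) member involvement
    )

items = [
    {"title": "Item A", "language": "en",
     "mentions_vandu_activity": True, "involves_vandu_members": True},
    {"title": "Item B", "language": "fr",
     "mentions_vandu_activity": True, "involves_vandu_members": True},
]

# Only items meeting all three criteria proceed to the full review.
screened = [i for i in items if meets_inclusion_criteria(i)]
```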
During the full review, each news item’s geographic area will be recorded in an Excel worksheet as local (e.g., British Columbia (B.C.)), national (e.g., Canada), or international. The date range for each article will also be noted in the worksheet. All items collected in the Excel worksheet will later be included in the quantitative findings to link the data to major epidemics in the region and in the Downtown Eastside neighborhood.
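The worksheet structure described above can be sketched as follows; this is a minimal illustration, and the location keywords and sample rows are assumptions rather than the study’s actual categorization rules, which will be applied by the reviewer.

```python
# Illustrative sketch of the geographic categorization recorded in the worksheet.
# The location sets are placeholder assumptions for demonstration only.
import csv

LOCAL = {"Vancouver", "Victoria", "British Columbia"}
NATIONAL = {"Toronto", "Ottawa", "Montreal", "Canada"}

def geographic_category(location: str) -> str:
    """Assign a news item's location to local, national, or international."""
    if location in LOCAL:
        return "local"
    if location in NATIONAL:
        return "national"
    return "international"

# Hypothetical rows: (title, outlet location, publication date).
rows = [
    ("Item 1", "Vancouver", "2019-06-01"),
    ("Item 2", "Toronto", "2018-03-15"),
    ("Item 3", "New York", "2017-11-20"),
]

with open("news_items.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "category", "date"])
    for title, location, date in rows:
        writer.writerow([title, geographic_category(location), date])
```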
III. Material
In addition to Microsoft Excel, which will be used for the quantitative analysis linking geography and important dates to the data set, this study protocol relies on NVivo software (version 12) for the qualitative analysis. Version 12 of NVivo (QSR International Pty Ltd., 2018) has many new features that not only allow for greater exploration and visualization in the qualitative analysis (e.g., coding videos, photos, maps, interviews, and PDF files), but also allow for easy cross-tabulation of data and information exchange with other software. Many of these features will help in the analysis of news articles and in the visualization of imported items.
IV. Data analysis
News items that pass the first and second reviews will be imported into NVivo for further qualitative analysis. The initial qualitative analysis will involve open coding, where, based on a review of the whole news item (e.g., the title and news content), each article will be placed into its own category. Placing a whole news item in a coding category in NVivo follows the inductive principles of grounded theory, whereby there is already interplay between coding and data review during data collection [33-36]. In effect, during the final review and subsequent uploading of the news items, there can be ‘constant simultaneous comparison’ between items that belong to specific coding categories and consideration of whether a new coding category needs to be constructed [36]. Codes will be developed iteratively, as emergent news items and themes are identified in accordance with grounded theory principles [37]. The emergent themes will be constantly compared with established codes to observe similarities and differences across categories.
To provide linkage and examples from the new inductive, open, and emergent coding [38], NVivo will be used again for a deductive approach to coding the content of news items [39-41]. During this stage of analysis, latent content analysis will be employed, where the focus is not only on the interpretation of the content [42] but also on revealing the underlying usage of terms [43,44]. By relying on NVivo’s word frequency query, the most frequent words and phrases will be identified. This quantification of the most frequent words and phrases is an attempt to contextualize the text of the news items, rather than to infer meaning [40].
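As a rough analogue of the word frequency query described above (the actual analysis will use NVivo’s built-in query, not custom code), the counting logic can be sketched as follows; the stop-word list and sample text are placeholder assumptions.

```python
# Minimal sketch of a word frequency count, analogous to NVivo's
# word frequency query. Stop words below are an illustrative subset.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def word_frequencies(text: str, top_n: int = 5) -> list:
    """Return the top_n most frequent non-stop words in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# Hypothetical snippet standing in for a news item's full text.
sample = "Media coverage of the media and the media on news and news items"
top_words = word_frequencies(sample)
```

In the protocol itself, the resulting frequency lists would then be read back against the inductive codes rather than interpreted in isolation.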
The most frequent words and phrases will then be used within the context of the previous inductive codes to form new templates and coding guides as a means of sorting the text within each news item [45]. The themes identified through this deductive process will help provide further interpretation of latent meanings [46]. Since validity is an important concept in research regardless of the methods used, this research will also present findings with quotes from each news item, both to provide the context for each theme and as a standalone representation of the theme.