Decoding Complexity: An Introduction to Qualitative Comparative Analysis

Qualitative Comparative Analysis (QCA) emerged over two decades ago as a methodological approach designed to bridge the divide between qualitative and quantitative research traditions. Despite its potential, perceptions of complexity and unclear advantages may have hindered its widespread adoption. This article aims to demystify QCA and showcase its practical application. Using data from fifteen institutions implementing universal tumor screening (UTS) programs for hereditary colorectal cancer risk, we illustrate how QCA can uncover unique combinations of factors contributing to program effectiveness. This example reveals specific conditions associated with successful UTS programs and offers a model for enhancing patient follow-through after positive screening results.

Keywords: Qualitative Comparative Analysis, Configurational Comparative Method, Effectiveness, Evaluation, Cross-Case Comparison, RE-AIM

The integration of qualitative and quantitative research methods, once seen as distinct and opposing, is now increasingly accepted across various social science disciplines (Bazeley, 2009). However, the use of methodologies that genuinely blend these approaches remains less common. Qualitative Comparative Analysis (QCA) stands out as a hybrid method specifically developed to bridge the gap between case-oriented qualitative research and variable-oriented quantitative research. It provides a practical framework for analyzing complex, real-world phenomena (Ragin, 1987; Rihoux & Marx, 2013). Initially conceived by Dr. Charles Ragin for small- to medium-N case study research (Ragin, 1987), QCA employs Boolean algebra and minimization algorithms to systematically compare cases. This process identifies solutions comprising patterns of conditions whose presence or absence is uniquely linked to a specific outcome (Ragin, 1987).

QCA adopts a set-theoretic perspective, grounded in the idea that case attributes are best understood holistically through set relations (Ragin, 1987; Rihoux & Marx, 2013). In QCA, set membership is determined by the degree to which a case meets the criteria for each condition or outcome. The original form of QCA, now known as crisp-set QCA (csQCA), dichotomized conditions and outcomes as either present or absent. This has evolved to include multi-value QCA (mvQCA), accommodating outcomes with more than two values, and fuzzy-set QCA (fsQCA), allowing for nuanced variations in set membership (Rihoux & Marx, 2013). Software tools are available to facilitate QCA, including fsQCA, developed by Charles Ragin, which is freely accessible online at http://www.fsqca.com with a user manual (Ragin et al., 2006).
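The crisp-set logic described above can be sketched in code. In the minimal illustration below, cases are coded 0/1 on each condition and on the outcome, and configurations consistently linked to the outcome are reduced by one pass of pairwise Boolean minimization; the three conditions and the case data are invented for this sketch and do not come from the study described later.

```python
# Hypothetical illustration of crisp-set QCA (csQCA): each case is coded as
# present (1) or absent (0) on every condition and on the outcome, and
# configurations tied to the outcome are simplified by Boolean minimization.
from itertools import combinations

# Each configuration of three invented conditions (A, B, C) -> outcome
cases = {
    (1, 1, 0): 1,
    (1, 1, 1): 1,
    (0, 1, 1): 0,
    (1, 0, 0): 0,
}

# Keep only configurations where the outcome is present
positives = [cfg for cfg, out in cases.items() if out == 1]

def minimize(rows):
    """One pass of Boolean minimization: merge any two rows that differ on
    exactly one condition, marking that condition as irrelevant ('-')."""
    merged = set()
    for a, b in combinations(rows, 2):
        diffs = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diffs) == 1:
            row = list(a)
            row[diffs[0]] = "-"
            merged.add(tuple(row))
    return merged or set(rows)

print(minimize(positives))  # prints {(1, 1, '-')}
```

Here the two positive configurations differ only on condition C, so minimization drops C and yields the simpler solution "A AND B": the outcome is present whenever conditions A and B are both present, regardless of C. QCA software iterates this kind of reduction over the full truth table.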

Some criticisms of Qualitative Comparative Analysis cite its perceived complexity or lack of clear advantages over traditional methods (Hawley, 2007). While established qualitative methods for cross-case comparisons exist (Miles & Huberman, 1994), QCA offers distinct benefits. As the number of cases grows, systematic comparisons become challenging without QCA software. Furthermore, QCA’s mathematical approach to solution identification and merit assessment resonates with journals favoring quantitative research.

QCA’s adaptability is evident in its application across diverse research designs (Kahwati et al., 2011; Shanahan, Vaisey, Erickson, & Smolen, 2008; Weiner, Jacobs, Minasian, & Good, 2012). It can analyze data from individual, institutional, or national levels, accommodating small, medium, and large sample sizes. QCA also handles both unstructured data (e.g., interview transcripts) and structured data (e.g., survey responses).

The ability of Qualitative Comparative Analysis to pinpoint combinations of conditions that are ‘necessary’ and/or ‘sufficient’ for a specific outcome is invaluable for theory development and testing. For instance, health behavior adoption is rarely determined by knowledge alone. The Health Belief Model (Janz & Becker, 1984) posits that a combination of factors, including knowledge, perceived threat, benefits, and low barriers, is often required. QCA excels at identifying such “causal complexity,” making it a powerful tool for theoretical model building and validation (Ragin, 1987).

Structural Equation Modeling (SEM), another technique for incorporating multiple variables and testing models, is more widely used. While SEM might be considered more user-friendly (Hawley, 2007), it typically demands large samples and employs a reductionist approach, examining variable influence in isolation. Unlike QCA, SEM often overlooks equifinality, the concept that different combinations of conditions can lead to the same outcome (Ragin, 1987; Rihoux & Ragin, 2009). For example, knowledge and high perceived benefits might suffice for health behavior change in some women, while others may require different or additional conditions. If a factor is crucial only for a subset, its correlation with the outcome weakens, potentially leading to its dismissal as insignificant in inferential statistics. Moreover, inferential statistics assume symmetrical variable influence, while QCA recognizes that conditions for behavior adoption may differ from those causing non-adherence.
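In practice, QCA quantifies how well a candidate solution path fits the cases using two measures: consistency (of the cases exhibiting the path, the proportion that show the outcome) and coverage (of the cases showing the outcome, the proportion the path explains). The sketch below computes both for a crisp-set example; the conditions ("knowledge", "benefits") and the four cases are invented for illustration, echoing the health behavior example above.

```python
# Hypothetical sketch of consistency and coverage for a candidate csQCA
# solution path "knowledge AND benefits"; the case data are invented.

# Each case records two conditions and the outcome (1 = present, 0 = absent)
cases = [
    {"knowledge": 1, "benefits": 1, "adopted": 1},
    {"knowledge": 1, "benefits": 1, "adopted": 1},
    {"knowledge": 1, "benefits": 0, "adopted": 0},
    {"knowledge": 0, "benefits": 1, "adopted": 1},
]

def path(case):
    """The candidate solution: knowledge AND benefits both present."""
    return case["knowledge"] == 1 and case["benefits"] == 1

in_path = [c for c in cases if path(c)]
with_outcome = [c for c in cases if c["adopted"] == 1]

# Consistency: of cases exhibiting the path, how many show the outcome?
consistency = sum(c["adopted"] for c in in_path) / len(in_path)
# Coverage: of cases showing the outcome, how many does the path explain?
coverage = sum(1 for c in with_outcome if path(c)) / len(with_outcome)

print(consistency, coverage)  # prints 1.0 0.6666666666666666
```

The path is perfectly consistent (every case with both conditions adopted the behavior) but covers only two of the three adopters: the third adopter reached the same outcome by a different route, which is exactly the equifinality that correlational methods tend to miss.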

Despite the advantages of Qualitative Comparative Analysis, its diffusion across academic disciplines remains limited. This article aims to address this gap by exploring the adoption of QCA within health research and among mixed methods researchers. We present findings from a literature review of articles indexed in PubMed and the Journal of Mixed Methods Research. Furthermore, we discuss potential reasons for QCA’s slow rate of diffusion. To promote broader dissemination, we aim to enhance understanding and reduce the perceived complexity of QCA. We demonstrate csQCA using data from a multiple-case study, illustrating its benefits and limitations.

Diffusion and Adoption of QCA in Health Research

In April 2014, we conducted online searches using “qualitative comparative analysis” in PubMed and the Journal of Mixed Methods Research (JMMR). We reviewed abstracts of articles published from 1987 onwards (QCA’s inception), initially including studies using any QCA type (crisp-set, fuzzy-set, or multi-value) in original research or with hypothetical data. Due to the limited number of articles, we expanded criteria to include any articles mentioning QCA to assess its discussion contexts.

Our initial search yielded only 30 PubMed-indexed articles meeting inclusion criteria, 29 reporting original research data and one using hypothetical data. Expanding the criteria identified two additional PubMed articles. One article suggested QCA, among other “new techniques,” for advancing stress, coping, and social support research (Thoits, 1995). The other discussed QCA and other methods for synthesizing qualitative and quantitative evidence (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005).

In JMMR, only one article met initial inclusion criteria, but eight met expanded criteria. The single article reported on how large-N survival analysis and small-N QCA offered insights into project delays in organizations (Krohwinkel, 2014). Expanded criteria articles included a book review by Hawley (2007) and seven articles mentioning QCA in discussions on mixed methods research integration, synthesis, and triangulation (Bazeley, 2009; Bazeley & Kemp, 2012; Sandelowski, Voils, Leeman, & Crandell, 2012; Wolf, 2010), qualitative data analysis tools (Onwuegbuzie, Bustamante, & Nelson, 2010), data analysis as interpretation (Van Ness, Fried, & Gill, 2011), and calls for experimentation with innovative methods like QCA (Boeije, Slagt, & van Wesel, 2013).

This limited literature search reveals diverse contexts where Qualitative Comparative Analysis has been used, either as a primary technique or to complement others. Articles presented varying perspectives on QCA’s placement on the qualitative/quantitative spectrum. The review confirms QCA’s slow diffusion in health research but suggests increasing adoption rates, evidenced by half of identified PubMed articles being published post-2011.

Understanding the Slow Diffusion of QCA

Diffusion of Innovations Theory (Rogers, 2003) offers several explanations for QCA’s slow adoption. First, innovation diffusion takes time and depends on communication channels. Developed in the late 1980s by sociologist Charles Ragin (Ragin, 1987), QCA initially spread within political science and sociology, requiring broader communication channels to reach other disciplines. Second, methodological paradigm compatibility influences adoption (Barbour, 1998). “Qualitative” researchers may perceive QCA as incompatible due to its Boolean algebra and software-driven solution identification using quantitative measures like consistency and coverage. “Quantitative” researchers might see it as incompatible due to its iterative data evaluation, non-random samples, and researcher interpretation at multiple analysis stages (Rihoux & Ragin, 2009). Third, limited QCA training restricts knowledge and adoption. Fourth, initial QCA complexity was a barrier until software automation. Hawley (2007) also notes that QCA’s unique terminology adds to its learning curve, further complicated by the development of various QCA types (Rihoux & Ragin, 2009).

Given mixed methods researchers’ pragmatic approach transcending the positivist/constructivist or quantitative/qualitative divides (Morgan, 2007), the JMMR review’s finding of low QCA adoption was unexpected. Hawley’s (2007) review suggests perceived complexity and lack of perceived advantage over other methods as reasons for slow diffusion. To address complexity, the next section provides a step-by-step guide on how QCA was instrumental in a multiple-case study evaluating universal colorectal tumor screening programs for Lynch syndrome identification.

QCA in Action: A Practical Example

Universal Tumor Screening (UTS) for Lynch Syndrome: Background

Lynch syndrome, the most common cause of hereditary colorectal cancer (CRC), significantly elevates lifetime CRC risk (50–70%) (Barrow et al., 2008; Hampel, Stephens, et al., 2005; Stoffel et al., 2009) as well as the risk of other cancers (Barrow et al., 2009; Hampel et al., 2005; Stoffel et al., 2009; Watson et al., 2008). Universal tumor screening (UTS) involves screening tumors from all newly diagnosed CRC patients to identify potential Lynch syndrome cases (Bellcross et al., 2012). Over 35 US cancer centers and hospitals have implemented UTS, but protocols and procedures vary considerably (Beamer et al., 2012; Cohen, 2013). Outcomes also differ, with large variations in patient follow-through with genetic counseling and germline testing after positive screens (Beamer et al., 2012; Lynch, 2011; South et al., 2009; Cragun et al., 2014). Patient follow-through is crucial for identifying at-risk family members and enabling cancer prevention or early detection (Bellcross et al., 2012). A multiple-case study was initiated to identify institutional factors contributing to patient follow-through variability.

Study Design: A Multiple-Case Approach

Approved by the University of South Florida Institutional Review Board, a multiple-case study began in Fall 2012. This design was chosen (adapted from Yin, 2008) because: (a) the aim was to deeply understand a complex phenomenon (UTS program implementation and patient follow-through) with limited data; (b) the study sought to answer “how” and “why” questions; (c) participant behavior could not be manipulated; and (d) contextual conditions were hypothesized to influence patient follow-through. This article uses data from an online survey of institutional representatives, supplemented by six-month follow-up surveys and interviews with UTS personnel. Full study design details are published elsewhere (Cragun et al., 2014).

Conceptual Frameworks: RE-AIM and CFIR

The RE-AIM evaluation framework (Glasgow, Vogt, & Boles, 1999) and the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al., 2009) guided the multiple-case study. RE-AIM encompasses five dimensions (Reach, Effectiveness, Adoption, Implementation, and Maintenance) for multi-level evaluations (Glasgow, Klesges, Dzewaltowski, Estabrooks, & Vogt, 2006). In this study, RE-AIM dimensions were defined as:

  • Reach: Number, proportion, and representativeness of CRC patients screened.
  • Effectiveness: UTS impact on outcomes (patient follow-through, potential negative effects).
  • Adoption: Number, proportion, and representativeness of adopting institutions and staff.
  • Implementation: UTS program delivery consistency, time, cost, and adaptations.
  • Maintenance: Long-term UTS effects for institutions and patients.

RE-AIM was chosen to enhance the quality, speed, and impact of UTS translation into practice. CFIR provided a framework to explore the Implementation dimension and identify factors influencing Effectiveness. Table 1 outlines CFIR domains and associated conditions (Damschroder et al., 2009).

Table 1. Five Domains of the Consolidated Framework for Implementation Research (CFIR)

| CFIR Domain | Description and Examples of Associated Constructs |
| --- | --- |
