Defining Comparability Work: The Core of Research Synthesis

Introduction

In the evolving landscape of evidence-based practices, particularly within healthcare and social sciences, there’s a growing recognition of the need for methodological inclusivity. This shift acknowledges the limitations of relying solely on quantitative studies and calls for incorporating diverse research approaches, including qualitative and mixed methods, into systematic reviews. This movement towards methodological diversity has spurred significant interest in mixed research synthesis – the process of combining findings from studies employing varied methodologies. However, this very diversity, while enriching, presents a considerable challenge: how do we compare seemingly incomparable studies to synthesize seemingly uncombinable findings? This challenge brings to the forefront the critical concept of comparability work.

This article posits that all research synthesis, and especially mixed research synthesis, fundamentally hinges on comparability work: the active, interpretive process by which researchers, or reviewers, establish and manage similarities and differences among studies in order to synthesize them. The perceived diversity of research, often seen as an inherent obstacle to synthesis, is not a pre-existing condition but rather emerges from prior comparability work. Judgments are continuously made about what constitutes methodological and topical diversity and uniformity. By understanding research synthesis through the lens of comparability work, we bring the often-unacknowledged interpretive processes of systematic review into focus. This shift provides a new framework for addressing the methodological complexities inherent in synthesizing empirical research findings. We will explore these complexities using examples from the synthesis of studies on antiretroviral adherence in HIV-positive women in the United States.

Understanding Comparability Work: A Methodological and Theoretical Foundation

Our approach to methodology here is to examine it as a subject of inquiry in itself. This article is inspired by ongoing research focused on developing methods for integrating qualitative and quantitative research findings. Our initial “method case” for this project is the body of literature on antiretroviral adherence among HIV-positive women in the US, across various demographics. The primary criterion for selecting this literature was its methodological diversity, ensuring it was broad enough for our methodological aims but not so topically diverse as to hinder synthesis. Our current analysis includes 42 reports, encompassing journal articles, dissertations, and technical reports, collected between June 2005 and January 2006. This set comprises qualitative studies, intervention studies, a mixed methods study, and various quantitative observational studies.

During our analysis of these antiretroviral adherence studies, we observed a nuanced picture of diversity. Methodologically, the studies appeared less diverse than initially anticipated. For instance, some studies labeled as longitudinal presented cross-sectional analyses. Furthermore, the analytical approaches and findings of many qualitative studies shared similarities in content, form, and interpretive depth with several quantitative studies. Conversely, the topical diversity seemed greater than expected. Differentiating “antiretroviral adherence studies” from related topics like medication use patterns or access proved challenging. These observations – less methodological diversity but more topical diversity than initially assumed – led us to question our fundamental understanding of study diversity. We found ourselves constantly re-evaluating whether the selected studies were methodologically diverse enough for our purposes yet topically similar enough for meaningful synthesis.

To gain clarity on study diversity, we turned to social science literature on difference, particularly discussions on the politics of difference and how difference is conceived, articulated, created, sustained, and managed. This literature, especially within studies of classification, method, and evidence-based practice, challenges the notion that difference is given and unproblematic. Difference is seen not as an inherent characteristic but as an ongoing accomplishment, whereby individuals “impose similarities and differences” to achieve specific goals like order and control.

Building on these perspectives on difference, we drew from the sociology of work and the “sociology of the invisible”. These fields helped us reframe both the problem of diversity and its solution: work is the link between the visible and the invisible, and attending to it allows us to explore what is “at work” but often “deleted” in research synthesis.

The Argument for Comparability Work

From these insights, we argue that comparability work is essential to all research synthesis projects. It’s the management of difference, aiming to make findings from diverse studies within a research domain comparable enough for combination. If comparability is unattainable, comparability work involves excluding studies that resist comparison.

While not explicitly framed as such in research synthesis literature, practices like converting qualitative data into quantitative data, and vice versa, are examples of comparability work. They are intended to minimize or erase perceived differences between data types, making the qualitative-quantitative distinction less prominent. Similarly, setting inclusion and exclusion criteria, translating concepts across qualitative studies, and converting diverse statistical expressions into effect size indices are all forms of comparability work. These indices act as “metrics of compatibility and commonness,” enabling the comparison of previously incompatible elements.
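
To make concrete what such a metric involves, the sketch below is a minimal illustration with hypothetical numbers, using standard textbook conversion formulas (of the kind collected in Lipsey and Wilson's practical meta-analysis texts) rather than anything specific to our review. It converts results reported as group means, as a t statistic, and as a correlation into a single standardized mean difference:

```python
import math

def d_from_means(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) from group means and SDs."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def d_from_t(t, n1, n2):
    """Cohen's d from an independent-groups t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def d_from_r(r):
    """Cohen's d from a point-biserial correlation."""
    return 2 * r / math.sqrt(1 - r**2)

# Three hypothetical studies reporting adherence differences in three different
# statistical forms become comparable once expressed on the same scale.
print(round(d_from_means(0.82, 0.68, 0.20, 0.22, 60, 55), 2))  # means and SDs
print(round(d_from_t(2.1, 48, 52), 2))                          # t statistic
print(round(d_from_r(0.25), 2))                                 # correlation
```

The arithmetic is the easy part; the sketch presupposes the harder judgment that the three studies are measuring “the same” comparison in the first place – a judgment that is itself comparability work.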

The study diversity we initially grappled with was itself a product of our own comparability work. Our judgments about what constituted diversity shaped our perception. The diversity we sought was not a pre-existing attribute but rather something (re)produced through the synthesis process. Our concern about “too little methodological diversity” and “too much topical diversity” was already an act of imposing similarity or difference during study selection and review. This realization underscores that comparability work is not just a preliminary step but an ongoing, constitutive element of research synthesis.

While systematic reviews typically begin by defining the review topic and addressing topical diversity, we will first address methodological diversity, which is often seen as both defining mixed research synthesis and posing its central challenge.

Deconstructing Methodological Diversity through Comparability Work

A common assumption in research synthesis is that methodological diversity is inherent in any research body. Methodological classification systems differentiate between qualitative and quantitative methods, and further categorize within each, based on parameters like philosophical orientation, theoretical foundations, sampling approaches, data collection, analysis, and validity. These distinctions – qualitative vs. quantitative, qualitative vs. qualitative, quantitative vs. quantitative – are believed to shape research conduct and outcomes, influencing research questions, sampling, and findings. Reviewers are often advised to distinguish between “real” differences among studied groups and “artifactual” differences arising from methodological variations. A priori methodological distinctions are used to justify study inclusion/exclusion, assess methodological quality, guide systematic review types, and maintain methodological hierarchies.

The Qualitative-Quantitative Dichotomy as Comparability Work

However, the differences embedded in methodological classifications often don’t translate directly into actual research practice. Managing methodological diversity in systematic reviews begins with assessing the degree of methodological similarity and difference within the study body.

Consider the qualitative-quantitative distinction. While comparisons between these approaches seem “rhetorically unavoidable,” the practical boundary is often blurred. These terms are used to describe paradigms, sampling techniques, data collection, and analysis methods. When viewed in a “purist” way, qualitative and quantitative research appear fundamentally irreconcilable. Methodological differences are maximized, leading to separate analytical treatment based solely on categorization as “qualitative” or “quantitative.” Conversely, a “compatibilist” view sees the distinction as merely words versus numbers, suggesting reconciliation through conversion. Methodological differences are minimized, and the qualitative-quantitative distinction is deemed less relevant.

Depending on reviewers’ backgrounds and philosophical stances, reports labeled one way might resemble reports labeled differently. Conversely, reports labeled similarly may differ significantly. One researcher’s “grounded theory” might resemble another’s “phenomenology” to reviewers, and phenomenologies can vary greatly.

For example, in our antiretroviral adherence study review, many “qualitative” reports presented findings as inventories or summaries – lists of facilitators and barriers, reasons for adherence/non-adherence. These resembled surveys in quantitative studies more than typical qualitative studies in their analytical emphasis. In prior work, we termed these “topical surveys,” highlighting an in-between category common in health sciences, sharing with quantitative surveys an emphasis on condensing surface information through descriptive summaries.

Managing this “topical survey” issue involves comparability work. One option is exclusion, deeming them not truly “qualitative” or of weak quality. Another is inclusion but treating quality as a covariate in post-hoc analyses, assessing each study’s contribution. Both options manage difference by reinforcing qualitative-quantitative distinctions. A third option is to treat these studies as quantitative surveys, allowing inclusion and opening new analytical avenues. This option manages difference by disregarding methodological labels and re-classifying studies. This also ensures valuable findings are not lost due to methodological preconceptions.
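
One way to picture the second option above – retaining all studies but treating methodological quality as a covariate in post-hoc analyses – is a simple weighted regression of study effect sizes on an appraisal score. The sketch below is illustrative only, with hypothetical data and made-up variable names, not figures from our review:

```python
import numpy as np

effect_sizes = np.array([0.31, 0.12, 0.45, 0.08, 0.27])  # one effect per included study
quality = np.array([0.9, 0.4, 0.8, 0.3, 0.6])            # appraisal scores scaled 0-1
weights = np.array([120, 45, 80, 30, 60], dtype=float)   # e.g., study sample sizes

# Weighted least squares of effect size on quality: does the appraisal score
# covary with (moderate) the reported effects?
X = np.column_stack([np.ones_like(quality), quality])
W = np.diag(weights)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect_sizes)
print(f"intercept = {beta[0]:.3f}, quality slope = {beta[1]:.3f}")
```

Here, too, the computation only formalizes prior judgments about which studies belong in the set and how their quality should be scored.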

Real vs. Nominal Methodological Diversity

The discrepancy between reported methodology and actual study conduct challenges the significance of methodological distinctions. It questions whether distinctions such as ethnographic vs. non-ethnographic, or good vs. bad ethnography, truly affect findings. As Johnson and Onwuegbuzie noted, if methods do not dictate differences in practice, such distinctions become distinctions without a difference. Becker observed that “philosophical details” often have little bearing on actual research practices, and Eakin and Mykhalovskiy suggested that method in qualitative research serves more to stimulate analysis than to determine findings.

Furthermore, few studies fully meet any single set of quality criteria. Studies designed as grounded theory often resemble qualitative descriptive studies due to challenges in theoretical sampling or constant comparison. Randomized controlled trials often resemble uncontrolled observational studies due to biases. Ideal RCTs are rare in social and behavioral sciences due to ethical and real-world constraints. The “limitations” sections in reports often highlight the gap between ideal and actual method execution.

Methods are also not static. Quality appraisal tools often assume fixed methods, but methods evolve through use. RCTs have adapted to enhance credibility and patient-centeredness. Grounded theory can adopt a phenomenological approach, while descriptive studies can be presented as “more ethnographic.” This dynamic relationship undermines rigid methodological classifications.

Moreover, reported methods often represent “reconstructed logic” rather than “logic-in-use.” They can be “academic posturing” or attempts to gain “epistemological credibility” by labeling a study a certain type. Method, as a “language game,” is as much about “viewing and talking about reality” as technique. This difference between method discourse and practice further complicates ideas of methodological diversity. Research reports are “sites of method talk,” conforming to reporting conventions, concealing the messiness of inquiry. The conventional report is a “hygiene” move, cleaning up the messy reality of research.

In essence, methodological diversity is not inherent but a judgment reviewers impose based on study reports. This is comparability work, influencing study inclusion/exclusion and the treatment of selected findings.

Topical Diversity and the Role of Comparability Work in Research Synthesis

Topical diversity is as much a product and object of comparability work as methodological diversity. Systematic reviews typically begin by defining the topic to achieve a topically homogenous study set. However, the “apples and oranges” problem persists, requiring decisions on whether to combine seemingly different entities or maintain distinctions.

In our antiretroviral adherence studies, topical diversity is evident in the aspects of “antiretroviral regimen” studied (e.g., pill count, dosage frequency, side effects, swallowability). Comparability work involves deciding whether to treat these as a single variable influencing adherence or preserve individual aspects. The former effaces potentially significant differences, while the latter limits synthesis due to few studies addressing identical aspects. Both options manage difference: the first by abstracting comparison, the second by maintaining empirical specificity.

Glass argues that comparing “apples to apples” is trivial. Drawing from Nozick’s “closest continuer theory,” he suggests comparability is an empirical question of what differences researchers deem important. Comparability work here is deciding what is similar enough or too different to combine. Systematic review guidelines advise researchers to determine useful comparisons, acknowledging subjectivity and the lack of statistical solutions. Cooper notes that cumulative analysis should “test the same comparison,” but also avoid eliding “distinctions meaningful to users.” Comparability becomes a technical matter (e.g., effect size indices) only after judgments about useful comparisons and target audiences are made.

Topical Diversity in Qualitative vs. Quantitative Studies

In mixed research synthesis, comparability work around topical diversity is influenced by views on whether qualitative and quantitative studies can address the same topics. One view is that they can (e.g., participant “views”). Another is that they address different topics: qualitative research taps into a “different sort of curiosity.”

In our adherence studies, quantitative findings emphasize numerically measured variables (CD4 count, viral load, pill count) and demographics as correlates of adherence. Qualitative findings emphasize experiences, attitudes, and beliefs about therapy. Quantitative findings focus on predictors of adherence, while qualitative findings focus on reasons for adherence/non-adherence.

Comparability work here involves deciding whether to treat “predictors” and “reasons” as topically different, or to conceive reasons as explanations for predictors. Alternatively, they can be treated as equivalent, or qualitative findings as thematically refined versions of quantitative findings, and vice versa. The first imposes difference, the second similarity.

The Process of Defining a Body of Research: Comparability Work in Action

Whether topical differences between qualitative and quantitative studies are emphasized or minimized, managing topical diversity is complicated by the fact that no two studies, even within a defined topic, address precisely the same topic in the same way. For example, adherence studies vary widely in studied factors, their conceptualization, measurement, and linkages, as well as in how antiretroviral therapy and adherence are defined and measured, and in study populations and settings. “Adherence” itself varies across studies in aspects examined, such as prescription adherence, pill counts, and dose consumption over different timeframes. Consequently, few studies in a review will address the exact same set of factors influencing another set in the same way. For instance, in our review, many bivariate relationships were assessed in only one study, and even ostensibly similar relationships were operationalized differently, resisting direct comparison.

Despite this lack of perfect topical identity, research synthesis requires reviewers to treat a designated study set as a unified “body of research.” Even the term “body of research” implies creating identity among disparate entities. Reviewers actively construct a body of research for each synthesis. Inclusion/exclusion decisions are not just sampling but comparability work, engineering a specific sample. These efforts aim to reduce topical diversity for a more comparable dataset.

Many studies we considered didn’t explicitly use “adherence” or “compliance,” but addressed related topics like access, use patterns, and their correlations with factors like race, class, drug use, psychiatric conditions, or CD4 count and viral load. We started with a conventional adherence definition – patients following prescriptions. However, adherence is contingent both on a prescription being written, which depends on provider assessment, and on that prescription being filled, which depends on pharmacy access and financial means. The “arena” of adherence encompasses far more than typically conceived in adherence research, yet this arena is too topically diverse for a single synthesis.

Thus, the body of research on antiretroviral adherence in HIV-positive women can include studies on: (a) provider prescribing practices; (b) factors facilitating prescription fulfillment; (c) medication side effects; (d) HIV disease progression; (e) provider drug selection practices; or (f) attitudes and beliefs about antiretroviral drugs. This list, far from exhaustive, demonstrates the variable composition of this research body. “Adherence” research can include studies not directly on adherence but relevant due to implied links. This variability in the “work object” of systematic review explains why different reviews of ostensibly the same research yield varied results and why defining a single, consistent body of research is challenging.

No single project can encompass the full “health work” context surrounding medication practices, the broader “social, discursive, and institutional context(s)” in which individuals “do medications.” This health work is further situated within the context of other chronic diseases and competing life demands.

Systematic review feasibility necessitates scope reduction. Without boundaries, no review is possible. The tendency in systematic review is towards exclusion to achieve a comparable dataset. Systematic review is less about comprehensive inclusion and more about justifiable exclusion. Credibility hinges partly on transparent “boundary work.” Reviewers must account for exclusions at each review stage – search, retrieval, data extraction, analysis, and evaluation. A study initially “in” may be “out” at data extraction due to findings resisting comparison.

The boundary work in systematic review is often highly exclusionary, eliminating much of the broader contextual arena. Reviewers often synthesize findings from only a fraction of reports meeting initial search criteria. This exclusion bias has led to critiques of systematic review as a technology for finding reasons to exclude studies. However, boundary work is essential for a manageable, comparable dataset. Ultimately, a reviewable body of research results from reviewers “bridging” or “pacifying” topical differences.

Conclusion: The Centrality of Comparability Work in Research Synthesis

We have highlighted and reinterpreted the “judgments, choices, and compromises” inherent in systematic review as comparability work. This concept links the procedural transparency, reproducibility, and objectivity often attributed to systematic reviews with the “hidden judgments” that inevitably shape their results. Even reviews framed as interpretive devices or aimed at unsettling established understandings involve comparability work in deciding study selection and which findings to utilize.

Comparability work reveals that systematic reviews are procedurally transparent and reproducible only in their adherence to defined tasks (problem identification, criteria setting, search strategies, data extraction) and reporting styles. What is readily transparent is task adherence, not the enactment of those tasks. Systematic reviews are inherently “reliably unreliable,” each review a product of the unique interaction between reviewers and their constructed research body.

Systematic review, especially mixed research synthesis, depends on finding solutions to difference acceptable to relevant communities. These communities themselves vary in their views on study diversity, relevant differences, and their management. The objectivity of systematic reviews relies on transparent subjectivity. Ultimately, research synthesis is determined by what is combined, which depends on what is compared, which depends on judgments of comparability, which in turn depends on judgments of similarity and difference. This chain of judgments constitutes the system within systematic review, making comparability work its central and defining feature.

Acknowledgments

The study referenced, “Integrating Qualitative & Quantitative Research Findings,” is supported by the National Institute of Nursing Research, National Institutes of Health, 5R01NR004907. We also acknowledge Career Development Award # MRP 04-216-1 to Dr. Voils from the Health Services Research & Development Service of the Department of Veterans Affairs. The views expressed are those of the authors and not necessarily those of the Department of Veterans Affairs.

References

[R1] Barbour, R. S. (2000). The place of qualitative methodology in evidence based health care. Qualitative Health Research, 10(1), 15–20.

[R2] Barbour, R. S., & Barbour, M. (2003). Evaluating and synthesizing qualitative research: The case of interventional trials. Open University Press.

[R3] Bazerman, C. (1988). Shaping written knowledge. University of Wisconsin Press.

[R4] Becker, H. S. (1996). The epistemology of qualitative research. In R. Jessor, A. Colby, & R. A. Shweder (Eds.), Ethnography and human development: Context and meaning in social inquiry (pp. 53–71). University of Chicago Press.

[R5] Bowker, G. C., & Star, S. L. (2000). Sorting things out: Classification and its consequences. MIT Press.

[R6] Burbules, N. C., & Rice, S. (1991). Dialogue across differences: Continuing the conversation. Harvard Educational Review, 61(4), 393–416.

[R7] Chamberlain, K. (2000). Paradigms, politics and pragmatism in health psychology. Journal of Health Psychology, 5(3), 347–359.

[R8] Charmaz, K. (1990). ‘‘Discovering’’ chronic illness: Using grounded theory. Social Science & Medicine, 30(11), 1161–1172.

[R9] Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage Publications.

[R10] Clarke, A. E. (2005). Situational analysis: Grounded theory after the postmodern turn. Sage Publications.

[R11] Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.

[R12] Cooper, H. M. (1998). Synthesizing research: A guide for literature reviews (3rd ed.). Sage Publications.

[R13] Corbin, J., & Strauss, A. (1985). Managing chronic illness at home: Three lines of work. Qualitative Sociology, 8(3), 224–247.

[R14] Deeks, J. J., Higgins, J. P. T., & Altman, D. G. (2005). Analysing data and undertaking meta-analyses. In J. P. T. Higgins & S. Green (Eds.), Cochrane handbook for systematic reviews of interventions 4.2.2 (Section 8). The Cochrane Library, Issue 3. John Wiley & Sons, Ltd.

[R15] De Souza, D., Gomes, R., & McCarthy, M. (2005). Using multiple methods in health research: Options and problems. Cadernos de Saúde Pública, 21(2), 300–306.

[R16] Dixon-Woods, M., Agarwal, S., Young, B., Jones, D., & Sutton, A. (2004). Integrative approaches to qualitative and quantitative evidence. Health Development Agency.

[R17] Eakin, J. M., & Mykhalovskiy, E. (2003). Reframing the evaluation of qualitative health research: Reflections on a review of appraisal guidelines in the health sciences. Journal of Evaluation in Clinical Practice, 9(2), 187–194.

[R18] Eisenhart, M. (1998). The fox and the rabbit: A cautionary tale about research validity (and practicality). Educational Researcher, 27(7), 16–22.

[R19] Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Evidence-Based Medicine, 6(1), 9–10.

[R20] Forbes, A., & Griffiths, P. (2002). Synthesizing qualitative and quantitative health evidence. Journal of Advanced Nursing, 37(3), 255–270.

[R21] Gieryn, T. F. (1983). Boundary-work and the demarcation of science from non-science: Strains and interests in cultural rhetorics of demarcation. American Sociological Review, 48(6), 781–795.

[R22] Glass, G. V. (2000). Meta-analysis at 25. Evaluation Studies Review Annual, 1, 3–19.

[R23] Glasziou, P. P., & Sanders, S. L. (2002). Further challenges in appraising and using systematic reviews. Evidence-Based Mental Health, 5(1), 4–5.

[R24] Gough, D., & Elbourne, D. (2002). Systematic research synthesis: Part 2: Methodological issues. Research Papers in Education, 17(2), 125–148.

[R25] Greenhalgh, T. (2002). Is my practice evidence-based? BMJ Books.

[R26] Gross, R. E., & Fogg, L. (2001). Expanding the role of randomized controlled trials in the evaluation of complementary and alternative medicine: Pragmatic trials. Alternative Therapies in Health and Medicine, 7(4), 68–72.

[R27] Gubrium, J. F., & Holstein, J. A. (1997). The new language of qualitative method. Oxford University Press.

[R28] Hammersley, M. (2001). Some questions about evidence-based practice in education. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, & L. Lofland (Eds.), Handbook of ethnography (pp. 640–649). Sage Publications.

[R29] Harbers, H. (2005). Bridging and pacifying: STS and the social sciences. Social Studies of Science, 35(4), 575–597.

[R30] Harden, A., Garcia, J., Oliver, S., Brereton, N., Kershaw, P., & Thomas, J. (2004). A systematic review and thematic synthesis of patient-reported barriers and facilitators to the uptake of interventions to prevent unintended teenage pregnancy. BMC Health Services Research, 4(1), 39.

[R31] Harden, A., & Thomas, J. (2005). Methodological issues in combining diverse study types in systematic reviews. International Journal of Social Research Methodology, 8(3), 257–271.

[R32] Hawker, S., Payne, S., Kerr, C., Hardey, M., & Powell, J. (2002). Appraising the evidence: Reviewing disparate data systematically. Qualitative Health Research, 12(9), 1284–1299.

[R33] Hetherington, K., & Munro, R. (1997). Ideas of difference: Social spaces and the labour of division. Blackwell Publishers/Sociological Review.

[R34] Higgins, J. P. T., & Green, S. (Eds.). (2005). Cochrane handbook for systematic reviews of interventions 4.2.2. The Cochrane Library, Issue 3. John Wiley & Sons, Ltd.

[R35] Hunt, M. (1997). How science takes stock: Meta-analysis and the rhetoric of objectivity. Science, Technology, & Human Values, 22(1), 5–36.

[R36] Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). Sage Publications.

[R37] Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26.

[R38] Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. Chandler Publishing Company.

[R39] Kaptchuk, T. J. (1998). Intentional ignorance: A double-blind trial of clinical trials. Theoretical Medicine and Bioethics, 19(1), 1–25.

[R40] Kaptchuk, T. J. (2001). The double-blind, randomized, placebo-controlled trial: Gold standard, or golden calf? Journal of Clinical Epidemiology, 54(6), 541–549.

[R41] Lamont, M., & Molnár, V. (2002). The study of boundaries in the social sciences. Annual Review of Sociology, 28(1), 167–195.

[R42] Law, J. (2004). After method: Mess in social science research. Routledge.

[R43] Law, J., & Singleton, V. (2005). Allegory and its others. Journal of Cultural Economy, 1(1), 7–25.

[R44] Lemmer, C., Grellier, J., & Steven, K. (1999). Integrating qualitative and quantitative research findings: Increasing the usefulness of research synthesis. British Educational Research Journal, 25(2), 143–170.

[R45] Linde, K., & Willich, S. N. (2003). How objective are systematic reviews? Controlled Clinical Trials, 24(6), 670–686.

[R46] Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications.

[R47] Livingston, E. (1999). Rhetorics of rationality. State University of New York Press.

[R48] Lohr, K. N., & Carey, T. S. (1999). Assessing “best evidence”: Best, better, good enough? Joint Commission Journal on Quality Improvement, 25(9), 470–479.

[R49] MacLure, M. (2005). Theorizing the “more-than-discourse” in qualitative research. Qualitative Research, 5(4), 403–429.

[R50] Maxwell, J. A. (2004). Causal generalization in qualitative research. Social Methods & Research, 33(1), 3–30.

[R51] Mays, N., Pope, C., & Popay, J. (2005). Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. Health Matrix, 2(1), 5–25.

[R52] Mol, A. (2002). The body multiple: Ontology in medical practice. Duke University Press.

[R53] Morris, M. (2000). Assemblages of. In E. Grosz (Ed.), Becomings: Explorations in time, memory, and futures (pp. 231–250). Cornell University Press.

[R54] Mykhalovskiy, E., McCoy, L., & Bresalier, M. (2004). “At work” in sociology: Dorothy E. Smith’s sociology for women. Social Science & Medicine, 58(2), 313–325.

[R55] Noblit, G. W., & Hare, R. D. (1988). Meta-ethnography: Synthesizing qualitative studies. Sage Publications.

[R56] Nozick, R. (1981). Philosophical explanations. Harvard University Press.

[R57] Nurius, P. S., & Yeaton, W. H. (1987). Research synthesis reviews: An illustrative case and comments. Social Work Research & Abstracts, 23(1), 12–19.

[R58] Oakley, A. (1989). Who are the subjects of feminist research? In J. S. Reinharz & G. David (Eds.), Feminist methods in social research (pp. 50–67). Oxford University Press.

[R59] Ogilvie, D., Egan, M., Hamilton, V., & Petticrew, M. (2005). Systematic reviews of health effects of social interventions: 2. Best available evidence: How low should you go? Journal of Epidemiology and Community Health, 59(8), 686–692.

[R60] Onwuegbuzie, A. J., & Daniel, L. G. (2003). Typological analysis of research paradigms used in library and information science dissertations. Library & Information Science Research, 25(3), 311–334.

[R61] Onwuegbuzie, A. J., & Teddlie, C. (2003). A framework for analyzing data in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social & behavioral research (pp. 351–383). Sage Publications.

[R62] Paterson, B. L., Thorne, S. E., Canam, C., & Jillings, C. (2001). Meta-study of qualitative health research: A practical guide to meta-synthesis. Sage Publications.

[R63] Petticrew, M., & Roberts, H. (2003). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing.

[R64] Popay, J., & Roen, K. (2003). Integrating qualitative and quantitative health research: A reader. Open University Press.

[R65] Rolfe, G. (2001). Validity, trustworthiness and rigour: Quality and the idea of qualitative research. Journal of Advanced Nursing, 36(3), 522–530.

[R66] Rolfe, G. (2002). Evidence-based practice and the case of qualitative research: A critique. Australian Journal of Advanced Nursing, 19(3), 4–7.

[R67] Rosenblum, N. L., & Travis, L. (2000). Introduction. In N. L. Rosenblum & L. Travis (Eds.), The paradox of group rights (pp. 1–30). Princeton University Press.

[R68] Sale, J. E. M., & Brazil, K. (2004). Developing and using checklists in qualitative research. Evidence-Based Nursing, 7(2), 37–40.

[R69] Sandelowski, M. (2004). Using qualitative research in metasynthesis. Qualitative Health Research, 14(10), 1366–1386.

[R70] Sandelowski, M., & Barroso, J. (2007). Handbook for synthesizing qualitative research. Springer Publishing Company.

[R71] Sandelowski, M., Voils, C. I., & Barroso, J. (2006). Defining and designing mixed research synthesis studies. Research in the Schools, 13(1), 29–40.

[R72] Seale, C. (2002). Quality issues in qualitative inquiry. Qualitative Social Work, 1(1), 97–110.

[R73] Sharpe, D. (1997). Of apples and oranges, file drawers and garbage cans: Why synthesis studies often go awry. Clinical Psychology Review, 17(8), 881–901.

[R74] Skrtic, T. M. (1990). Social realism and special education. Remedial and Special Education, 11(3), 115–130.

[R75] Song, F., Sheldon, T. A., Sutton, A. J., Abrams, K. R., & Jones, D. R. (2001). Methods for exploring heterogeneity in meta-analysis. Evaluation & the Health Professions, 24(2), 126–151.

[R76] Star, S. L. (1991). Invisible work and silenced dialogues in knowledge representation. In J. G. Gornostaev (Ed.), Lecture notes in computer science: Vol. 567. EKAW-91. EKAW’91. Proceedings, European knowledge acquisition workshop (pp. 264–279). Springer.

[R77] Star, S. L. (1995). The sociology of the invisible: The power of standards in sociological theory. Sociological Theory, 13(3), 503–520.

[R78] Steinmetz, G. (2004). Odious comparisons: Incommensurability, the case study, and “small Ns” in sociology. Social Science History, 28(3), 383–403.

[R79] Thorne, S. E., Kirkham, S. R., & MacDonald-Emes, J. (1997). Interpretive description: A noncategorical qualitative alternative for developing nursing knowledge. Research in Nursing & Health, 20(2), 169–177.

[R80] Timmermans, S., & Berg, M. (2003). The gold standard: The challenge of evidence-based medicine in psychiatry. Temple University Press.

[R81] Torrance, H. (2004). Systematic reviewing and educational research: Some dilemmas. Educational Research and Evaluation, 10(2), 171–196.

[R82] Trinder, L. (2000). Evidence-based practice: The marriage of knowledge and values. Journal of Social Work Education, 36(3), 427–438.

[R83] Valsiner, J. (2000). Data as representations: Contextualizing qualitative and quantitative research strategies. Social Science Information, 39(1), 99–115.

[R84] Walker, J. (2003). Evidence-based practice: Radical rhetoric and restrictive reality? A case study of dietetics. Journal of Human Nutrition and Dietetics, 16(3), 163–170.

[R85] Weinstein, J. N. (2004). The fallacy of the randomized controlled trial as the ‘gold standard’. Spine, 29(6), 585–588.

[R86] West, S., King, V., Carey, T. S., Lohr, K. N., McKoy, N., Sutton, S. F., & Lux, L. (2002). Systems to rate the strength of scientific evidence. Agency for Healthcare Research and Quality.

[R87] White, P. (2001). Evidence-based policy and practice: Panacea or passing phase? Social Policy & Administration, 35(3), 235–253.

[R88] Wilson, D. B., & Lipsey, M. W. (2001). Assessing evidence of causality in systematic reviews: Lessons from meta-analysis. In L. Bickman (Ed.), Validity and social experimentation: Donald Campbell’s legacy (pp. 329–366). Sage Publications.

[R89] Wolcott, H. F. (1990). Writing up qualitative research. Sage Publications.
