The Joanna Briggs Institute
Conversely, studies that present quantitative data provide statistical and theoretically reproducible
evidence regarding the effectiveness of a particular intervention or treatment; however, such evidence
often ignores the patient’s opinion, clinical wisdom and the diverse contexts in which such statistical
evidence may be of less utility.
Although both qualitative and quantitative evidence syntheses have their strengths, any review which
focuses exclusively on one form of evidence presents only half the picture and will thus have limited
applicability in many contexts. For example, although quantitative evidence suggests that the use of
maggots is clinically and financially effective in the debridement of wounds, one study demonstrates
that 23% of leg ulcer patients would not consider larval therapy, irrespective of whether it was
recommended (Spilsbury et al. 2007), and another study identifies one patient who “lost her appetite
for three days during the treatment owing to the image of maggots on her wound” (Menon 2012).
Were the authors of a quantitative review to suggest that every clinician adopt maggot therapy for
wound debridement based on the evidence of effectiveness alone (in those cases with no medical
contraindications), this would be a short-sighted conclusion, out of tune with clinical reality.
Two or more syntheses are better than one: mixed methods analyses
Within primary research, qualitative and quantitative methodologies are not always separated. Primary
mixed methods research has the capacity to overcome problems inherent in the independent
generation of quantitative or qualitative evidence alone; however, mixed methods studies are usually
less likely to describe how the study was conducted, to describe procedures of qualitative data
analysis, and to be judged credible (Atkins et al. 2012). Furthermore, as with other individual studies,
the strength of evidence rests on the design and context of a particular study. That being said,
careful inclusion of such studies into systematic reviews can prove beneficial and strengthen the
conclusions.
The mixed methods approach to conducting systematic reviews is a process whereby (1)
comprehensive syntheses of two or more types of data (e.g. quantitative and qualitative) are
conducted and then aggregated into a final, combined synthesis, or (2) qualitative and quantitative
data are combined and synthesized in a single primary synthesis. Mixed methods reviews represent
an important development for all individuals involved in evidence-based health care. Indeed,
Sandelowski et al. (2012) suggest that such mixed methods reviews are particularly relevant to
international organizations (such as the JBI) because they:
“...broaden the conceptualization of evidence, [are] more methodologically inclusive and produce
syntheses of evidence that will be accessible to and usable by a wider range of consumers.”
Through the development of well-structured mixed methods systematic reviews, the numerical strength
inherent in the positivist paradigm can fuse with the less tangible yet equally important opinions
and perspectives presented in interpretive and critical paradigms, producing far more informative
conclusions than those derived from evidence presented in autonomous modes of synthesis. By
following a systematic methodology for combining quantitative and qualitative data, the requirement
for interpretation is reduced, thereby increasing the objectivity of subsequent conclusions.
A multiplicity of methods
As the field of mixed methods systematic reviews is still in its infancy, there is at present no
consensus with regard to how such reviews should be conducted. A search of the literature reveals
numerous articles claiming to encompass both quantitative and qualitative data analyses; however,
few of these can be considered truly mixed methods, in that the included data are rarely combined
in a single synthesis or united in a secondary “final” synthesis.
Reviewers’ Manual
Methodology for JBI Mixed Methods Systematic Reviews
Rather, the majority either develop a framework based on themes derived from qualitative studies
and incorporate quantitative data within the framework (e.g. Thomas et al. 2004), or analyze
qualitative and quantitative data separately and then provide a brief narrative discussion of the
“total” results (e.g. Bruinsma et al. 2012).
To date, several books and papers have been published presenting methodological approaches
for combining diverse forms of data; however rather than providing a foundation through which
to consolidate ideas in an effort to derive a consensus:
“…the current impetus in the literatures of mixed methods research and mixed research
synthesis is a multiplicity rather than parsimony.” (Sandelowski et al. 2012)
The continual development of new methodological approaches to mixed methods systematic
reviews diminishes the usability and utility of such reviews, as, instead of focusing on the
conclusions generated for the topic of interest, much energy is spent on critiquing the method
employed to derive these conclusions.
In a recent technical brief, Harden identifies three ways in which mixed methods systematic
reviews are conducted at the Evidence for Policy and Practice Information and Co-ordinating
Centre (EPPI-Centre) in the United Kingdom (Harden 2010):
1. The systematic review of mixed research studies is by default a mixed methods systematic
review: as the original studies are of mixed methods, the resulting synthesis will be mixed.
2. The synthesis methods used in the review are mixed (e.g. two or more syntheses are performed
involving, for instance, quantitative and qualitative data).
3. A model which involves both the building and testing of theories based on the results of original
syntheses. This involves the same process as method 2 (separate syntheses of qualitative and
quantitative data; building); however it also incorporates a third synthesis (testing), whereby
the thematic synthesis of qualitative data is used to “interrogate” the meta-analytical results of
quantitative data.
The first two of these methods do not present viable models through which to conduct mixed
methods systematic reviews, as although they include both quantitative and qualitative data,
the inability of authors to clearly delineate evidence types in a single synthesis (1), or failure to
combine evidence in a secondary synthesis (2), may significantly limit their utility. The third model
is akin to the segregated methodology described in Sandelowski et al. (2006 [see below]), in that
syntheses are conducted separately and then recommendations from the qualitative synthesis
are used to contextualize quantitative data and generate reasons behind the success and/or
failure of a program.
In the third method, two or more syntheses are conducted and then combined in a secondary
synthesis. In this example, the authors conduct both a qualitative synthesis (synthesis 1) and a
quantitative synthesis (synthesis 2) regarding the barriers to healthy eating in adolescents in the
UK (Thomas et al. 2004). By applying specific recommendations derived from qualitative-based
themes (synthesis 1) to numerical data (synthesis 2), the authors can more accurately predict the
cause behind an observed effect. For instance, if synthesis 1 demonstrates that children are not
interested in “health” per se and do not consider future health consequences to be relevant, then,
by applying this finding to the intervention data in synthesis 2, the authors can recommend
rebranding fruit and vegetables as “tasty” rather than “healthy” in an attempt to convince children
to eat more of these foods (Harden 2010).
2. Methodological approaches to mixed-method syntheses
New approaches to mixed methods synthesis are continually being reported in the literature. The two
dominant approaches are Realist Synthesis (Pawson 2002) and a set of alternative frameworks
posited by Sandelowski et al. (2006).
Realist synthesis
Realist synthesis (Pawson 2002) is not an evaluation technique in itself; rather, it presents a
framework which encompasses the “whole enterprise”. Realist syntheses follow an unstructured,
contingent design (see the contingent methodologies described below), presenting a theory-driven
approach built on a context-mechanism-outcome (CMO) paradigm in which:
“...interventions offer resources which trigger choice mechanisms (M) which are taken up
selectively according to the characteristics and circumstances of subjects (C), resulting in a
varied pattern of impact (O).” (Pawson 2005)
The process attempts to “differentiate and accumulate evidence on positive and negative CMO
configurations” (Pawson 2002) and assumes that all program mechanisms are met with both
success and failure, depending on the context. A realist synthesis thereby explains not only for
whom the mechanism generates success or failure but also encompasses differing degrees of
success and the associated reasons for this variability. The focus is not on whether a particular
program works, but on the resources it makes available to facilitate its success: the subject’s
interpretation of, and response to, those resources generates a “program mechanism” and forms
the foundation of realist syntheses. Pawson (2002) summarizes the realist approach as presenting evidence for
“what works for whom in what circumstances”. The process begins with the development of an
initial “theory” which, although abstract in nature, is presumed to underlie a particular program or
intervention. This theory is then utilized as a basis for identifying and extracting applicable data
from diverse sources of information, including primary studies, media reports and abstracts.
Once extracted, these data are integrated into the framework, presenting evidence either for
or against the theory based on the particular context from which the data were sourced. These
results attempt to explain how the intervention works in a way that facilitates decision-making
within a range of contexts (Pawson 2004). For example, Greenhalgh et al. (2007) conducted
a Cochrane review on the efficacy of school feeding programs in disadvantaged children.
Although the results demonstrated that such programs had significant positive effects on the growth
and cognitive performance of disadvantaged children, trial design and social context varied
considerably between studies. The authors then used the Cochrane review as the basis for a
realist review to ascertain which aspects determined success and failure within these varying
contexts (Greenhalgh et al. 2007). The results of this review demonstrate that programs should be
aimed at children with known nutritional deficiencies, and that partnering with the local community
and piloting of programs are more likely to produce beneficial results. Such results show that
not all school feeding programs are created equal, and help to facilitate the implementation of
effective programs in the future.
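The accumulation of evidence on CMO configurations can be pictured as a simple grouping operation. The sketch below is purely illustrative and not part of Pawson’s published method; every name and record in it is invented, loosely echoing the school feeding example.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class CMOEvidence:
    """One piece of evidence about a context-mechanism-outcome configuration."""
    context: str      # C: characteristics and circumstances of subjects
    mechanism: str    # M: choice mechanism triggered by the intervention's resources
    outcome: str      # O: observed pattern of impact
    supports: bool    # does this evidence support the candidate theory?

def accumulate(evidence: list[CMOEvidence]) -> dict:
    """Group evidence by (context, mechanism) to show where an
    intervention meets success or failure, and with what outcomes."""
    table = defaultdict(lambda: {"for": [], "against": []})
    for e in evidence:
        key = (e.context, e.mechanism)
        table[key]["for" if e.supports else "against"].append(e.outcome)
    return dict(table)

# Invented records for illustration only
records = [
    CMOEvidence("nutritional deficiency", "community partnership",
                "improved growth", True),
    CMOEvidence("adequate baseline diet", "community partnership",
                "no measurable effect", False),
]
summary = accumulate(records)
```

Grouping by configuration, rather than pooling across all studies, is what lets the reviewer report “what works for whom in what circumstances” instead of a single average effect.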
Although realist syntheses have been utilized effectively in many circumstances, several key
problems have been identified with the realist approach: for example, the process can lack
transparency regarding the choice of evidence selected and there is a lack of explicit guidance
regarding how to process contradictory evidence (Dixon-Woods et al. 2005; Curnock et al.
2012). In addition, as the process is highly iterative and can constantly change the direction and
focus of the review, this may result in significant bias (Pawson 2004).
General frameworks for mixed methods reviews
Sandelowski et al. (2006) identify three general frameworks through which to conduct mixed
methods systematic reviews: segregated, integrated and contingent (Figure 1).
Segregated methodologies maintain a clear distinction between quantitative and qualitative
evidence and require individual syntheses to be conducted prior to the final “mixed method”
synthesis. The findings or evidence can fall into two broad categories: the quantitative and qualitative
findings may either support each other (confirmation) or contradict each other (refutation); or they
may simply add to each other (complementarity).
The category is not chosen by the reviewer; rather, it depends on the data
being analyzed. For example, a qualitative study which looks at a patient’s experience following
a specific treatment could either confirm or refute quantitative findings based on lifestyle
surveys/questionnaires of the same treatment. Conversely, the same qualitative study could not
be used to confirm or refute the findings of a quantitative study of the clinical effectiveness of the
same treatment, and would instead present complementary evidence. If the quantitative and
qualitative syntheses focus on the same general phenomenon, both confirmation/refutation and
complementarity can inform the topic. The resulting synthesis is often presented in the form of a
theoretical framework, a set of recommendations or conclusions, or a path analysis (Figure 1a;
Sandelowski et al. 2006).
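Where two findings address a stated phenomenon and report a direction of effect, the category assignment can be stated mechanically. A minimal sketch, assuming invented field names (“phenomenon”, “direction”) rather than any published coding scheme:

```python
def classify(qual_finding: dict, quant_finding: dict) -> str:
    """Classify a pair of findings using the Sandelowski et al. (2006)
    categories: confirmation, refutation or complementarity."""
    if qual_finding["phenomenon"] != quant_finding["phenomenon"]:
        # Findings about different aspects add to, rather than test,
        # each other.
        return "complementarity"
    same_direction = qual_finding["direction"] == quant_finding["direction"]
    return "confirmation" if same_direction else "refutation"

# A patient-experience finding versus a clinical-effectiveness finding:
# different phenomena, so the evidence is complementary.
verdict = classify(
    {"phenomenon": "patient experience", "direction": "negative"},
    {"phenomenon": "clinical effectiveness", "direction": "positive"},
)
```

The same function returns “confirmation” or “refutation” only when both findings concern the same phenomenon, mirroring the larval therapy example above.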
Integrated methodologies directly bypass separate quantitative and qualitative syntheses and
instead combine both forms of data into a single mixed method synthesis. A primary condition
for the development of an integrated mixed method systematic review is that both quantitative
and qualitative data are similar enough to be combined into a single synthesis. As opposed to
segregated methodologies, where the final synthesis involves a configuration of data, integrated
methodologies are almost always confirmatory or refuting in nature and involve an assimilation
of data. This presents the only method whereby both forms of data can be assimilated into a
single synthesis, and requires that either (a) quantitative data are converted into themes, codified
and then presented along with qualitative data in a meta-aggregation, or (b) qualitative data
are converted into a numerical format and included with quantitative data in a statistical analysis
(Figure 1b).
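Conversion (b), sometimes called “quantitizing”, can be as simple as coding each qualitative study for the presence or absence of a finding so that a proportion can enter a statistical analysis. A minimal sketch with entirely hypothetical studies and themes:

```python
def quantitize(studies: dict[str, set[str]], theme: str) -> float:
    """Convert qualitative findings to numerical form: the proportion
    of studies in which a given theme was reported."""
    reporting = sum(1 for themes in studies.values() if theme in themes)
    return reporting / len(studies)

# Hypothetical qualitative studies and the themes each reported
studies = {
    "Study A": {"fear of treatment", "loss of appetite"},
    "Study B": {"fear of treatment"},
    "Study C": {"positive experience"},
}
prevalence = quantitize(studies, "fear of treatment")  # reported in 2 of 3 studies
```

Real quantitizing schemes weight and appraise studies rather than counting them equally; this sketch shows only the basic shape of the conversion.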
Contingent methodologies involve two or more syntheses conducted sequentially based on
results from the previous synthesis. The process begins by asking a question and conducting a
qualitative, quantitative or mixed method synthesis. The results of this primary synthesis generate
a second question, which is the target of a second synthesis, the results of which generate a
third question and so on. Contingent designs can include integrated and/or segregated
syntheses, and multiple syntheses can be conducted until the final result addresses the reviewer’s
objective (Figure 1c; Sandelowski et al. 2006).
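The contingent sequence can be pictured as a loop in which each synthesis produces the question for the next, stopping once the objective is addressed. A schematic sketch only, with placeholder synthesis functions standing in for real review work:

```python
def run_contingent_review(initial_question, synthesize, next_question,
                          max_rounds=5):
    """Run syntheses sequentially: each result either addresses the
    reviewer's objective or generates the next question.

    synthesize(question) -> result
    next_question(result) -> the next question, or None when done
    """
    question, trail = initial_question, []
    for _ in range(max_rounds):
        result = synthesize(question)
        trail.append((question, result))
        question = next_question(result)
        if question is None:  # objective addressed
            break
    return trail

# Toy two-step contingent design with canned "syntheses"
answers = {"Does it work?": "yes, but unevenly",
           "Why does uptake vary?": "context-dependent barriers"}
followups = {"yes, but unevenly": "Why does uptake vary?",
             "context-dependent barriers": None}
trail = run_contingent_review("Does it work?", answers.get, followups.get)
```

The `max_rounds` guard reflects the practical point that a contingent review must terminate even if each synthesis keeps raising new questions.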
[Figure 1. General frameworks for mixed methods reviews (Sandelowski et al. 2006). (a) Segregated: the research question leads to separate study selection and data analysis for each evidence type, independent quantitative and qualitative syntheses, and a final mixed-method synthesis (configuration). (b) Integrated: the research question leads to study selection and data analysis, conversion of data to a compatible format, and a single mixed-method synthesis (assimilation).]