2. Syllabus
5. Quality Criteria in Qualitative Research:
Reliability, Validity, Objectivity, Alternative
Criteria, Criteria for Evaluating the Building of
Theories, Quality Assessment as a Challenge
for Qualitative Research, Triangulation,
Analytic Induction, Generalization in
Qualitative Research, The Constant
Comparative Method, Process Evaluation and
Quality Management. (5)
4. Evaluating Measurement Tools
We have to test the validity and reliability of our research instrument (e.g., a questionnaire).
Criteria: Validity, Reliability, Practicality
5. Tests of sound measurement
• Sound measurement must meet the tests of validity, reliability, and practicality.
• Validity: the instrument is able to measure what it is designed to measure.
• Reliability: every time the instrument is used, it should give the same result.
• Practicality: the instrument is easy and practical to use.
• These are the three major considerations one should use in evaluating a measurement tool.
6. 1) Test of Validity
• Validity refers to the extent to which a test
measures what we actually wish to measure.
• Validity is the most critical criterion and
indicates the degree to which an instrument
measures what it is supposed to measure.
• Validity can also be thought of as utility.
• In other words, validity is the extent to which differences found with a measuring instrument reflect true differences among those being tested.
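Where scores from the instrument can be set against an external benchmark, this idea is often quantified as criterion-related validity, i.e., the correlation between instrument scores and the criterion. A minimal Python sketch; the function and all scores are hypothetical illustrations, not from the slides:

```python
# Minimal sketch: criterion-related validity estimated as the Pearson
# correlation between instrument scores and an external criterion.
# All scores below are hypothetical illustration data.
from statistics import mean, stdev


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))


instrument = [12, 15, 9, 20, 17, 11, 14, 18]  # questionnaire totals
criterion = [34, 41, 28, 52, 47, 30, 38, 49]  # external benchmark scores

print(f"criterion validity r = {pearson_r(instrument, criterion):.2f}")
```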
7. Reliability
• A measure is reliable to the degree that it supplies
consistent results.
• Reliability is a necessary contributor to validity
but is not a sufficient condition for validity.
• It is concerned with estimates of the degree to
which a measurement is free of random or
unstable error.
• Reliable instruments are robust and work well at
different times under different conditions. This
distinction of time and condition is the basis for
three perspectives on reliability – stability,
equivalence, and internal consistency.
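Of these three perspectives, internal consistency is the one most commonly reduced to a single figure, usually Cronbach's alpha. A minimal sketch with hypothetical Likert responses; the data are illustrative, the formula is the standard alpha:

```python
# Minimal sketch: Cronbach's alpha, a standard estimate of internal
# consistency (one of the three reliability perspectives named above).
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
from statistics import variance


def cronbach_alpha(items):
    """items: one list of responses per questionnaire item."""
    k = len(items)
    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - item_var_sum / variance(totals))


# Hypothetical 5-point Likert responses: 3 items x 6 respondents.
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 3],
    [5, 3, 4, 2, 5, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # ~0.88 for this data
```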
10. Reliability & Validity
• Qualitative vs. Quantitative
“The quantitative study must convince the reader that
procedures have been followed faithfully because very little
concrete description of what anyone does is provided.
The qualitative study provides the reader with a depiction in
enough detail to show that the author’s conclusion ‘makes
sense’. . . The quantitative study portrays a world of variables
and static states. By contrast the qualitative study describes
people acting in events.”
(Firestone, 1987, p. 19)
11. Reliability & Validity : Challenges
• 1. What can you possibly tell from an n (sample) of 1?
• 2. What is it worth to just get the researcher's interpretation of what is taking place?
• 3. How can you generalize from a small, nonrandom sample?
12. Reliability & Validity : Challenges
• 4. If the researcher is the primary
instrument for data collection and analysis,
how can we be sure the researcher is a valid
and reliable instrument?
• 5. How will you know when to stop
collecting data?
• 6. Isn’t the researcher biased and just
finding out what he or she expects to find?
13. Reliability & Validity : Challenges
• 7. Without hypotheses, how will you know what
you’re looking for?
• 8. Don’t people lie to field researchers?
• 9. Doesn’t the researcher’s presence result in a
change in participants’ normal behavior, thus
contaminating the data?
• 10. If somebody else did the study, would they
get the same results?
14. Reliability & Validity
• Criteria
✓Some argue that qualitative researchers
should consider validity and reliability from
the philosophical assumptions underlying the
chosen paradigm.
16. Reliability & Validity : 1) Credibility:
•Credibility: the extent to which interpretations can be
validated as true, correct, and dependable
•Is the study believable from the perspective of those observed? Does it ring true to the people studied?
• Is the data complete?
• Some argue one can never truly discover the reality of a
situation and should look for what’s credible instead.
17. Reliability & Validity : 1) Credibility:
•Credibility:
- Validity is relative
- Humans are closer to the truth than if a data collection instrument had been interjected between us and the participants
18. Reliability & Validity : 1) Credibility: How to achieve it
•1) Use triangulation
•2) Have adequate engagement in data collection, to the point of saturation
•3) Employ reflexivity
19. Reliability & Validity : 1) Credibility: How to achieve it
1) Use triangulation to overcome inherent flaws (a sketch of data triangulation follows this list):
• data
• investigator
• interdisciplinary
• theory
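As a loose illustration, data triangulation can be pictured as checking whether independent sources converge on a finding; the other three types follow the same logic. All sources and findings in this sketch are hypothetical:

```python
# Minimal sketch of data triangulation: check whether independent data
# sources converge on the same finding. Sources and findings below are
# hypothetical illustration data, not from the source slides.
sources = {
    "interviews": {"staff shortages", "long waits"},
    "observations": {"long waits", "crowded lobby"},
    "documents": {"staff shortages", "long waits"},
}

for finding in sorted(set().union(*sources.values())):
    support = [name for name, findings in sources.items() if finding in findings]
    status = "corroborated" if len(support) >= 2 else "single-source"
    print(f"{finding}: {status} ({', '.join(support)})")
```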
20. Reliability & Validity : 1) Credibility: How to achieve it
2) Have adequate engagement in data collection, to the point of saturation (a sketch of the stopping logic follows):
- Be holistic
- Look for data that support alternate explanations
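Saturation remains a matter of researcher judgment, but its stopping logic can be sketched: continue collecting until successive interviews stop yielding new codes. The codes per interview and the patience threshold below are hypothetical:

```python
# Minimal sketch of the stopping logic behind saturation: keep collecting
# until some number of consecutive interviews contribute no new codes.
# The codes and the `patience` threshold are hypothetical; real saturation
# remains a researcher's judgment, not a formula.
def reached_saturation(coded_interviews, patience=2):
    """coded_interviews: sets of codes per interview, in collection order."""
    seen, quiet = set(), 0
    for i, codes in enumerate(coded_interviews, start=1):
        new = codes - seen
        seen |= codes
        quiet = 0 if new else quiet + 1
        print(f"interview {i}: {len(new)} new code(s)")
        if quiet >= patience:
            return True  # no new codes for `patience` interviews in a row
    return False


interviews = [{"trust", "access"}, {"trust", "cost"}, {"cost"}, {"access"}]
print("saturation reached:", reached_saturation(interviews))
```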
21. Reliability & Validity : 1) Credibility: How to achieve it
• 3) Employ reflexivity
Reflexivity: the process of reflecting critically upon the self as the researcher, the "human instrument"
22. Reliability & Validity: 2) Transferability:
•Transferability: degree to which the results can be applied to other settings/situations
- Researcher supplies thick (detailed) descriptions
- Pays careful attention to the sample
23. Reliability & Validity: 2) Transferability:
•Transferability:
• "In qualitative research, a single case or small, nonrandom, purposeful sample is selected precisely because the researcher wishes to understand the particular in depth, not to find out what is generally true of the many." (Merriam, 2009, p. 224)
24. Reliability & Validity: 3) Dependability:
•Dependability: concerned with whether the findings can be duplicated/repeated
•The researcher describes changes in the setting and how those changes affected the research
25. Reliability & Validity: 3) Dependability:
•Dependability:
✓ Difficult to establish because human behavior is constantly changing:
- many interpretations
- no benchmarks or static means of measurement
- similarity of answers does not ensure accuracy
26. Reliability & Validity: 3) Dependability:
•Dependability:
✓ It is more important to ask whether the results are consistent with the data collected.
27. Reliability & Validity: 3) Dependability: How to achieve it
✓Strategies to help with dependability:
- triangulation (discussed in Unit 1)
- peer examination
- investigator's position
- audit trail (a minimal logging sketch follows)
Audit trail: independent readers can authenticate the findings of a study by following the trail of the researcher
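An audit trail can take many forms; one lightweight possibility is an append-only, timestamped log of analytic decisions. The file name and record fields in this sketch are assumptions for illustration, not a prescribed format:

```python
# Minimal sketch of an audit trail: an append-only, timestamped log of
# analytic decisions that an independent reader could retrace. The file
# name and record fields are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone


def log_decision(step, rationale, path="audit_trail.jsonl"):
    """Append one analytic decision to the trail."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_decision("merged codes 'cost' and 'price'",
             "respondents used the two terms interchangeably")
```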
28. Reliability & Validity: 4) Confirmability
•Confirmability: the degree to which the results can be corroborated (validated) by others
- Results should be well reasoned
- Do the results reflect the study, or the researcher's bias?
29. Reliability & Validity: 4) Confirmability
•Confirmability:
✓One concern is reactivity:
- how the act of observation changes a situation
30. Reliability & Validity
✓Different criteria apply to
different methods
e.g., in narrative analysis look for what "tells" a persuasive story in a narrative way, vs. the thick description needed in an ethnography of a cultural group
31. Objectivity
•Objectivity is considered an ideal for scientific inquiry, a good reason for valuing scientific knowledge, and the foundation of the authority of science in society.
•It expresses the thought that the claims, methods, and results of science are not, or should not be, influenced by particular perspectives, value commitments, community bias, or personal interests, to name a few significant factors.
•Scientific objectivity is a feature of scientific claims,
methods and results.
32. Objectivity in Qualitative Research
• The obstacles specific to the social sciences are caused by the investigator's special involvement with the topic of study, which relates to both his interests and his emotional make-up.
• Even though achieving complete objectivity in science is an
impossibility, aiming at it, or attaining as much of it as
reasonably possible, is a necessary condition for the conduct
of all scientific inquiry.
• The only way in which we can strive for ‘objectivity’ in
theoretical analysis is to expose the valuations to full light,
make them conscious, specific, and explicit, and permit them
to determine the theoretical research.
• A more balanced view of objectivity, both as a method and as an ideal, must be considered.
33. Evaluating Research
1. Are the methods of research appropriate to the nature of the question being asked?
2. Is the connection to an existing body of knowledge or theory clear?
3. Are there clear accounts of the criteria used for the selection of cases for study, and of the data collection and analysis?
34. Evaluating Research
• 4. Does the sensitivity of the methods match the needs of the research question?
• 5. Were the data collection and record keeping systematic?
• 6. Is reference made to accepted procedures for analysis?
35. Evaluating Research
• 7. How systematic is the analysis?
• 8. Is there adequate discussion of how themes, concepts, and categories were derived from the data?
• 9. Is there adequate discussion of the evidence for and against the researcher's arguments?
• 10. Is a clear distinction made between the data and their interpretation?
36. Criteria for Evaluating the Building of Theories
• Corbin and Strauss (1990) mention four points of
departure for judging empirically grounded theories and
the procedures that led to them. According to their
suggestion, you should critically assess
• 1) the validity, reliability, and credibility of the data,
• 2) the plausibility and the value of the theory itself,
• 3) the adequacy of the research process which has generated, elaborated, or tested the theory, and
• 4) the empirical grounding of the research findings.
37. For evaluating the research process
itself, they suggest seven criteria:
• Criterion 1 How was the original sampling
selected? On what grounds (selective sampling)?
• Criterion 2 What major categories emerged?
• Criterion 3 What were some of the events,
incidents, actions, and so on that indicated some
of these major categories?
38. For evaluating the research process itself, they
suggest seven criteria
• Criterion 4 On the basis of what categories did
theoretical sampling proceed? That is, how did
theoretical formulations guide some of the data
collection? After the theoretical sampling was
carried out, how representative did these
categories prove to be?
• Criterion 5 What were some of the hypotheses
pertaining to relations among categories? On
what grounds were they formulated and tested?
39. For evaluating the research process itself, they suggest seven criteria
• Criterion 6 Were there instances when hypotheses did not hold up against what was actually seen? How were the discrepancies accounted for? How did they affect the hypotheses?
• Criterion 7 How and why was the core category selected? Was the selection sudden or gradual, difficult or easy? On what grounds were the final analytic decisions made?
40. Criteria for Evaluating the Building of
Theories
• A central role is given to the question of whether the findings and the theory are grounded in the empirical relations and data, that is, whether it is a grounded theory (building) or not.
• For an evaluation of the realization of this aim,
Corbin and Strauss suggest seven criteria for
answering the question of the empirical grounding
of findings and theories:
41. Criteria for Evaluating the Building of
Theories
• Criterion 1 Are concepts generated?
• Criterion 2 Are the concepts systematically related?
• Criterion 3 Are there many conceptual linkages and
are the categories well developed? Do the categories
have conceptual density?
• Criterion 4 Is there much variation built into the
theory?
42. Criteria for Evaluating the Building of Theories
• Criterion 5 Are broader conditions that affect the phenomenon under study built into its explanation?
• Criterion 6 Has "process" been taken into account?
• Criterion 7 Do the theoretical findings seem significant, and to what extent? (1990, pp. 17-18)
43. Criteria for Theory Development
in Qualitative Research
• 1 The degree to which generic/formal theory is
produced.
• 2 The degree of development of the theory.
• 3 The novelty of the claims made.
• 4 The consistency of the claims with empirical
observations and the inclusion of representative
examples of the latter in the report.
44. Criteria for Theory Development in Qualitative Research
• 5 The credibility of the account to readers and/or those studied.
• 6 The extent to which findings are transferable to other settings.
• 7 The reflexivity of the account: the degree to which the effects on the findings of the researcher and of the research settings employed are assessed, and/or the amount of information about the research process that is provided to readers.
45. Quality Assessment as a Challenge for
Qualitative Research
• The question of how to assess the quality of qualitative research is currently raised in three respects.
• First, by the researchers themselves, who want to check and secure their procedures and their results.
• Second, by the consumers of qualitative research: the readers of publications or the funding agencies, who want to assess what has been presented to them.
• Finally, in the evaluation of research, when reviewing research proposals and in peer reviews of manuscripts submitted to journals.
46. • ("Are the results credible and
appropriate?"),
• "Do they address the research question(s)?"
• "Data collection procedures are fully
explained
48. Analytic Induction
• Znaniecki (1934) introduced analytic induction.
This strategy explicitly starts from a specific case.
According to Bühler-Niederberger it can be characterized as follows:
• Analytic induction is a method of systematic
interpretation of events, which includes the
process of generating hypotheses as well as
testing them. Its decisive instrument is to analyze
the exception, the case, which is deviant to the
hypothesis. (1985, p. 476)
49. Steps of Analytic Induction
• 1 A rough definition of the phenomenon to be
explained is formulated.
• 2 A hypothetical explanation of the phenomenon
is formulated.
• 3 A case is studied in the light of this hypothesis
to find out whether the hypothesis corresponds to
the facts in this case.
• 4 If the hypothesis is not correct, either the
hypothesis is reformulated or the phenomenon to
be explained is redefined in a way that excludes
this case.
50. Steps of Analytic Induction
• 5 Practical certainty can be obtained after a small
number of cases have been studied, but the
discovery of each individual negative case by the
researcher or another researcher refutes the
explanation and calls for its reformulation.
• 6 Further cases are studied, the phenomenon is
redefined, and the hypotheses are reformulated
until a universal relation is established; each
negative case calls for redefinition or
reformulation.
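Read as a procedure, steps 1-6 amount to a loop: confront the working hypothesis with each case and reformulate whenever a deviant case appears. The toy sketch below makes that loop explicit; the `fits` and `reformulate` functions stand in for the researcher's substantive judgment and are purely illustrative:

```python
# Schematic sketch of the analytic-induction loop above: test the working
# hypothesis against each case; a negative (deviant) case forces a
# reformulation of the hypothesis. `fits` and `reformulate` stand in for
# the researcher's substantive judgment and are purely illustrative.
def analytic_induction(cases, hypothesis, fits, reformulate):
    for case in cases:
        while not fits(hypothesis, case):               # step 3: confront the case
            hypothesis = reformulate(hypothesis, case)  # step 4: revise
    return hypothesis  # steps 5-6: holds for every case studied so far


# Toy hypothesis: "the phenomenon occurs at or above some threshold".
final = analytic_induction(
    cases=[3, 7, 5, 9],
    hypothesis={"threshold": 6},
    fits=lambda h, c: c >= h["threshold"],
    reformulate=lambda h, c: {"threshold": c},  # lower threshold to cover the case
)
print(final)  # {'threshold': 3}
```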
51. Generalization in Qualitative
Research
• The generalization of concepts and relations found
from analysis is another strategy of grounding
qualitative research.
• The central points to consider in such an evaluation
are first the analyses and, second, the steps taken to
arrive at more or less general statements.
• The problem of generalization in qualitative
research is that its statements are often made for a
certain context or specific cases and based on
analyses of relations, conditions, processes, etc., in
them.
52. Generalization in Qualitative
Research
• However, when attempts are made at
generalizing the findings, this context
link has to be given up in order to find
out whether the findings are valid
independently of and outside specific
contexts.
53. Generalization in Qualitative Research
• Correspondingly, various possibilities are discussed for mapping
out the path from the case to the theory in a way that will allow you
to reach at least a certain generalization.
• A first step is to clarify what degree of generalization you are aiming at, and what is possible to attain with the concrete study, in order to derive appropriate claims for generalization.
• A second step is the cautious integration of different cases and
contexts in which the relations under study are empirically
analyzed.
• The generalizability of the results is often closely linked to the way
the sampling is done.
• The third step is the systematic comparison of the collected
material. Here again, the procedures for developing grounded
theories can be drawn on.
54. The Constant Comparative Method
• In the process of developing theories, and in addition to the method of "theoretical sampling", Glaser (1969) suggests the constant comparative method as a procedure for interpreting texts.
55. The Constant Comparative
Method
• "It basically consists of four stages:
• (1) comparing incidents applicable to each
category,
• (2) integrating categories and their properties,
• (3) delimiting the theory, and
• (4) writing the theory" (1969, p. 220).
56. The Constant Comparative
Method
• Although this method is a continuous
growth process—each stage after a time
transforms itself into the next—previous
stages remain in operation throughout the
analysis and provide continuous
development to the following stage until the
analysis is terminated. (1969, p. 220)
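Stage (1), comparing incidents applicable to each category, can be caricatured in code: each new incident is compared with already-grouped incidents and either joins an existing category or founds a new one. The keyword-overlap test below is a crude stand-in for analytic judgment, purely for illustration:

```python
# Schematic sketch of stage (1) of the constant comparative method: each new
# incident is compared with incidents already assigned to each category and
# either joins an existing category or founds a new one. The keyword-overlap
# test stands in for the analyst's judgment and is purely illustrative.
def similar(a, b, min_shared=2):
    """Crude stand-in for analytic comparison: count shared words."""
    return len(set(a.lower().split()) & set(b.lower().split())) >= min_shared


def assign(incident, categories):
    for name, incidents in categories.items():
        if any(similar(incident, prior) for prior in incidents):
            incidents.append(incident)  # grouped with like incidents
            return name
    new_name = f"category_{len(categories) + 1}"
    categories[new_name] = [incident]
    return new_name


categories = {}
for incident in ["nurse explains the procedure",
                 "doctor explains the procedure",
                 "patient waits alone"]:
    print(incident, "->", assign(incident, categories))
```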
57. Process Evaluation and Quality
Management
• A) Process Evaluation :
• Qualitative research is embedded in a process in
a special way. It does not make sense to ask and
answer questions of sampling or concerning
special methods in an isolated way. Whether a
sampling is appropriate can only be answered
with regard to the research question, to the
results, and to the generalizations that are aimed
at and the methods used.
58. Process Evaluation :
• Abstract measures like the representativeness of a sample, which
can be judged generally, do not have any benefit here.
• A central starting point for answering such questions is the sounding of the research process: that is, asking whether the sampling that was applied harmonizes with the concrete research question and with the concrete process.
• Activities for optimizing qualitative research in the concrete case
have to start from the stages of the qualitative research process.
Correspondingly, note a shift in the accent of evaluating
qualitative methods and their use from mere evaluation of the
application to process evaluation.
• Thus, the aspect of grounding is shifted to the level of the research
process. The aim of this shift is also to underscore a different
understanding of quality in qualitative research and to relate it to a
concrete project.
59. Quality Management
• Impulses for further developments can be provided by the
general discussion about quality management (Kamiske and
Brauer 1995), which lies mainly in the areas of industrial
production but also of public services (Murphy 1994).
• But some of the concepts and strategies used in this discussion
may be adopted to promote a discussion about quality in
research, which is appropriate to the issues and research
concepts.
• 1) The concept of auditing provides a first point of contact: "An audit is understood as a systematic, independent examination of an activity and its results, by which the existence and appropriate application of specified demands are evaluated and documented" (Kamiske and Brauer 1995, p. 5).
60. Quality Management
• In particular, the "procedural audit" is
interesting for qualitative research.
• It should guarantee that "the pre-defined demands are fulfilled and are useful for the respective application .... Priority is always given to an enduring remedy of causes of mistakes, not only a simple detection of mistakes".
61. Quality Management
• 2) Concepts like "member checks" or communicative validation explicitly take this orientation toward the people studied into account.
62. Quality Management
• Designing the research process and
proceeding in a way which gives enough
room to those who are studied realizes this
orientation implicitly.
• For an evaluation, both aspects may be analyzed explicitly: how far did the study proceed in a way that answered its research question, and how far did it give room to those who were studied?
63. Quality Management
• Make sure that the definition of the goals and standards of the project is as clear as possible, and that all researchers and co-workers commit themselves to this definition.
• Define how these goals and standards, and more generally the quality, are to be obtained; a consensus about the way to apply certain methods (perhaps through joint interview training) and to analyze the data is a precondition for quality in the research process.
• Provide a clear definition of the responsibilities for obtaining quality in the research process.
• Ensure transparency of the judgment and assessment of quality in the process.
64. References
• 1. An Introduction to Qualitative Research, Uwe Flick, 4th Edition, SAGE.
• 2. Research Methods in the Social Sciences, Bridget Somekh & Cathy Lewin, 5th Edition, SAGE India.
• 3. https://www.slideshare.net/shoeb786/objectivity-in-social-science-research