1. Case Study Research Inter-University Research Workshop University of Canberra, 3 Feb 2011 Dr Raymond Young (MBA, GAICD) Raymond.young@canberra.edu.au
2. An opening word Yin (2003, pp. 11, 17) “... Most people feel that they can prepare a case study, and nearly all of us believe we can understand one... Neither view is well founded...” “Case study research is remarkably hard, even though case studies have been traditionally considered ‘soft’ research, possibly because investigators have not followed systematic procedures”
3. Agenda Introduction to case study research Exercise 1 Designing case studies Exercise 2 Deeper considerations Conducting case studies: Preparing for data collection Analysing case study evidence Reporting case studies
4. Traditional prejudices against case study research (Yin 2003, 10-12) Lack of rigour NB. Case study research <> case studies for teaching It is true they are hard to do well “we have little way to screen or test an investigator’s ability to do good case studies” Little basis for generalisation Like experiments, generalise to theoretical propositions [analytical generalisation], not to populations or universes [statistical generalisation] Take too long and result in massive unreadable documents NB. not ethnography nor participant observation Different types: explanatory, descriptive, exploratory
5. What is case study research? Yin (2003, 13-14) A case study is an empirical inquiry that Investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident The case study inquiry Copes with the technically distinctive situation in which there will be many more variables of interest than data points, and as one result Relies on multiple sources of evidence, with data needing to converge in a triangulating fashion, and as another result Benefits from the prior development of theoretical propositions to guide data collection and analysis
7. Exercise 1 Yin (2003, 17) Defining a case study question Defining “significant” case study questions Examining case studies used for teaching purposes Defining different types of case studies used for research purposes Explanatory/causal Descriptive Exploratory
8. Designing case studies Yin (2003, chapter 2) Research questions Propositions if any Unit(s) of analysis Logic linking data to propositions Criteria for quality
9. Designing case studies – the role of theory Yin (2003, 28-33) How and why questions capture what you are interested in answering However they do not point to what you should study Theory development is essential vs ethnography & grounded theory A hypothetical story about why acts, events, structure and thoughts occur Propositions: help identify relevant information vs study everything Needed in order to generalise from case study to theory Exploratory (no propositions): should still have some purpose ... [3] ships to [go west] to explore the new world. [Criteria for success]
10. An example of theory development Young and Jordan (2008) Standish (1996) User involvement (19) TMS (16) Clear statement of requirements (15) Proper planning (11) realistic expectations (10) smaller project milestones (9) Competent staff (8) ownership (6) clear vision & objectives (3) hard working, focussed staff (3) Project methodologies (35): Clear statement of requirements (15), Proper planning (11), smaller project milestones (9) User (25): User involvement (19), ownership (6) TMS (16) High level planning (13): realistic expectations (10), clear vision & objectives (3) Project staff (11): Competent (8), hard working and focussed (3) Reference: Young and Jordan (2008) “Top Management Support: mantra or necessity?” International Journal of Project Management, Vol. 26, pp. 713-725
18. Interpretivist criteria for judging quality / credibility Klein and Myers (1999). Pragmatic research synthesises and transcends quantitative concepts (internal validity and external validity) and qualitative concepts (credibility and transferability). Design quality should meet both qualitative and quantitative criteria, e.g. sampling criteria and length of engagement. Tashakkori and Teddlie (2003)
21. Exercise 2 Yin (2003, 55) Defining the boundaries of a case study Defining the unit of analysis of a case study Defining the criteria for judging the quality of research designs Defining a case study research design Establishing the rationale for single- and multiple-case studies
22. Preparing for data collection Yin (2003, 58-62, 67-80) The case study investigator Ask good questions Be a good listener Be adaptive and flexible A firm grasp of the issues being studied Unbiased by preconceived notions Case study protocol Overview Field procedures Case study questions Guide for the report Pilot The demands of a case study on your intellect, ego and emotions are far greater than those of any other research strategy ... continuous interaction between the theoretical issues being studied and the data being collected ... [must be able to] take advantage of unexpected opportunities [and] guard against biases
24. Analysing case study evidence Yin (2003, Chapter 5) Three general strategies Rely on theoretical propositions Thinking about rival explanations Developing a case description High quality analysis Attend to all the evidence Address all major rival interpretations Address the most significant aspect of your case study Use your own prior, expert knowledge
26. Reporting case studies Yin (2003, Chapter 6) ISWORLD (Re: information overload while conducting case studies) "It's a good thing to have too much written up on a case. That way, when you go back to the article, you can figure out what to cut."
e.g. of proposition: companies collaborate because they derive mutual benefits
Rigour of data collection can be understood in terms of validity and reliability, or credibility, depending on one’s conception of reality. Validity and reliability are positivist criteria of rigour, underpinned by the notion that there is an objective reality that exists independently of an observer. Credibility is more relevant if it is assumed there is no objective reality and that knowledge is subjective and socially constructed (Orlikowski and Baroudi 1991, Walsham 1995, Klein and Myers 1999). The criterion of credibility is further addressed by following the principles established by Klein and Myers (1999): the principles of the hermeneutic circle, of contextualization, of interaction between researchers and subjects, and the principle of suspicion.

Following the principle of the hermeneutic circle, long periods of time were allowed between conducting cases, preparing drafts and drawing conclusions. The first case took over two years before any final conclusions were reached, and the other cases took similar lengths of time (Appendix 8.5). This allowed sufficient time to reconsider the context, check alternative interpretations and maintain an overall suspicion of the findings. The test case reported in Chapter Three provides some evidence that these principles were followed. This case was conducted after the first two cases to dispel the suspicion that the researcher was overly biased by his research hypothesis, and it specifically checked alternative explanations. This process was made possible by the calibre of the key informants (i.e. high self-confidence) and the relatively high levels of trust established between the researcher and the key informants.

Pragmatism

Pragmatism does not fit into the Burrell & Morgan (1979) framework of paradigms.
It is a radically alternative paradigm: ontologically it does not accept the either/or of the incompatibility thesis, and methodologically it rejects the forced choice between post-positivism and constructivism with regard to logic and epistemology. Pragmatists have shown that methodology is not determined by ontology, and take the position that the best method or mix of methods is whichever produces the most effective results. Pragmatic methodology has evolved from triangulating information from different data sources (a technique that first emerged from psychology and sociology but reached its fullest application in applied research areas such as evaluation and nursing). The appropriateness of a method is determined pragmatically by whether it achieves its purposes (Tashakkori and Teddlie 2003).

An emerging strength of pragmatism is its attention to inference: methods to interpret results, make sense of findings and draw conclusions. Tashakkori and Teddlie (2003) believe the quality of inferences should be assessed by evaluating separately the quality of the data/observations and the quality of the inferences. This is a new concept. Tashakkori and Teddlie (2003) suggest inference should be assessed through inference quality and inference transferability. These pragmatic assessment criteria are detailed below. The main point to note is that pragmatic research attempts to increase interpretive rigour by synthesising and transcending qualitative concepts (credibility and transferability) and quantitative concepts (internal validity and external validity).

Inference quality relates to the degree to which a researcher believes his/her conclusions accurately describe what actually happened. It is assessed through design quality and interpretive rigour. Design quality should meet both qualitative and quantitative criteria, e.g. sampling criteria and length of engagement.
Interpretive rigour is concerned with a researcher’s construction of the relationships among people, events and variables, as well as his/her construction of respondents’ perceptions, behaviours and feelings and how these relate to each other. Tashakkori and Teddlie (2003, pp. 40-41, 692) acknowledge this is more difficult to assess but suggest it is determined by whether the interpretation is coherent and systematic, with “the ‘evidence’ … clearly delineated in the research report so that other scholars can review and judge the adequacy of the researcher’s representations and conclusions”. Assessment is in terms of within-design consistency, conceptual consistency (with internal results and with theory), interpretive consistency (between scholars and with participants) and interpretive distinctiveness (superiority to other interpretations).

Inference transferability relates to the generalizability of the results. It is assessed by both the quantitative criterion of external validity (use of theory and replication logic in the research design (Yin 2003)) and the qualitative criterion of transferability (thick descriptions of context). Inference transferability determines whether conclusions may be extrapolated beyond the particular conditions of the research study to different contexts, different groups of people, other time periods or other modes of measuring/observing.