
Single subject experimental design

Single subject experimental design is reviewed by M. Sarhady and S. Gharebaghy. Foundations, methods, analysis, and its role in evidence-based practice (EBP) are discussed.

Published in: Health & Medicine

Single subject experimental design

  1. 1. SINGLE SUBJECT EXPERIMENTAL DESIGN Mohsen Sarhady MSc. OT & Soraya Gharebaghy MSc. OT © 2015 msarhady1980@gmail.com
  2. 2. Outline: Introduction to SSED • Foundations for SSED • Common types of SSED • Data analysis in SSED • Evidence-based practice and SSED • Evaluating quality of SSED • Ethical issues in SSED
  3. 3. Think of these research problems: • If there were two hearing aids, which one would be more suitable for a 5-year-old child with cerebral palsy who cannot lip-read? • Is repeated storybook reading with adult scaffolding effective in increasing spontaneous speech in a 3-year-old boy with autism? • Which is more effective in enhancing the ability to stand up from a chair in a child with dyskinetic CP: a weighted vest or strengthening exercises?
  4. 4. INTRODUCTION What is Single Subject Experimental Design
  5. 5. What is SSED? • A research methodology in which the subject serves as his/her own control, rather than using another individual/group • Single-subject (sometimes referred to as single-case or single-system) designs offer an alternative to group designs • The focus is on an N = 1, a single subject, in which the “1” can be an individual or a group of individuals
  6. 6. The unit of clinical interest: the individual. Example: gait training. Results for 30 subjects: improved: 10; unchanged: 10; declined: 10. Conclusion (based on the group average): treatment had no effect, even though 10 of the 30 subjects improved.
  7. 7. Assumptions of the SSED
  8. 8. Contrast between Case Study and SSED • The case study is a subjective description of an individual's behavior • It provides impetus for further study or for the generation of theoretical hypotheses • A case study lacks variable controls and systematic data collection • It cannot document causal relationships between intervention and changes in behavior • Single subject research demands careful control of variables, clearly delineated and reliable data collection, and the introduction and manipulation of only one intervention at a time
  9. 9. Contrast between Group Designs and SSED • Group designs do not naturally conform to practice, particularly when the practice involves interventions with individuals • Analysis of group designs typically refers to the “group's average change score” or “the number of subjects altering their status” • In group designs we miss each individual's experience with the intervention • Individual participants within the group may not respond to the particular type of treatment offered
  10. 10. FOUNDATIONS For Single Subject Experimental Design
  11. 11. Components of SSED • The underlying principle of a single subject design: if an intervention with a client or a group of individuals is effective, it should be possible to see a change in status from the period prior to intervention to the period during and after the intervention • This type of design minimally has three components: (a) repeated measurement, (b) a baseline phase, and (c) a treatment phase
  12. 12. Repeated Measurement • Single-subject designs require the repeated measurement of a dependent variable (target problem) • Prior to starting and during the intervention, we must be able to measure the subject's status on the target problem at regular time intervals, whether the intervals are hours, days, weeks, or months
  13. 13. Baseline Phase • The period in which the intervention to be evaluated is not offered to the subject • Abbreviated by the letter “A” • During the baseline phase, repeated measurements of the dependent variable are taken • These measures reflect the status of the client(s) on the dependent variable prior to the implementation of the intervention • Baseline measurements provide two aspects of control analogous to a control group in a group design: the baseline scores serve as the comparison (control) condition, and the repeated measures help address threats to internal validity
  14. 14. Treatment Phase • The time period during which the intervention is implemented • Signified by the letter “B” • During the treatment phase, repeated measurements of the same dependent variable using the same measures are obtained • Patterns and magnitude of the data points are compared to the data points in the baseline phase to determine whether a change has occurred • It is recommended that the treatment phase be as long as the baseline phase
  15. 15. Measuring the Dependent Variable • Measures of behaviors, status, or functioning are often characterized in four ways: frequency refers to counting the number of times an event occurs; duration refers to the length of time an event or symptom lasts and usually is measured for each occurrence; interval refers to the length of time between events; magnitude refers to the intensity of a particular event, behavior, or state (a small recording sketch follows this slide) • The measures for each phase are almost always summarized on a graph: the y-axis represents the scores of the dependent variable, and the x-axis represents a unit of time, such as an hour, a day, a week, or a month
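To make the four characterizations concrete, here is a minimal Python sketch; the observation log, its (start, end, intensity) format, and the variable names are hypothetical illustrations, not material from the slides.

```python
# Hypothetical observation log for one session: each occurrence of the target
# behavior is recorded as (start_time_min, end_time_min, rated_intensity_1_to_5).
events = [(2.0, 2.5, 3), (10.0, 11.0, 4), (15.5, 16.0, 2)]

frequency = len(events)                                    # how many times the event occurred
durations = [end - start for start, end, _ in events]      # how long each occurrence lasted
intervals = [events[i + 1][0] - events[i][1]               # time from the end of one event
             for i in range(len(events) - 1)]              # to the start of the next
magnitudes = [intensity for _, _, intensity in events]     # rated intensity of each occurrence

print(frequency, durations, intervals, magnitudes)
```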
  16. 16. COMMON TYPES Of Single Subject Experimental Design
  17. 17. Common types of SSEDs (“A”: no-treatment phases; “B”: treatment phases) • Basic Design (A-B) • Withdrawal Designs (A-B-A) • Multiple Treatment Designs • Multiple Baseline Designs
  18. 18. Basic Design (A-B) • An A-B design represents a baseline phase followed by a treatment phase • No causal statements can be made • An A-B design provides evidence of an association between the intervention and the change
  19. 19. Withdrawal Designs • There are two withdrawal designs: the A-B-A design and the A-B-A-B design • Withdrawal: the intervention is concluded (A-B-A design) or is stopped for some period of time before it is begun again (A-B-A-B design) • The premise: if the intervention is effective, the target problem should improve only during the course of the intervention, and the target scores should worsen when the intervention is removed
  20. 20. A-B-A Design • Builds on the A-B design by integrating a post-treatment follow-up • This design answers the question left unanswered by the A-B design: does the effect of the intervention persist beyond the period in which treatment is provided? • It may also be possible to learn how long the effect of the intervention persists • The follow-up period should include multiple measures until a follow-up pattern emerges • The A-B-A design provides additional support for the effectiveness of an intervention
  21. 21. [figure slide]
  22. 22. A-B-A-B Design • The A-B-A-B design builds in a second intervention phase • The intervention is identical to the intervention in the first B phase
  23. 23. Multiple Treatment Designs • The nature of the intervention changes over time, and each change represents a new phase of the design • One type of change that might occur is in the intensity of the intervention (A-B1-B2-B3) • Another type of changing-intensity design adds additional tasks to be accomplished (e.g., B1 may involve walking safely within the house, B2 may add methods for using a checkbook, and B3 adds a component on cooking) • In another type, the actual intervention may change over time (A-B-C-D)
  24. 24. Limitations • Only adjacent phases can be compared, so the effect for nonadjacent phases cannot be determined • There might be a carryover effect from previous interventions
  25. 25. [figure slide]
  26. 26. Multiple Baseline Designs • A single transition from baseline to treatment (A-B) is instituted at different times across multiple clients, behaviors, or settings • This staggered or unequal baseline period is what gives the design its name • Internal validity is ensured by the multiple replications of the intervention delivered across clients, behaviors, or settings
  27. 27. [figure slide]
  28. 28. • Each transition from baseline to intervention is an opportunity to observe the effects of treatment • Staggering the transitions across time allows us to rule out alternative explanations for behavior change • Concurrent measurement controls better for threats to internal validity
  29. 29. Multiple baseline design across behaviors • There is one client, and the same intervention is applied to different but related problems or behaviors
  30. 30. Multiple baseline design across subjects • Each subject receives the same intervention sequentially to address the same target problem
  31. 31. Multiple baseline design across settings • Multiple baseline designs can be applied to test the effect of an intervention as it is applied to one client, dealing with one behavior, but sequentially applied as the client moves to different settings
  32. 32. Multiple-baseline designs • Strengths: strong internal validity; no reversal or withdrawal of the intervention is required; useful when behaviors are not likely to be reversible • Weakness: they require more data-collection time
  33. 33. Eliminating Alternative Hypotheses • By systematically delivering the treatment and continuously measuring the relevant target behavior, change in the behavior can be monitored and conclusions drawn about the determinants of this change • This gives the ability to eliminate alternative explanations for behavior change
  34. 34. Internal validity • How confident we can be that changes in the dependent variable are due to the introduction of the independent variable and not to some other factor
  35. 35. Threats to internal validity • Confounding variables • Maturation effects • History effects • Statistical regression toward the mean
  36. 36. Threats to internal validity are controlled primarily through: • Replication: each replication allows for a comparison between the subject's behavior during baseline and during treatment: (a) phase changes, (b) intersubject replication • Repeated measurement
  37. 37. External validity • Whether the findings are applicable to subjects and/or settings beyond the research
  38. 38. Data Analysis in Single-Case research
  39. 39. • Isolate causal relationships between independent and dependent variables • By systematically delivering the treatment (independent variable) and continuously measuring the relevant target behavior (dependent variable), changes in behavior can be monitored
  40. 40. • Single-subject researchers rely on visual analysis of graphed data • Are there changes in the data patterns? • If changes do exist, do they correspond with the experimental manipulations?
  41. 41. Data Graphs • Graphing the data facilitates monitoring and evaluating the impact of the intervention • Data for each variable for each participant or system are graphed: the dependent variable on the y-axis and time (e.g., an hour, a day, a week, or a month) on the x-axis • When graphing data for one variable for more than one participant, the scale for each graph should be the same to facilitate comparisons across graphs (a plotting sketch follows this slide)
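As a rough illustration of the graphing conventions described above, the sketch below plots hypothetical baseline and treatment observations with matplotlib; the data, labels, and phase lengths are invented for the example.

```python
# Minimal sketch of a standard SSED line graph: dependent variable on the y-axis,
# session on the x-axis, and a dashed vertical line marking the A-to-B phase change.
import matplotlib.pyplot as plt

baseline = [7, 8, 6, 7, 8]        # phase A observations (hypothetical frequencies)
treatment = [5, 4, 4, 3, 2, 2]    # phase B observations (hypothetical frequencies)

sessions_a = range(1, len(baseline) + 1)
sessions_b = range(len(baseline) + 1, len(baseline) + len(treatment) + 1)

plt.plot(sessions_a, baseline, "o-", label="Baseline (A)")
plt.plot(sessions_b, treatment, "o-", label="Treatment (B)")
plt.axvline(len(baseline) + 0.5, linestyle="--", color="gray")  # phase-change line
plt.xlabel("Session")
plt.ylabel("Target behavior (frequency)")
plt.legend()
plt.show()
```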
  42. 42. Visual Analysis • Differences in level • Changes in trend or slope: the direction of the trend and the rate of increase or decrease • Changes in variability
  43. 43. Differences in level • (a) A simple method to describe the level is to inspect the actual data points
  44. 44. Differences in level • (b) Using the mean (the average of the observations in the phase) or the median (the value at which 50% of the scores in the phase are higher and 50% are lower)
  45. 45. Changes in level are typically used when the observations fall along relatively stable lines (see the sketch below)
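A minimal sketch of summarizing level by phase with the mean and the median; the data are hypothetical and are not taken from the slides.

```python
# Describe the level of each phase using the mean and median of its observations,
# then report the change in level between baseline and treatment.
from statistics import mean, median

baseline = [7, 8, 6, 7, 8]        # hypothetical phase A data
treatment = [5, 4, 4, 3, 2, 2]    # hypothetical phase B data

print("Baseline level:  mean =", mean(baseline), " median =", median(baseline))
print("Treatment level: mean =", mean(treatment), " median =", median(treatment))
print("Change in level (difference of means):", mean(treatment) - mean(baseline))
```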
  46. 46. Changes in Trend and Slope • Compare trends in the baseline and intervention stages • Trend: the direction in the pattern of the data points; it can be increasing, decreasing, cyclical, or curvilinear • Slope: the rate of increase or decrease • Also consider the magnitude and rapidity of behavior transitions • Methods: Nugent's method, split-middle lines
  47. 47. Nugent Method
  48. 48. Split-middle lines (see the sketch after this slide)
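As a rough sketch of how a split-middle trend line can be computed: split the phase data in half, take the median session and the median value of each half, and draw the line through those two points. The full split-middle procedure then shifts this line up or down until half of the data points fall on or above it; that adjustment step is omitted here, and the data are hypothetical.

```python
# Unadjusted split-middle trend line: the line through the (median session, median value)
# points of the first and second halves of a phase.
from statistics import median

def split_middle_line(sessions, values):
    n = len(values)
    half = n // 2
    x1, y1 = median(sessions[:half]), median(values[:half])          # first half
    x2, y2 = median(sessions[n - half:]), median(values[n - half:])  # second half
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept   # trend line: value = slope * session + intercept

# Hypothetical baseline data
sessions = [1, 2, 3, 4, 5, 6]
values = [7, 8, 6, 7, 8, 9]
print(split_middle_line(sessions, values))
```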
  49. 49. Variability • Assess the stability or variability of the data points • Draw range lines
  50. 50. Interpreting Visual Patterns • Patterns of level and trend
  51. 51. Stable line (or a close approximation of a stable line): A: the intervention has only made the problem worse; B: the intervention has had no effect; C: suggests that there has been an improvement
  52. 52. Trend changes
  53. 53. F: no effect; G: no change in the direction of the trend, but the rate of deterioration has slowed; H: improved the situation only to the extent that it is not getting worse; I: improvement in the subject's status
  54. 54. No change in …? Change in …?
  55. 55. No change in …? Change in …?
  56. 56. No change in …? Change in …?
  57. 57. The PND statistic • Percentage of Nonoverlapping Data: the percentage of treatment-phase data points that do not overlap with the most extreme baseline data point • When the goal is to reduce maladaptive behavior, the most extreme baseline data point is the one with the lowest numerical value • When the goal is to increase adaptive behavior, it is the one with the highest numerical value
  58. 58. PND ≥ 90%: very effective treatment; PND 70-90%: effective treatment; PND 50-70%: questionable effectiveness; PND < 50%: ineffective treatment (a computation sketch follows this slide)
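A minimal sketch of computing PND for a behavior-reduction goal, using hypothetical data; the function name and values are illustrative only.

```python
# PND for a reduction goal: the share of treatment points that fall below the
# lowest (most extreme) baseline point. For an increase goal, count points
# above the highest baseline point instead.
def pnd_reduce(baseline, treatment):
    extreme = min(baseline)                                   # most extreme baseline value
    nonoverlap = sum(1 for x in treatment if x < extreme)     # treatment points beyond it
    return 100.0 * nonoverlap / len(treatment)

baseline = [7, 8, 6, 7, 8]
treatment = [5, 4, 4, 3, 2, 2]
print(f"PND = {pnd_reduce(baseline, treatment):.0f}%")        # 100% here: very effective
```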
  59. 59. Conservative dual-criterion (CDC) • A mean line is calculated based on the baseline data • A split-middle line is calculated based on the baseline data (a counting sketch follows this slide)
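The slide names the two baseline-based lines used by the dual-criterion approach. As I understand the published conservative dual-criterion method (Fisher, Kelley, and Lomas, 2003), both lines are also shifted by 0.25 baseline standard deviations in the expected direction of change, and the number of treatment points falling beyond both shifted lines is compared with a binomial criterion. The sketch below illustrates only the counting step, with hypothetical data; it is an assumption-laden illustration rather than the presenters' procedure.

```python
# Count treatment points that fall beyond both the baseline mean line and the
# baseline split-middle trend line, each shifted by 0.25 baseline SD ("conservative").
from statistics import mean, median, stdev

def cdc_count(baseline, treatment, increase=True):
    base_x = list(range(1, len(baseline) + 1))
    treat_x = list(range(len(baseline) + 1, len(baseline) + len(treatment) + 1))

    # Split-middle trend line from the baseline (through the medians of each half)
    half = len(baseline) // 2
    x1, y1 = median(base_x[:half]), median(baseline[:half])
    x2, y2 = median(base_x[-half:]), median(baseline[-half:])
    slope = (y2 - y1) / (x2 - x1)

    shift = 0.25 * stdev(baseline) * (1 if increase else -1)   # conservative adjustment
    mean_line = mean(baseline) + shift

    count = 0
    for x, y in zip(treat_x, treatment):
        trend_line = y1 + slope * (x - x1) + shift
        if increase:
            count += y > mean_line and y > trend_line
        else:
            count += y < mean_line and y < trend_line
    return count   # compare against the binomial criterion for this many treatment points

baseline = [7, 8, 6, 7, 8]
treatment = [5, 4, 4, 3, 2, 2]
print(cdc_count(baseline, treatment, increase=False))          # 6 of 6 points beyond both lines
```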
  60. 60. [figure slide]
  61. 61. [figure slide]
  62. 62. Questions?
  63. 63. [figure slide]
  64. 64. EVIDENCE BASED PRACTICE And Single Subject Experimental Design
  65. 65. Evidence based medicine (that is, EBP specific to the field of medicine) is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients . . . [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research”
  66. 66. [figure slide]
  67. 67. [figure slide]
  68. 68. [figure slide]
  69. 69. EVALUATING QUALITY Of Single Subject Experimental Design
  70. 70. Questions for evaluating quality of SSED: DESCRIPTION OF PARTICIPANTS AND SETTINGS 1. Was/were the participant(s) sufficiently well described to allow comparison with other studies or with the reader's own patient population?
  71. 71. Questions for evaluating quality of SSED: INDEPENDENT VARIABLE 2. Were the independent variables operationally defined to allow replication? 3. Were intervention conditions operationally defined to allow replication?
  72. 72. Questions for evaluating quality of SSED: DEPENDENT VARIABLE 4. Were the dependent variables operationally defined as dependent measures? 5. Was interrater or intra-rater reliability of the dependent measures assessed before and during each phase of the study? 6. Was the outcome assessor unaware of the phase of the study (intervention vs control) in which the participant was involved? 7. Was stability of the data demonstrated in baseline, namely lack of variability or a trend opposite to the direction one would expect after application of the intervention?
  73. 73. Questions for evaluating quality of SSED: DESIGN 8. Was the type of SSED clearly and correctly stated, for example A–B, multiple baseline across subjects? 9. Were there an adequate number of data points in each phase (minimum of five) for each participant? 10. Were the effects of the intervention replicated across three or more subjects?
  74. 74. Questions for evaluating quality of SSED: ANALYSIS 11. Did the authors conduct and report appropriate visual analysis, for example, level, trend, and variability? 12. Did the graphs used for visual analysis follow standard conventions, for example x- and y-axes labeled clearly and logically, phases clearly labeled (A, B, etc.) and delineated with vertical lines, data paths separated between phases, consistency of scales? 13. Did the authors report tests of statistical analysis, for example celeration line approach, two-standard deviation band method, C-statistic, or other? 14. Were all criteria met for the statistical analyses used?
  75. 75. External validity of SSED • Three sequential replication strategies to enhance external validity: • Direct replication: repeating the same procedures, by the same researchers, including the same treatment, in the same setting, and in the same situation, with different clients who have similar characteristics • Systematic replication: repeating the experiment in different settings, using different providers, and with other related behaviors • Clinical replication: combining different interventions in the same setting and with clients who have the same types of problems
  76. 76. ETHICAL ISSUES In Single Subject Experimental Design
  77. 77. Like any form of research, single-subject designs require informed consent • Participants must understand that the onset of the intervention is likely to be delayed until either a baseline pattern emerges or some assigned time period elapses • The risks associated with prematurely ending treatment in withdrawal designs may be hard to predict
  78. 78. THANK YOU!
