
Single-Subject Research Designs and Data Analyses for Assessing Elite Athletes’ Conditioning

  • Leading Article
  • Sports Medicine

Abstract

Research in conditioning (all the processes of preparation for competition) has used group research designs, in which multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to determine the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of this review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data are discussed, together with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
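One of the statistical analyses discussed for single-subject data can be sketched briefly. In an Edgington-style randomization test for an AB design, the intervention point is treated as if it had been assigned at random among admissible positions, and the p-value is the proportion of admissible positions whose treatment-minus-baseline mean difference is at least as large as the observed one. The function name, data, and phase lengths below are hypothetical, for illustration only:

```python
def ab_randomization_test(scores, n_baseline, min_phase=3):
    """Randomization test for a single-subject AB design.

    Statistic: treatment-phase mean minus baseline-phase mean.
    p-value: proportion of admissible intervention points whose
    statistic is at least as large as the observed one.
    """
    n = len(scores)
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(scores[n_baseline:]) - mean(scores[:n_baseline])
    # Admissible intervention points keep at least min_phase
    # observations in each phase.
    diffs = [mean(scores[cut:]) - mean(scores[:cut])
             for cut in range(min_phase, n - min_phase + 1)]
    p = sum(d >= observed for d in diffs) / len(diffs)
    return observed, p

# Hypothetical weekly performance scores for one athlete:
# 5 baseline (A) weeks followed by 5 treatment (B) weeks.
scores = [50, 52, 51, 49, 53, 58, 60, 59, 61, 57]
diff, p = ab_randomization_test(scores, n_baseline=5)
# diff = 8.0, p = 0.2
```

Note that with only five admissible intervention points the smallest attainable p-value is 0.2, which illustrates the low statistical power of such tests for short series, one of the data-analysis problems this review raises.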




Acknowledgements

The authors wish to gratefully acknowledge Professor Will G. Hopkins, Auckland University of Technology, Auckland, New Zealand, for his invaluable contribution to this manuscript.

The authors have provided no information on sources of funding or on conflicts of interest directly relevant to the content of this review.

Corresponding author

Correspondence to Sue Hooper.



Cite this article

Kinugasa, T., Cerin, E. & Hooper, S. Single-Subject Research Designs and Data Analyses for Assessing Elite Athletes’ Conditioning. Sports Med 34, 1035–1050 (2004). https://doi.org/10.2165/00007256-200434150-00003
