Abstract
Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to determine the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design; reversal (withdrawal) designs and their extensions; multiple baseline designs; and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data are discussed, along with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
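To illustrate the kind of statistical analysis the review discusses, the sketch below shows an Edgington-style randomization test applied to hypothetical AB-design data (a baseline phase A followed by an intervention phase B). The function name and the data are assumptions for illustration only; note that, strictly, a valid randomization test requires the intervention start point to be selected at random from the admissible set before the study begins.

```python
import statistics

def ab_randomization_test(data, n_baseline, min_phase=3):
    """Randomization test for an AB single-subject design.

    Compares the observed phase-mean difference (B minus A) against
    the distribution of differences over all admissible intervention
    start points, each phase containing at least `min_phase` points.
    Returns the observed difference and a one-tailed p-value.
    """
    n = len(data)
    observed = (statistics.mean(data[n_baseline:])
                - statistics.mean(data[:n_baseline]))
    # Phase-mean difference for every admissible split point.
    diffs = [statistics.mean(data[k:]) - statistics.mean(data[:k])
             for k in range(min_phase, n - min_phase + 1)]
    # Proportion of splits with a difference at least as large
    # as the one observed at the actual intervention point.
    p = sum(d >= observed for d in diffs) / len(diffs)
    return observed, p

# Hypothetical daily performance scores: 5 baseline sessions,
# then 6 sessions under the conditioning intervention.
scores = [30, 31, 30, 29, 30, 33, 34, 35, 34, 33, 35]
observed, p = ab_randomization_test(scores, n_baseline=5)
```

With such short series the number of admissible splits, and hence the smallest attainable p-value, is limited; this is one reason the review weighs randomization tests against visual analysis and time-series methods.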
Acknowledgements
The authors gratefully acknowledge Professor Will G. Hopkins, Auckland University of Technology, Auckland, New Zealand, for his invaluable contribution to this manuscript.
The authors have provided no information on sources of funding or on conflicts of interest directly relevant to the content of this review.
Cite this article
Kinugasa, T., Cerin, E. & Hooper, S. Single-Subject Research Designs and Data Analyses for Assessing Elite Athletes’ Conditioning. Sports Med 34, 1035–1050 (2004). https://doi.org/10.2165/00007256-200434150-00003