SSD References

These are most of the references cited during the class--omitting, of course, those cited in other papers. A diligent student should be able to obtain these from a respectable library. The Society for the Experimental Analysis of Behavior has made articles from volumes 1-45 of the Journal of Applied Behavior Analysis available through PubMed Central.


American Psychological Association. (2001). Publication manual (5th ed.). Washington, DC: Author.

Arnold, B. L. (1997). Single-subject research as an alternative to group research. Athletic Therapy Today, 2(3), 19-20.

Baer, D. M. (1975). In the beginning, there was the response. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 16-30). Englewood Cliffs, NJ: Prentice-Hall.

Baer, D. M. (1977a). Just because it's reliable doesn't mean that you can use it. Journal of Applied Behavior Analysis, 10, 117-119.

Baer, D. M. (1977b). Perhaps it would be better not to know everything. Journal of Applied Behavior Analysis, 10, 167-172.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 20, 313-327.

Bailey, D. B., Jr. (1984). Effects of lines of progress and semilogarithmic charts on rating of charted data. Journal of Applied Behavior Analysis, 17, 359-365.

Bailey, J. S., & Burch, M. R. (2002). Research methods in applied behavior analysis. Thousand Oaks, CA: Sage.

Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). New York: Cambridge University Press.

Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12, 199-210.

Barlow, D. H., & Hersen, M. (1985). Single case experimental design: Strategies for studying behavior change (2nd ed.). New York: Pergamon.

Baron, A., & Derenne, A. (2000). Quantitative summaries of single-subject studies: What do group comparisons tell us about individual performances? Behavior Analyst, 23(1), 101-106.

Barrios, B. A., & Hartmann, D. P. (1988). Recent developments in single-subject methodology: Methods for analyzing generalization, maintenance, and multicomponent treatments. In M. Hersen, R. M. Eisler, & P. M. Miller (Eds.), Progress in behavior modification (Vol. 22, pp. 11-47). Newbury Park, CA: Sage.

Bass, R. F. (1987). Computer-assisted observer training. Journal of Applied Behavior Analysis, 20, 83-88.

Bass, R. F., & Aserlind, L. (1984). Interval and time-sample data collection procedures: Methodological issues. In K. Gadow & I. Bialer (Eds.), Advances in learning and behavioral disabilities (Vol. 3, pp. 1-39). Greenwich, CT: JAI Press.

Bijou, S. W. (1970). What psychology has to offer education--Now. Journal of Applied Behavior Analysis, 3, 65-71.

Bijou, S. W., Peterson, R. F., & Ault, M. H. (1968). A method to integrate descriptive and field studies at the level of data and concepts. Journal of Applied Behavior Analysis, 1, 175-191.

Billings, D. C., & Wasik, B. H. (1985). Self-instructional training with preschoolers: An attempt to replicate. Journal of Applied Behavior Analysis, 18, 61-67.

Billingsley, F., White, O. R., & Munson, R. (1980). Procedural reliability: A rationale and an example. Behavioral Assessment, 2, 229-241.

Birkimer, J. C., & Brown, J. H. (1979a). Back to basics: Percentage agreement measures are adequate, but there are easier ways. Journal of Applied Behavior Analysis, 12, 535-543.

Birkimer, J. C., & Brown, J. H. (1979b). A graphical aid which summarizes obtained and chance reliability data and helps assess the believability of experimental effects. Journal of Applied Behavior Analysis, 12, 523-533.

Birnbrauer, J. S. (1981). External validity and experimental investigation of individual behaviour. Analysis & Intervention in Developmental Disabilities, 1, 117-132. doi:10.1016/0270-4684(81)90026-4

Birnbrauer, J. S., Peterson, C. R., & Solnick, J. V. (1974). Design and interpretation of studies of single subjects. American Journal of Mental Deficiency, 79, 191-203.

Boykin, R. A., & Nelson, R. O. (1981). The effects of instructions and calculation procedures on observers' accuracy, agreement, and calculation correctness. Journal of Applied Behavior Analysis, 14, 479-489.

Box, G. E. P., & Jenkins, G. M. (1976). Time series analysis: Forecasting and control. San Francisco: Holden-Day.

Browning, R. M. (1967). A same-subject design for simultaneous comparison of three reinforcement contingencies. Behaviour Research and Therapy, 5, 237-243.

Busk, P. L., & Serlin, R. C. (1992). Meta-analysis for single-case research. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case research design and analysis (pp. 187-212). Hillsdale, NJ: Erlbaum.

Busse, R. T., Kratochwill, T. R., & Elliott, S. N. (1995). Meta-analysis for single-case consultation outcomes: Applications to research and practice. Journal of School Psychology, 33, 269-285.

Carnine, D. W. (1976). Effect of two teacher presentation rates on off-task behavior, answering correctly, and participation. Journal of Applied Behavior Analysis, 9, 199-206.

Carter, M. (2013). Reconsidering overlap-based measures for quantitative synthesis of single-subject data: What they tell us and what they don't. Behavior Modification, 37, 378-390. doi:10.1177/0145445513476609

Castellano, J., Perea, A., Alday, L., & Mendo, A. H. (2008). The measuring and observation tools in sports. Behavior Research Methods, 40, 898-905. doi:10.3758/BRM.40.3.898

Center, B. A., Skiba, R. J., & Casey, A. (1986). A methodology for the quantitative synthesis of intra-subject design research. Journal of Special Education, 19, 387-400.

Chassan, J. B. (1960). Statistical inference and the single case in clinical design. Psychiatry, 23, 173-184.

Corcoran, K. J. (1985). Aggregating the idiographic data of single-subject research. Social Work Research & Abstracts, 21(2), 9-12.

Crosbie, J. (1989). The inappropriateness of the C statistic for assessing stability of treatment effects with single-subject data. Behavioral Assessment, 11, 315-325.

Crosbie, J. (1993). Interrupted time-series analysis with brief single-subject data. Journal of Consulting and Clinical Psychology, 61, 966-974.

Davis, D. H., Gagné, P., Fredrick, L. D., Alberto, P. A., Waugh, R. E., & Haardörfer, R. (2013). Augmenting visual analysis in single-case research with hierarchical linear modeling. Behavior Modification, 37, 62-89. doi:10.1177/0145445512453734

Delaney, E. M., & Kaiser, A. P. (2001). The effects of teaching parents blended communication and behavior support strategies. Behavioral Disorders, 26, 93-116.

DeProspero, A., & Cohen, S. (1979). Inconsistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 12, 573-579.

Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Lik, N. M. K., Garcia, Y., & Rosales, R. (2009). Creating single-subject design graphs in Microsoft Excel™ 2007. Journal of Applied Behavior Analysis, 42, 277-293.

Doehring, D. G. (1996). Research strategies in human communication disorders (2nd ed.). Austin, TX: PRO-ED.

Dorsey, B. L., Nelson, R. O., & Hayes, S. C. (1986). The effects of code complexity and of behavioral frequency on observer accuracy and interobserver agreement. Behavioral Assessment, 8, 349-363.

Dunlap, G., dePerczel, M., Clarke, S., Wilson, D., Wright, S., White, R., & Gomez, A. (1994). Choice making to promote adaptive behavior for students with emotional and behavioral challenges. Journal of Applied Behavior Analysis, 27, 505-518.

Edgington, E. S. (1969). Statistical inference: The distribution-free approach. New York: McGraw-Hill.

Edgington, E. S. (1980). Random assignment and statistical tests for one-subject experiments. Behavioral Assessment, 2, 19-28.

Edgington, E. S. (1995). Randomization tests (3rd ed.). New York: Marcel Dekker.

Ennis, R. P., Jolivette, K., Fredrick, L. D., & Alberto, P. A. (2013). Using comparison peers as an objective measure of social validity: Recommendations for researchers. Focus on Autism and Other Developmental Disabilities, 28, 195-201.

Epstein, M. H., & Cullinan, D. (1979). Social validation: Use of normative peer data to evaluate LD interventions. Learning Disability Quarterly, 2(4), 93-98.

Faith, M. S., Allison, D. B., & Gorman, B. S. (1997). Meta-analysis of single-case research. In R. D. Franklin, D. B. Allison, and B. S. Gorman (Eds.), Design and analysis of single-case research (pp. 245-277). Mahwah, NJ: Erlbaum.

Ferguson, D. L., & Rosales-Ruiz, J. (2001). Loading the problem loader: The effects of target training and shaping on trailer-loading behavior of horses. Journal of Applied Behavior Analysis, 34, 409-424.

Ferron, J. M., Farmer, J. L., & Owens, C. M. (2010). Estimating individual treatment effects from multiple-baseline data: A Monte Carlo study of multilevel-modeling approaches. Behavior Research Methods, 42, 930-943.

Foster, W. (1986). The application of single subject research methods to the study of exceptional ability and extraordinary achievement. Gifted Child Quarterly, 30(1), 33-37.

Franklin, R. D., Allison, D. B., & Gorman, B. S. (Eds.). (1997). Design and analysis of single-case research. Mahwah, NJ: Erlbaum.

Franklin, R. D., Gorman, B. S., Beasley, T. M., & Allison, D. B. (1996). Graphical display and visual analysis. In R. D. Franklin, D. B. Allison, & B. S. Gorman (Eds.), Design and analysis of single-case research (pp. 119-158). Mahwah, NJ: Erlbaum.

Fuqua, R. W., & Schwade, J. (1986). Social validation of applied behavioral research: A selective review and critique. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 265-292). New York: Plenum.

Furlong, M. J., & Wampold, B. E. (1982). Intervention effects and relative variation as dimensions in experts' use of visual inference. Journal of Applied Behavior Analysis, 15, 415-421.

Gast, D. L., & Hammond, D. (2010). Withdrawal and reversal designs. In D. L. Gast (Ed.), Single subject research methodology in behavioral sciences (pp. 234-275). New York: Routledge.

Gast, D. L., & Ledford, J. (2010). Multiple baseline and multiple probe designs. In D. L. Gast (Ed.), Single subject research methodology in behavioral sciences (pp. 276-328). New York: Routledge.

Gast, D. L., & Wolery, M. (1988). Parallel treatments design: A nested single-subject design for comparing instructional procedures. Education and Treatment of Children, 11, 270-285.

Gibson, G., & Ottenbacher, K. J. (1988). Characteristics influencing the visual analysis of single-subject data: An empirical analysis. Journal of Applied Behavioral Science, 24, 298-314.

Glass, G. V., Wilson, V. L., & Gottman, J. M. (1975). Design and analysis of time-series experiments. Boulder, CO: Colorado Associated University Press.


Hains, A. H., & Baer, D. M. (1989). Interaction effects in multielement designs: Inevitable, desirable, and ignorable. Journal of Applied Behavior Analysis, 22, 57-69.

Hall, R. V. (1971). Behavior modification: The measurement of behavior. Lawrence, KS: H & H Enterprises.

Hallahan, D. P., Lloyd, J. W., Kosiewicz, M. M., Kauffman, J. M., & Graves, A. W. (1979). Self-monitoring of attention as a treatment for a learning disabled boy's off-task behavior. Learning Disability Quarterly, 2(3), 24-32.

Hallahan, D. P., Marshall, K. J., & Lloyd, J. W. (1981). Self-recording during group instruction: Effects on attention to task. Learning Disability Quarterly, 4, 407-415.

Hallahan, D. P., Lloyd, J. W., Kneedler, R. D., & Marshall, K. J. (1982). A comparison of the effects of self- versus teacher-assessment of on-task behavior. Behavior Therapy, 13, 715-723.

Haring, T. G., & Kennedy, C. H. (1988). Units of analysis in task analytic research. Journal of Applied Behavior Analysis, 21, 207-215.

Harris, F. C., & Lahey, B. B. (1978). A method for combining occurrence and non-occurrence interobserver agreement scores. Journal of Applied Behavior Analysis, 11, 523-527.

Harris, R. C., & Lahey, B. B. (1986). Condition-related reactivity: The interaction of observation and intervention in increasing peer praising in preschool children. Education and Treatment of Children, 9, 221-231.

Harrop, A., & Daniels, M. (1986). Methods of time sampling: A reappraisal of momentary time sampling and partial interval recording. Journal of Applied Behavior Analysis, 19, 73-77.

Hartmann, D. P. (1977). Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 10, 103-116.

Hartmann, D. P. (1982). Assessing the dependability of observational data. In D. P. Hartmann (Ed.), Using observers to study behavior (pp. 49-65). San Francisco: Jossey-Bass.

Hartmann, D. P., Gottman, J. M., Jones, R. R., Gardner, W., Kazdin, A. E., & Vaught, R. (1980). Interrupted time-series analysis and its application to behavioral data. Journal of Applied Behavior Analysis, 13, 543-559.

Hartmann, D. P., & Hall, R. V. (1976). The changing criterion design. Journal of Applied Behavior Analysis, 9, 527-532.

Hawkins, R. P. (1983). A frequent error in calculation or reporting of interobserver agreement. The Behavior Therapist, 6, 109.

Hawkins, R. P., & Dobes, R. W. (1977). Behavioral definitions in applied behavior analysis: Explicit or implicit? In B. C. Etzel, J. M. LeBlanc, & D. M. Baer (Eds.), New directions in behavioral research: Theory, methods, and applications (pp. 167-188). Hillsdale, NJ: Erlbaum.

Hawkins, R. P., & Dotson, V. A. (1975). Reliability scores that delude: An Alice in Wonderland trip through the misleading characteristics of inter-observer agreement scores in interval recording. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 359-376). Englewood Cliffs, NJ: Prentice-Hall.

Hay, L. R., Nelson, R. O., & Hay, W. M. (1980). Methodological problems in the use of participant observers. Journal of Applied Behavior Analysis, 13, 501-504.

Heins, E. D., Lloyd, J. W., & Hallahan, D. P. (1986). Cued and non-cued self-recording of attention to task. Behavior Modification, 10, 235-254.

Hersen, M., & Barlow, D. H. (1976). Single case experimental design: Strategies for studying behavior change. New York: Pergamon.

Hopkins, B. L., & Hermann, J. A. (1977). Evaluating interobserver reliability of interval data. Journal of Applied Behavior Analysis, 10, 121-126.

Horne, G. P., Yang, M. C., & Ware, W. B. (1982). Time series analysis for single-subject designs. Psychological Bulletin, 91, 178-189. doi:10.1037/0033-2909.91.1.178

Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation on the multiple baseline. Journal of Applied Behavior Analysis, 11, 189-196.

House, A. E. (1980). Detecting bias in observational data. Behavioral Assessment, 2, 29-31.

Huitema, B. E. (1985). Autocorrelation in behavior analysis: A myth. Behavioral Assessment, 7, 107-118.

Huitema, B. E. (1986a). Autocorrelation in behavioral research: Wherefore art thou? In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 187-208). New York: Plenum.

Huitema, B. E. (1986b). Statistical analysis and single-subject designs: Some misunderstandings. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 209-232). New York: Plenum.

Huitema, B. E. (1988). Autocorrelation: 10 years of confusion. Behavioral Assessment, 10, 253-294.

Huitema, B. E. (2004). Analysis of interrupted time-series experiments using ITSE: A critique. Understanding Statistics, 3, 27-46.

Huitema, B. E., & McKean, J. W. (2000). Design specification issues in time-series intervention models. Educational and Psychological Measurement, 60, 38-58.


Iwata, B. A., Bailey, J. S., Fuqua, R. W., Neef, N. A., Page, T. J., & Reid, D. H. (1989). Methodological and conceptual issues in applied behavior analysis: 1968-1988 from the Journal of Applied Behavior Analysis (Reprint Series, Vol. 4). Lawrence, KS: Society for the Experimental Analysis of Behavior.

Jackson, D. A., Della-Piana, G. M., & Sloane, H. (1975). How to establish a behavior observation system. Englewood Cliffs, NJ: Educational Technology Publications.

Jackson, N. C., & Mathews, R. M. (1995). Using public feedback to increase contributions to a multipurpose senior center. Journal of Applied Behavior Analysis, 28, 449-455.

Jenson, W. R., Clark, E., Kircher, J. C., & Kristjansson, S. D. (2007). Statistical reform: Evidence-based practice, meta-analyses, and single subject designs. Psychology in the Schools, 44, 483-493.

Johnson, S. M., & Bolstad, O. D. (1973). Methodological issues in naturalistic observation: Some problems and solutions for field research. In L. A. Hamerlynck, L. C. Handy, & E. J. Marsh (Eds.), Behavior change: Methodology, concepts, and practice (pp. 7-68). Champaign, IL: Research Press.

Johnston, J. M., & Pennypacker, H. S. (1980). Strategies and tactics of human behavioral research. Hillsdale, NJ: Erlbaum.

Johnston, J. M., & Pennypacker, H. S. (1986a). The nature and functions of experimental questions. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 55-84). New York: Plenum.

Johnston, J. M., & Pennypacker, H. S. (1986b). Pure versus quasi-behavioral research. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 29-54). New York: Plenum.

Johnston, J. M., & Pennypacker, H. S. (1992). Strategies and tactics of human behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.

Jones, C. J., & Nesselroade, J. R. (1990). Multivariate, replicated, single-subject, repeated measures designs and P-technique factor analysis: A review of intraindividual change studies. Experimental Aging Research, 16, 171-183.

Jones, R. R., Reid, J. B., & Patterson, G. R. (1975). Naturalistic observation in clinical assessment. In P. McReynolds (Ed.), Advances in psychological assessment (Vol. 3; pp. 42-95). San Francisco: Jossey-Bass.

Jones, R. R., Vaught, R. S., & Weinrott, M. (1977). Time series analysis in operant research. Journal of Applied Behavior Analysis, 10, 151-166.

Jones, R. R., Weinrott, M. R., & Vaught, R. S. (1978). Effects of serial dependency on the agreement between visual and statistical inference. Journal of Applied Behavior Analysis, 11, 277-283.


Kahng, S. W., & Iwata, B. A. (1998). Computerized systems for collecting real-time observational data. Journal of Applied Behavior Analysis, 31, 253-261.

Kavale, K. A., Mathur, S. R., Forness, S. R., Quinn, M. M., & Rutherford, R. B., Jr. (2000). Right reason in the integration of group and single-subject research in behavioral disorders. Behavioral Disorders, 25, 142-157.

Kazdin, A. E. (1973). Methodological and assessment considerations in evaluating reinforcement programs in applied settings. Journal of Applied Behavior Analysis, 6, 517-531.

Kazdin, A. E. (1976). Statistical analyses for single-case experimental designs. In M. Hersen & D. H. Barlow, Single case experimental designs: Strategies for studying behavior change (pp. 265-316). New York: Pergamon.

Kazdin, A. E. (1977a). Artifact, bias, and complexity: The ABCs of reliability. Journal of Applied Behavior Analysis, 10, 141-150.

Kazdin, A. E. (1977b). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1, 427-452.

Kazdin, A. E. (1979). Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis, 12, 713-724.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford.

Kazdin, A. E. (2010). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York: Oxford.

Kazdin, A. E., & Kopel, S. A. (1975). On resolving ambiguities of the multiple-baseline designs: Problems and recommendations. Behavior Therapy, 6, 601-608.

Kelley, M. L., & McCain, A. P. (1995). Promoting academic performance in inattentive children. Behavior Modification, 19, 357-375.

Kennedy, C. H. (2005). Single-case designs for educational research. Boston: Allyn & Bacon.

Kneedler, R. D., Wissick, C., & Lloyd, J. W. (1998). Notes from a field study of self-recording in the regular classroom. Effective School Practices, 17(2).

Kollins, S. H., Newland, M. C., & Critchfield, T. S. (1999). Quantitative integration of single-subject studies: Methods and misinterpretations. Behavior Analyst, 22, 149-157.

Kratochwill, T. R. (1978). Foundations of time-series research. In T. R. Kratochwill (Ed.), Single subject research: Strategies for evaluating change (pp. 1-100). New York: Academic Press.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from U.S. Department of Education Institute of Education Sciences: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Kratochwill, T. R., & Levin, J. R. (1992). Single-case research design and analysis: New directions for psychology and education. Hillsdale, NJ: Erlbaum.

Kratochwill, T. R., & Levin, J. R. (2010). Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods, 15, 124-144. doi:10.1037/a0017736

Kubany, E. S., & Sloggett, B. B. (1973). A coding procedure for teachers. Journal of Applied Behavior Analysis, 6, 339-344.

Lane, K., Wolery, M., Reichow, B., & Rogers, L. (2007). Describing baseline conditions: Suggestions for study reports. Journal of Behavioral Education, 16, 224-234. doi:10.1007/s10864-006-9036-4

LeLaurin, K., & Wolery, M. (1991). Research standards in early intervention: Defining, describing, and measuring the independent variable. Unpublished manuscript, Philadelphia Children's Network, Philadelphia, PA.

Lloyd, J. W., Bateman, D. F., Landrum, T. J., & Hallahan, D. P. (1989). Self-recording of attention versus productivity. Journal of Applied Behavior Analysis, 22, 315-323.

Lloyd, J. W., Hallahan, D. P., Kosiewicz, M. M., & Kneedler, R. D. (1982). Reactive effects of self-assessment and self-recording on attention to task and academic productivity. Learning Disability Quarterly, 5, 216-227.

Lloyd, J. W., Tankersley, M., & Talbott, E. (1994). Using single subject research methodology to study learning disabilities. In S. Vaughn & C. Bos (Eds.), Research issues in learning disabilities: Theory, methodology, assessment, and ethics (pp. 163-177). New York: Springer-Verlag.


Maggin, D. M., O'Keeffe, B. V., & Johnson, A. H. (2011). A quantitative synthesis of methodology in the meta-analysis of single-subject research for students with disabilities: 1985-2009. Exceptionality, 19, 109-135. doi:10.1080/09362835.2011.565725

Manolov, R., Arnau, J., Solanas, A., & Bono, R. (2010). Regression-based techniques for statistical decision making in single-case designs. Psicothema, 22, 1026-1032.

Marshall, K. J., Lloyd, J. W., & Hallahan, D. P. (1993). Effects of training to increase self-monitoring accuracy. Journal of Behavioral Education, 4, 445-459.

Matson, J. L., & Ollendick, T. H. (1982). The random stimulus design. Child Behavior Therapy, 3(4), 69-75.

Matyas, T. A., & Greenwood, K. M. (1990). Visual analysis of single-case time series: Effects of variability, serial dependence, and magnitude of effects. Journal of Applied Behavior Analysis, 23, 341-351.

McGonigle, J. J., Rojahn, J., Dixon, J., & Strain, P. S. (1987). Multiple treatment interference in the alternating treatments design as a function of the intercomponent interval length. Journal of Applied Behavior Analysis, 20, 171-178.

McKnight, S., McKean, J. W., & Huitema, B. E. (2000). A double bootstrap method to analyze an intervention time series model with autoregressive error terms. Psychological Methods, 5, 87-101.

Michael, J. (1974). Statistical inference for individual organism research: Mixed blessing or curse. Journal of Applied Behavior Analysis, 7, 649-653.

Miltenberger, R. G., Rapp, J. T., & Long, E. S. (1999). A low-tech method for conducting real-time recording. Journal of Applied Behavior Analysis, 32, 119-120.

Moore, S. R. (1998). Effects of sample size on the representativeness of observational data used in evaluation. Education and Treatment of Children, 21, 209-226.

Mudford, O. C., Beale, I. L., & Singh, N. N. (1990). The representativeness of observational samples of different durations. Journal of Applied Behavior Analysis, 23, 323-331.

Murphy, R., Doughty, N., & Nunes, D. (1979). Multi-element designs: An alternative to reversal and multiple-baseline evaluation strategies. Mental Retardation, 17, 23-27.

Nelson, R. O., & Hayes, S. C. (1979). Some current dimensions of behavioral assessment. Behavioral Assessment, 1, 1-16.

Nourbakhsh, M. R., & Ottenbacher, K. J. (1994). The statistical analysis of single-subject data: A comparative examination. Physical Therapy, 74, 768-776.

O'Brien, S., & Repp, A. C. (1990). Reinforcement-based reductive procedures: A review of 20 years of their use with persons with severe or profound retardation. Journal of the Association for Persons with Severe Handicaps, 15, 148-159.

O'Leary, K. D., & Kent, R. N. (1977). Sources of bias in observational recording. In B. C. Etzel, J. M. LeBlanc, & D. M. Baer (Eds.), New developments in behavioral research: Theory, method, and application (pp. 231-236). Hillsdale, NJ: Erlbaum.

Orme, J., & Cox, M. (2001). Analyzing single-subject design data using statistical process control charts. Social Work Research, 25, 115.

Ottenbacher, K. J. (1986). Reliability and accuracy of visually analyzing graphed data from single-subject designs. American Journal of Occupational Therapy, 40, 464-469.

Ottenbacher, K. J. (1990a). Visual inspection of single-subject data: An empirical analysis. Mental Retardation, 28, 283-290.

Ottenbacher, K. J. (1990b). When is a picture worth a thousand p values? A comparison of visual and quantitative methods to analyze single subject data. Journal of Special Education, 23, 436-449.

Page, T. J., & Iwata, B. A. (1986). Interobserver agreement: History, theory, and current methods. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 99-126). New York: Plenum.

Parker, R. I., Hagan-Burke, S., & Vannest, K. J. (2007). Percent of all nonoverlapping data (PAND): An alternative to PND. Journal of Special Education, 40, 194-204.

Parker, R. I., Vannest, K. J., & Brown, L. (2009). The improvement rate difference for single case research. Exceptional Children, 75, 135-150.

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35, 303-322.

Parsonson, B. S., & Baer, D. M. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research: Strategies for evaluating change (pp. 101-165). New York: Academic Press.

Parsonson, B. S., & Baer, D. M. (1986). The graphic analysis of data. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 157-186). New York: Plenum.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15, 477-492.

Poling, A., & Fuqua, R. W. (Eds.). (1986). Research methods in applied behavior analysis: Issues and advances. New York: Plenum.

Poling, A., & Grossett, D. (1986). Basic research designs in applied behavior analysis. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 7-28). New York: Plenum.

Powell, J., Martindale, A., & Kulp, S. (1975). An evaluation of time-sample measures of behavior. Journal of Applied Behavior Analysis, 8, 463-469.

Powell, J., Martindale, B., Kulp, S., Martindale, A., & Bauman, R. (1977). Taking a closer look: Time sampling and measurement error. Journal of Applied Behavior Analysis, 10, 325-332.

Primavera, L. H., Allison, D. B., & Alfonso, V. C. (1997). Measurement of dependent variables. In R. D. Franklin, D. B. Allison, & B. S. Gorman (Eds.), Design and analysis of single-case research (pp. 41-89). Mahwah, NJ: Erlbaum.


Reid, J. B. (1970). Reliability assessment of observation data: A possible methodological problem. Child Development, 41, 1143-1150.

Repp, A. C., Deitz, D. E. D., Boles, S. M., Deitz, S. M., & Repp, C. F. (1976). Differences among common methods of calculating interobserver agreement. Journal of Applied Behavior Analysis, 9, 109-113.

Repp, A. C., & Lloyd, J. (1980). Evaluating educational changes with single-subject designs. In J. Gottlieb (Ed.), Educating mentally retarded persons in the mainstream (pp. 73-105). Baltimore: University Park Press.

Repp, A. C., Nieminen, G. S., Olinger, E., & Brusca, R. (1988). Direct observation: Factors affecting the accuracy of observers. Exceptional Children, 55, 29-36.

Repp, A. C., Roberts, D. M., Slack, D. J., Repp, C. F., & Berkler, M. S. (1976). A comparison of frequency, interval, and time-sampling methods of data collection. Journal of Applied Behavior Analysis, 9, 501-508.

Robey, R. R., Schultz, M. C., Crawford, A. B., & Sinner, C. A. (1999). Single-subject clinical-outcome research: Designs, data, effect sizes, and analyses. Aphasiology, 13, 445-473.

Romanczyk, R. G., Kent, R. N., Diament, C., & O'Leary, K. D. (1973). Measuring the reliability of observation data: A reactive process. Journal of Applied Behavior Analysis, 6, 175-184.

Rooney, K. J., Hallahan, D. P., & Lloyd, J. W. (1984). Self-recording of attention by learning-disabled students in the regular classroom. Journal of Learning Disabilities, 17, 360-364.

Rusch, F. R., & Kazdin, A. E. (1981). Toward a methodology of withdrawal designs for the assessment of response maintenance. Journal of Applied Behavior Analysis, 14, 131-140.

Sackett, G. P. (Ed.). (1976). Observing behavior: Vol. 1: Theory and applications in mental retardation. Baltimore: University Park Press.

Sackett, G. P. (Ed.). (1976). Observing behavior: Vol. 2: Data collection and analysis methods. Baltimore: University Park Press.

Salzberg, C. L., Strain, P. S., & Baer, D. M. (1987). Meta-analysis for single-subject research: When does it clarify, when does it obscure? Rase: Remedial & Special Education, 8(2), 43-48.

Sanson-Fisher, R. W., Poole, A. D., & Dunn, J. (1980). An empirical method for determining an appropriate interval length for recording behavior. Journal of Applied Behavior Analysis, 13, 493-500.

Saudargas, R. A., & Lentz, F. E. (1986). Estimating percent of time and rate via direct observation: A suggested observational procedure and format. School Psychology Review, 15, 36-48.

Scruggs, T. E., & Mastropieri, M. A. (1998). Summarizing single-subject research: Issues and applications. Behavior Modification, 22, 221-242.

Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8(2), 24-52.

Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). "Meta-analysis for single-subject research: When does it clarify, when does it obscure?": Response to Salzberg, Strain, and Baer. Rase: Remedial & Special Education, 8(2), 49-52.

Shadish, W. R., Brasil, I. C. C., Illingworth, D. A., White, K., Galindo, R., Nagler, E. D., & Rindskopf, D. M. (2009). Using UnGraph® to extract data from image files: Verification of reliability and validity. Behavior Research Methods, 41, 177-183.

Shadish, W. R., & Rindskopf, D. M. (2007). Methods for evidence-based practice: Quantitative synthesis of single-subject designs. New Directions for Evaluation, 113, 95-109.

Shadish, W. R., Rindskopf, D. M., & Hedges, L. V. (2008). The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 2, 188-196.

Shapiro, E. S., Kazdin, A. E., & McGonigle, J. J. (1982). Multiple-treatment interference in the simultaneous or alternating-treatments design. Behavioral Assessment, 4, 105-115.

Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.

Sindelar, P. R., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67-76.

Skinner, B. F. (1966). What is the experimental analysis of behavior? Journal of the Experimental Analysis of Behavior, 9, 213-218. [Reprinted in a revised form in B. F. Skinner, (1969), Contingencies of reinforcement: A theoretical analysis. New York: Appleton-Century-Crofts.]

Skrtic, T. M., & Sepler, H. J. (1982). Simplifying continuous monitoring of multiple-response/ multiple-subject classroom interactions. Journal of Applied Behavior Analysis, 15, 183-187.

Smith, P. K., & Connolly, K. J. (1972). Patterns of play and social interaction in preschool children. In N. B. Jones (Ed.), Ethological studies of child behavior (pp. 65-95). Cambridge: Cambridge University Press.

Snell, M. E., & Loyd, B. H. (1991). A study of the effects of trend, variability, frequency, and form of data on teachers' judgments about progress and their decisions about program change. Research in Developmental Disabilities, 12(1), 41-61.

Spinuzzi, C. (2003). Using a handheld PC to collect and analyze observational data. Proceedings of the 21st Annual International Conference on Documentation, 73-79. New York: Association for Computing Machinery.

Suen, H. K., & Ary, D. (1986). Poisson cumulative probabilities of systematic errors in single-subject and multiple-subject time sampling. Behavioral Assessment, 8, 155-169.

Suen, H. K., Ary, D., & Ary, R. M. (1986). A note on the relationship among eight indices of interobserver agreement. Behavioral Assessment, 8, 301-303.


Taplin, P. S., & Reid, J. B. (1973). Effects of instructional set and experimenter influence on observer reliability. Child Development, 44, 547-554.

Tapp, J. T., Wehby, J. H., & Ellis, D. N. (1995). A multiple option observation system for experimental studies: MOOSES. Behavior Research Methods, Instruments, & Computers, 27(1), 25-31.

Tawney, J. W., & Gast, D. L. (1984). Single-subject research in special education. Columbus, OH: Charles E. Merrill.

Thomas, E. J., Bastien, J., Stuebe, D. R., Bronson, D. E., & Yaffe, J. (1987). Assessing procedural descriptiveness: Rationale and illustrative study. Behavioral Assessment, 9, 43-56.

Thomson, C., Holmberg, M., & Baer, D. M. (1974). A brief report on a comparison of time-sampling procedures. Journal of Applied Behavior Analysis, 7, 623-626.

Tryon, W. W. (1982). A simplified time-series analysis for evaluating treatment interventions. Journal of Applied Behavior Analysis, 15, 423-429.

Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.

Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press.

Ulman, J. D., & Sulzer-Azaroff, B. (1975). Multi-element baseline designs in educational research. In E. Ramp & G. Semb (Eds.), Behavior analysis: Areas of research and application (pp. 377-391). Englewood Cliffs, NJ: Prentice-Hall.

Van Acker, R., Grant, S. H., & Getty, J. E. (1991). Observer accuracy under two different methods of data collection: The effect of behavior frequency and predictability. Journal of Special Education Technology, 11, 155-166.


Wacker, D., McMahon, C., Steege, M., Berg, W., Sasso, G., & Melloy, K. (1990). Applications of a sequential alternating treatments design. Journal of Applied Behavior Analysis, 23, 333-339.

Walker, H. M., & Hops, H. (1976). Use of normative peer data as a standard for evaluating classroom treatment effects. Journal of Applied Behavior Analysis, 9, 159-168. doi: 10.1901/jaba.1976.9-159

Wasik, B. H. (1989). The systematic observation of children: Rediscovery and advances. Behavioral Assessment, 11, 201-217.

Weinrott, M. R., Reid, J. B., Bauske, B. W., & Brummett, B. (1981). Supplementing naturalistic observations with observer impressions. Behavioral Assessment, 3, 151-159.

White, D. M., Rusch, F. R., Kazdin, A. E., & Hartmann, D. P. (1989). Applications of meta analysis in individual-subject research. Behavioral Assessment, 11, 281-296.

White, O. (1987). "The quantitative synthesis of single-subject research: Methodology and validation": Comment. RASE: Remedial & Special Education, 8(2), 34-39.

Wolery, M., Gast, D. L., & Hammond, D. (2010). Comparative intervention designs. In D. L. Gast (Ed.), Single subject research methodology in behavioral sciences (pp. 329-381). New York: Routledge.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Zanolli, K., Daggett, J., Ortiz, K., & Mullins, J. (1999). Using rapidly alternating multiple schedules to assess and treat aberrant behavior in natural settings. Behavior Modification, 23, 358-378.

Zigmond, N., & Strain, P. S. (1987). How various forms of data affect teacher analysis of student performance. Exceptional Children, 53, 411-422.

Zwick, R. (1988). Another look at interrater agreement. Psychological Bulletin, 103, 374-378.
