Evidence-Based Practices
EDIS 5222

If it doesn't work, why would I do it? A look at evidence-based practices.

 

This module is designed to provide the background knowledge necessary to complete the Evidence-Based Reading Project. To complete the module, go through each page and follow the directions provided.

 

 Click here to respond to a prompt.

 

 

Our charge as special education teachers is to improve the educational and life outcomes for children and youth with disabilities, and their families.

 

If this is our charge, then consider this question:

 

If it doesn't work, why would I do it?

[Image: four-panel Peanuts comic strip]

Why require standards for evidence?

 

The field of K-12 education contains a vast array of educational interventions--such as reading and math curricula, schoolwide reform programs, after-school programs, and new educational technologies--that CLAIM to be able to improve educational outcomes and, in many cases, to be supported by evidence. This evidence often consists of poorly-designed and/or advocacy-driven studies (Institute of Education Sciences, 2003, p. iii).

 

What constitutes scientifically based research?

 

According to IES, a study is scientific if it has the following features:

 

 

According to IES, scientifically based research:

 

 

 

 

Experimental and Quasi-Experimental Research

 

What is the difference between the two?

 

A true experimental design uses a random sample (i.e., every member of the target population has an equal chance of being chosen to participate) and random assignment to groups (i.e., each participant has an equal chance of being assigned to either the experimental or the control group), whereas a quasi-experimental design lacks random assignment.
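To make the distinction concrete, here is a minimal Python sketch (the roster, class labels, and group sizes are hypothetical and used only for illustration). In the experimental version, shuffling gives each student an equal chance of landing in either condition; in the quasi-experimental version, the conditions simply follow intact classes that already exist.

    import random

    # Hypothetical roster, used only for illustration.
    roster = [f"Student {i:02d}" for i in range(1, 21)]

    # True experimental design: shuffle the roster, then split it, so every
    # student has an equal chance of being placed in either condition.
    assigned = roster.copy()
    random.shuffle(assigned)
    experimental_group = assigned[:10]   # receives the new reading program
    control_group = assigned[10:]        # receives typical instruction

    # Quasi-experimental design: conditions follow pre-existing intact groups
    # (e.g., two existing classrooms), so assignment to condition is not random.
    classroom_a = roster[:10]            # this class happens to get the program
    classroom_b = roster[10:]            # this class serves as the comparison

Note that the random sample part of the definition refers to how the roster itself would be drawn from the larger population, which this sketch does not show.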

 

Example:

 

Foorman, B. R., Francis, D. J., Fletcher, J. M., & Schatschneider, C. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90, 37-55.

 

Abstract [with notes added in red]:

 

 

First- and second-graders (N=285) receiving Title 1 services received 1 of 3 kinds of classroom reading programs: direct instruction in letter-sound correspondences practiced in decodable text (direct code); less direct instruction in systematic sound-spelling patterns embedded in connected text (embedded code); and implicit instruction in the alphabetic code while reading connected text (implicit code). [This is a quasi-experimental study, because the children were not randomly assigned to classes. The experimenters did not manipulate the enrollment in each class.] Children receiving direct code instruction improved in word reading at a faster rate and had higher word-recognition skills than those receiving implicit code instruction. Effects of instructional group on word recognition were moderated by initial levels of phonological processing and were most apparent in children with poorer initial phonological processing skills. Group differences in reading comprehension paralleled those for word recognition but were less robust. Groups did not differ in spelling achievement or in vocabulary growth. Results show advantages for reading instructional programs that emphasize explicit instruction in the alphabetic principle for at-risk children. [These results are answering causal questions; the type of instruction caused differences in outcome variables.]

 

Here are some selected graphs from the article:

 

[Figure 1a from Foorman et al. (1998)]

[Figure 2a from Foorman et al. (1998)]

 

Correlational Research

 

Creswell (2008) defines a correlation as "a statistical test to determine the tendency or pattern for two (or more) variables or two sets of data to vary consistently" (p. 638).
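For reference, the statistic most often reported in these studies is the Pearson correlation coefficient r (this formula is standard and is not drawn from Creswell). For paired scores x and y from n participants:

    r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}

Values of r range from -1 to +1. Values near +1 or -1 indicate that the two variables vary together consistently (in the same or opposite directions, respectively), while values near 0 indicate little consistent relationship. Correlational designs describe these relationships; on their own, they do not answer causal questions the way the experimental designs above do.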

 

Example:

 

Georgiou, G. K., Parrila, R., Kirby, J. R., & Stephenson, K. (2008). Rapid naming components and their relationship with phonological awareness, orthographic knowledge, speed of processing, and different reading outcomes. Scientific Studies of Reading, 12, 325-350.

 

Abstract [with notes added in red]:

 

This study examines (a) how rapid automatized naming (RAN) speed components—articulation time and pause time—predict [key word] reading accuracy and reading fluency in Grades 2 and 3, and (b) how RAN components are related to [key words] measures of phonological awareness, orthographic knowledge, and speed of processing. Forty-eight children were administered RAN tasks in Grades 1, 2, and 3. Results indicated that pause time was highly correlated with both reading accuracy and reading fluency measures and shared more of its predictive variance with orthographic knowledge than with phonological awareness or speed of processing. In contrast, articulation time was only weakly correlated with the reading measures and was rather independent from any processing skill at any point of measurement.

 

Here is one correlation table from this study:

[Correlation table from Georgiou et al. (2008)]

 

 

Single Subject Research

Creswell (2008) describes single-subject research as "the study of single individuals, their observation over a baseline period, and the administration of an intervention. This is followed by another observation after the intervention to determine if the treatment affects the outcome" (p. 647). This definition provides a general idea of what a single-subject design may look like; such designs vary in the number of participants and in their specific design features.
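Single-subject researchers typically judge effects by visually inspecting graphed data for changes in level, trend, and variability between phases. The sketch below (in Python, with invented numbers used only for illustration) shows the simplest of those comparisons, a change in level, for one hypothetical participant.

    # Hypothetical probe scores for one participant; the numbers are invented
    # and serve only to illustrate how single-subject data are organized by phase.
    baseline_scores = [2, 3, 2, 3, 2]        # probes collected before instruction begins
    intervention_scores = [4, 6, 7, 9, 10]   # probes collected after instruction begins

    def mean(scores):
        return sum(scores) / len(scores)

    # Compare the average level in each phase; a real analysis would also
    # examine trend and variability on the graphed data.
    print(f"Baseline mean:     {mean(baseline_scores):.1f}")
    print(f"Intervention mean: {mean(intervention_scores):.1f}")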

 

Example:

 

Pullen, P. C., Lane, H. B., Lloyd, J. W., Nowak, R., & Ryals, J. (2005). Effects of explicit instruction on decoding of struggling first grade students: A data-based case study. Education and Treatment of Children, 28, 63-76.

 

 

Abstract [with notes added in red]:

 

Decoding unknown words when reading text is a necessary tool of skilled readers. Beginning readers need repeated opportunities to develop decoding ability. We investigated whether explicitly teaching essential components of beginning reading instruction promoted first graders' skill in decoding pseudowords. We employed a multiple-baseline design [one type of single-subject design that is commonly used] across groups of children to examine the effects of an intervention that included the use of manipulative letters to promote segmenting, blending, sounding out, and spelling skills. We monitored decoding skills by repeatedly measuring reading of pseudowords by 9 first-grade students [single-subject studies use a relatively small number of participants] identified as having incipient reading problems. Findings indicate that each student's skill in decoding increased with the introduction of instruction incorporating explicit decoding practice. These results reveal that teachers can use relatively simple instructional practices to enhance early reading skills. [Single-subject researchers use graphs to show their data.]

 

This is a typical graph that one would see in a single-subject study. Notice that, in this example, each graph represents one participant.

 

[Multiple-baseline graph from Pullen et al. (2005)]

 

Qualitative Research

 

Qualitative research commonly includes the use of interviews, observations, field notes, and artifacts (e.g., photographs, documents, and other physical evidence) to better understand a central phenomenon. Research questions used in qualitative designs differ substantially from those used in quantitative studies.

 

Example:

 

Weber, R. M. (1993). Even in the midst of work: Reading among turn-of-the-century farmers' wives. Reading Research Quarterly, 28, 292-302.

 

Abstract [with notes added in red]:

 

The place of reading in the lives of farm women at the turn of the century [one type of qualitative study is a historical study] is examined through an analysis of the content of extension bulletins directed to farmers' wives and of their responses to it [In this study, the researcher examined artifacts, i.e., the bulletins and written responses. This is a common form of data collection in qualitative studies]. The bulletins of the Cornell Reading-Course for Farmers' Wives offered an ideal vision of literacy that prevailed at the time, tempered to suit women's lives on farms. These bulletins recommended a range of literate practices to guide and lighten work, offset prejudices against farming, and foster virtue. These included reading thoughtfully and widely, cultivating reading in children, and reading with others at home and in study clubs. The women's responses generally accepted the recommendations, but valued reading mainly as a diversion from work.

 

 

Meta-Analysis

 

Creswell (2008) defines a meta-analysis as "a type of research report in which the author integrates the findings of many (primary source) research studies" (p. 642). Completing a meta-analysis involves establishing inclusion criteria, conducting an exhaustive search of the literature, coding each eligible article for researcher-determined variables, and compiling the findings into one concise document with its own conclusions.

 

Example:

 

Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99, 445-476.

 

Abstract [with notes added in red]:

 

There is considerable concern that the majority of adolescents do not develop the competence in writing they need to be successful in school, the workplace, or their personal lives. A common explanation for why youngsters do not write well is that schools do not do a good job of teaching this complex skill. In an effort to identify effective instructional practices for teaching writing to adolescents, the authors conducted a meta-analysis of the writing intervention literature (Grades 4-12), focusing their efforts on experimental and quasi-experimental studies [The researchers did an extensive search to find all articles that met these criteria]. They located 123 documents that yielded 154 effect sizes for quality of writing [The researchers found 123 documents that met their criteria. Meta-analyses could include a smaller number of studies, depending on the research questions, criteria, and the breadth of the given area]. The authors calculated an average weighted effect size (presented in parentheses) for the following 11 interventions: strategy instruction (0.82), summarization (0.82), peer assistance (0.75), setting product goals (0.70), word processing (0.55), sentence combining (0.50), inquiry (0.32), prewriting activities (0.32), process writing approach (0.32), study of models (0.25), grammar instruction (-0.32) [Meta-analysts use effect sizes as a statistical way to identify the strength of the conclusions and/or the strength of the variables].
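A brief note on the effect sizes in this abstract: the formula below is the standard standardized mean difference; Graham and Perin describe their exact weighting procedure in the article itself.

    d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}

Read this way, the 0.82 for strategy instruction means that, on average, students in the strategy-instruction conditions scored about 0.82 pooled standard deviations higher on writing quality than comparison students, while the negative value for traditional grammar instruction means comparison students scored higher.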

 

 

Complete the drag and drop activity by matching the research design to its corresponding characteristics.

[Hyperlink to drag-and-drop activity]