Article Archive for ‘February, 2012’

Using scientific evidence to improve educational decisions

February 15th, 2012

I often find myself playing the role of Grumpy Old Man in conversations about the selection of intervention programs and other teaching practices. A statement along the lines of “but there’s no evidence that it works” is often preceded by much face rubbing and hair pulling on my part. The response I hear most often is “but we see it working”, which precipitates more face rubbing and hair pulling from yours truly. Perhaps the biggest barrier in these conversations is that teachers and scientists often have different definitions of what is meant by “evidence”. This blog attempts to explain what scientists mean by the term evidence.

Evidence-based practice

The term evidence-based practice began in medicine. It seeks to maximise the accuracy of clinical decisions by basing them on evidence gathered via the scientific method. No one wants to see a doctor who prescribes a treatment just because they believe it works or because they heard a colleague give a presentation on it. There should be a burden of proof (and theoretically there is, although this burden doesn’t exempt doctors from making mistakes) on a doctor to make treatment choices that have been shown to work significantly better than no treatment or, if there are alternative treatments available, to choose the one that works most effectively with the fewest side effects. These statements are axiomatic, but they aren’t often applied to education, an area of at least equal importance to health.

What constitutes scientific evidence?

A very brief description of the scientific method in relation to treatments is as follows:

  1. Select 2 equivalent groups. If they don’t share the same characteristics and the same level of skills before the intervention you can’t be certain that any differences observed after the treatment were due to the treatment itself or to the pre-existing differences between the groups.
  2. Gather pre-treatment data using well-validated and reliable instruments that clearly measure the outcome in question. For example, a comparison of a group versus a one-on-one reading program should be evaluated with tests of reading ability not of motor skill.
  3. Randomly allocate students to the groups. If students, parents or teachers actively select the groups it damages the results. For example, a recent study compared neurofeedback therapy for ADHD to a non-treatment group. The results showed that parent-report measures of ADHD symptoms improved in the neurofeedback group relative to the untreated group. However, because parents actively decided to enrol their child in the neurofeedback group or actively decided not to, all these data show is that if parents believe that neurofeedback is going to work they will report that it does work!
  4. Implement the treatment while making sure that the treatment is run the same for everyone and that additional teaching or therapies are not going on at the same time.
  5. Administer post-tests to determine outcome.
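Under stated assumptions, the five steps above can be sketched as a toy simulation. All of the numbers below (group sizes, score scale, maturation and treatment effects) are invented for illustration, not drawn from any real study:

```python
import random
import statistics

random.seed(1)

# Steps 1-3: one pool of students, measured pre-treatment, then
# randomly allocated to two groups of equal size.
pre_scores = [random.gauss(100, 15) for _ in range(200)]
random.shuffle(pre_scores)
treated_pre, control_pre = pre_scores[:100], pre_scores[100:]

# Step 4: everyone matures over the study period; only the treated
# group gets the (hypothetical) extra treatment effect.
MATURATION, TREATMENT_EFFECT = 5.0, 4.0
treated_post = [s + MATURATION + TREATMENT_EFFECT + random.gauss(0, 5)
                for s in treated_pre]
control_post = [s + MATURATION + random.gauss(0, 5) for s in control_pre]

# Step 5: compare gains BETWEEN groups, not raw pre/post change.
treated_gain = statistics.mean(b - a for a, b in zip(treated_pre, treated_post))
control_gain = statistics.mean(b - a for a, b in zip(control_pre, control_post))
print(f"treated gain: {treated_gain:.1f}, control gain: {control_gain:.1f}")
```

Note that both groups improve, which is exactly why step 5 compares gains between groups: a naive pre/post comparison within the treated group alone would credit the treatment with the maturation that everyone experienced.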

Additional points of note:

  1. Children’s development is not static; they are in a constant state of personal improvement. Observations that students seem to improve over the course of a program/intervention are therefore mostly meaningless. All students will improve over time. The only way to tell if a teaching method works is to compare a child (or preferably a group of children) to an equivalent group who receive a different teaching method.
  2. Placebo effects are large in children. They will often improve just because of the extra attention paid to them. Therefore, observations, or even more empirical data, collected on a treated group in the absence of a group receiving an equal amount of attention are prone to error.
  3. Regression to the mean is a statistical artefact whereby extreme scores (e.g., low scores on a reading test) are likely to return closer to the mean (average) on repeat testing. Hence, proponents of a brief intervention, say of two weeks, may claim that their treatment produced a significant improvement from a score of 70 to 80 when the improvement may simply be regression to the mean.
  4. Beware research using tests that measure what is taught in the treatment. See Merzenich et al. (1996) for an example.
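Regression to the mean (point 3 above) can be demonstrated with a short simulation. The score scale and the selection cutoff below are invented for illustration; the point is that with no treatment at all, the lowest scorers "improve" on retest simply because part of their low score was measurement noise:

```python
import random
import statistics

random.seed(1)

# Each student has a stable underlying ability; each test adds noise.
true_ability = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]  # no treatment given

# Select the students who scored poorly on the first test...
low_scorers = [i for i, s in enumerate(test1) if s < 80]

# ...and their average rises on the retest with no intervention at all.
mean1 = statistics.mean(test1[i] for i in low_scorers)
mean2 = statistics.mean(test2[i] for i in low_scorers)
print(f"first test: {mean1:.1f}, retest: {mean2:.1f}")
```

A two-week program delivered between the two tests would have been credited with this entire "gain".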

An example of good data

Hatcher, Hulme, and Ellis (1994) compared four groups of children, all of whom had comparable reading difficulties. The children were randomly allocated to four groups, each of which received a different treatment. One received phonological awareness training, one phonological awareness plus reading, and another reading alone. The last group received no treatment. Although the three treated groups received different treatments, they received the same amount of instruction in terms of time, attention and contact with teachers. Because the study controlled for all other variables, the authors were able to claim that the stronger reading growth in the phonological awareness plus reading group was due to that treatment being superior. These claims could not have been made if the students had not been randomly assigned to groups, if they had differed on some other variable before treatment, or if some other variable such as the time spent on intervention had differed between the groups.

An example of poor data

I recently told a teacher that their recommendation that a child with reading problems see a behavioural optometrist was not evidence-based. See here for a review of the evidence for visual therapy in reading difficulties. The teacher replied that a student in a previous year had improved in reading ability while doing vision therapy and that they therefore believed vision therapy worked. Unfortunately, that belief simply cannot be supported by scientific principles. First, the improvements could have been a placebo effect. In fact, it is safe to assume that placebo effects represent at least part of all positive teaching outcomes for all students. Paying attention to children helps them improve. It is probably equally likely that the child would have improved by reciting the alphabet while standing on a wobble board each day. Second, children improve in almost all skills over the course of a year as a natural course of events. In this case, there is no way of being certain that the improvement wouldn’t have occurred anyway. Finally, teachers obviously want students to improve, and history is full of examples where even eminent scientists have deluded themselves into believing something because they were keen for it to be true (e.g., see the case of cold fusion). In summary, beware of using observations of single cases like this as the basis for educational decisions.

Types of evidence

Carter and Wheldall (2008) have proposed 5 levels of evidence that can be used to guide interpretation of educational research.

Level 1 

Level 1 programs or practices meet two criteria. First, they are consistent with existing scientific evidence in terms of current theory and recommended practice. Second, there is evidence of efficacy from a number of independent randomised controlled trials. Carter and Wheldall (2008) refer to Level 1 as the ‘gold standard’ and suggest that programs and practices meeting these criteria may be recommended with confidence.

The Hatcher et al. (1994) study described above represents an example of the gold standard Level 1 evidence.

Level 2

Like programs or practices that meet Level 1 criteria, Level 2 programs or practices are consistent with existing scientific evidence in terms of current theory and recommended practice. They also have empirical evidence supporting their efficacy but the design of the studies may not quite meet the gold standard of a randomised controlled trial necessary for Level 1 rating. These programs represent the silver standard and can be recommended with reasonable confidence.

An example of a Level 2 program is my own Understanding Words reading intervention program. The data we have on Understanding Words are summarised briefly below.

  1. A clinic study showed that a group of students made significant improvements in response to two terms of Understanding Words teaching. The gains were strong and similar to the average growth seen in randomised trials reported in the literature. However, because the study didn’t have a control group we can’t guarantee that the changes were the result of the intervention rather than of some other variable.
  2. A controlled study showed that a group of Grade 1 students with reading difficulties made significantly greater gains than a control group of average readers. In other words, the poor readers ‘closed the gap’ on the good readers as a result of the intervention. This study comes close to meeting the gold standard criteria, except that the students were not randomly allocated to groups and the research was not independent of the program developer.
  3. We also have four studies using well-controlled case series designs. These studies have shown that introduction of the Understanding Words treatment prompts increased reading growth in individual students compared to baseline periods in which no treatment or an alternative treatment was being provided.

Together, these studies fall short of the gold standard of Level 1 evidence, but the program fits into the Level 2 stratum based on its theoretical soundness and treatment-outcome data.

Level 3

Level 3 programs and practices make theoretical sense. These programs could be said to be based on evidence because there is often empirical data showing the effectiveness of the type of teaching contained in the program. However, there have been no scientific studies documenting the effectiveness of the program or practice itself. These programs might be used in the absence of an alternative that has stronger evidence. However, they should be used with caution. An example may be the ELF reading program. Arguably, ELF has a reasonably sound theoretical basis; however, there is no evidence beyond observations that the program works. Teachers and clinicians who want to be evidence-based practitioners would be cautious about selecting the program when there are other programs with stronger evidence bases.

Level 4

Level 4 programs are Not Recommended. They provide little or no empirical evidence for efficacy. They often rely on testimonials and observational ‘data’ to support their claims. Examples include fish oil as a treatment for ADHD and behavioural optometry as a treatment for reading difficulties.

Level 5

Level 5 programs and practices represent those for which there is evidence that the program is unsafe or results in negative outcomes. These programs and practices should be avoided at all costs.

Summary

Teachers can do a lot of good by becoming evidence-based teachers. At present, most of teachers’ professional reading involves practically-oriented periodicals or books rather than research-based, peer-reviewed journals (Rudland & Kemp, 2004). It has also been reported that regular and special education teachers regard the opinions of colleagues, workshops and in-service activities (which may present opinions with no evidence base) as more trustworthy than professional journals (Landrum, Cook, Tankersley, & Fitzgerald, 2002). Further, Boardman, Arguelles, Vaughn, Hughes, and Klinger (2005) reported that when making decisions about classroom implementation of practices, special education teachers did not consider it important that the practices be research-based. This is understandable, as practical strategies may seem more immediately applicable for teachers. However, it would be nice if these things began to change as they have in medicine. Teachers could become more critical about the claims made by proponents of educational practices and more critical of their own teaching methods. They could begin by asking themselves what evidence supports the programs and practices they use. They could actively seek out that evidence from peer-reviewed sources rather than relying on books, the Internet or the opinions of colleagues presenting PD. They could ask themselves: “Would I be happy if my GP used a Level 3, 4 or 5 treatment on me just because s/he believed that it worked?”. If not, they could ask another question: “Should I therefore be cautious in selecting educational programs that have limited evidence?”.

Final note

The last thing I intended was for this blog to be interpreted as teacher-bashing. I could write a similar blog about some of my psychologist, occupational therapist, or paediatrician colleagues. Nor am I immune to human foibles and biases. However, the fact is that we should all strive to do better for students who have learning difficulties and indeed all children. To do that we need to recognise the dangers of our belief system and of relying on the opinions of peers. We all need to strive to base decisions on science, not on philosophy or pseudoscience. We would also do better to recognise the limits of our knowledge. To paraphrase Donald Rumsfeld, the best teachers (and other clinicians) know what they don’t know.

Review of Early Literacy Foundations (ELF) program

February 14th, 2012

The Early Literacy Foundations (ELF; UQ, 2006) program is produced by the Speech Pathology and Occupational Therapy faculties at the University of Queensland. It is designed for “boosting a range of literacy skills in year one students” (p. 8). The program uses the term ‘literacy’ in a broad sense to encompass the skills of reading (at the word level), spelling and handwriting. It is designed as a withdrawal program for small groups, a feature that is sure to grab the attention of cash-strapped learning support co-ordinators. The aim of the program is to “provide students with strategies to boost their literacy, including listening, spelling, reading, handwriting, and a range of the motor skills important for school participation” (p. 9).

The program consists of a resource manual and a theme book that provides instructions and student materials. The teacher instructions are clear and could be followed by a paraprofessional. Because the program was largely developed by speech pathologists, it is unsurprising that there is a strong emphasis on phonological awareness. There is also a strong emphasis on postural, sensory and motor skills, and this is where the first problem arises. It is true that motor coordination weaknesses co-exist with learning difficulties (e.g., Kaplan, Wilson, Dewey, & Crawford, 1998). However, far from 100% of students with reading difficulties have motor weaknesses, and there is no evidence that motor weaknesses are causal in reading difficulties. It is therefore strange that a literacy program would include a motor component. In this author’s opinion, motor activities have no place in a reading program and it would be far better to select the students who have motor disorders for a separate program. The rest of this review will ignore the motor component of the program and focus on the ‘literacy’.

Teaching is preceded by a screening test that consists of various phonological awareness activities, a spelling task and a nonword spelling task from the SPAT. I like the authors’ suggestion to rank-order scores and select all students who score below the mean for intervention. This is unlikely to occur in the real world but it shows the right intent.

The program has 12 “themes”. Each theme consists of a number of activities. Together, the activities in each theme take approximately 1-1.5 hours to administer, which means that students will receive a maximum of about 18 hours of instruction. In reality, the amount of reading instruction will be even less, as a large part of the program involves motor skill activities. This seems light for an intervention program.

The phonological awareness part of ELF progresses through the developmental stages of this metalinguistic skill (e.g., Adams, 1990; Yopp, 1992). Themes 1 and 2 consist of rhyming, segmenting sentences into words and syllabification activities. Themes 3 and 4 focus on onset-rime activities, while later themes focus on phoneme-level activities. Here the next problem arises.

There is certainly evidence suggesting that phonological awareness is correlated with reading, and many draw the inference that it is involved in learning to read (e.g., Foorman, Francis, Novy & Liberman, 1991; Hatcher, Hulme & Ellis, 1994; see Snowling, 2000 for review), but the case is far from proven (see Castles & Coltheart, 2004 for review). There are cases of reading difficulties in which no phonological problems are present (Castles, 1996; Zoccolotti & Friedmann, 2010) and, despite popular belief, there is actually limited evidence showing that teaching phonological awareness has any additional benefit above and beyond teaching letter-sound conversion rules.

Even if one accepts that phonological awareness is a skill required for learning to read, the question becomes how much phonological awareness is required? Many people agree that being able to segment and blend words of 4-5 phonemes is sufficient. This makes the phoneme manipulation, deletion and substitution activities in the later Themes of ELF somewhat redundant. It should be noted that the major concern is not that these activities are bad, but that they are unnecessary. Reading programs need to target reading and spelling skills, not distal factors like phonological awareness. Students need as many repetitions of letter-sound conversion rules and decoding and spelling attempts using the letter-sound rules as teachers can possibly give them; excessive teaching of phonological awareness distracts from this essential requirement.

A positive is that ELF teaches letter sounds. The letter sequence is t, f, j, g, m, n, h, v, w, y, sh, th, ch, k, p, b, d, i, a, u, o, e, r, l, s and z. However, the sequence is somewhat odd with easily confused letters (e.g., p, d and b) taught together and low-frequency letter-sounds (e.g., v, w, y and z) taught before more frequently occurring letter-sound conversion rules.

It is not until Theme 5 that students begin spelling nonsense words using the letter-sound conversion rules. It is worth noting that the reading and spelling activities provide a limited number of repetitions compared to other intervention programs (e.g., Understanding Words and Minilit).

So I have a few problems with the program, but does it work? The answer is that we don’t know. There are no published peer-reviewed studies on effectiveness.

Finally, I was surprised to read that the authors recommend using the program in semester 2 of Grade 1 or even in Grade 2. They claim that this will give students the opportunity for six months of classroom instruction and some exposure to both phonics (a dangerous assumption) and handwriting. They provide no evidence for this suggestion and it seems an odd one. They are almost recommending a wait-and-see-who-fails approach. Surely an early literacy foundations program should target Prep/Kinder students, or at least the very start of Grade 1?

Conflict of Interest:

Craig Wright is the author of the Understanding Words reading intervention program.

How do children learn to read and what goes wrong for some children?

February 07th, 2012

Models of reading: The dual-route approach

There are a number of different models of how we read, the most appealing of which is Max Coltheart’s Dual-Route Approach.

This approach uses the terms “lexical” and “non-lexical” to describe two ways in which words can be read aloud. “Lexical” refers to a route where the word is familiar and recognition prompts direct access to a pre-existing representation of the word name that is then produced as speech. “Non-lexical” refers to a route used for novel or unfamiliar words. As unfamiliar words are, by definition, unrepresented in the brain’s lexicon, they cannot be read directly. They have to be decoded using knowledge of grapheme-phoneme (or “letter-sound”) conversion rules (GPCs).

Figure 1 shows the Dual-Route model. The visual features and the global form of the printed word shelf are recognised as a familiar word, which activates the orthographic representation of shelf in the Orthographic Lexicon, in turn activating the word’s name in the Phonological Lexicon, before activating the word’s meaning in the Semantic Lexicon. The 4 Sub-Lexical Phonological Representations (speech sounds) (i.e., /sh/ /e/ /l/ /f/) are then activated and produced as the spoken word shelf.

In contrast, gallimaufry will not be read directly by anyone other than those with exceptionally large vocabularies, because most mere mortals have no pre-existing Orthographic or Phonological representations for this very low-frequency word. Instead, the individual letters are analysed using knowledge of GPCs (e.g., g = /g/), the appropriate Sub-Lexical Phonological Representations are accessed, and finally the sub-lexical units are reassembled as a word and translated to speech. A feedback system operates in this process that allows access to the word’s meaning and learning of new words to take place.

Figure 1. An adapted Dual-Route model of reading showing the different pathways by which the known word shelf and the unknown word gallimaufry may be read aloud. Source http://www.maccs.mq.edu.au/~ssaunder/DRC/.
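The two routes can be caricatured as "look the whole word up; if that fails, decode it letter by letter". The tiny lexicon and GPC table below are invented for illustration and are nothing like the full DRC model, but they show the fallback logic:

```python
# Lexical route: whole-word lookup in an orthographic/phonological lexicon.
LEXICON = {"shelf": "/shelf/"}

# Sub-lexical route: grapheme-phoneme conversion rules (a toy subset).
GPCS = {"sh": "sh", "a": "a", "e": "e", "f": "f", "g": "g", "i": "i",
        "l": "l", "m": "m", "r": "r", "u": "u", "y": "y"}

def read_aloud(word: str) -> str:
    if word in LEXICON:          # familiar word: direct lexical access
        return LEXICON[word]
    phonemes = []                # unfamiliar word: decode via GPCs
    i = 0
    while i < len(word):
        # try two-letter graphemes (e.g. 'sh') before single letters
        for size in (2, 1):
            chunk = word[i:i + size]
            if chunk in GPCS:
                phonemes.append(GPCS[chunk])
                i += size
                break
        else:
            raise ValueError(f"no GPC for {word[i]!r}")
    return "/" + " ".join(phonemes) + "/"

print(read_aloud("shelf"))        # lexical route: '/shelf/'
print(read_aloud("gallimaufry"))  # sub-lexical route, sound by sound
```

In this caricature, skilled readers have a large LEXICON; beginning readers rely almost entirely on the GPCS table, which is why teaching letter-sound rules matters so much.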

Skilled readers mostly use the Lexical route. They retain the ability to use the Sub-Lexical route (consider how you read gallimaufry and bioluminescence); it is just that they don’t need to – they have had enough experience with reading to have developed sufficient lexical knowledge. In contrast, young readers, and individuals struggling with reading, do not possess word-specific lexical knowledge in sufficient quantities. How, then, do we teach this skill?

The goal of all word-reading instruction should be to assist students to read most words fluently, using the lexical route. But how do we do this? The answer lies in the development of the sub-lexical route.

The development of Sub-Lexical Reading

The following describes what we think might happen in learning to read. However, readers should note that we have a good idea of how skilled reading occurs but we actually don’t yet know how we learn to read.

Imagine that the young student destined to become a skilled reader has, by virtue of genetic fortune, all of the skills required to read. Then imagine that the following words are the first they ever attempt to read:

sit

pat

The skilled-reader-to-be has some recognition that words can be segmented into speech sounds (e.g., sit has three: /s/ /i/ /t/). This helps them map the written letters s, i, t, p and a onto speech sounds (e.g., s = /s/). Acquiring these “letter-sound mappings” gives the student access to the Sub-Lexical reading route. They can read any word made up of any combination of those five letters without the help of an adult (i.e., they can independently read words like tap, tip, sap, spit).

Research has shown that we have to accurately identify a word between 4 and 12 times before it becomes what teachers refer to as a “sight word”; that is, before a strong enough representation of the visual form of the word and its name is formed to allow reading via the Lexical route. At this point, reading begins to speed up. The student no longer has to laboriously decode every word; fluent recognition frees up cognitive space and energy that can be used for other functions, such as comprehension and learning unusual spelling patterns.

The process seems to be different for the unskilled reader. For whatever reason, when they see the first words sit and pat they have difficulty recognising the relationship between the speech sounds in the words and the written letters used to represent them. Acquisition of “letter-sound mappings” is therefore delayed, preventing access to the Sub-Lexical reading route. When the young, or unskilled reader sees the words below, how then do they read them?

sap tip

at pit

They can’t accurately decode them using the Sub-Lexical route. Instead, they guess. In some cases the guess may be ‘educated’, but a guess all the same. Sometimes they will try to predict the word from the meaning or structure of the sentence. Often they will look at a picture to help with the printed words. They may also rely upon salient visual cues within the words, such as the initial letter, word length, or other obvious letters. It is possible that an unskilled reader will read “A fat cat sat on the mat” as “A big kitten was sitting on the floor”.

Despite common belief in education circles, using contextual cues is not only inaccurate, but damaging to students’ reading. Research has shown that contextual cues only provide 5-25% accuracy rates; and for the important content words in sentences the accuracy rate is towards the bottom of that range. In addition, because prediction from context avoids use of both the Lexical and Sub-lexical routes, even if the student guesses correctly, it does not count towards the 4-12 successful decoding attempts required to learn a word “by sight”. Using contextual cues is therefore self-defeating.

Teaching students to read

Reading is a complicated process that requires instruction in, among other things, phonological awareness, letter knowledge, phonics, spelling, strategy development, vocabulary, grammatical awareness, and comprehension strategies. The Understanding Words program is a good example of an evidence-based reading intervention.


Executive Functions & Education

February 07th, 2012

What are Executive Functions?

The Executive Functions (EF) are a set of cognitive functions that provide the infrastructure for acquiring skills and knowledge and that coordinate the production and organisation of that knowledge. They include the ability to inhibit motor responses and other actions, to initiate effort, to sustain attention and effort, to shift attention or strategy, the controls of memory, and the ability to plan and organise for task performance.

Teachers may be more familiar with the term metacognition. This term may be misleading because it creates a false impression of a little meta-person running all cognition or thinking. In fact, the EF are no more or no less important than other cognitive operations and academic skills. The latter can be thought of as the ingredients for a task while the EF provide the recipe. One cannot prepare a meal without the ingredients and the recipe.

Dysfunction in the core EF of behavioural inhibition (the ability to inhibit or stop behavioural responses to stimuli) is now considered to be a major deficit in Attention-Deficit/Hyperactivity Disorder (ADHD). Executive dysfunction in various forms is also present in a number of other disorders, including learning disabilities, Autistic Spectrum Disorders, anxiety disorders and depression.

The adverse effects of executive dysfunction

The EF exist within the brain at a cognitive level and therefore cannot be directly observed. The behaviours that EF dysfunction creates can, however, be observed and include:

  • Having difficulty inhibiting behaviour (i.e., stopping and thinking).
  • Being impulsive, rushing work, and blurting out answers.
  • Failing to pay close attention to details.
  • Having difficulty sustaining mental effort and avoiding tasks that require sustained effort.
  • Being easily distracted and switching from one task to something less important.
  • Difficulty with organisation and planning.
  • Appearing to make the same mistakes repeatedly despite seeming to understand rules and the use of appropriate discipline.
  • Parents and teachers having to continually repeat rules.
  • Some students may manifest hyperactivity verbally (i.e., talk excessively).
  • Becoming fixated on particular things (e.g., television or computer).
  • Failing to learn from previous behaviour and consequences.
  • Only being motivated to perform when there is something in it for them (i.e., they need external motivation).
  • Having poor perception of time and poor ability to use time to plan behaviour.
  • Appearing sluggish and taking a long time to process information.
  • Needing constant assistance in solving problems.
  • Poor short-term memory and general forgetfulness; including forgetting things they need for school or forgetting to hand things in at school.
  • General difficulties with regulating emotion. Emotional responses to situations may appear extreme. It can be difficult for them to remain calm and think things through. They can become overexcited and ‘throw tantrums’ more regularly than their peers.
  • Becoming overwhelmed by tasks that should be manageable.
  • Talking a lot, but not really saying anything.
  • Disorganised speech/language and poor grammar (i.e., their sentences are poorly constructed).
  • These kids have been described as carrying around an excitement meter that they use to evaluate every stimulus in their immediate environment. Essentially, the thing with the highest reading wins (i.e., gets their attention).

Consider the child in a classroom who is faced with both a page of maths problems and his peers talking about BMX bikes. Which stimulus is he to choose? For most kids with EF dysfunction there is no option – they go for the more exciting discussion about BMX. And what happens? They are seen as ‘inattentive’ and in some cases ‘disruptive’. In actual fact, they are being quite attentive to the BMX conversation; it’s just that it may be inappropriate to do so in the classroom.

Once attention has been allocated to an exciting and rewarding stimulus, it can also be hard to get the child to inhibit that response and return to the original task. 

Inconsistency

An individual with EF dysfunction is likely to be inconsistent in academic performance and behaviour; some days they will and some days they won’t, rather than simply being unable to do something at all. They will often have the skills necessary for a task (i.e., the ingredients), but fail to produce adequate performance or output because the EF controls (the recipe) do not provide the necessary regulation of performance.

Managing Executive Dysfunction

If a child displays some of the symptoms described above and those symptoms are causing them a problem it is appropriate to have them assessed. The psychologist will need to determine what is causing the problems, make the appropriate diagnosis(es) and design a specific treatment plan.

Some individuals, particularly those with a diagnosis of ADHD, may benefit from a stimulant medication or non-stimulant such as atomoxetine. It is important to recognise that medications do not teach skills. They can, however, give the child a greater ability to stop, think, and to perform what they know. Medication does not work for every child but if you consider it worth a trial you should discuss the matter with a medical specialist.

Management of EF should always include a behavioural component and requires a team-based approach. Ideally, the school counsellor and learning support team can assist with management. However, it may be wise to arrange a meeting of all stakeholders at the school to discuss the case. These meetings should be used to further define the problem behaviours within the classroom and to develop methods for improving attention to detail and task and for increasing consistency and output.

More information on EF

lanlfoundation

Lynn Meltzer

National Resource Center on ADHD

Russ Barkley

Russ Barkley 2