
Cogmed improves working memory scores but not attention or academic skills

October 1st, 2012

ADHD and learning disabilities (LD) co-exist in many children. Many of these students have problems with working memory. Although a little crude, the best description I have for working memory is that it is a set of cognitive functions that help you keep your s@#* together while performing a complex task.

Working memory predicts parent ratings of inattentive behaviours and has been found to be below average in LD and ADHD samples in a large number of studies. It has also been shown to be a predictor of academic success. Previously it had been thought that working memory was a fixed trait. However, recent evidence (see, e.g., Klingberg, 2010; Klingberg et al., 2005) suggests that it can be modified via a computerised memory training program called Cogmed. Note that the claim has been that Cogmed improves working memory, when all we can really conclude is that a period of training using the program leads to improved scores on tests of working memory. There is an important difference between these two claims.

While there may be evidence for improved working memory scores, there is limited evidence of transfer to important functional skills. The sort of transfer one would want to see includes reductions in symptoms of ADHD and improved academic performance in students who have ADHD/LD.

A recent study (Gray et al., 2012) in the Journal of Child Psychology & Psychiatry reported a randomised controlled trial of the Cogmed program. Sixty adolescents (aged 12-17) were recruited from a residential school for students with severe LD and ADHD. Inclusion criteria were (a) full-time attendance, (b) a diagnosis of LD and ADHD made in the community before entering the school, (c) IQ >80, and (d) English as the primary language. Data from standardised achievement tests indicated that 82% of the sample scored below the 25th percentile in reading, spelling and maths, and 72% scored below the 25th percentile on the WISC-IV Working Memory Index. Almost all were receiving psychostimulant medication.

Participants were allocated randomly to a Cogmed or maths training group. Working memory tests included digits forward and backward and spatial span, along with the D2 Test of Attention and the Working Memory Rating Scale. Transfer tests were the WRAT-4 Progress Monitoring Version tests, which include tests of reading, spelling, maths, and sentence comprehension. Parent and teacher ratings of attention and other symptoms of ADHD were also obtained.

Results showed that the Cogmed group performed better at post-test on the measures of backwards digit span and spatial span. No group differences were found for forwards digit span. Cogmed had no effect on teacher ratings of attention and behaviour. No effects were found for any of the academic measures.

Taken together, the data showed two important things. First, they added to the evidence that working memory scores are trainable. Second, and this is the most important point, improving working memory scores via Cogmed did not lead to any improvements in teacher- and parent-rated behaviour or in any academic skill relative to a group that received a maths intervention.

These conclusions are fairly consistent with the whole “brain training” (or as I call it, “neurobabble”) literature. Great claims are made by program developers about improvements in “brain function” but few gains are seen on real-life skills.


Errors in Wechsler Individual Achievement Test nonword reading subtest

May 24th, 2012

The Wechsler Individual Achievement Test – 2nd Ed (WIAT-II) is one of those tests that most psychologists and special education people have in their cupboard. Mostly this is because it is published by Psych Corp, who market it well, and because it covers a number of different curriculum areas, so people don't have to use/buy multiple tests.

I personally quite like the test. The norms are Australian and it does cover a range of skills/knowledge areas. I do have issues with the Listening Comprehension and Oral Expression subtests, and wish they had included both irregular- and non-word reading and spelling subtests. The other issue I have with the WIAT-II is that there are real words in the Pseudoword (nonword) Decoding subtest.

The test was developed in America, so its developers can perhaps be forgiven for thinking that 'kip' is a nonword. However, most English and Australian readers will recognise that a kip is a synonym for a quick nap.

Then we have the (non)words 'scaw' and 'nesal'. Neither is a legal word when written in this way. However, when 'sounded out', as the test requires kids to do, they produce the same names as 'score' and 'nestle'. Note that 'nesal' can also be said as /neesal/. There is also the nonword 'fum', which seems to be a perfectly good nonword. However, many children who present for reading assessment have speech articulation errors. Many of them will still say the /f/ sound for /th/; that is, they say 'fum' instead of 'thumb'. Hence, when they decode 'fum' (correctly) they think they are saying a real word, because /fum/ is how they say the real word 'thumb'. (Hope that makes sense.)

At face value this doesn't seem like a big problem because kids still have to use sub-lexical decoding to arrive at the (non)word name. However, it potentially affects the child's mental set. A mental set is the framework or strategy one uses to solve a problem. Before a child begins the task they are told that they are to read made-up words and that, even though they are made-up words, they can sound them out and say them like a real word. Part of any good strategy would be to monitor the name produced by decoding to make sure it is a nonword and, by definition, not a real word. Yet for some children (those with /f/-/th/ articulation issues) the very first item produces a real-word name. This has to have an effect on their mental set and potentially an adverse effect on task performance.

I don't quite understand how a company with as many resources as Psych Corp could have made these simple errors when there must be hundreds of legal nonwords that they could have used instead. One can only hope they will make changes for the 3rd edition.

Review of Success and Dyslexia

March 2nd, 2012

Perceptions of competence, contingency and control are all constructs related to coping and to emotional distress. Competence refers to the degree to which the individual believes himself or herself to be good (competent) at a task or skill. Contingency is the degree to which the individual believes that outcomes are contingent on their own behaviour. Control describes a general construct that measures how much control the individual feels in a given situation or for a given task. In the book Success and Dyslexia, authors Nola Firth and Erica Frydenberg take as their starting point that the coping strategies used by students who have dyslexia (and presumably other learning difficulties) tend to be passive and negative. These strategies typically betray a lower sense of competence, contingency and control than that of their higher-achieving peers. The authors recognise that passive and negative coping strategies not only adversely affect school performance but also have the potential to lead to emotional distress. To help alleviate this problem they have developed an 11-session coping program.

The program is designed for middle primary students and runs on two fronts: as a whole-class program and as additional small-group work for students who have dyslexia. Four sessions (1-3 and 5) are devoted to developing awareness of helpful and unhelpful coping strategies and to helping the students understand their own coping mechanisms. Session 4 is a goal-setting session in which students are encouraged to develop a realistic goal they want to achieve over the course of the program. Although the authors do not explicitly say so, presumably helping the student attain a goal through effort and active coping will help develop a sense of competence, improve perceived contingency and make it more likely that they will be motivated by opportunities to develop competence in the future.

Sessions 6-11 teach cognitive-behavioural principles to help students become aware of the link between cognitions (thoughts and images), emotion and behaviour. There is a large psycho-education component, plus skills training that some teachers and clinicians will be familiar with from social skills programs. For example, students are taught about positive self-talk and how to use assertive language and body language. While at times the authors seem to confuse the cognitive and behavioural components of the cognitive model, they have done well to reduce the core of the program to a distinction between helpful and unhelpful choices.

The book includes sufficient information and materials to allow a clinician or educator to run the program with reasonable fidelity. Other positives include the focus on the affective and strategic aspects of learning difficulties, which are too often ignored, and the way in which class teachers and non-LD students are involved directly in the therapy. Teachers are also provided with a list of useful accommodations for students who have dyslexia (pp. 7-8). Implementing these strategies alone would likely lead to better academic, emotional and behavioural outcomes. However, in my experience, professional development, including ongoing support from a skilled learning support coordinator, would be necessary to ensure successful application of the accommodations within the classroom.

On the downside, it was unfortunate that the authors felt the need to highlight the dubious practice of including an IQ test in the "diagnosis" of dyslexia (p. 6). It might also have been good to draw attention to the fact that the Success and Dyslexia program is not a substitute for intensive skills intervention. Given the current state of practice in Australia, one cannot presume that a student who gets to the middle primary grades as a poor reader has had the systematic intervention that we know can significantly reduce the effects of dyslexia. It is hard to imagine that a student will develop a strong sense of well-being when their perceptions of low competence and control are entirely rational. Indeed, most adults would leave a job that made them face the daily failure that students with dyslexia typically experience. Perhaps the Success and Dyslexia program would work best in combination with an intervention designed to directly improve academic skills.

Using scientific evidence to improve educational decisions

February 15th, 2012

I often find myself playing the role of Grumpy Old Man in conversations about the selection of intervention programs and other teaching practices. A statement along the lines of "but there's no evidence that it works" is often preceded by much face rubbing and hair pulling on my part. The response I hear most often is "but we see it working", which precipitates more face rubbing and hair pulling from yours truly. Perhaps the biggest barrier in these conversations is that teachers and scientists often have different definitions of what is meant by "evidence". This blog attempts to explain what scientists mean by the term evidence.

Evidence-based practice

The term evidence-based practice began in medicine. It seeks to maximise the accuracy of clinical decisions by basing them on evidence gathered via the scientific method. No one wants to see a doctor who prescribes a treatment just because they believe it works or because they heard a colleague give a presentation on it. There should be a burden of proof (and theoretically there is, although this burden doesn't exempt doctors from making mistakes) on a doctor to make treatment choices that have been shown to work significantly better than no treatment or, if there are alternative treatments available, to choose the one that works most effectively with the fewest side effects. These statements are axiomatic, but they aren't often applied to education, an area of at least equal importance to health.

What constitutes scientific evidence?

A very brief description of the scientific method in relation to treatments is as follows (a small simulated example of these steps is sketched after the list):

  1. Select two equivalent groups. If the groups don't share the same characteristics and the same level of skills before the intervention, you can't be certain whether any differences observed after the treatment were due to the treatment itself or to pre-existing differences between the groups.
  2. Gather pre-treatment data using well-validated and reliable instruments that clearly measure the outcome in question. For example, a comparison of a group reading program versus a one-on-one reading program should be evaluated with tests of reading ability, not of motor skill.
  3. Randomly allocate students to the groups. If students, parents or teachers actively select the groups, it compromises the results. For example, a recent study compared neurofeedback therapy for ADHD to a non-treatment group. The results showed that parent-report measures of ADHD symptoms improved in the neurofeedback group relative to the untreated group. However, because parents actively decided to enrol their child in the neurofeedback group, or actively decided not to, all these data show is that if parents believe that neurofeedback is going to work, they will report that it does work!
  4. Implement the treatment, making sure that it is delivered in the same way for everyone and that additional teaching or therapies are not going on at the same time.
  5. Administer post-tests to determine outcome.
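
To make these steps concrete, here is a small, purely hypothetical simulation written in Python. The group sizes, score distributions and the 5-point treatment effect are all invented for illustration and do not come from any real study; the sketch simply walks through random allocation, pre-testing, treatment and a between-group comparison of gains.

```python
# A toy randomised controlled trial, with invented numbers (illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Steps 1 and 3: one pool of students, randomly allocated to two groups.
# Random allocation is what makes the groups equivalent on average at pre-test.
n_students = 60
pre_scores = rng.normal(loc=85, scale=10, size=n_students)  # pre-treatment reading scores
order = rng.permutation(n_students)
treated, control = order[:30], order[30:]

# Step 2: pre-treatment data on the outcome of interest (reading, not motor skill).
pre_treated, pre_control = pre_scores[treated], pre_scores[control]

# Step 4: everyone develops and gets some attention; only the treated group
# also receives the intervention, which here has an invented 5-point effect.
natural_growth = rng.normal(loc=3, scale=4, size=n_students)
post_scores = pre_scores + natural_growth
post_scores[treated] += 5.0

# Step 5: post-test and compare the gains between groups.
gain_treated = post_scores[treated] - pre_treated
gain_control = post_scores[control] - pre_control
t, p = stats.ttest_ind(gain_treated, gain_control)

print(f"Mean gain, treated group: {gain_treated.mean():.1f}")
print(f"Mean gain, control group: {gain_control.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```

The point the simulation illustrates is that both groups improve (children develop, and attention effects apply to everyone), so only the difference in gains between the randomly formed groups tells us anything about the treatment itself.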

Additional points of note:

  1. Children's development is not static; they are in a constant state of improvement. Observations that students seem to improve over the course of a program or intervention are therefore mostly meaningless: all students will improve over time. The only way to tell whether a teaching method works is to compare a child (or, preferably, a group of children) to an equivalent group who receive a different teaching method.
  2. Placebo effects are large in children. They will often improve just because of the extra attention paid to them. Therefore, observations, or even more formal data, collected on a treated group in the absence of a group receiving an equal amount of attention are prone to error.
  3. Regression to the mean is a statistical artefact whereby extreme scores (e.g., low scores on a reading test) tend to move back towards the mean (average) on repeat testing. Hence, proponents of a brief intervention, say of two weeks, may claim that their treatment caused a student to improve from a score of 70 to 80 and that the change was significant, when in fact the improvement may simply reflect regression to the mean (see the simulation sketched after this list).
  4. Beware research using tests that measure what is taught in the treatment. See Merzenich et al. (1996) for an example.
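
The regression-to-the-mean point in particular is easy to demonstrate with a toy simulation. In the hypothetical sketch below (all numbers invented), the lowest scorers on a noisy screening test are simply retested with no intervention at all, and their average score rises anyway.

```python
# Regression to the mean with invented numbers (illustration only).
import numpy as np

rng = np.random.default_rng(seed=2)

n_students = 1000
true_ability = rng.normal(loc=100, scale=10, size=n_students)  # stable underlying skill

# Two noisy test administrations; nothing happens to the students in between.
test1 = true_ability + rng.normal(loc=0, scale=7, size=n_students)
test2 = true_ability + rng.normal(loc=0, scale=7, size=n_students)

# Select the students who scored very low on the first test,
# as an intervention study typically would.
selected = test1 < 80

print(f"Selected students, test 1 mean: {test1[selected].mean():.1f}")
print(f"Selected students, test 2 mean: {test2[selected].mean():.1f}")
# The retest mean is higher even though no treatment was given, because part of
# each extreme low score was unlucky measurement error that does not repeat.
```

This is exactly why a comparison group selected in the same way is needed: any group chosen for extreme low scores will appear to improve on retest, treatment or no treatment.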

An example of good data

Hatcher, Hulme, and Ellis (1994) compared four groups of children, all of whom had comparable reading difficulties. The children were randomly allocated to the four groups: one received phonological awareness training, one phonological awareness plus reading, one reading alone, and the last group received no treatment. Although the three treated groups received different treatments, they received the same amount of instruction in terms of time, attention and contact with teachers. Because the study controlled for these other variables, the authors were able to claim that the stronger reading growth in the phonological awareness plus reading group was due to that treatment being superior. These claims could not have been made if the students had not been randomly assigned to groups, if the groups had differed on some other variable before treatment, or if some other variable, such as the time spent on intervention, had differed between the groups.

An example of poor data

I recently told a teacher that their suggestion that a child with reading problems see a behavioural optometrist was not evidence-based. See here for a review of the evidence for visual therapy in reading difficulties. The teacher claimed that a student in a previous year had improved in reading while doing vision therapy and that they therefore believed vision therapy worked. Unfortunately, that belief simply cannot be supported by scientific principles. First, the improvement could have been a placebo effect. In fact, it is safe to assume that placebo effects account for at least part of all positive teaching outcomes for all students: paying attention to students helps them improve. It is probably equally likely that the child would have improved if they had recited the alphabet while standing on a wobble board each day. Second, children improve in almost all skills over the course of a year as a natural course of events. In this case, there is no way of being certain that the improvement wouldn't have occurred anyway. Finally, teachers obviously want students to improve, and history is full of examples where even eminent scientists have deluded themselves into believing something because they were keen for it to be true (see, e.g., the case of cold fusion). In summary, beware of using observations of single cases like this as the basis for educational decisions.

Types of evidence

Carter and Wheldall (2008) have proposed 5 levels of evidence that can be used to guide interpretation of educational research.

Level 1 

Level 1 programs or practices meet two criteria. First, they are consistent with existing scientific evidence in terms of current theory and recommended practice. Second, there is evidence of efficacy from a number of independent randomised controlled trials. Carter and Wheldall (2008) refer to Level 1 as the ‘gold standard’ and suggest that programs and practices meeting these criteria may be recommended with confidence.

The Hatcher et al. (1994) study described above is an example of gold-standard Level 1 evidence.

Level 2

Like programs or practices that meet Level 1 criteria, Level 2 programs or practices are consistent with existing scientific evidence in terms of current theory and recommended practice. They also have empirical evidence supporting their efficacy but the design of the studies may not quite meet the gold standard of a randomised controlled trial necessary for Level 1 rating. These programs represent the silver standard and can be recommended with reasonable confidence.

An example of a Level 2 program is my own Understanding Words reading intervention program. The data we have on Understanding Words is summarised briefly below.

  1. A clinic study showed that a group of students made significant improvements in response to two terms of Understanding Words teaching. The gains were strong and similar in size to the average growth seen in randomised trials reported in the literature. However, because the study didn't have a control group, we can't guarantee that the changes were the result of the intervention rather than of some other variable.
  2. A controlled study showed that a group of Grade 1 students with reading difficulties made significantly greater gains than a control group of average readers. In other words, the poor readers 'closed the gap' on the good readers as a result of the intervention. This study comes close to meeting the gold-standard criteria, except that the students were not randomly allocated to groups and the research was not independent of the program developer.
  3. We also have four studies using well-controlled case series designs. These studies showed that introduction of the Understanding Words treatment prompted increased reading growth in individual students compared to baseline periods in which no treatment or an alternative treatment was being provided.

Together, these studies fall short of the gold standard of Level 1 evidence, but the program fits into the Level 2 stratum based on its theoretical soundness and treatment-outcome data.

Level 3

Level 3 programs and practices make theoretical sense. These programs could be said to be based on evidence because there is often empirical data showing the effectiveness of the type of teaching contained in the program. However, there have been no scientific studies documenting the effectiveness of the program or practice itself. These programs might be used in the absence of an alternative that has stronger evidence. However, they should be used with caution. An example may be the ELF reading program. Arguably, ELF has a reasonably sound theoretical basis; however, there is no evidence beyond observations that the program works. Teachers and clinicians who want to be evidence-based practitioners would be cautious about selecting the program when there are other programs with stronger evidence bases.

Level 4

Level 4 programs are Not Recommended. There is little or no empirical evidence for their efficacy, and they often rely on testimonials and observational 'data' to support their claims. Examples include fish oil as a treatment for ADHD and behavioural optometry as a treatment for reading difficulties.

Level 5

Level 5 programs and practices represent those for which there is evidence that the program is unsafe or results in negative outcomes. These programs and practices should be avoided at all costs.

Summary

Teachers can do a lot of good by becoming evidence-based teachers. At present, most of teachers' professional reading involves practically-oriented periodicals or books rather than research-based, peer-reviewed journals (Rudland & Kemp, 2004). It has also been reported that regular and special education teachers regard the opinions of colleagues, workshops and in-service activities (which may present opinions with no evidence base) as more trustworthy than professional journals (Landrum, Cook, Tankersley, & Fitzgerald, 2002). Further, Boardman, Arguelles, Vaughn, Hughes, and Klinger (2005) reported that when making decisions about classroom implementation of practices, special education teachers did not consider it important that the practices be research-based. I suspect this is understandable, as practical strategies may seem more immediately applicable for teachers. However, it would be nice if these things began to change, as they have in medicine. Teachers could become more critical about the claims made by proponents of educational practices and more critical of their own teaching methods. They could begin by asking themselves what evidence there is to support the use of programs and practices. They could actively seek out that evidence from peer-reviewed sources rather than relying on books, the Internet or the opinions of colleagues presenting PD. They could ask themselves: "Would I be happy if my GP used a Level 3, 4 or 5 treatment on me just because s/he believed that it worked?" If not, they could ask another question: "Should I therefore be cautious in selecting educational programs that have limited evidence?"

Final note

The last thing I intended was for this blog to be interpreted as teacher-bashing. I could write a similar blog about some of my psychologist, occupational therapist, or paediatrician colleagues. Nor am I immune to human foibles and biases. However, the fact is that we should all strive to do better for students who have learning difficulties and indeed all children. To do that we need to recognise the dangers of our belief system and of relying on the opinions of peers. We all need to strive to base decisions on science, not on philosophy or pseudoscience. We would also do better to recognise the limits of our knowledge. To paraphrase Donald Rumsfeld, the best teachers (and other clinicians) know what they don't know.

Review of Early Literacy Foundations (ELF) program

February 14th, 2012

The Early Literacy Foundations (ELF; UQ, 2006) program is produced by the Speech Pathology and Occupational Therapy faculties at the University of Queensland. It is designed for "boosting a range of literacy skills in year one students" (p. 8). The program uses the term 'literacy' in a broad sense to encompass the skills of reading (at the word level), spelling and handwriting. It is designed as a withdrawal program for small groups, a feature that will be sure to grab the attention of cash-strapped learning support co-ordinators. The aim of the program is to "provide students with strategies to boost their literacy, including listening, spelling, reading, handwriting, and a range of the motor skills important for school participation" (p. 9).

The program consists of a resource manual and a theme book that provides instructions and student materials. The teacher instructions are clear and could be followed by a paraprofessional. Because the program was largely developed by speech pathologists, it is unsurprising that there is a strong emphasis on phonological awareness. There is also a strong emphasis on postural, sensory and motor skills, and here's where the first problem arises. It is true that motor coordination weaknesses co-exist with learning difficulties (e.g., Kaplan, Wilson, Dewey, & Crawford, 1998). However, far from 100% of students with reading difficulties have motor weaknesses, and there is no evidence that motor weaknesses are causal in reading difficulties. It is therefore strange that a literacy program would include a motor component. In this author's opinion, motor activities have no place in a reading program and it would be far better to select the students who have motor disorders for a separate program. The rest of this review will ignore the motor component of the program and focus on the 'literacy'.

Teaching is preceded by a screening test that consists of various phonological awareness activities, a spelling task and a nonword spelling task from the SPAT. I like the authors' suggestion to rank-order scores and select all students who score below the mean for intervention. This is unlikely to occur in the real world, but it shows the right intent.

The program has 12 "themes", each consisting of a number of activities. Together, the activities in each theme take approximately 1-1.5 hours to administer. If true, this means that students will receive a maximum of 18 hours of instruction. In reality, the amount of reading instruction will be even less, because a large part of the program involves motor skill activities. This seems light for an intervention program.

The phonological awareness component of ELF progresses through the developmental stages of this metalinguistic skill (e.g., Adams, 1990; Yopp, 1992). Themes 1 and 2 consist of rhyming, segmenting sentences into words, and syllabification activities. Themes 3 and 4 focus on onset-rime activities, while later themes focus on phoneme-level activities. Here's the next problem.

There is certainly evidence suggesting that phonological awareness is correlated with reading, and many draw the inference that it is involved in learning to read (e.g., Foorman, Francis, Novy & Liberman, 1991; Hatcher, Hulme & Ellis, 1994; see Snowling, 2000 for a review), but the case is far from proven (see Castles & Coltheart, 2004 for a review). There are cases of reading difficulties in which no phonological problems are present (Castles, 1996; Zoccolotti & Friedmann, 2010) and, despite popular belief, there is actually limited evidence showing that teaching phonological awareness has any additional benefit above and beyond teaching letter-sound conversion rules.

Even if one accepts that phonological awareness is a skill required for learning to read, the question becomes: how much phonological awareness is required? Many people agree that being able to segment and blend words of 4-5 phonemes is sufficient. This makes the phoneme manipulation, deletion and substitution activities in the later Themes of ELF somewhat redundant. It should be noted that the major concern is not that these activities are bad, but that they are unnecessary. Reading programs need to target reading and spelling skills, not distal factors like phonological awareness. Students need as many repetitions of letter-sound conversion rules, and as many decoding and spelling attempts using those rules, as teachers can possibly give them; excessive teaching of phonological awareness distracts from this essential requirement.

A positive is that ELF teaches letter sounds. The letter sequence is t, f, j, g, m, n, h, v, w, y, sh, th, ch, k, p, b, d, i, a, u, o, e, r, l, s and z. However, the sequence is somewhat odd with easily confused letters (e.g., p, d and b) taught together and low-frequency letter-sounds (e.g., v, w, y and z) taught before more frequently occurring letter-sound conversion rules.

It is not until Theme 5 that students begin spelling nonsense words using the letter-sound conversion rules. It is worth noting that the reading and spelling activities provide a limited number of repetitions compared to other intervention programs (e.g., Understanding Words and Minilit).

So I have a few problems with the program, but does it work? The answer is that we don’t know. There are no published peer-reviewed studies on effectiveness.

Finally, I was surprised to read that the authors recommended using the program in semester 2 of Grade 1 or even in Grade 2. They claim that this will give students the opportunity for six months of classroom instruction and some exposure to both phonics (a dangerous assumption) and handwriting. They provide no evidence for this suggestion and it seems an odd one; they are almost recommending a wait-and-see-who-fails approach. Surely an early literacy foundations program should target Prep/Kinder students, or at least start from the very beginning of Grade 1?

Conflict of Interest:

Craig Wright is the author of the Understanding Words reading intervention program.