
Cogmed improves working memory scores but not attention or academic skills

October 1st, 2012

ADHD and learning disabilities (LD) co-exist in many children. Many of these students have problems with working memory. Although a little crude, the best description I have for working memory is that it is a set of cognitive functions that help you keep your s@#* together while performing a complex task.

Working memory predicts parent ratings of inattentive behaviours and has been found to be below average in LD and ADHD samples in a large number of studies. It has also been shown to be a predictor of academic success. Previously it had been thought that working memory was a fixed trait. However, recent evidence (e.g., Klingberg, 2010; Klingberg et al., 2005) suggests that it can be modified via a computerised memory training program called Cogmed. Note that the claim has been that Cogmed improves working memory, when all we can really conclude is that a period of training using the program leads to improved scores on tests of working memory. There is an important difference between these two claims.

While there may be evidence for improved working memory scores there is limited evidence of transfer to important functional skills. The sort of transfer one would want to see includes reductions in symptoms of ADHD and improved academic performance in students who have ADHD/LD.

A recent study (Gray et al., 2012) in Journal of Child Psychology & Psychiatry performed a randomised controlled trial on the Cogmed program. 60 adolescents (age 12-17) were recruited from a residential school for students with severe LD and ADHD. Inclusion criteria were (a) full time attendance, (b) diagnosis of LD and ADHD made in the community before entering the school, (c) IQ >80, and (d) English as primary language.  Data from standardised achievement tests indicated that 82% of the sample scored <25th percentile in reading, spelling and maths. 72% of the sample were <25th percentile on the WISC-IV Working Memory Index. Almost all were receiving psychostimulant medication.

Participants were allocated randomly to a Cogmed or maths training group. Working memory tests included digits forward and backward and spatial span, the D2 Test of Attention and the Working Memory Rating Scale. Transfer tests were the WRAT-4 Progress Monitoring Version tests, which includes tests of reading, spelling, maths, and sentence comprehension. Parent and teacher ratings of attention and other symptoms of ADHD were also obtained.

Results showed that the Cogmed group performed better at post-test on the measures of backwards digit span and spatial span. No group differences were found for forwards digit span. Cogmed had no effect on teacher ratings of attention and behaviour. No effects were found for any of the academic measures.

Taken together, the data showed two important things. First, they added to the evidence that working memory is trainable. Second, and this is the most important point, improving working memory via Cogmed did not lead to any improvements in teacher- and parent-rated behaviour or to improvements in any academic skill relative to a group who received maths intervention.

These conclusions are fairly consistent with the whole “brain training” (or as I call it, “neurobabble”) literature. Great claims are made by program developers about improvements in “brain function” but few gains are seen on real-life skills.


When smile becomes slime: Subtypes of dyslexia

September 22nd, 2012

The dyslexia research world has many divisions. One of those divides is between those who think that subtypes of dyslexia exist and those who consider ‘subtypes’ to be individual variation arising from a core deficit in phonological processing (see Stanovich et al., 1997; Thomson, 1999).

Pure subtypes exist in acquired dyslexia, which occurs following an acquired brain injury (ABI). ABIs can produce clear dissociations in reading skill in previously good readers, producing numerous subtypes. These include but are not limited to:

  1. Surface dyslexia – the non-lexical reading route (the pathway we use for ‘sounding out’) is spared and the patient can therefore read nonwords and regular words. However, access to the orthographic lexicon (our ‘word dictionary’) is impaired resulting in problems with reading irregular words.
  2. Phonological dyslexia – Lexical access is spared so the patient can read words they had in their ‘word dictionary’ before the ABI. In contrast, they can’t read new words or nonwords because the non-lexical (sounding out) pathway is impaired.
  3. Mixed dyslexia – a combination of surface and phonological dyslexia.
  4. Deep dyslexia – difficulty reading nonwords and the presence of semantic reading errors. For example, a patient might read ‘orchestra’ as symphony or ‘river’ as ocean (see Plaut & Shallice, 1993).
  5. Attentional dyslexia – a deficit in which letters migrate between neighbouring words but are correctly identified and keep their relative position within the word. For example, ‘fig tree’ can be read as fig free or even tie free.
  6. Neglect dyslexia – results in omissions or substitutions of letters at the beginning or end of words depending on whether the patient has left or right neglect.
  7. Letter position dyslexia – the patient predominantly makes errors of letter migration within words, such as reading ‘board’ as broad or ‘pirates’ as parties. The migrations occur for medial letters within the word.

While these pure subtypes certainly exist in acquired dyslexia, as I mentioned, there is considerable controversy about whether they exist in developmental cases. To my mind, they can be found, but they are unlikely to be as pure as in acquired dyslexia. For example, I recently saw an 11-year-old who read irregular and regular words well but had great difficulty reading nonwords. This pattern indicates that the lexical pathway works quite nicely but that he has deficits in the non-lexical pathway. However, he could, of course, read some nonwords, which is not what one usually sees in a pure case of acquired phonological dyslexia.

One really interesting subtype of developmental dyslexia, previously identified in Hebrew and Arabic but never before in English, has been reported by Kohnen et al. (2012). Letter position dyslexia is a peripheral dyslexia in which the major reading errors are migrations of middle letters within words (e.g., ‘smile’ is read as slime). Affected children make no more letter-identity errors (e.g., reading ‘form’ as farm) or between-word migrations (e.g., ‘dark part’ read as park dart) than expected for their age. Using various tasks, Kohnen et al. showed that the problem is specific to the letter-position encoding function in the visual analyser part of the word-reading system.
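To make the error taxonomy concrete, here is a minimal sketch of how one might classify a reading response as a within-word letter-position migration. The helper `is_letter_position_error` is my own illustrative invention, not a task or scoring rule from Kohnen et al.:

```python
def is_letter_position_error(target: str, response: str) -> bool:
    """True if the response is the target with only internal letters migrated:
    same letters overall, same first and last letter, but a different word."""
    return (
        target != response
        and len(target) == len(response)
        and sorted(target) == sorted(response)
        and target[0] == response[0]
        and target[-1] == response[-1]
    )

# Classic letter-position migrations keep the outer letters in place:
print(is_letter_position_error("smile", "slime"))      # True
print(is_letter_position_error("board", "broad"))      # True
print(is_letter_position_error("pirates", "parties"))  # True
# A letter-identity error ('form' read as farm) changes a letter, so it fails:
print(is_letter_position_error("form", "farm"))        # False
```

This also illustrates why ‘smile’/‘slime’ is such a neat diagnostic pair: the two words share every letter, differing only in medial letter order.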

I suspect that children who have weaknesses in the peripheral parts of the reading system, as in letter position dyslexia, or in the non-lexical pathway, as in the case of the 11-year-old I mentioned above, are quite common in the population. We just don’t see them much because they probably still read ‘okay’ and aren’t referred to specialist clinics as a result. It is probably only in the odd case where the child has severe spelling problems or similar that they present to a clinician skilled enough to identify the problem.

Regarding letter position dyslexia, we currently don’t have normed tests that allow us to accurately measure migration errors. I wait on the wonderful Saskia Kohnen to produce such a test; a feat she will inevitably accomplish sometime soon.


Coloured overlays and glasses as a treatment for dyslexia

September 09th, 2012

I probably have a thing variously called Irlen-Meares syndrome, scotopic sensitivity syndrome, visual stress or visual discomfort. I use the term visual discomfort in deference to my old PhD supervisor Liz Conlon (my PhD is old, not Liz), who is a leader in the field although she hardly ever gets cited. Here are some of her papers, you bums! (Conlon et al., 1999; Conlon, Sanders, & Wright, 2009; Conlon & Humphreys, 2001; Conlon et al., 1998; Conlon & Sanders, 2011).

What is visual discomfort?

No one really knows what visual discomfort is. My own view, somewhat consistent with the literature, is that it is an abnormal sensory sensitivity to stimuli of high contrast, low spatial frequency and/or high temporal frequency. Black text on a white page is an example of a high-contrast stimulus. Single-spaced text in a small font is an example of a low spatial frequency stimulus. One of the reasons universities require assignments to be typed in 12-point font, double spaced, is that double-spaced text is more comfortable to read than single-spaced text. High temporal frequency stimuli are characterised by rapid flashing. Think of the rapid flicker emitted by strobe lights or a fluorescent bulb. These stimuli lead to excessive neuronal firing that can cause perceptual distortions or, in my experience, simply make the stimuli uncomfortable to be around.

My own visual discomfort manifests as light sensitivity: too much time in harsh sunlight sans sunglasses results in eye strain and a headache (although this may be psychosomatic, as I spent many years as a kid surfing and playing cricket sans sunglasses with no ill effects). I dislike fluorescent lights, which unfortunately light our clinic offices. One office has a bulb that runs directly along my left eye line as I sit in the therapy chair. After a heavy day of consulting I can actually feel a “buzzing”, and some days a bad headache, in the part of my head that seems to match where the light runs. Again, this may be neurotic, but needless to say I attempt to avoid this room. I had trouble with the old CRT computer screens at university; LCD screens were still rare back then. Flashing lights drive me nuts. Laser shows, strobe lights at concerts, and my 3-year-old son’s flashing Batman toothbrush (a light on the brush flashes, not Batman) all make me a little more grumpy than usual.

Visual discomfort, reading and dyslexia

There is a theory that visual discomfort causes print to become distorted, which in turn affects word reading and comprehension. Visual discomfort is also claimed to affect reading efficiency such that sufferers can only read for short periods and are prone to reading-related headaches.

Visual discomfort has been reported to be more prevalent in dyslexic populations. However, the relationship between dyslexia and visual discomfort remains controversial. Visual complaints are made by many healthy people and visual discomfort also exists in skilled readers (I’m an example). That visual discomfort exists in skilled readers makes a nonsense of claims that it is a form of “visual dyslexia”. Dyslexia and visual discomfort are separate conditions.

Visual discomfort, dyslexia and coloured overlays/lenses

Coloured overlays or lenses are a common treatment for visual discomfort (Allen, Gilchrist and Hollis, 2008; Wilkins, 1995, 2003; Wilkins, Huang and Cao, 2004). Coloured overlays are thin transparent coloured films that are placed over a page of text. They are designed to colour the page without affecting clarity of the text.

The evidence for whether coloured overlays improve reading is mixed. Much of the existing data published in “scientific” journals is plagued by methodological concerns, including no controls on other therapies/interventions or poorly matched intervention groups. Of the better studies, Singleton and Trotter (2005) used undergraduate students with (n = 10) and without (n = 10) dyslexia. Each group had 5 students with high visual discomfort (HVD) scores and 5 with low visual discomfort (LVD) scores. All participants read faster using their chosen overlay. The dyslexics with HVD scores made significant gains in reading speed with an overlay while the other groups made non-significant changes (gains of 3-4%). Singleton and Henderson (2007) showed that children (6-14 yoa) made greater improvement in reading rate with coloured overlays relative to reading-age matched controls. In contrast, Ritchie, Della Sala and McIntosh (2011) reported on 61 children (7-12 yoa) with reading difficulties (77% were diagnosed by an Irlen diagnostician as having visual discomfort). There was limited evidence that individually prescribed Irlen coloured overlays had any immediate benefit for reading rate.

A recent study from the lab of respected reading scientist Maggie Snowling investigated the effect of coloured overlays in a well-designed experiment. They took 26 controls and 16 people with dyslexia, all undergraduate students, matched for IQ. Both groups were tested on two reading tests. The Wilkins Rate of Reading Test (WRRT) measures the impact of overlays on reading. The WRRT requires speeded oral reading of a passage of text comprising 15 high-frequency words (familiar to children from 7 years) that are repeated in random order, ensuring that no word can be guessed from the context. The test was administered with and without the chosen overlay placed over the text to test for an immediate benefit in reading rate. Reading rate was calculated as the number of words read correctly per minute (wpm), not including errors, omitted words and omitted lines. They also used two passages adapted from the secondary school edition of the York Assessment of Reading for Comprehension (YARC). Passage 1 consisted of 311 words, and Passage 2 consisted of 302 words. Five comprehension questions followed each passage.
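For readers unfamiliar with this kind of scoring, the WRRT rate is simply the count of correctly read words rescaled to a per-minute figure. A minimal sketch (my own illustration, not code from the study):

```python
def words_per_minute(words_correct: int, seconds: float) -> float:
    """Reading rate as correct words per minute. Errors, omitted words and
    omitted lines are excluded from words_correct before calling this."""
    return words_correct * 60.0 / seconds

# e.g. 95 words read correctly in exactly one minute, then in 76 seconds:
print(words_per_minute(95, 60))  # 95.0
print(words_per_minute(95, 76))  # 75.0
```

This is why the measure is sensitive to small speed changes: a few seconds saved over a one-minute passage shifts the wpm score directly.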

Both groups read more words per minute in the Overlay versus No Overlay condition. The group with dyslexia showed marginally greater gains relative to controls. However, these data need to be interpreted with a healthy dose of salt because the dyslexics were slower readers to begin with and therefore had more room to improve.

When reading real text (YARC), there was an effect for Group on passage reading time. Unsurprisingly, the dyslexic group was slower than controls in both Overlay and No Overlay conditions. But there was no effect for Overlay (the overlay made no difference to reading rate for either group), nor a Group by Overlay interaction (there was no relative advantage for the dyslexic group in the Overlay condition v No Overlay). Reading comprehension scores did not change in either group as a result of using an overlay. These data are consistent with those reported by Ritchie, Della Sala and McIntosh (2011) in children. They suggest that coloured overlays are not as effective as claimed for improving reading accuracy or fluency.


Extra large letter spacing improves reading in dyslexia. Or does it?

June 12th, 2012

High prevalence, high impact disorders like dyslexia are prone to sensational claims from quacks, scientists, journal editors and journalists. The latest is a sensational claim from an article in the high-profile journal Proceedings of the National Academy of Sciences (PNAS) that increasing letter spacing “ameliorated the reading performance of dyslexic children” (Zorzi et al., 2012).

The popular media have picked up these claims. For example, the ABC quoted the lead author Marco Zorzi as saying: “Our findings offer a practical way to ameliorate dyslexics’ reading achievement without any training”. But are the claims fact or fiction?

The idea for the study seems to have been grounded in effects of crowding in dyslexia (see Martelli et al., 2009). Crowding occurs when stimuli in the periphery interfere with processing of stimuli in the focus of vision. I am not an expert in this aspect of the dyslexia literature and perhaps someone else may comment. However, my non-expert eye suggests two problems with this literature. First, almost all studies (see here for an exception) have used word and/or letter stimuli, which confounds reading ability and ‘crowding’ effects. Second, most studies have used age-matched controls rather than reading-age matched controls, which leaves open the possibility that the effects on crowding tasks are the consequence of poor reading rather than the cause.

For the purposes of this post, let’s accept that crowding affects people with dyslexia more than good readers. Zorzi et al. (2012) predicted that widening the space between letters in words would decrease the effects of crowding and lead to better reading. They tested this idea in Italian and French people diagnosed with dyslexia (aged 8-14 years). The children had to read 24 short meaningful sentences taken from the Test for the Reception of Grammar. Print was Times New Roman 14-point. One reading condition had normal spacing between letters and the other had letter-spacing 2.5 points greater than normal (normal letter spacing is 2.7 pt in normal text; who knew?). Why they didn’t use single words and nonwords instead of the text-reading task is unclear given that dyslexia is widely acknowledged to be a deficit in single-word reading. People with dyslexia read better in context than they read words in lists (see here). Surely if crowding was the/a cause of dyslexia we would see it more in reading of word lists than stories, and if increasing letter-spacing improved reading in dyslexia we would see larger effects in a single-word task?

Anyway… the results of one experiment showed that both the French and Italian groups with dyslexia made fewer reading errors in the condition in which letter-spacing was greater than normal. However, that on its own tells us nothing other than that doing something led to fewer errors. It doesn’t say that the specific manipulation (increased letter-spacing) was the key factor. It may be that chewing gum while reading does the same thing. Zorzi et al recognised this and suggested that if crowding really does affect reading and extra letter-spacing reduces crowding effects, it is more important to show that people with dyslexia make fewer errors in the increased letter-spacing condition than reading-age matched controls. This they attempted to do in a second experiment.

The data from Experiment 2 (Zorzi et al) are shown in the figure below. Zorzi et al claimed that the increased letter-spacing condition improved the reading (i.e., they made fewer errors) of their French and Italian groups with dyslexia compared to the reading-age matched controls. These data are what the sensational claims reported in the media are based on. The problem is that their ‘control’ group was not of the same reading age. Groups with the same reading ability should perform equally in the normal-spaced condition. The figure below shows that this was not the case. The “reading-age matched controls” were significantly better readers in the first place.
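The baseline-equivalence point can be made concrete: “reading-age matched” groups should show no reliable difference in the normal-spacing condition, something that can be checked with a two-sample test such as Welch’s t. A sketch with entirely made-up error counts (not Zorzi et al.’s data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical baseline (normal-spacing) error counts. If the 'matched'
# controls already make far fewer errors, the groups were never equivalent
# and any group-by-condition comparison is confounded from the start.
dyslexic = [18, 22, 15, 20, 19, 24, 17, 21]
controls = [8, 10, 7, 9, 11, 6, 10, 8]
print(round(welch_t(dyslexic, controls), 2))  # a large |t| signals non-equivalence
```

A non-significant baseline difference is the minimum one should demand before interpreting a differential benefit of the spacing manipulation.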

 

What does this methodological flaw mean? Essentially it means that the claims Zorzi et al (or at least the media) made cannot be supported by the data. Using a group of younger children who were already better readers than the people with dyslexia is irrelevant to the research question. It leaves a data set that is subject to the same criticism as their first experiment. That is, it tells us nothing about the specific manipulation (increased letter-spacing) and it remains possible that any experimental manipulation, including the ridiculous like chewing gum, produces the same results.

Furthermore, it is possible, indeed likely in my view, that the reason the children in the “reading-age matched control” group did not improve as much in the increased-spacing condition is that they didn’t have much room to improve. They were already good readers and were at ceiling on the test. It is unlikely that any kind of experimental manipulation will make a good reader a better reader. Which leads me to my suggestion for replicating this study. Don’t replicate it!

I can’t see how using either age- or reading-age matched controls (i.e., good readers) will allow adequate testing of the hypothesis that increased letter-spacing results in improved reading ability in people with dyslexia, because of the point I made above: it is unlikely that any kind of experimental manipulation will make a good reader a better reader. In my view, the next piece of research will need to use equivalent groups of people with dyslexia, one of which receives the extra letter-spacing manipulation and one of which does not. It is also worth noting that recent research has shown that the effects of another visual manipulation (coloured overlays) on reading ability are not reliable on repeat testing (Henderson et al., 2012), so any future research should probably run the test multiple times for each condition. Finally, if the research is conducted in English, it would be interesting to see if increased letter-spacing changes error rates (for better or worse) for words that involve single grapheme-to-phoneme correspondences compared to those that have digraphs (e.g., chip and rain) or trigraphs (e.g., slight). It might also be interesting to see if increased letter-spacing reduces errors for words in which letter-position errors can occur (e.g., trail-trial).

Until we see these data I’m keeping my ink dry.

 

NAPLAN and learning difficulties Part II

June 11th, 2012

In a recent blog post I identified what I saw as problems with the Australian literacy and numeracy testing (NAPLAN) for grades 3, 5, 7 and 9. A number of colleagues have questioned my position and, being the “social phobe” that I am, I am compelled to clarify my position.

There is certainly a lobby opposed to a national literacy and numeracy test full stop but I’m not buying their arguments. A national literacy and numeracy test has many advantages. We know that teachers are not great at ranking student reading achievement for example (see Madelaine & Wheldall, 2007). An objective test may be helpful in identifying struggling students who would otherwise not be identified if we relied on teacher judgment alone. A standardised test can also allow us to see how literacy and numeracy standards are changing across time. For example, a Grade 3 cohort with good maths skills who became mediocre by Grade 5 might highlight a need for better maths instruction in Grades 4 and 5.

What I am arguing is that NAPLAN in its current form fails in two important ways.

First, it begins in Grade 3, by which stage most children who are going to fail have already failed. This is a problem because early intervention is crucial for children who have learning difficulties. If we take reading as the example, the effects of intervention halve after Grade 1. For example, the US National Reading Panel report (NICHHD, 2000) reported that the mean effect size for systematic phonics was d = 0.56 in kindergarten, d = 0.54 in first grade, and d = 0.27 in grades 2-6. Clearly we have to get in early. One might argue that schools have early identification and intervention in hand before Grade 3 NAPLAN. I strongly suspect this isn’t the case in most schools. I recently published a book chapter that looked at the growth in reading skills of a group of 61 poor readers and 52 good readers over the course of a school year. All poor readers were engaged in learning support interventions of some description. The outcome was that only one of the 61 children who were poor readers at the beginning of the year made meaningful growth in reading skills. All the others essentially stayed at the same level. Figure 1 below, taken from Wright and Conlon (2012), shows standard scores from the Woodcock Basic Reading Skills Cluster (a combination of nonword and real-word reading skills) taken at the beginning of each of the four school terms. The age-standardised scores of controls (good readers) didn’t change, as one would expect. Unfortunately, the same thing occurred for the poor readers: if they were a poor reader at the beginning of the year they remained so at the end of the year. The conclusion was the same as that of Denton et al. (2003): normal school learning support services often lack the specificity and intensity to make a meaningful change in the reading skills of struggling readers. That is, they do little to “close the gap”.
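For readers unfamiliar with effect sizes, the d values quoted are Cohen’s d: the difference between group means expressed in pooled standard deviation units. A sketch with illustrative scores (my own numbers, not the NRP data):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Illustrative post-test reading scores for a phonics group and a comparison group:
phonics = [104, 110, 98, 107, 102, 109]
comparison = [97, 101, 95, 100, 96, 99]
print(round(cohens_d(phonics, comparison), 2))  # a large effect, around 1.9 here
```

By the usual conventions, d = 0.56 (kindergarten phonics) is a medium effect, while d = 0.27 (grades 2-6) is small, which is the quantitative heart of the “get in early” argument.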

Figure 1 (from Wright & Conlon, 2012). Poor reader and control group means and standard deviations (standard scores with mean of 100 and SD of 15) on the Basic Reading Cluster at the beginning of each of the four school terms.

 

The second problem with NAPLAN in its current form has been discussed in the previous post. That is, the test format does not provide data that helps teachers identify what specific parts of the reading, writing and numeracy (and arguably language) processes are going wrong and, most importantly, does not provide data that on its own allows design of effective interventions. See also Kerry Hempenstall’s comments.

The answer may lie in the quality response-to-intervention (RTI) approach for Australian schools that I admitted yearning for in my previous post. I would like to see every Kindergarten/Prep teacher employ the very best methods for the teaching of language, reading, spelling and maths skills/knowledge. A sample of children should be subject to weekly tests on curriculum-based measures of the above skills. Estimates of normal growth rates can then be obtained. Every child in Kindy/Prep should then be assessed on these measures weekly and their growth plotted. Any child with a lower-than-average growth rate should be hit with extra instruction. These children should again be assessed at the beginning of Grade 1 on standardised tests and, if they are still behind their peers, should be hit with another round of intervention using a systematic program (see examples here and here). A NAPLAN style test in May-June of Grades 1, 3, 5, 7 and 9 can then be used to ensure that these children maintain their gains and to identify any children missed by the previous procedure.
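The weekly-monitoring logic above amounts to fitting a growth slope per child and flagging anyone growing more slowly than the cohort norm. A minimal illustration (the function names, the probe values and the 0.5 margin are my own assumptions, not an established RTI standard):

```python
def weekly_slope(scores):
    """Least-squares slope of scores regressed on week number (0, 1, 2, ...)."""
    n = len(scores)
    mx, my = (n - 1) / 2, sum(scores) / n
    num = sum((w - mx) * (s - my) for w, s in enumerate(scores))
    den = sum((w - mx) ** 2 for w in range(n))
    return num / den

def flag_for_extra_instruction(child_scores, cohort_mean_slope, margin=0.5):
    """Flag a child whose weekly growth falls well below the cohort norm."""
    return weekly_slope(child_scores) < cohort_mean_slope - margin

# Hypothetical words-correct-per-minute probes over 8 weeks:
growing = [10, 12, 13, 15, 16, 18, 19, 21]   # roughly 1.5 wpm/week
flat    = [10, 10, 11, 10, 11, 10, 11, 11]   # roughly 0.1 wpm/week
print(flag_for_extra_instruction(growing, cohort_mean_slope=1.2))  # False
print(flag_for_extra_instruction(flat, cohort_mean_slope=1.2))     # True
```

The point of plotting slopes rather than single scores is that a flat trajectory is visible within weeks, rather than waiting until a Grade 3 test confirms the child has already failed.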