
Systematic failure leads to unacceptable prevalence of dyslexia and other reading problems

December 22nd, 2012

The release of international literacy results and Australia’s poor performance (27th out of 45 countries in year 4 reading, the lowest of any English-speaking nation) has prompted leading academics and clinicians to write an open letter to Federal and State Ministers of Education. The letter points to problems with the way teachers are trained to teach reading; that is, they are not. It urges Ministers to take the vast scientific literature on reading and reading difficulties into account when making policy. We can but hope.

The letter and a companion piece from The Australian newspaper are attached below. Here is a link to Jennifer Buckingham’s excellent op-ed in The Financial Review on the same topic.

Open Letter to all Federal and State Ministers of Education

A decade of lost action on literacy – The Australian

Please like the Understanding Minds page on Facebook


When smile becomes slime: Subtypes of dyslexia

September 22nd, 2012

The dyslexia research world has many divisions. One of those divides is between those who think that subtypes of dyslexia exist and those who consider ‘subtypes’ to be individual variation arising from a core deficit in phonological processing (see Stanovich et al., 1997; Thomson, 1999).

Pure subtypes exist in acquired dyslexia, which occurs following an acquired brain injury (ABI). ABIs can produce clear dissociations in the reading skills of previously good readers, yielding numerous subtypes. These include, but are not limited to:

  1. Surface dyslexia – the non-lexical reading route (the pathway we use for ‘sounding out’) is spared and the patient can therefore read nonwords and regular words. However, access to the orthographic lexicon (our ‘word dictionary’) is impaired resulting in problems with reading irregular words.
  2. Phonological dyslexia – Lexical access is spared so the patient can read words they had in their ‘word dictionary’ before the ABI. In contrast, they can’t read new words or nonwords because the non-lexical (sounding out) pathway is impaired.
  3. Mixed dyslexia – a combination of surface and phonological dyslexia.
  4. Deep dyslexia – difficulty reading nonwords and the presence of semantic reading errors. For example, they might read ‘orchestra’ as symphony or ‘river’ as ocean (see Plaut & Shallice, 1993).
  5. Attentional dyslexia – a deficit in which letters migrate between neighbouring words but are correctly identified and keep their relative position within the word. For example, ‘fig tree’ can be read as fig free or even tie free.
  6. Neglect dyslexia – results in omissions or substitutions of letters at the beginning or end of words depending on whether the patient has left or right neglect.
  7. Letter position dyslexia – the patient predominantly makes errors of letter migration within words, such as reading ‘board’ as broad or ‘pirates’ as parties. The migrations occur for medial letters within the word.

While these pure subtypes certainly exist in acquired dyslexia, as I mentioned, there is considerable controversy about whether they exist in developmental cases. To my mind, they can be found, but they are unlikely to be as pure as in acquired dyslexia. For example, I recently saw an 11-year-old who read irregular and regular words well but had great difficulty reading nonwords. This pattern indicates that the lexical pathway works quite nicely but that he has deficits in the non-lexical pathway. However, he could, of course, read some nonwords, which is not what one usually sees in a pure case of acquired phonological dyslexia.
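
For readers who like their models concrete, here is a minimal sketch of the dual-route architecture described above. The mini-lexicon and letter-sound "rules" are my own inventions for illustration (real computational models are far richer); the point is simply that "lesioning" one route or the other reproduces the broad pattern: knock out the lexical route and irregular words get regularised, knock out the non-lexical route and nonwords become unreadable.

```python
# A toy dual-route reader. The lexicon and letter-sound rules below are
# invented for illustration, not taken from any published model.

LEXICON = {"mint": "/mint/", "pint": "/pYnt/", "yacht": "/yot/"}  # stored pronunciations

RULES = {"a": "a", "c": "k", "h": "h", "i": "i", "m": "m",
         "n": "n", "p": "p", "s": "s", "t": "t", "u": "u", "y": "y"}

def sound_out(word: str) -> str:
    """Non-lexical route: apply letter-sound rules left to right."""
    return "/" + "".join(RULES.get(ch, "?") for ch in word) + "/"

def read_aloud(word: str, lexical_ok: bool = True, nonlexical_ok: bool = True):
    """Read a word using whichever routes are intact."""
    if lexical_ok and word in LEXICON:
        return LEXICON[word]            # familiar word: retrieve its pronunciation
    if nonlexical_ok:
        return sound_out(word)          # unfamiliar word, or lexical route lesioned
    return None                         # neither route available: no response

# 'Surface dyslexia' (lexical route lesioned): irregular words are regularised.
print(read_aloud("pint", lexical_ok=False))      # /pint/ -- now rhymes with 'mint'
# 'Phonological dyslexia' (non-lexical route lesioned): nonwords fail.
print(read_aloud("nust", nonlexical_ok=False))   # None -- not in the lexicon
print(read_aloud("nust"))                        # /nust/ -- an intact reader sounds it out
```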

One really interesting subtype of developmental dyslexia, previously identified in Hebrew and Arabic but never before in English, has been reported by Kohnen et al. (2012). Letter position dyslexia is a peripheral dyslexia in which the major reading errors are migrations of middle letters within words (e.g., ‘smile’ is read as slime). Affected children make no more letter identity errors (e.g., reading ‘form’ as farm) or between-word migrations (e.g., ‘dark part’ read as park dart) than expected for their age. Using various tasks, Kohnen et al. showed that the problem is specific to the letter-position encoding function in the visual analyser part of the word-reading system.
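
As a rough illustration of what counts as a letter-position error, here is a small sketch. This is my own operationalisation, not Kohnen et al.’s scoring procedure: it flags a reading response as a within-word migration when it uses exactly the same letters as the target and preserves the first and last letters, with only the medial letters reordered.

```python
def is_letter_position_migration(target: str, response: str) -> bool:
    """True if `response` is `target` with medial letters migrated:
    same letters, same first and last letter, different internal order."""
    return (
        len(target) == len(response)
        and target != response
        and target[0] == response[0]
        and target[-1] == response[-1]
        and sorted(target) == sorted(response)
    )

print(is_letter_position_migration("smile", "slime"))      # True
print(is_letter_position_migration("board", "broad"))      # True
print(is_letter_position_migration("pirates", "parties"))  # True
print(is_letter_position_migration("form", "farm"))        # False: a letter identity error
```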

I suspect that children who have weaknesses in the peripheral parts of the reading system, as in letter position dyslexia, or in the non-lexical pathway, as in the case of the 11-year-old I mentioned above, are quite common in the population. We just don’t see them much because they probably still read ‘okay’ and aren’t referred to specialist clinics as a result. It is probably only in the odd case, where the child has severe spelling problems or similar, that they present to a clinician skilled enough to identify the problem.

Regarding letter position dyslexia, we currently don’t have normed tests that allow us to accurately measure migration errors. I wait on the wonderful Saskia Kohnen to produce such a test, a feat she will inevitably accomplish sometime soon.



Coloured overlays and glasses as a treatment for dyslexia

September 9th, 2012

I probably have a thing variously called Irlen-Meares syndrome, scotopic sensitivity syndrome, visual stress or visual discomfort. I use the term visual discomfort in deference to my old PhD supervisor Liz Conlon (my PhD is old, not Liz), who is a leader in the field although she hardly ever gets cited. Here are some of her papers, you bums! (Conlon et al., 1999; Conlon, Sanders, & Wright, 2009; Conlon & Humphreys, 2001; Conlon et al., 1998; Conlon & Sanders, 2011).

What is visual discomfort?

No one really knows what visual discomfort is. My own view, somewhat consistent with the literature, is that it is an abnormal sensory sensitivity to stimuli of high contrast, low spatial frequency, and/or high temporal frequency. Black text on a white page is an example of a high contrast stimulus. Single-spaced text in a small font is an example of a low spatial frequency stimulus. One of the reasons universities require assignments to be typed in 12-point font, double spaced, is that double-spaced text is more comfortable to read than single-spaced. High temporal frequency stimuli are characterised by rapid flashing; think of the rapid flicker emitted by strobe lights or a fluorescent bulb. These stimuli lead to excessive neuronal firing that can produce perceptual distortions or, in my experience, simply make the stimuli uncomfortable to be around.

My own visual discomfort manifests as light sensitivity: too much time in harsh sunlight sans sunglasses results in eye strain and a headache (although this may be psychosomatic, as I spent many years as a kid surfing and playing cricket sans sunglasses with no ill effects). I dislike fluorescent lights, which unfortunately light our clinic offices. One office has a bulb that runs directly along my left eye line as I sit in the therapy chair. After a heavy day of consulting I can actually feel a “buzzing”, and on some days a bad headache, in the part of my head that seems to match where the light runs. Again, this may be neurotic, but needless to say I attempt to avoid this room. I had trouble with the old CRT computer screens at university; LCD screens were still rare back then. Flashing lights drive me nuts. Laser shows, strobe lights at concerts, and my 3-year-old son’s flashing Batman toothbrush (a light on the brush flashes, not Batman) all make me a little grumpier than usual.

Visual discomfort, reading and dyslexia

There is a theory that visual discomfort causes print to become distorted, which in turn affects word reading and comprehension. Visual discomfort is also claimed to affect reading efficiency, such that sufferers can only read for short periods and are prone to reading-related headaches.

Visual discomfort has been reported to be more prevalent in dyslexic populations. However, the relationship between dyslexia and visual discomfort remains controversial. Visual complaints are made by many healthy people and visual discomfort also exists in skilled readers (I’m an example). That visual discomfort exists in skilled readers makes a nonsense of claims that it is a form of “visual dyslexia”. Dyslexia and visual discomfort are separate conditions.

Visual discomfort, dyslexia and coloured overlays/lenses

Coloured overlays or lenses are a common treatment for visual discomfort (Allen, Gilchrist and Hollis, 2008; Wilkins, 1995, 2003; Wilkins, Huang and Cao, 2004). Coloured overlays are thin transparent coloured films that are placed over a page of text. They are designed to colour the page without affecting the clarity of the text.

The evidence for whether coloured overlays improve reading is mixed. Much of the existing data published in “scientific” journals is plagued by methodological problems, including no controls on other therapies/interventions or poorly matched intervention groups. Among the better studies, Singleton and Trotter (2005) used undergraduate students with (n = 10) and without (n = 10) dyslexia. Each group had 5 students with high visual discomfort (HVD) scores and 5 with low visual discomfort (LVD) scores. All participants read faster using their chosen overlay. The dyslexics with HVD scores made significant gains in reading speed with an overlay, while the other groups made non-significant gains of 3-4%. Singleton and Henderson (2007) showed that children (aged 6-14) made greater improvements in reading rate with coloured overlays relative to reading-age matched controls. In contrast, Ritchie, Della Sala and McIntosh (2011) reported on 61 children (aged 7-12) with reading difficulties (77% were diagnosed by an Irlen diagnostician as having visual discomfort). There was limited evidence that individually prescribed Irlen coloured overlays had any immediate benefit for reading rate.

A recent study from the lab of the respected reading scientist Maggie Snowling investigated the effect of coloured overlays in a well-designed experiment. They took 26 controls and 16 people with dyslexia, all undergraduate students, matched for IQ. Both groups were tested on two reading tests. The Wilkins Rate of Reading Test (WRRT) measures the impact of overlays on reading. The WRRT requires speeded oral reading of a passage of text comprising 15 high-frequency words (familiar to children from 7 years) that are repeated in random order, ensuring that no word can be guessed from context. The test was administered with and without the chosen overlay placed over the text to test for an immediate benefit in reading rate. Reading rate was calculated as the number of words read correctly per minute (wpm), not including errors, omitted words and omitted lines. They also used two passages adapted from the secondary school edition of the York Assessment of Reading for Comprehension (YARC). Passage 1 consisted of 311 words and Passage 2 of 302 words. Five comprehension questions followed each passage.
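
For concreteness, here is how that reading-rate calculation works out in code. This is a sketch based only on the scoring rule just described; the function and parameter names are mine, not the WRRT’s official scoring protocol.

```python
def words_per_minute(words_attempted: int, errors: int, omitted: int, seconds: float) -> float:
    """Words read correctly per minute: errors and omitted words
    (including whole omitted lines) do not count toward the rate."""
    words_correct = words_attempted - errors - omitted
    return words_correct / (seconds / 60.0)

# A hypothetical reader who attempts 142 words in 60 seconds,
# with 6 errors and 4 omissions, scores 132 wpm:
print(words_per_minute(142, 6, 4, 60.0))  # 132.0
```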

Both groups read more words per minute in the Overlay than in the No Overlay condition. The group with dyslexia showed marginally greater gains than controls. However, these data need to be interpreted with a grain of salt because the dyslexic group were slower readers to begin with and therefore had more room to improve.

When reading real text (the YARC passages), there was an effect of Group on passage reading time: unsurprisingly, the dyslexic group was slower than controls in both Overlay and No Overlay conditions. But there was no effect of Overlay (the overlay made no difference to reading rate for either group) and no Group by Overlay interaction (there was no relative advantage for the dyslexic group in the Overlay condition versus No Overlay). Reading comprehension scores did not change in either group as a result of using an overlay. These data are consistent with those reported by Ritchie, Della Sala and McIntosh (2011) in children. They suggest that coloured overlays are not as effective as claimed for improving reading accuracy or fluency.



NAPLAN and learning difficulties Part II

June 11th, 2012

In a recent blog post I identified what I saw as problems with the Australian literacy and numeracy testing program (NAPLAN) for Grades 3, 5, 7 and 9. A number of colleagues have questioned my position and, being the “social phobe” that I am, I feel compelled to clarify it.

There is certainly a lobby opposed to a national literacy and numeracy test full stop, but I’m not buying their arguments. A national literacy and numeracy test has many advantages. We know that teachers are not great at ranking student reading achievement, for example (see Madelaine & Wheldall, 2007). An objective test may be helpful in identifying struggling students who would otherwise go unidentified if we relied on teacher judgment alone. A standardised test also allows us to see how literacy and numeracy standards change across time. For example, a Grade 3 cohort with good maths skills that became mediocre by Grade 5 might highlight a need for better maths instruction in Grades 4 and 5.

What I am arguing is that NAPLAN in its current form fails in two important ways.

First, it begins in Grade 3, by which stage most children who are going to fail have already failed. This is a problem because early intervention is crucial for children who have learning difficulties. If we take reading as the example, the effects of intervention halve after Grade 1. For example, the US National Reading Panel report (NICHHD, 2000) found that the mean effect size for systematic phonics was d = 0.56 in kindergarten, d = 0.54 in first grade, and d = 0.27 in grades 2-6. Clearly we have to get in early.

One might argue that schools have early identification and intervention in hand before Grade 3 NAPLAN. I strongly suspect this isn’t the case in most schools. I recently published a book chapter that looked at the growth in reading skills of a group of 61 poor readers and 52 good readers over the course of a school year. All poor readers were engaged in learning support interventions of some description. The outcome was that only one of the 61 children who were poor readers at the beginning of the year made meaningful growth in reading skills. All the others essentially stayed at the same level. Figure 1 below, taken from Wright and Conlon (2012), shows standard scores on the Woodcock Basic Reading Skills Cluster (a combination of nonword and real word reading skills) taken at the beginning of each of the four school terms. The age-standardised scores of the controls (good readers) didn’t change, as one would expect. Unfortunately, the same was true of the poor readers: a poor reader at the beginning of the year remained one at the end of the year. The conclusion was the same as that of Denton et al. (2003): normal school learning support services often lack the specificity and intensity to make a meaningful change in the reading skills of struggling readers. That is, they do little to “close the gap”.

Figure 1 (from Wright & Conlon, 2012). Poor reader and control group means and standard deviations (standard scores with mean of 100 and SD of 15) on the Basic Reading Cluster at the beginning of each of the four school terms.
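
For readers unfamiliar with effect sizes like the d values quoted above, Cohen’s d is simply the difference between two group means expressed in pooled standard deviation units. Here is a minimal sketch; the example numbers are illustrative only, not taken from the NRP report.

```python
import math

def cohens_d(mean_a: float, sd_a: float, n_a: int,
             mean_b: float, sd_b: float, n_b: int) -> float:
    """Standardised mean difference between two groups, using a pooled SD."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# On a standard-score scale (mean 100, SD 15), d = 0.56 amounts to
# roughly an 8.4-point advantage for the intervention group:
print(round(cohens_d(108.4, 15, 50, 100.0, 15, 50), 2))  # 0.56
```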


The second problem with NAPLAN in its current form was discussed in the previous post: the test format does not provide data that help teachers identify which specific parts of the reading, writing and numeracy (and arguably language) processes are going wrong and, most importantly, does not provide data that on their own allow the design of effective interventions. See also Kerry Hempenstall’s comments.

The answer may lie in the quality response-to-intervention (RTI) approach in Australian schools that I admitted yearning for in my previous post. I would like to see every Kindergarten/Prep teacher employ the very best methods for teaching language, reading, spelling and maths skills and knowledge. A sample of children should be given weekly tests on curriculum-based measures of these skills so that estimates of normal growth rates can be obtained. Every child in Kindy/Prep should then be assessed on these measures weekly and their growth plotted. Any child with a lower-than-average growth rate should be hit with extra instruction. These children should be assessed again at the beginning of Grade 1 on standardised tests and, if they are still behind their peers, should be hit with another round of intervention using a systematic program (see examples here and here). A NAPLAN-style test in May-June of Grades 1, 3, 5, 7 and 9 can then be used to ensure that these children maintain their gains and to identify any children missed by the previous procedure.
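
To make the monitoring step concrete, here is a minimal sketch of how weekly curriculum-based measurement (CBM) data might be used to flag children for extra instruction. Everything here (the data structure, the scores, the one-SD cutoff) is my own illustration rather than an established RTI protocol.

```python
import statistics

def growth_rate(weekly_scores: list[float]) -> float:
    """Least-squares slope of a child's weekly CBM scores,
    e.g., growth in words read correctly per week."""
    n = len(weekly_scores)
    mean_week = (n - 1) / 2
    mean_score = statistics.fmean(weekly_scores)
    num = sum((w - mean_week) * (s - mean_score) for w, s in enumerate(weekly_scores))
    den = sum((w - mean_week) ** 2 for w in range(n))
    return num / den

def flag_for_extra_instruction(cohort: dict[str, list[float]],
                               cutoff_sd: float = 1.0) -> list[str]:
    """Flag children whose growth rate falls more than `cutoff_sd`
    standard deviations below the cohort's mean growth rate."""
    slopes = {child: growth_rate(scores) for child, scores in cohort.items()}
    mean_slope = statistics.fmean(slopes.values())
    sd_slope = statistics.stdev(slopes.values())
    return [child for child, slope in slopes.items()
            if slope < mean_slope - cutoff_sd * sd_slope]

# Ten weeks of made-up scores: two children growing ~2 words/week, one nearly flat.
cohort = {
    "Ava":   [10, 12, 14, 17, 18, 20, 23, 24, 26, 28],
    "Ben":   [12, 14, 15, 18, 20, 21, 24, 26, 27, 30],
    "Caleb": [11, 11, 12, 11, 12, 13, 12, 12, 13, 13],
}
print(flag_for_extra_instruction(cohort))  # ['Caleb']
```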


NAPLAN and learning difficulties

June 1st, 2012

May was a busy time in Australian schools with Grades 3, 5, 7 and 9 involved in the national literacy and numeracy tests (NAPLAN). The stress I see in parents and learning support colleagues during NAPLAN time often causes me to reflect on the purpose of the test(s) and how useful they are for students who have learning difficulties.

The Australian Curriculum, Assessment and Reporting Authority (ACARA) claims that the purpose of NAPLAN is to “measure the literacy and numeracy skills and knowledge that provide the critical foundation for other learning”. It also claims that the introduction of NAPLAN has led to “consistency, comparability and transferability of information on students’ literacy and numeracy skills”. (Don Watson would have a field day with these weasel words.)

NAPLAN is useful because it identifies students who are struggling with broad academic skills. Having an objective measurement is important because research has shown that teachers are not particularly accurate in identifying struggling students. For example, Madelaine and Wheldall (2007) randomly selected twelve students from each of 33 classes and asked their teachers to rank the students based on perceptions of reading performance. They also assessed the students on a passage reading test. Only 50% of teachers identified the same poorest reader as the objective test, and only 15% of teachers identified the same three lowest performing readers as the test. We can certainly argue about whether NAPLAN in its current form is the most effective and/or cost-effective method of gathering data on student achievement; however, it seems that we cannot rely on teacher judgment alone.

On the downside, NAPLAN represents a test, not an assessment. As all good clinicians and educators know, there is a difference, or should be, between testing and assessment (see here and here). Assessment is a process that starts with the history and clearly defines the presenting problem or set of problems. The clinician develops a hypothesis or set of hypotheses on the basis of the history. They then gather data (e.g., observations, interviews, tests, and base rates) designed to shed light on those hypotheses. It is worth noting that a good clinician looks equally for data that confirm and disconfirm the initial hypotheses. Good assessment should lead directly to treatment and/or appropriate teaching for the presenting problem(s) and provide pre-treatment data that allow monitoring of progress. Testing, on the other hand, simply tells us how good or bad a student is on a particular test. For example, a student with a low score on a reading comprehension test can be said to have poor reading comprehension. The problem with tests is that they don’t tell us why a student performed poorly and, if they measure a complex process like reading comprehension, writing, or mathematical reasoning, they don’t tell us what component of that complex process is weak.

That is precisely the problem with NAPLAN. The NAPLAN tasks are complex and provide little information useful for designing interventions for students with learning difficulties or for monitoring response to intervention. An example from NAPLAN illustrates this point.

A mathematics question asked: $4 is shared equally among 5 girls. How much does each girl get? (For the record: $4 is 400c, and 400c ÷ 5 = 80c each.) An incorrect response tells us that the student can’t do the task. So what? The child’s teacher probably knew that already. What would be useful would be to know whether the student failed the item because (1) they couldn’t read the question, (2) they didn’t know what ‘shared’ or ‘equally’ meant, (3) they didn’t recognise that the item required a division operation, (4) they didn’t know to convert $4 to 400c to make the division easier, (5) they didn’t know the fact 40 divided by 5, or (6) they knew all of the above but have attention problems and got ‘lost’ during the multi-step division process.

Similarly, if a student performs poorly on the writing component of NAPLAN, no information useful for treatment is obtained. The test doesn’t tell us if the child (a) has a form of dyspraxia and struggles with handwriting, (b) has an impoverished spelling lexicon, (c) has poor knowledge of sound-to-letter conversion rules and therefore struggles to spell unfamiliar words, (d) has poor knowledge of written grammatical conventions, (e) has poor knowledge of written story grammar, (f) has oral language weaknesses in semantics and/or grammar, (g) has poor oral narrative skills, (h) has attention problems and therefore can’t keep his you-know-what together while doing a complex task, or (i) has autism and therefore doesn’t give a toss about the writing topic. The list could go on.

Unfortunately, NAPLAN provides none of these specific data. It simply tells us how badly the child performs relative to some arbitrary benchmark. So where does this leave us? Or more to the point, where does it leave students who have learning difficulties?

All of which leads me to think that NAPLAN is probably not all that useful for students who have learning difficulties, or for the parents, clinicians and teachers who work with them. It also leads me to yearn even more for a response-to-intervention approach in which schools recognise learning problems early in the child’s school career, assess to define the problem(s), and provide evidence-based interventions that target the problem(s).
References
Madelaine, A., & Wheldall, K. (2007). Identifying low progress readers: Comparing teacher judgment with a curriculum-based measurement procedure. International Journal of Disability, Development and Education, 52, 33-42.