Article Archive for ‘June, 2012’

Extra large letter spacing improves reading in dyslexia. Or does it?

June 12th, 2012

High prevalence, high impact disorders like dyslexia are prone to sensational claims from quacks, scientists, journal editors and journalists. The latest comes from an article in the high-profile journal Proceedings of the National Academy of Sciences (PNAS) claiming that increasing letter spacing “ameliorated the reading performance of dyslexic children” (Zorzi et al., 2012).

The popular media has picked up these claims. For example, the ABC quoted the lead author, Marco Zorzi, as saying: “Our findings offer a practical way to ameliorate dyslexics’ reading achievement without any training”. But are the claims fact or fiction?

The idea for the study seems to have been grounded in the effects of crowding in dyslexia (see Martelli et al., 2009). Crowding occurs when stimuli in the periphery interfere with the processing of stimuli at the focus of vision. I am not an expert in this aspect of the dyslexia literature and perhaps someone else may comment. However, my non-expert eye suggests two problems with this literature. First, almost all studies (see here for an exception) have used word and/or letter stimuli, which confounds reading ability with ‘crowding’ effects. Second, most studies have used age-matched rather than reading-age matched controls, which leaves open the possibility that the effects on crowding tasks are a consequence of poor reading rather than a cause.

For the purposes of this post, let’s accept that crowding affects people with dyslexia more than good readers. Zorzi et al. (2012) predicted that widening the space between letters in words would reduce the effects of crowding and lead to better reading. They tested this idea in Italian and French children diagnosed with dyslexia (aged 8-14 years). The children had to read 24 short meaningful sentences taken from the Test for the Reception of Grammar, printed in 14-point Times New Roman. One reading condition had normal spacing between letters and the other had letter-spacing 2.5 points greater than normal (normal letter spacing is 2.7 pt in normal text; who knew?). Why they didn’t use single words and nonwords instead of a text-reading task is unclear, given that dyslexia is widely acknowledged to be a deficit in single-word reading. People with dyslexia read better in context than they read words in lists (see here). Surely, if crowding were the (or a) cause of dyslexia, we would see its effects more in the reading of word lists than of stories, and if increasing letter-spacing improved reading in dyslexia, we would see larger effects in a single-word task?

Anyway… The results of one experiment showed that both the French and Italian groups with dyslexia made fewer reading errors in the condition in which letter-spacing was greater than normal. However, that on its own tells us nothing other than that doing something led to fewer errors. It doesn’t show that the specific manipulation (increased letter-spacing) was the key factor. It may be that chewing gum while reading does the same thing. Zorzi et al. recognised this and suggested that if crowding really does affect reading, and extra letter-spacing reduces crowding effects, then it is necessary to show that people with dyslexia benefit more from the increased letter-spacing condition than reading-age matched controls do. This they attempted to do in a second experiment.

The data from Experiment 2 (Zorzi et al.) are shown in the figure below. Zorzi et al. claimed that the increased letter-spacing condition improved the reading of their French and Italian groups with dyslexia (i.e., they made fewer errors) more than it improved the reading of the reading-age matched controls. These data are what the sensational claims reported in the media are based on. The problem is that their ‘control’ group was not of the same reading age. Groups with the same reading ability should perform equally in the normal-spaced condition, and the figure below shows that this was not the case. The “reading-age matched controls” were significantly better readers in the first place.

[Figure: Experiment 2 reading errors for the groups with dyslexia and the “reading-age matched” controls in the normal and increased letter-spacing conditions.]

What does this methodological flaw mean? Essentially, it means that the claims Zorzi et al. (or at least the media) made cannot be supported by the data. Using a group of younger children who were already better readers than the people with dyslexia is irrelevant to the research question. It leaves a data set that is subject to the same criticism as their first experiment: it tells us nothing about the specific manipulation (increased letter-spacing), and it remains possible that any experimental manipulation, including ridiculous ones like chewing gum, would produce the same results.

Furthermore, it is possible, indeed likely in my view, that the reason the children in the “reading-age matched control” group did not improve as much in the increased-spacing condition is that they didn’t have much room to improve. They were already good readers and were at ceiling on the test. It is unlikely that any kind of experimental manipulation will make a good reader a better reader. Which leads me to my suggestion for replicating this study. Don’t replicate it!
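Before moving to what a better study would look like, here is a toy calculation illustrating the ceiling point. The numbers are invented for illustration and are not Zorzi et al.’s data: give both groups the same proportional benefit from extra spacing, and the group that starts with more errors will show the larger raw improvement, which looks just like the claimed interaction.

```python
# A toy calculation (invented numbers, not Zorzi et al.'s data) showing how a
# ceiling effect can mimic the claimed group-by-spacing interaction: give both
# groups the SAME proportional benefit from extra letter-spacing, and the group
# that starts with more errors shows the larger raw improvement.

baseline_errors = {"dyslexic": 20.0, "control": 3.0}  # hypothetical errors on 24 sentences
benefit = 0.30                                        # same 30% error reduction for both groups

for group, errors in baseline_errors.items():
    spaced = errors * (1 - benefit)
    print(f"{group}: {errors:.1f} -> {spaced:.1f} errors (gain {errors - spaced:.1f})")
# dyslexic: 20.0 -> 14.0 errors (gain 6.0)
# control:   3.0 ->  2.1 errors (gain 0.9)
```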

I can’t see how using either age- or reading-age matched controls (i.e., good readers) will allow adequate testing of the hypothesis that increased letter-spacing improves reading in people with dyslexia, because of the point I made above: it is unlikely that any kind of experimental manipulation will make a good reader a better reader. In my view, the next piece of research will need to use equivalent groups of people with dyslexia, one of which receives the extra letter-spacing manipulation and one of which does not. It is also worth noting that recent research has shown that the effects of another visual manipulation (coloured overlays) on reading ability are not reliable on repeat testing (Henderson et al., 2012), so any future research should probably run the test multiple times for each condition. Finally, if the research is conducted in English, it would be interesting to see whether increased letter-spacing changes error rates (for better or worse) for words that involve only single grapheme-to-phoneme correspondences compared to those that contain digraphs (e.g., chip and rain) or trigraphs (e.g., slight). It might also be interesting to see whether increased letter-spacing reduces errors for words in which letter-position errors can occur (e.g., trail-trial).

Until we see these data I’m keeping my ink dry.

 

NAPLAN and learning difficulties Part II

June 11th, 2012

In a recent blog post I identified what I saw as problems with the Australian national literacy and numeracy tests (NAPLAN) for Grades 3, 5, 7 and 9. A number of colleagues have questioned my position and, being the “social phobe” that I am, I feel compelled to clarify it.

There is certainly a lobby opposed to a national literacy and numeracy test full stop, but I’m not buying their arguments. A national literacy and numeracy test has many advantages. We know, for example, that teachers are not great at ranking student reading achievement (see Madelaine & Wheldall, 2007). An objective test may be helpful in identifying struggling students who would otherwise not be identified if we relied on teacher judgment alone. A standardised test can also allow us to see how literacy and numeracy standards are changing across time. For example, a Grade 3 cohort with good maths skills that had become mediocre by Grade 5 might highlight a need for better maths instruction in Grades 4 and 5.

What I am arguing is that NAPLAN in its current form fails in two important ways.

First, it begins in Grade 3, by which stage most children who are going to fail have already failed. This is a problem because early intervention is crucial for children who have learning difficulties. If we take reading as the example, the effects of intervention roughly halve after Grade 1: the US National Reading Panel (NICHHD, 2000) reported a mean effect size for systematic phonics of d = 0.56 in kindergarten, d = 0.54 in first grade, and d = 0.27 in grades 2-6. Clearly we have to get in early.

One might argue that schools have early identification and intervention in hand before Grade 3 NAPLAN. I strongly suspect this isn’t the case in most schools. I recently published a book chapter that looked at the growth in reading skills of a group of 61 poor readers and 52 good readers over the course of a school year. All the poor readers were engaged in learning support interventions of some description. The outcome was that only one of the 61 children who were poor readers at the beginning of the year made meaningful growth in reading skills; all the others essentially stayed at the same level. Figure 1 below, taken from Wright and Conlon (2012), shows standard scores from the Woodcock Basic Reading Skills Cluster (a combination of nonword and real word reading skills) taken at the beginning of each of the four school terms. The age-standardised scores of the controls (good readers) didn’t change, as one would expect. Unfortunately, the same was true of the poor readers: if they were a poor reader at the beginning of the year they remained so at the end of the year. The conclusion was the same as that of Denton et al. (2003): normal school learning support services often lack the specificity and intensity to make a meaningful change in the reading skills of struggling readers. That is, they do little to “close the gap”.

Figure 1 (from Wright & Conlon, 2012). Poor reader and control group means and standard deviations (standard scores with mean of 100 and SD of 15) on the Basic Reading Cluster at the beginning of each of the four school terms.

 

The second problem with NAPLAN in its current form was discussed in the previous post: the test format does not provide data that help teachers identify which specific parts of the reading, writing and numeracy (and arguably language) processes are going wrong and, most importantly, does not provide data that on their own allow the design of effective interventions. See also Kerry Hempenstall’s comments.

The answer may lie in the quality response-to-intervention (RTI) approach in Australian schools that I admitted yearning for in my previous post. I would like to see every Kindergarten/Prep teacher employ the very best methods for teaching language, reading, spelling and maths skills and knowledge. A sample of children should be given weekly tests on curriculum-based measures of these skills so that estimates of normal growth rates can be obtained. Every child in Kindy/Prep should then be assessed on these measures weekly and their growth plotted, as sketched below. Any child with a lower than average growth rate should be hit with extra instruction. These children should be assessed again at the beginning of Grade 1 on standardised tests and, if they are still behind their peers, should be hit with another round of intervention using a systematic program (see examples here and here). A NAPLAN-style test in May-June of Grades 1, 3, 5, 7 and 9 can then be used to ensure that these children maintain their gains and to identify any children missed by the previous procedure.
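To make the weekly monitoring idea concrete, here is a minimal sketch with made-up child names, scores and data format: estimate each child’s growth rate from weekly curriculum-based measures and flag anyone growing more slowly than the average. The measures and threshold are my assumptions for illustration, not part of NAPLAN or any existing program.

```python
# A minimal sketch (made-up names, scores and data format) of the weekly
# curriculum-based measurement (CBM) monitoring described above: estimate each
# child's growth rate and flag anyone growing more slowly than the class average.

from statistics import mean

def growth_rate(weekly_scores):
    """Least-squares slope of score against week number (score units per week)."""
    weeks = range(len(weekly_scores))
    w_bar, s_bar = mean(weeks), mean(weekly_scores)
    num = sum((w - w_bar) * (s - s_bar) for w, s in zip(weeks, weekly_scores))
    den = sum((w - w_bar) ** 2 for w in weeks)
    return num / den

# Hypothetical weekly CBM scores (e.g., correct letter sounds per minute)
children = {
    "child_a": [12, 14, 17, 19, 22, 24],
    "child_b": [10, 10, 11, 11, 12, 12],
    "child_c": [15, 18, 20, 23, 26, 28],
}

rates = {name: growth_rate(scores) for name, scores in children.items()}
average_rate = mean(rates.values())
flagged = [name for name, rate in rates.items() if rate < average_rate]
print(rates)
print(flagged)  # these children would get the extra instruction
```

The point is the principle rather than the code: growth over time, not a one-off score, is what identifies the children who need more.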

 

NAPLAN and learning difficulties

June 01st, 2012

May was a busy time in Australian schools with Grades 3, 5, 7 and 9 involved in the national literacy and numeracy tests (NAPLAN). The stress I see in parents and learning support colleagues during NAPLAN time often causes me to reflect on the purpose of the test(s) and how useful they are for students who have learning difficulties.

 The Australian Curriculum, Assessment and Reporting Authority (ACARA) claim that the purpose of NAPLAN is to “measure the literacy and numeracy skills and knowledge that provide the critical foundation for other learning”. They also claim that introduction of NAPLAN has led to “consistency, comparability and transferability of information on students’ literacy and numeracy skills”. (Don Watson would have a field day with these weasel words).
NAPLAN is useful because it identifies students who are struggling with broad academic skills. Having an objective measurement is important because research has shown that teachers are not particularly accurate in identifying struggling students. For example, Madelaine and Wheldall (2007) randomly selected twelve students from 33 classes and asked their teachers to rank the students based on perceptions of reading performance. They also assessed the students on a passage reading test. Only 50% of teachers identified the same poorest reader as the objective test did, and only 15% identified the same three lowest-performing readers as the test. We can certainly argue about whether NAPLAN in its current form is the most effective and/or cost-effective method of gathering data on student achievement; however, it seems that we cannot rely on teacher judgment alone.
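As a concrete (and entirely hypothetical) illustration of the kind of comparison Madelaine and Wheldall made, here is a minimal sketch that checks whether a teacher’s nominated poorest readers match the poorest readers on an objective test; the names and scores are invented.

```python
# A minimal sketch (hypothetical names and scores, not Madelaine & Wheldall's
# data) of the kind of comparison their study made: does the teacher's nominated
# poorest reader match the poorest reader on an objective passage-reading test?

teacher_rank = ["Ava", "Ben", "Cy", "Dee", "Eli"]                     # teacher's order, weakest first
test_scores = {"Ava": 41, "Ben": 28, "Cy": 55, "Dee": 33, "Eli": 60}  # e.g., words correct per minute

test_rank = sorted(test_scores, key=test_scores.get)  # weakest first according to the test
print(teacher_rank[0] == test_rank[0])                # same poorest reader?
print(set(teacher_rank[:3]) == set(test_rank[:3]))    # same bottom three?
```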
On the downside, NAPLAN represents a test, not an assessment. As all good clinicians and educators know, there is a difference, or should be, between testing and assessment (see here and here). Assessment is a process that starts with the history and clearly defines the presenting problem or set of problems. The clinician develops an hypothesis or set of hypotheses on the basis of the history, then gathers data (e.g., observations, interviews, tests, and base rates) designed to shed light on those hypotheses. It is worth noting that a good clinician looks equally for data that confirm and disconfirm the initial hypotheses. Good assessment should lead directly to treatment and/or appropriate teaching for the presenting problem(s) and provide pre-treatment data that allow monitoring of progress. Testing, on the other hand, simply tells us how good or bad a student is on a particular test. For example, a student with a low score on a reading comprehension test can be said to have poor reading comprehension. The problem with tests is that they don’t tell us why a student performed poorly and, if they measure a complex process like reading comprehension, writing, or mathematical reasoning, they don’t tell us which component of that complex process is weak.
 
That is precisely the problem with NAPLAN. The NAPLAN tasks are complex and provide little information useful for designing interventions for students with learning difficulties and for monitoring response to intervention. An example from NAPLAN illustrates this point.
A mathematics question asked: “$4 is shared equally among 5 girls. How much does each girl get?” An incorrect response tells us only that the student can’t do the task. So what? The child’s teacher probably knew that already. What would be useful to know is why the student failed the item: (1) they couldn’t read the question; (2) they didn’t know what ‘shared’ or ‘equally’ meant; (3) they didn’t recognise that the item required a division operation; (4) they didn’t know to convert $4 to 400c to make the division easier; (5) they didn’t know the division fact 40 ÷ 5; or (6) they knew all of the above but have attention problems and got ‘lost’ during the multi-step division process.
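For what it’s worth, here is a minimal sketch of how that single item decomposes into component steps; the breakdown mirrors the list above and is my illustration of the point, not anything NAPLAN reports.

```python
# A sketch of how this one item decomposes into component steps; the breakdown
# mirrors the list above and is my illustration, not anything NAPLAN reports.

def share_money(dollars=4, girls=5):
    cents = dollars * 100   # step 4: convert $4 to 400c
    each = cents // girls   # step 5: the division fact 400 / 5
    return each             # 80c per girl

print(share_money())  # 80 -- but a wrong answer on the test never tells us
                      # which of steps (1)-(6) actually broke down
```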
Similarly, if a student performs poorly on the writing component of NAPLAN, no information useful for treatment is obtained. The test doesn’t tell us whether the child (a) has a form of dyspraxia and struggles with handwriting, (b) has an impoverished spelling lexicon, (c) has poor knowledge of sound-to-letter conversion rules and therefore struggles to spell unfamiliar words, (d) has poor knowledge of written grammatical conventions, (e) has poor knowledge of written story grammar, (f) has oral language weaknesses in semantics and/or grammar, (g) has poor oral narrative skills, (h) has attention problems and therefore can’t keep his you-know-what together while doing a complex task, or (i) has autism and therefore doesn’t give a toss about the writing topic. The list could go on.
Unfortunately, NAPLAN provides none of these specific data. It simply tells us how badly the child performs relative to some arbitrary benchmark. So where does this leave us? Or, more to the point, where does it leave students who have learning difficulties?
Both of which lead me to think that NAPLAN is probably not all that useful for students who have learning difficulties or for the parents, clinicians and teachers who work with them. It also leads me to yearn even more for a Response-to-Intervention approach in which schools recognise learning problems early in the child’s school career, assess to define the problem(s), and provide evidence-based interventions that target the problem(s).
References
Madelaine, A., & Wheldall, K. (2007). Identifying low progress readers: Comparing teacher judgment with a curriculum-based measurement procedure. International Journal of Disability, Development and Education, 52, 33-42.