
Does medication improve reading ability?

September 5th, 2016

The Journal of Child and Adolescent Psychopharmacology has published an article by Shaywitz et al. that investigated the effects of the drug atomoxetine (sold as Strattera) on reading and attention in children with dyslexia, ADHD and comorbid dyslexia + ADHD.

My understanding is that the experiment was part of a broader study on the effectiveness of atomoxetine as a treatment for ADHD. The study appears to have been funded by Eli Lilly and the second author is reportedly an employee and minor shareholder in Eli Lilly.

Children were defined as having dyslexia if (a) there was at least a 22-point discrepancy (approx. 43 percentile ranks) between IQ and word-reading ability (defined by the WJIII Word Attack test, the Word Identification test, or the WJIII Basic Reading Cluster – an average of Word Attack and Word Identification) or (b) the child had a standard score of 89 or lower on at least one of the WJIII tests.

The first thing that springs to mind is that these criteria are not particularly stringent. A standard score of 89 represents the 23rd percentile. It is still within a standard deviation of the mean. Further, the discrepancy criterion presumably means that a child with an IQ score of 122 and a reading score of 100 (exactly average) met criteria for the dyslexia group.
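The percentile figures above follow directly from the standard-score metric, which assumes a normal distribution with a mean of 100 and a standard deviation of 15. A minimal sketch of the conversion (the function name is my own, not from the study):

```python
from math import erf, sqrt

def percentile(standard_score, mean=100.0, sd=15.0):
    """Percentile rank of a standard score, via the normal CDF."""
    z = (standard_score - mean) / sd
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(percentile(89)))                     # a score of 89 sits at about the 23rd percentile
print(round(percentile(122) - percentile(100)))  # e.g., 122 vs 100 spans about 43 percentile ranks
```

Note that the exact number of percentile ranks a 22-point gap spans depends on where on the scale it falls, which is one reason discrepancy definitions behave inconsistently.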

Using these criteria, 209 children were allocated at random to an atomoxetine or a placebo group. Numbers in each group were: dyslexia only (n = 29 atomoxetine, n = 29 placebo), ADHD + dyslexia (n = 64 atomoxetine, n = 60 placebo) and ADHD only (all of whom received atomoxetine).

The drug trial lasted for 16 weeks. Changes in reading skills were measured by:

WJIII Word Attack and Word Identification tests

WJIII Spelling

WJIII Reading Comprehension and Reading Vocabulary

Comprehensive Test of Phonological Processing (CTOPP)

Gray Oral Reading Tests-4 (GORT-4)

Test of Word Reading Efficiency (TOWRE)

It is unclear why, but on many variables the Placebo group had higher pre-treatment reading/spelling skills than the Atomoxetine group. For example, within the Dyslexia-only group the Placebo sub-group had an average standard score of 88.05 on the Word Attack subtest while the Atomoxetine sub-group averaged 84.11. The figures were 83.86 vs 80.47 for Word Identification and 83.32 vs 77.32 for Spelling. This may seem a minor point, but consider the effect of regression to the mean. The more extreme a score is at pre-test, the more likely it is to “bounce” back towards the average at post-testing. Thus, the Atomoxetine group was more likely to “benefit” from regression effects than the Placebo group.
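Regression to the mean is easy to demonstrate with a short simulation. The sketch below is purely illustrative (the distributions, reliability and cut-off are my assumptions, not values from the study): children are selected on a noisy pre-test score, and their post-test mean drifts back towards the population average with no treatment at all.

```python
import random

random.seed(1)

# Illustrative assumption: observed score = stable ability + measurement noise,
# on a standard-score metric with a population mean of 100.
true_ability = [random.gauss(100, 12) for _ in range(100_000)]
pretest = [t + random.gauss(0, 8) for t in true_ability]
posttest = [t + random.gauss(0, 8) for t in true_ability]

# Select children whose PRE-test score was extreme (below 85), as a cut-off would.
low = [i for i, score in enumerate(pretest) if score < 85]
pre_mean = sum(pretest[i] for i in low) / len(low)
post_mean = sum(posttest[i] for i in low) / len(low)

# With no intervention at all, the selected group's post-test mean sits
# several points closer to 100 than its pre-test mean did.
print(round(pre_mean, 1), round(post_mean, 1))
```

The lower a sub-group’s pre-test mean, the larger this artefactual “gain” – which is why the pre-treatment gap between the Atomoxetine and Placebo sub-groups matters.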

A summary of the interesting bits of the results:

  • Children with dyslexia who were treated with atomoxetine made greater gains than children with dyslexia given the placebo treatment on WJIII Word Attack, WJIII Basic Reading Cluster and WJIII Reading Vocabulary.
  • Children with dyslexia + ADHD who were treated with atomoxetine made greater gains than children with dyslexia + ADHD given the placebo treatment on the CTOPP Elision test (a peripheral/distal reading sub-skill).

So what?

  1. At first glance it seems odd that children with dyslexia would respond better to a medication designed for ADHD than children who actually had ADHD. However, there are reasonable explanations. First, medication isn’t a panacea. It doesn’t cure ADHD; it improves symptoms. It is possible that the ADHD + dyslexia group remained more impaired than the dyslexia-only group even when given medication. Second, children with dyslexia often have sub-threshold symptoms of ADHD that may in fact respond better to medication than the more severe symptoms seen in children actually diagnosed with ADHD.
  2. It is possible that at least some of the effects were due to greater regression to the mean in the Atomoxetine vs Placebo group (see above).
  3. It is possible that the medication simply improved test behaviour rather than reading ability per se.

The data are arguably the beginning of a research journey. They provide some preliminary support for the idea that ADHD medication can improve academic skills even in children without ADHD. However, I am sceptical about this. I can see how medication might improve test-taking behaviour in some children. I can also see how it might “smooth out” the inconsistency seen in children with attention problems and with dyslexia. However, medication doesn’t teach. It doesn’t matter how good your attention is; if you don’t know it, you don’t know it.

I am more interested in how medication affects response to reading intervention. The graph below shows data from a single case seen in my clinic: a male with ADHD + mixed dyslexia who was receiving reading intervention delivered four times per week. See here and here for descriptions of the program.


The data points represent a weekly nonword reading test. The test items were constructed using the grapheme-phoneme conversion rules taught in the intervention program. If the student learns ~4-5 new GPCs weekly, they should be able to read ~5 extra nonwords each week. The “flat lines” represent data from two baseline periods and two treatment periods in which the participant was receiving reading intervention only. One can see that not much progress was being made. It wasn’t that the boy wasn’t learning new things. It was more that his “recall” was inconsistent and his test-taking behaviour was poor.
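The benchmark implied by that arithmetic can be sketched as a simple expected-progress line (illustrative only; the rate is the approximation quoted above, not the program’s actual schedule):

```python
# ~4-5 new grapheme-phoneme correspondences taught per week implies
# ~5 extra decodable nonwords readable per week, if learning keeps pace.
NONWORDS_PER_WEEK = 5

def expected_score(weeks_of_teaching, baseline=0):
    """Expected nonword-test score if gains track the teaching schedule."""
    return baseline + weeks_of_teaching * NONWORDS_PER_WEEK

print([expected_score(w) for w in range(5)])  # [0, 5, 10, 15, 20]
```

A “flat line” in the weekly data means the observed scores fall well below this slope even though teaching is continuing.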

Upwards growth in test scores was seen almost immediately upon beginning a stimulant medication. Later data, not shown in the graph, showed that removing the reading intervention so that the only treatment was medication resulted in a return to a flat line. That is, the medication didn’t teach skills.

We have seen this pattern in several cases now, although the results of the other cases have not been as dramatic as this first one. (I should also point out that the skeptic in me thinks these data are too perfect and I need to see them replicated before I truly “believe” them.)

Our very preliminary conclusions are that attention is necessary for new learning to take place and that the medication helps set the conditions for learning to happen (the batter into which the teaching is stirred if you will). However, medication will probably not make you a good reader sans the teaching.

At this point there is no justification for trialling medication in students who just have dyslexia. However, we hope to see a larger trial of the response of students with ADHD + dyslexia to reading intervention versus medication + reading intervention.

For help with dyslexia, ADHD, autism spectrum disorders and other developmental and learning disorders in the Gold Coast and Tweed regions contact the Understanding Minds Clinic.



Similar but different: differences in comprehension diagnosis on the Neale Analysis of Reading Ability and the York Assessment of Reading for Comprehension

May 4th, 2016

Psychometric tests are behavioural tests. We use measures of behaviour (which we can see directly) to infer something about a latent variable (something that can’t be seen or measured directly). Take tests of reading ability as an example. Reading occurs in the brain and is therefore a latent variable. We can’t see or measure it directly. The consequence is that different tests will provide different results depending on a number of factors, including the skills/items that the test samples and the normative population. 

Danielle Colenbrander, Lyndsey Nickels and Saskia Kohnen have just published a study that investigated differences in the content of, and scores obtained from, two commonly used reading tests: the NARA and the YARC. The whole article can be found in the Journal of Research in Reading.

The Abstract

Identifying reading comprehension difficulties is challenging. There are many comprehension tests to choose from, and a child’s diagnosis can be influenced by various factors such as a test’s format and content and the choice of diagnostic criteria. We investigate these issues with reference to the Neale Analysis of Reading Ability (NARA) and the York Assessment of Reading for Comprehension (YARC).


Ninety-five children were assessed on both tests. Test characteristics were compared using Principal Components and Regression analyses as well as an analysis of passage content.


NARA comprehension scores were more dependent on decoding skills than YARC scores, but children answered more comprehension questions on the NARA and passages spanned a wider range of difficulty. Consequently, 15–34% of children received different diagnoses across tests, depending on diagnostic criteria.


Knowledge of the strengths and weaknesses of comprehension tests is essential when attempting to diagnose reading comprehension difficulties.



Like us on Facebook for updates on dyslexia related matters, information on other developmental disorders like autism spectrum disorders, Asperger’s and ADHD, and general mental health info.


The influence of whole language philosophy on classroom reading strategies

February 20th, 2013

Click to download reading strategies. The link will take you to another page; click on the ‘reading strategies’ file name a second time to download the PDF.


For help with dyslexia in the Gold Coast and Tweed regions contact the Understanding Minds Dyslexia & Reading Difficulties Clinic.



Kevin Wheldall on why Australia sucks at reading

January 21st, 2013

Kevin Wheldall is an Emeritus Professor at Macquarie University. He is a Director of the reading intervention MultiLit and has a list of awards as long as my Dad’s arm.

Here is Kevin’s opinion on Why Australia sucks at reading.

Don’t forget Jennifer Buckingham’s op-ed piece on the same topic that I blogged about here.




To repeat or not to repeat: That is the question

January 10th, 2013

This article was published in the Learning Difficulties Australia Bulletin, Volume 39, No 4, December 2007.

Towards the end of each school year, teachers and parents can find themselves faced with a vexing question: should my child repeat his/her school grade?  Some may be driven to this question on the basis of social immaturity while others may be driven by failures to achieve the academic standards set for each grade level. Although the prevalence of grade retention in Australia is far lower than in countries such as the USA, anecdotal evidence suggests that its use as an intervention method continues. This brief article will summarise the available evidence to assist teachers and parents in making a difficult decision.

Who is retained?

The characteristics of students who are retained in a grade are wide and varied and there is very little Australian-based literature. However, it is safe to say that those who are retained tend to:

  • Be male
  • Experience academic failure or delay
  • Have poor classroom conduct
  • Display emotional immaturity
  • Be perceived as less competent by both parents and teachers.

Does grade retention improve student outcomes?

Academic achievement

There have been scores of studies conducted since the 1970s on the issue of grade retention. Many of them, however, suffer from significant methodological and statistical flaws. One should be careful, therefore, in relying too heavily upon the data from a single study, particularly if one is not familiar with sound research and statistical methodology. Fortunately, a number of reviews and meta-analyses have been conducted which obviate the need for interpretation of individual studies (e.g., Holmes, 1989; Holmes & Matthews, 1984; Jackson, 1975; Jimerson, 2001).

On balance, these reviews have indicated that grade retention either has a negative impact on academic achievement (relative to equivalent promoted peers) or that the effect is null. That is, using retention as an intervention tool has little effect on academic achievement. When positive effects on academic achievement are reported they tend to diminish over time. Indeed, any benefits on achievement are lost when the retained children and their equivalent promoted peers face new material (e.g., Jimerson, Carlson, Rotert, Egeland & Sroufe, 1997).

Mental health

Two studies have reported that older primary school children rate grade retention among the top three most stressful life events, along with losing a parent and going blind (e.g., Anderson, Jimerson & Whipple, 2005). Young children view retention as a punishment and experience sadness, fear and anger when not promoted. In the short-term, retained children can face social isolation. For example, there is some evidence showing that peers choose younger same-age peers with whom to play rather than the older retained child. In the longer term, retained students tend to experience poorer social adjustment and emotional health, including lower self-esteem and perceived competence, than equivalent promoted peers (e.g., Jimerson et al., 1997).

Student behaviour

The presence of behaviour problems is a predictor of grade retention. Yet the evidence suggests that retention in a grade actually exacerbates the problem (e.g., Jimerson et al., 1997). In male students, grade retention can have long-lasting adverse effects on inattentiveness, oppositional behaviour and aggressiveness (Pagani et al., 2001). A similar ‘spike’ in disruptive behaviour is typically seen in female students; however, unlike their male counterparts, females display these behaviours for only a short period.

Does timing of retention affect outcomes?

Some authors have argued that age and maturity are significant factors in early school success and that perhaps holding children back will lead to better academic outcomes. Despite the intuitive appeal of holding back a student seen as immature, the evidence does not support the practice. While retention in later grades may be more harmful than when conducted in early grades, the effect is relative and does not mean that early retention is useful or effective.

The alternatives

The most often quoted alternative to grade retention is grade (or social) promotion, where the student is promoted along with his or her grade-peers. While some studies have reported small benefits for promoted students over retained peers, both groups perform more poorly than control students (those without any learning, emotional or behavioural difficulties; Silberglitt, Jimerson, Burns, Appleton & James, 2006). In other words, in the best possible case the promoted student will do slightly better than the retained student. However, both will continue to experience significant difficulties within the areas of function identified as being impaired. Grade promotion on its own, then, is hardly an alternative to retention.

What is required is grade promotion coupled with intensive intervention methods designed to specifically target the identified weaknesses. Even before this occurs, schools can reduce the possibility of retention through a process of early identification of children ‘at-risk’ (e.g., of reading or learning difficulties). Theoretically-driven and evidence- based early intervention programs (e.g., Direct Instruction programs for word-reading and oral comprehension skills, social skills programs and teacher training in behaviour change) can prevent the failure that leads to the dreaded question of to repeat or not to repeat.


When I wrote this article in 2007 I suggested that “What is required is grade promotion coupled with intensive intervention methods designed to specifically target the identified weaknesses.”

In fact, there is no evidence for this statement. To settle the question of grade retention forever we would have to conduct a study with four equivalent groups. Group 1 is retained with no additional intervention. Group 2 is retained with the “intensive intervention” I suggested. Group 3 is promoted with no additional intervention. Group 4 is promoted and given “intensive intervention”.

Of course, this study will never pass an ethics committee and therefore will never be conducted. We are therefore stuck with making decisions on less than perfect evidence. On balance, the probabilities still favour grade promotion + intensive intervention.



Anderson, G.E., Jimerson, S.R., & Whipple, A.D. (2005). Students’ ratings of stressful experiences at home and school: Loss of a parent and grade retention as superlative stressors. Journal of Applied School Psychology, 21(1), 1-20.

Holmes, C.T. (1989). Grade-level retention effects: A meta-analysis of research studies. In L.A. Shepard & M.L. Smith (Eds.). Flunking grades: Research and policies on retention (pp. 16-33). London: The Falmer Press.

Holmes, C.T., & Matthews, K.M. (1984). The effects of nonpromotion on elementary and junior high school pupils: A meta-analysis. Review of Educational Research, 54, 225-236.

Jimerson, S.R. (2001). Meta-analysis of grade retention: Implications for practice in the 21st century. School Psychology Review, 30, 420-438.

Jimerson, S.R., Carlson, E., Rotert, M., Egeland, B., & Sroufe, L.A. (1997). A prospective, longitudinal study of the correlates and consequences of early grade retention. Journal of School Psychology, 35, 3-25.

Pagani, L., Tremblay, R.E., Vitaro, F., Boulerice, B., & McDuff, P. (2001). Effects of grade retention on academic performance and behavioural development. Development and Psychopathology, 13, 297-315.

Silberglitt, B., Jimerson, S.R., Burns, M.K., & Appleton, J.J. (2006). Does the timing of grade retention make a difference? Examining the effects of early versus later retention. School Psychology Review, 35(1).




