
Kevin Wheldall on why Australia sucks at reading

January 21st, 2013

Kevin Wheldall is an Emeritus Professor at Macquarie University. He is a Director of the reading intervention program MultiLit and has a list of awards as long as my Dad’s arm.

Here is Kevin’s opinion on Why Australia sucks at reading.

Don’t forget Jennifer Buckingham’s op-ed piece on the same topic that I blogged about here.

 

 


NAPLAN and learning difficulties Part II

June 11th, 2012

In a recent blog post I identified what I saw as problems with the Australian national literacy and numeracy tests (NAPLAN) for Grades 3, 5, 7 and 9. A number of colleagues have questioned my position and, being the “social phobe” that I am, I feel compelled to clarify it.

There is certainly a lobby opposed to a national literacy and numeracy test full stop, but I’m not buying their arguments. A national literacy and numeracy test has many advantages. We know, for example, that teachers are not great at ranking student reading achievement (see Madelaine & Wheldall, 2007). An objective test may be helpful in identifying struggling students who would otherwise be missed if we relied on teacher judgment alone. A standardised test also allows us to see how literacy and numeracy standards change across time. For example, a Grade 3 cohort with good maths skills that became mediocre by Grade 5 might highlight a need for better maths instruction in Grades 4 and 5.

What I am arguing is that NAPLAN in its current form fails in two important ways.

First, it begins in Grade 3, by which stage most children who are going to fail have already failed. This is a problem because early intervention is crucial for children who have learning difficulties. If we take reading as the example, the effects of intervention roughly halve after Grade 1. The US National Reading Panel report (NICHD, 2000) reported that the mean effect size for systematic phonics was d = 0.56 in kindergarten, d = 0.54 in first grade, and d = 0.27 in Grades 2-6. Clearly, we have to get in early.

One might argue that schools have early identification and intervention in hand before Grade 3 NAPLAN. I strongly suspect this isn’t the case in most schools. I recently published a book chapter that examined the growth in reading skills of 61 poor readers and 52 good readers over the course of a school year. All of the poor readers were engaged in learning support interventions of some description. Yet only one of the 61 children who were poor readers at the beginning of the year made meaningful gains in reading skills; all the others essentially stayed at the same level. Figure 1 below, taken from Wright and Conlon (2012), shows standard scores on the Woodcock Basic Reading Skills Cluster (a combination of nonword and real word reading skills) taken at the beginning of each of the four school terms. The age-standardised scores of the controls (good readers) didn’t change, as one would expect. Unfortunately, the same was true of the poor readers: a child who was a poor reader at the beginning of the year remained one at the end. The conclusion was the same as that of Denton et al. (2003): normal school learning support services often lack the specificity and intensity needed to make a meaningful change in the reading skills of struggling readers. That is, they do little to “close the gap”.

Figure 1 (from Wright & Conlon, 2012). Poor reader and control group means and standard deviations (standard scores with mean of 100 and SD of 15) on the Basic Reading Cluster at the beginning of each of the four school terms.

 

The second problem with NAPLAN in its current form was discussed in the previous post: the test format does not provide data that helps teachers identify which specific parts of the reading, writing and numeracy (and arguably language) processes are going wrong and, most importantly, does not provide data that, on its own, allows the design of effective interventions. See also Kerry Hempenstall’s comments.

The answer may lie in the kind of quality response-to-intervention (RTI) approach in Australian schools that I admitted yearning for in my previous post. I would like to see every Kindergarten/Prep teacher employ the very best methods for the teaching of language, reading, spelling and maths skills/knowledge. A sample of children should be subjected to weekly tests on curriculum-based measures of the above skills so that estimates of normal growth rates can be obtained. Every child in Kindy/Prep should then be assessed on these measures weekly and their growth plotted. Any child with a lower than average growth rate should be hit with extra instruction. These children should be assessed again at the beginning of Grade 1 on standardised tests and, if they are still behind their peers, should be hit with another round of intervention using a systematic program (see examples here and here). A NAPLAN-style test in May-June of Grades 1, 3, 5, 7 and 9 could then be used to ensure that these children maintain their gains and to identify any children missed by the earlier procedure.
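For illustration only, here is a minimal sketch (in Python) of what that weekly growth-monitoring step might look like. The student names, scores and the 50% flagging threshold are hypothetical choices of mine, not part of any published RTI protocol or of the procedure described above.

from statistics import mean

def growth_rate(weekly_scores):
    # Least-squares slope of a student's weekly curriculum-based
    # measurement (CBM) scores, in score units gained per week.
    weeks = range(len(weekly_scores))
    x_bar = mean(weeks)
    y_bar = mean(weekly_scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(weeks, weekly_scores))
    den = sum((x - x_bar) ** 2 for x in weeks)
    return num / den

def flag_slow_growers(cohort, threshold=0.5):
    # Flag any student whose growth rate falls below `threshold`
    # times the cohort average; the 0.5 cut-off is purely illustrative.
    rates = {name: growth_rate(scores) for name, scores in cohort.items()}
    average = mean(rates.values())
    return [name for name, rate in rates.items() if rate < threshold * average]

# Hypothetical example: words read correctly per minute on ten weekly probes.
cohort = {
    "student_a": [12, 14, 15, 18, 20, 21, 24, 25, 27, 29],
    "student_b": [10, 10, 11, 11, 12, 12, 12, 13, 13, 13],
}
print(flag_slow_growers(cohort))  # -> ['student_b']

In practice, of course, the “normal” growth rate would come from the norming sample described above, and the cut-off for flagging a child would need to be validated empirically rather than plucked from the air.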

 

NAPLAN and learning difficulties

June 1st, 2012

May was a busy time in Australian schools with Grades 3, 5, 7 and 9 involved in the national literacy and numeracy tests (NAPLAN). The stress I see in parents and learning support colleagues during NAPLAN time often causes me to reflect on the purpose of the test(s) and how useful they are for students who have learning difficulties.

The Australian Curriculum, Assessment and Reporting Authority (ACARA) claims that the purpose of NAPLAN is to “measure the literacy and numeracy skills and knowledge that provide the critical foundation for other learning”. It also claims that the introduction of NAPLAN has led to “consistency, comparability and transferability of information on students’ literacy and numeracy skills”. (Don Watson would have a field day with these weasel words.)
NAPLAN is useful because it identifies students who are struggling with broad academic skills. Having an objective measure is important because research has shown that teachers are not particularly accurate at identifying struggling students. For example, Madelaine and Wheldall (2007) randomly selected twelve students from 33 classes and asked their teachers to rank the students based on perceptions of reading performance. They also assessed the students on a passage reading test. Only 50% of teachers identified the same poorest reader as the objective test, and only 15% of teachers identified the same three lowest-performing readers as the test. We can certainly argue about whether NAPLAN in its current form is the most effective and/or cost-effective method of gathering data on student achievement; however, it seems that we cannot rely on teacher judgment alone.
On the downside, NAPLAN represents a test, not an assessment. As all good clinicians and educators know, there is a difference, or should be, between testing and assessment (see here and here). Assessment is a process that starts with the history and clearly defines the presenting problem or set of problems. The clinician develops an hypothesis or set of hypotheses on the basis of the history. They then gather data (e.g., observations, interviews, tests, and base rates) designed to shed light on those hypotheses. It is worth noting that a good clinician looks equally for data that confirms and disconfirms the initial hypotheses. Good assessment should lead directly to treatment and/or appropriate teaching for the presenting problem(s) and provide pre-treatment data that allows monitoring of progress. Testing, on the other hand, simply tells us how well or poorly a student performs on a particular test. For example, a student with a low score on a reading comprehension test can be said to have poor reading comprehension. The problem with tests is that they don’t tell us why a student performed poorly and, if they measure a complex process like reading comprehension, writing, or mathematical reasoning, they don’t tell us which component of that complex process is weak.
 
That is precisely the problem with NAPLAN. The NAPLAN tasks are complex and provide little information useful for designing interventions for students with learning difficulties and for monitoring response to intervention. An example from NAPLAN illustrates this point.
A mathematics question asked: $4 is shared equally among 5 girls. How much does each girl get? An incorrect response tells us that the student can’t do the task. So what? The child’s teacher probably knew that already. What would be useful to know is whether the student failed the item because (1) they couldn’t read the question, (2) they didn’t know what ‘shared’ or ‘equally’ meant, (3) they didn’t recognise that the item required a division operation, (4) they didn’t know to convert $4 to 400c to make the division easier, (5) they didn’t know the fact 40 divided by 5, or (6) they knew all of the above but have attention problems and got ‘lost’ during the multi-step division process.
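(For readers following along, the intended computation is simply: $4 = 400c, and 400c ÷ 5 = 80c, so each girl gets 80 cents. The point is that a wrong answer tells us nothing about which of the steps above broke down.)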
Similarly, if a student performs poorly on the writing component of NAPLAN, no information useful for treatment is obtained. The test doesn’t tell us whether the child (a) has a form of dyspraxia and struggles with handwriting, (b) has an impoverished spelling lexicon, (c) has poor knowledge of sound-to-letter conversion rules and therefore struggles to spell unfamiliar words, (d) has poor knowledge of written grammatical conventions, (e) has poor knowledge of written story grammar, (f) has oral language weaknesses in semantics and/or grammar, (g) has poor oral narrative skills, (h) has attention problems and therefore can’t keep his you-know-what together while doing a complex task, or (i) has autism and therefore doesn’t give a toss about the writing topic. The list could go on.
Unfortunately, NAPLAN provides none of this specific information. It simply tells us how badly the child performed relative to some arbitrary benchmark. So where does this leave us? Or more to the point, where does it leave students who have learning difficulties?
All of which leads me to think that NAPLAN is probably not all that useful for students who have learning difficulties, or for the parents, clinicians and teachers who work with them. It also leads me to yearn even more for a response-to-intervention approach in which schools recognise learning problems early in a child’s school career, assess to define the problem(s), and provide evidence-based interventions that target them.
References
Madelaine, A., & Wheldall, K. (2007). Identifying low progress readers: Comparing teacher judgment with a curriculum-based measurement procedure. International Journal of Disability, Development and Education, 52, 33-42.