A point made by all the articles summarised below is that standardised testing is important for ensuring that standards are being maintained.
Davies, J., and Brember, I., 2001. A decade of change? Monitoring reading and mathematical attainment in Year 6 over the first ten years of the Education Reform Act. Research in Education No. 65, May 2001. The researchers administered a standardised reading test (the NFER Primary Reading Test) to a sample of Year 6 children each year from 1989 to 1998. Once the national tests began in 1995, they were able to compare the Key Stage 2 test results of the children in their sample with those children’s NFER test results. The Key Stage 2 results suggested rising standards: the percentages of children nationally who reached Level 4 or above in 1995, 1996, 1997 and 1998 were 48%, 58%, 63% and 65%; the children in the research sample performed rather better, at 51%, 71%, 71% and 75%. By contrast, the Primary Reading Test results for these same children showed no statistically significant improvement: the average standard scores for the four years in question were 96.78, 98.41, 97.73 and 99.57. The researchers warn that their evidence does not support the government’s claims of rising standards. Their findings are in line with the recently reported findings from the University of Durham.
Bates, C., and Nettlebeck, T., 2001. Primary school teachers’ judgements of reading achievement. Educational Psychology Vol. 21 No. 2, May 2001. This study was carried out in Australia. The reading accuracy and comprehension scores (Neale, 1988) of 108 schoolchildren aged 6-8 years were compared with their teachers’ judgements of their reading ability. Most teachers made inaccurate judgements; in particular, among teachers in state schools, ‘the extent of over-estimation among students with low achievement scores ... was in excess of 1 year of reading age’ (the picture in private schools was slightly better). The researchers point out that the implications are serious: ‘... it is therefore possible that those needing most help do not receive the intervention necessary to maximise their learning opportunities’. This study is yet another that highlights the need for teachers to rely on objective measurement rather than purely subjective judgement.
The final paper summarised here is unusual: it was published not in a journal but on the Internet (www.educationnews.org), and it is an ‘unsolicited letter’ signed by an ‘international group of researchers’ (31 in all – many of them very well known).
The writers summarise evidence that ‘Reading Recovery is not successful with its targeted student population, the lowest-performing students’. They note that Reading Recovery’s (RR’s) ‘in-house research’ often excludes the results of the weakest students and therefore presents a much healthier-looking picture than studies conducted by independent researchers. RR is also criticised for not being cost-effective, for not using standard assessment measures and for not changing ‘by capitalizing on research’. Two recommendations of particular interest are that RR should include ‘explicit instruction in phonics and phonemic awareness’ and should use ‘standardized outcome measures and continuous progress monitoring’.