RRF Newsletter 49
A Prototype for Teaching the English Alphabet Code
Professor Diane McGuinness

In mid-19th-century England, a new movement was underway that anchored reading instruction to the sounds of the language instead of to letters and letter patterns, the outcome of applying insights from linguistics to the teaching of reading. The pioneers of this new ‘linguistic-phonics’ movement were Isaac Pitman, A. J. Ellis, and Nellie Dale. Dale’s programme (1898) was particularly successful and sold well on both sides of the Atlantic. Universal education brought this important advance to a halt. Self-appointed education gurus decreed that teaching whole words ‘by sight’ would work just fine and make it easier to teach large classes from the front of the room. The century of the whole-word method was launched, and the alphabet code was soon to vanish without trace.

The ‘Look-and-say’ (flash card) method was introduced in the 1920s, and was gradually overtaken by meaning-based, whole-word programmes produced by educational publishing houses.  They came to be known as Basal Readers in the US (‘basic’ and ‘comprehensive’) and Reading Schemes in the UK, and soon dominated the market across the English-speaking world.  By the 1960s, it was estimated that basal readers were used in 95% of American classrooms.  Meanwhile, phonics advocates were fighting a rear-guard action with no success. 

In the mid-1960s, modern reading research was launched in an experiment to end all experiments.  Educational psychologists Guy Bond and Robert Dykstra spearheaded an effort to determine once and for all which reading methods worked best.  Their study included over 9,000 children from school districts scattered across America.  Each project director was charged with comparing a phonics or language-based programme to the basal programme currently in use. 

At that time, a typical basal reader lesson consisted of 30 minutes or so of vocabulary work in which the teacher explained the meaning of the ‘sight words’ for the lesson (words the children already understood). The children memorized the sight words, then read little readers with boring, repetitive texts: “See Janet, see. Come, come, look at me. I can swing. See me, said John.” There were occasional lessons on phonics, but these were largely incomprehensible. Children were asked questions like: “What is the short sound of O?” Sight word memorization proceeded so slowly that children learned only 1,500 words by the end of the 3rd year. Spelling didn’t materialize until the 2nd year, and writing of any type was nowhere to be seen.

One would imagine that almost any kind of phonics programme would be better than this. But when Bond and Dykstra began collating and analyzing their data, something incomprehensible happened. They reduced the data to classroom means prior to computing the statistics. There are fundamental mathematical and statistical reasons why this should not be done, but apart from this, it changed the focus of the study from a set of experiments comparing reading methods to experiments comparing classrooms (teachers’ skill, perhaps?). As a result, the outcome was inconclusive. No one method was consistently better than any other on any test.
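
To see why the unit of analysis matters, here is a minimal simulation in Python, with made-up numbers rather than Bond and Dykstra’s actual data. Averaging pupils into classroom means shrinks the effective sample from hundreds of children to a handful of classrooms, so a genuine difference between two methods can sink into the classroom-to-classroom (teacher) variation:

```python
import random
import statistics

random.seed(1)

def t_stat(x, y):
    """Welch-style t statistic for the difference between two samples."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = (vx / len(x) + vy / len(y)) ** 0.5
    return (statistics.mean(y) - statistics.mean(x)) / se

pupil_scores = {"A": [], "B": []}
class_means = {"A": [], "B": []}
for method, method_effect in (("A", 0.0), ("B", 3.0)):  # method B is truly better
    for _ in range(20):                       # 20 classrooms per method
        teacher = random.gauss(0, 5)          # classroom/teacher variation
        pupils = [100 + method_effect + teacher + random.gauss(0, 15)
                  for _ in range(30)]         # 30 pupils per classroom
        pupil_scores[method].extend(pupils)
        class_means[method].append(statistics.mean(pupils))

print("t over pupils (600 per method):    ",
      round(t_stat(pupil_scores["A"], pupil_scores["B"]), 2))
print("t over class means (20 per method):",
      round(t_stat(class_means["A"], class_means["B"]), 2))
```

The same data, analyzed at the classroom level, typically yield a much smaller test statistic, so a real method effect is routinely declared ‘not significant’ -- essentially the inconclusive outcome described above.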

The apparent failure of this study had a profound impact on the future direction of reading instruction and research.  The definitive study to end all studies seemed to show that:  1) No reading method is better than any other reading method.  2) The teacher is more important than the method.  If the method doesn’t matter, there is no reason to change.  If no method is better than any other, OR the ‘teacher effect’ is so powerful it obliterates the impact of a method, then no further research of this type should be funded. 

A small group of educators had a different reaction.  If no one method is better than any other, then why shouldn’t learning to read be enjoyable?  If, as they believed, reading was simply an extension of natural language, children shouldn’t have to endure those dreadful readers when there’s so much good children’s literature around.  Instead, children should learn to read ‘naturally’ by reading books written in natural language.  They should learn to write by doing lots of creative writing, and while they’re at it, invent their own spelling system as they go.  This new movement (whole language/real books) took the English-speaking world by storm and has the dubious honour of being the last whole-word method of the 20th century.  (It also has the dubious honour of producing the highest functional illiteracy rates in the history of national testing.) 

As funding for applied research dwindled, research on reading methods all but disappeared. In 2000, the National Reading Panel released a report on their review of the research on reading instruction since 1970. The review covered classroom methods, remedial methods, and phoneme awareness training, plus fluency, vocabulary, and comprehension. What they found was shocking. After scouring the databases, they located 1,100 studies on reading instruction. The papers were carefully screened to meet basic requirements for a valid scientific study: an experimental research design, a control group, proper statistics, and publication in a peer-reviewed journal. Only 38 studies met these minimal requirements, and half of these were remedial programmes. This means that since 1970, only 20 bona fide research studies on the efficacy of beginning reading methods have been published. It’s no wonder there’s so little understanding about how to teach reading.

During the intervening thirty-five years, research funding was redirected towards a more clinical focus, and targeted to ‘basic’ or ‘pure’ research, meaning that researchers don’t have to concern themselves with the applied relevance of their work. Perhaps there’s a way around the missing research, and I think I have found it. In part, it has to do with going back to basics, to the history of how writing systems work, how they were designed, and how they were supposed to be taught at the time they were designed. In part, it has to do with reanalyzing the data from Bond and Dykstra’s study to show what they really found. In part, it comes from studying the research from non-English-speaking countries with an alphabetic writing system. I am tracing this journey in a new book, and I can only reveal some footprints. You will have to take my word that there is convincing proof for what I’m about to say.

Lessons from the Past

We have a 5,000-year history of the development and design of writing systems, which provides conclusive proof of the following:

1. No writing system, living or dead, was ever based on the whole word. All writing systems are sound-based systems, not meaning-based systems. This is true even though meaning and phonetic units can overlap, as they do in one-syllable words written in a syllabary writing system.

2. Humans are constitutionally incapable of memorizing more than about 2,000 word-symbol pairs. This is an ultimate limit, taking years of study. The Japanese have 1,800 word symbols (kanji) in their basic writing system, and it takes children about 12 years to learn them. Mastering an additional 2,000 kanji is the mark of a highly educated person and can take up to a decade more of study. A good college dictionary contains 250,000 words. In short, no one can ever be a whole-word (sight word) reader.

3. It would have been impossible for early scholars to design a writing system without a thorough knowledge of the phonemes in their language. Phonemes play a major role in setting up the system. The ideas proposed by some reading researchers -- that writing systems ‘evolved’ from word signs to syllable signs to phoneme signs because scholars were unaware of phonemes, and that children’s awareness of speech sounds mimics this in development -- are false.

4. Four (and only four) phonetic units have ever been used in the writing systems of the world. The choice of unit is based on the phonetic and grammatical structure of a particular language. These are the syllable, the consonant-vowel unit (CV diphone), consonants alone, and phonemes (consonants + vowels).

5. No writing system ever mixes these phonetic units. A reading method that teaches more than one of these phonetic units is essentially teaching two or more writing systems simultaneously. This will cause enormous confusion, making it difficult to impossible for many children to learn to read.

6. A proper writing system has to be comprehensive in order to work. It should be possible to write every word, every name, and every potential new word with relative ease. Because of this, writing systems are designed rather quickly -- all of a piece.

7. Evidence from the schools that were established at the time a new writing system was designed provides important lessons about how the authors of the system thought it should be taught. The clearest example comes from Sumer around 3200 B.C., which I describe below.

This history, based on over 5,000 years of evidence, shows that no whole-word method can possibly work, and no scholars ever thought that it could. No ‘eclectic’ or ‘balanced’ methods (multiple sound units) will ever work either.

The Prototype

At this point, I want to begin assembling what I call ‘The Prototype,’ the essentials of good reading instruction based on what we know from the past about how writing systems (in general) should be taught, and what we know from the present about how a particular writing system should be taught.  In essence, the Prototype functions as a ‘predictor.’  If the elements that constitute the Prototype are correct, then the methods most similar to it ought to produce the best results in experimental research.     

Five thousand years ago, the character and design of the lessons in Sumerian schools provided a basic formula for teaching any writing system, no matter which phonetic unit is involved.     

1.  The complete structure of the writing system is worked out (or thoroughly understood) before a method of instruction is developed. 

2.  Teach the specific sound units that are the basis for the code.  Don’t teach other sound units that have nothing to do with the code.

3.  Link each sound to its arbitrary, abstract symbol.  These symbols constitute the code.

4.  Teach the elements of the system in order from simple to complex.

5.  Make the students aware that a writing system is a code and that codes are reversible (decoding/encoding) by linking writing/spelling to reading (copy-recite). 

6.  Design lessons to ensure that spelling and reading are connected at every level of instruction via looking (visual memory), listening (auditory memory) and writing (kinaesthetic memory). 

It is hard to think of a better list of guidelines.  Unfortunately, education practice has strayed so far from its roots that hardly any educators or classroom teachers adhere to even one of these principles, let alone all six.

Why the English Alphabet Isn’t Like Any Other Alphabet

The Sumerians had a ‘transparent’ syllabary writing system, one syllable symbol for each legal syllable in the language.  The English alphabet code lacks a one-to-one correspondence between sound and symbol.  We call this kind of writing system ‘opaque.’  This is due to an accident of history in which Norman French and classical Latin merged with the English vernacular, with the result that two alien spelling systems were superimposed on the original Anglo-Saxon system.  No writing reform ever took place to correct this problem.  Samuel Johnson standardized the spelling for words in 1755, but he did not standardize the spelling for phonemes.  If he had, I wouldn’t be writing this paper.  As a result, there are multiple ways to spell most phonemes, and multiple ways to read most letters and digraphs, and these multiple ways don’t match.     

It’s difficult for us to imagine what it’s like to have a transparent (or nearly transparent) alphabet code, like those in Italy, Spain, Germany, Finland, Sweden, and Norway. Teaching a transparent alphabet is incredibly easy, because it’s transparent how the writing system works. The sound /b/ is always spelled b, and the letter b is always decoded /b/, and so on through all the phonemes in the language. With only one spelling (or nearly so) for each sound in the language, if a child can ‘sound out’ a word, he will always be able to spell it correctly. Learning this is so easy that children start to read late (age 6 or older) and finish early, by the end of the school year. So easy, in fact, that no country with a transparent alphabet tests reading skill by decoding accuracy. Everybody can decode. In English-speaking countries, tests of decoding accuracy (word recognition, word attack) are the major tests (often the only tests) that educators and researchers rely on to measure reading skill and to define ‘dyslexia.’

Some children in countries with transparent alphabets do have reading problems, but these have to do with fluency and comprehension. Yet even this is relative. Normal readers from Salzburg were compared to normal readers from London on tests of reading accuracy and speed. Seven-year-olds from Salzburg read as fast as the 9-year-olds from London, making half the number of errors. The Austrian 7-year-olds had had one year of reading instruction, the English 9-year-olds four or five. Further, when the worst readers in the entire city of Salzburg (incredibly slow) were compared to ‘dyslexics’ in London (incredibly inaccurate), the Salzburg children read comparable texts (translations of the same words) at twice the rate of the English-speaking children, with 7% errors. The English children not only read much more slowly, but also misread 40% of the words. When reading skill is so entirely tied to a particular writing system, there can be no validity to the notion that poor reading or ‘dyslexia’ is a property of the child, or to the mistaken belief that “there will always be poor readers.” A ‘poor reader’ is a statistical concept, not a reality.

There is more. Phoneme awareness is highly correlated with reading skill (decoding accuracy) in English-speaking countries, and is considered the major predictor of reading acquisition. Yet phoneme awareness is completely unrelated to reading accuracy in countries with a transparent alphabet. Think about this. How can phoneme awareness be so important in English-speaking countries when it doesn’t matter anywhere else? It certainly isn’t because we have to learn an alphabet. These other countries have alphabets too. If phoneme awareness ‘deficits’ are the hallmark of ‘dyslexia,’ as many researchers claim, and the diagnosis for ‘dyslexia’ is poor decoding, where are all the ‘dyslexics’ in the countries listed above? There aren’t any by this definition, though reading researchers in these countries are doing their best to create some. It is currently fashionable to call Salzburg’s slow readers ‘dyslexic,’ even though they read twice as fast as English ‘dyslexics,’ and can decode and spell perfectly. Sweden has a 3.4% functional illiteracy rate among 16- to 25-year-olds -- statistically equivalent to zero (the rate is around 11% in Canada and 26% in the US). Yet Sweden has an active ‘dyslexia’ society.

There’s a lesson here. Our reading problems are largely a product of our opaque alphabet code and the way it is taught, so here’s one part of the solution. As a transparent alphabet is easy to learn, why not create an ‘artificial transparent alphabet’ to teach beginning readers? This would consist of the 40+ English phonemes and their most common (most probable) spellings. This way young children would start off with the same advantage as their European counterparts. They could easily grasp the code nature of a writing system, and the fact that reading and spelling are reversible processes, the primary requirement of a code. This is certainly not a new idea. It was the fundamental breakthrough of Pitman, Ellis, and Dale in the 19th century. Their programmes embody this principle, plus a number of other insights. Unfortunately, we can never know how well they worked, because there were no standardized reading tests in the 19th century.

On the Trail of More Clues to the Prototype

This brings us back to the study of Bond and Dykstra and their fatal mistake. I decided to reanalyze the data. With 9,000 children in the study, the data will approximate the normal curve, and the truest measure of these data will be the grand mean, or average. This allowed me to compare programmes and determine what happened. Two methods in the study used an artificial transparent alphabet and were similar to the Prototype. One was the initial teaching alphabet (i.t.a.), designed by James Pitman (Isaac Pitman’s grandson), and the other was the Lippincott programme, named for its American publisher. i.t.a. uses an artificial script for a portion of English phonemes, which adds an extra processing step. Children are obliged to unlearn something and relearn it in a new way, which is not good pedagogical practice. Nellie Dale pointed out long ago: “Never teach anything you have to discard later.” As a result, many children didn’t complete the transition to normal print by the end of the school year, and this impacted test scores.
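
To make the arithmetic of the reanalysis concrete, here is a minimal sketch (with made-up numbers) of the grand mean: it weights every pupil equally, which is not the same as simply averaging the classroom means when class sizes differ.

```python
# Hypothetical (classroom mean score, class size) pairs.
classrooms = [(98.0, 18), (104.0, 30), (101.0, 25)]

grand_mean = (sum(m * n for m, n in classrooms)
              / sum(n for _, n in classrooms))                  # weights every pupil equally
mean_of_means = sum(m for m, _ in classrooms) / len(classrooms) # weights every class equally

print(f"grand mean over pupils:  {grand_mean:.2f}")    # 101.49
print(f"mean of classroom means: {mean_of_means:.2f}") # 101.00
```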

The Lippincott programme was the undisputed star in this study. The children were 6 months ahead of their basal reader counterparts, the other phonics programmes, and national norms on nearly every test. These included word recognition, word attack, spelling, reading comprehension, and fluency. Here are the characteristics of this programme:

1.  It uses an artificial transparent alphabet with a main spelling for each of the 40+ sounds in English. It introduces some alternative spellings as well. 

2.  Lessons are anchored in the sounds (phonemes) in the language, and reading is taught from sound to print.  Children learn that the letters are symbols for the sounds in their own speech, and that the number of these sounds is finite (40).  There are sound-targeting stories which feature a particular phoneme. 

3.  Children learn to segment and blend phonemes in words for both reading and spelling. 

4.  Children learn letters by writing them, not from looking at them or from letter tiles.  They say the sound the letter(s) stands for as they write it (not the letter name). 

5.  Reading and spelling are integrated.  

6.  Reading materials are designed to correspond to the level the child has achieved.   

Another revelation in my analysis was the very poor showing of the basal reader children on all types of decoding tests, including tests featuring irregularly spelled words (sight words), the very words they had been memorizing over the course of the year.  Here is more evidence that no one can learn to read by memorizing words ‘by sight.’    

Since this study, several research studies have added new dimensions or new elements to The Prototype. In 1966, Chall and Feldman discovered during classroom observations that what classroom teachers said they did bore only a vague resemblance to what the researchers saw them do and recorded on their checklists. This is perfectly understandable. It’s hard to monitor your own behaviour. Nevertheless, this discovery showed us that the only way to measure the impact of the teacher in the classroom, and disentangle the teacher from a method, is to sit in the classroom, hour by hour, for weeks or months, and record what is going on. This is the only way to answer the question: is there any relationship between what the children are learning in the classroom and reading skill measured on standardized tests, and, if so, what is it?

Three large-scale studies have been carried out on this topic, beginning with Carr and Evans in Canada, in 1985.  All three studies produced identical results.  Very few ‘literacy’ activities for young children (ages 5 to 8) have a positive impact on reading skill.  Those that do are: learning the phonemes in the language and how they are represented by letter symbols, segmenting and blending sounds in words, plus the amount of time spent writing (all types).  There was some evidence that silent reading (not reading aloud, or group reading) is helpful, especially if the whole class is reading silently at the same time.  As a large number of classrooms, using a variety of different methods, were observed in these studies, the consistency of the results is rather remarkable.   

They discovered much more. Many literacy-type activities had no impact on reading skill (neutral), and some had a strongly negative impact, meaning that the more time the children spent doing that activity, the poorer their reading scores were. The strongest negative predictor was teacher-directed language arts lessons involving vocabulary training and reading stories. This was a powerful effect in all three studies, and negative correlations ranged as high as r = -0.80 (where -1.0 is a perfect negative correlation). Time spent memorizing ‘sight words’ was also a strong, negative predictor. Activities that had no impact, positive or negative (correlations at zero), were ‘pretend reading’ or ‘group reading,’ learning ‘concepts of print’ such as print direction and how to turn pages, tasks involving letter names, time spent on larger phonetic units, such as clapping out syllable beats, and time spent on auditory phoneme awareness tasks (no letters). The message is clear: if your goal is to teach the alphabet code, then teach the alphabet code, and get on with it.

The positive impact of writing was seen in all three studies, and was something of a surprise. Experimental studies have shown that copying letters is the best way to learn them. Not only this, but copying out spelling words halves the time needed to learn them compared to using letter tiles or a computer keyboard. There is also evidence that learning to spell produces higher scores on a reading test than the same amount of time spent learning to read, a result that confirms Montessori’s insight about the greater generality (transfer) of spelling over reading.

On the basis of these and other studies, the Prototype can be updated to be more specific. In addition to the original framework, here’s what a successful reading/spelling programme for teaching the English alphabet code should adhere to:

• NO SIGHT WORDS (except for truly undecodable words)

• NO LETTER NAMES

• A sound-to-print orientation. Phonemes, not letters, are the basis for the code.

• Phonemes are finite. They provide an endpoint or ‘pivot point’ which allows an opaque writing system to reverse. Learning the ‘sounds’ of all possible spellings does not.

• Teach phonemes only and no other sound units.

• Begin with an artificial transparent alphabet: a one-to-one correspondence between 40+ phonemes and their most common spelling.

• Teach children to identify and sequence sounds in real words by segmenting and blending, using letters. Don’t do this in the auditory mode alone.

• Teach children how to write each letter. Integrate writing into every lesson.

• Link writing (spelling) and reading to ensure children understand how the code works.

• Teach spelling alternatives (“there’s more than one way to spell this sound”), not ‘reading alternatives.’

• Spelling should be accurate or, at a minimum, phonetically accurate (all things within reason).

The final step is to introduce the entire advanced spelling code -- the 134 remaining common spellings beyond the basic code level, a process that has yet to find its way into any reading programme.  This issue has received no attention from researchers so far.   

The National Reading Panel Results

I have constructed a fairly detailed Prototype despite the lack of direct evidence from experimental research on reading methods.  The interesting question is whether the Prototype is a ‘good fit’ to the most effective programmes in the NRP database.  For the most part, these studies compared a phonics-type programme to some other method, most often ‘real books’ or a well-known reading scheme.  In some studies, the contrasting programme was another type of phonics, such as ‘analytic phonics,’ a method employing multiple types of phonetic units.   

The studies were evaluated by the NRP using the technique of meta-analysis, which is based on ‘effect sizes.’ An effect size is a statistical value that allows you to compare the size of the difference between two methods on any similar type of test in standard deviation units. A meta-analysis is essentially a grand average of individual effect sizes. Here’s a simple guide to interpreting an effect size. ES values around .30 or less are marginal, and probably not significant. Values above .50 start to become interesting, indicating that one method is clearly superior (significantly different). A value of 1.0 means the effect is large, equal to a one standard deviation difference between the two methods. On a standardized reading test, this would be the difference between scoring at the 50th percentile (100 standard score) and the 15th (85 standard score). Effect sizes can go higher than this.
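
As a sketch of the arithmetic behind this guide, assume a standardized test with mean 100 and SD 15, and an effect size expressed in standard deviation units (the NRP’s exact computations may differ in detail):

```python
from statistics import NormalDist

def describe(es, mean=100.0, sd=15.0):
    """Express an effect size as a standard-score gap and a percentile."""
    lagging_mean = mean - es * sd             # the lower group's average score
    percentile = NormalDist().cdf(-es) * 100  # where that score falls
    return lagging_mean, percentile

for es in (0.30, 0.55, 1.00):
    score, pct = describe(es)
    print(f"ES {es:.2f}: lower group averages {score:.0f}, "
          f"about the {pct:.0f}th percentile vs the 50th")
```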

As noted above, the NRP located only 38 bona fide studies, and half were remedial programmes. With this group of studies excluded, the combined effect size for reading test scores for beginning readers is .55 in favour of phonics over real books/reading schemes. This value goes higher still if less successful phonics methods are excluded. For example, the average effect size for ‘rhyme-analogy’ programmes was .28.
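
In spirit, the grand averaging works like a weighted mean of the individual study effect sizes. Here is a minimal sketch with hypothetical values (real meta-analyses usually weight by inverse variance rather than raw sample size):

```python
def combined_es(studies):
    """Sample-size-weighted mean of (effect size, n) pairs."""
    total_n = sum(n for _, n in studies)
    return sum(es * n for es, n in studies) / total_n

# Made-up studies, not the NRP's actual data.
studies = [(0.70, 120), (0.45, 80), (0.28, 60)]
print(f"combined ES: {combined_es(studies):.2f}")   # 0.53
```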

I identified the programmes in the NRP database that were the closest fit to the Prototype and computed independent effect sizes. I also computed effect sizes for the i.t.a. and the Lippincott versus basal reader comparisons in Bond and Dykstra’s report, as this study didn’t make the NRP’s 1970 cut-off. i.t.a. was not particularly successful, for reasons given above. Effect sizes reflect this: word recognition (.49), phonics knowledge (.31), spelling (.11), and comprehension (.03). The Lippincott programme had a much higher success rate, with effect sizes as follows: word recognition (1.12), phonics knowledge (.62), spelling (.61), comprehension (.57). These values are the benchmark for this programme, as they’re based on over 1,000 children. Several small-scale studies using Lippincott were in the NRP database. These produced somewhat lower effect sizes, though still impressive.

Another method that fits the prototype is the Open Court programme.  Only one study on Open Court appeared in the NRP database.  Here is Barbara Foorman’s description of this programme:

“Components of the first grade program include: 1) Phonemic awareness activities during the first 30 lessons (10-15 minutes daily); 2) Forty-two sound/spellings are introduced in the first 100 lessons, one per day in lessons 11-30, and at a slower pace thereafter; phonics principles are reinforced through sound/spelling cards, alliterative stories, and practice stories whose vocabulary is tightly controlled for the sound/spelling just taught; 3) Blending is regarded as the key strategy for applying the alphabetic principle, and, therefore, 8-10 new words are blended daily; 4) Dictation activities move from letter cards to writing words sound by sound, to whole words (by lesson 17), to whole sentences (by lesson 27); 5) Shared reading of Big Books; 6) Text anthologies (with uncontrolled vocabulary), plus workbooks are introduced in the middle of first grade, when all sound/spellings have been introduced; and 7) Writing workshop activities are available in individual and small group formats.” (Foorman et al., 1997).

Unfortunately, this study had serious design flaws. Only the poorest readers in each class were tested. These children also received 2½ hours of tutorials each week, in which the tutorial might or might not match the classroom programme, and this was neither controlled nor accounted for. As a consequence of these and other problems, effect sizes (first two years combined) were not impressive: word recognition (.53), spelling (.37), comprehension (.34). This was not a fair evaluation of Open Court, and better research is called for.

The last programme is Jolly Phonics, developed by Sue Lloyd.  The NRP included only one JP study in its database (Stuart, 1999).  I have added other studies that appeared after the NRP’s published report, plus a study they missed.  This programme goes beyond the Prototype in many respects, and represents what can happen when popular myths about classroom instruction are challenged.    

First to go was the myth that reading is hard to teach. Second to go was the notion that a linguistic-phonics programme can’t be taught to the whole class at the same time. Third to go was the age barrier. Jolly Phonics is taught to four-year-olds. Fourth to go was the belief that young children can’t ‘pay attention’ for more than about 10-15 minutes. Fifth to go was the related belief that if young children are kept at a task for longer than about 15 minutes they become bored and frustrated, and are unable to learn. Sixth to go was the idea that teachers need extensive training to teach the alphabet code properly.

Lloyd’s initial goal was to reduce the lessons to the essential elements and present them at an optimum rate -- as quickly and in as much depth as possible. Undoubtedly, her greatest insight was in figuring out what these elements are. She discovered that young children forget when lessons are spaced too far apart, necessitating constant reteaching and review. It transpired that very young children can be taught to read in a whole-class format if three conditions obtain: 1) The lessons are fun and stimulating, and engage all the children. 2) There are sufficient backup materials for individual work to support what is taught in the lessons. 3) Parents are involved enough to understand the programme and know how to support their child at home. When lessons are enjoyable, and when children see that they and their classmates are actually learning to read, they have no trouble paying attention for more than 15 minutes.

Lloyd found ingenious ways to engage the whole class and keep them on task.  She invented simple action patterns to represent each phoneme.  Children say each phoneme aloud accompanied by the appropriate action.  Apart from being fun for the children, the action patterns fulfil a number of functions.  They help anchor the speech sounds in memory.  Because the actions are visible to everyone, including the teacher, they ensure that all children are engaged (no daydreamers allowed).  In this sense, they function as gentle ‘peer support’ for everyone to get on board and learn quickly.  Of course, it is possible that these actions aren’t essential to the success of the programme.  Research is needed to sort this out. 

Jolly Phonics proceeds rapidly. Children learn about one phoneme per day, along with the accompanying actions and their letter symbols. They get handwriting training almost from the beginning, and are soon able to write simple words made up of the phonemes taught so far. The basic ‘transparent alphabet’ is taught in 11 to 12 weeks, about 60 hours of direct instruction. After this, children move on to simple ‘phonics’ books, and learn spelling alternatives (21 spelling alternatives are taught). Little teacher training is necessary. There is a handbook with brief, clear instructions, and an excellent video.

None of the JP studies were exactly alike (a problem with all this research). The study truest to Lloyd’s intentions was Johnston and Watson’s (1997), which was carried out at Lloyd’s school. The children were matched on a wide range of skills (IQ, phoneme awareness, etc.) to a control group in Scotland who were taught ‘analytic phonics,’ the traditional Scottish method. The effect sizes for reading were strong in favour of JP: .90 on the British Ability Scales immediately after training, 1.0 at a one-year follow-up, and 1.10 a year after that, showing the lasting impact of the programme.

Stuart’s famous ‘Docklands’ study was carried out in an impoverished area of London’s East End. The children arrive at school with little or no spoken English (53% of the children in this study knew no English words). Stuart followed the JP format closely, and children had about 60 hours of instruction. The comparison group was from a similar school that was using real books. Despite the fact that these children had such poor English language skills, the results were much the same as Johnston and Watson’s, and held up well over time.

At the first post-test, when the children were 5½ years old, the effect sizes were: BAS reading (1.2), Young reading (.63), nonword decoding (1.5), Willows spelling (1.4). At the second post-test, one year later, these values were, respectively, .71, .62, .74, and .86. These children scored well above national norms in reading and spelling, and made these outstanding gains despite the fact that their vocabulary scores lagged far behind British norms.

In a study by Sumbler and Willows in Toronto, the duration of the JP lessons was shortened considerably, and lessons were extended over the entire school year. The problem was not in the study design, but in the reluctance of the kindergarten teachers to teach at this pace. Lessons were reduced to 20 minutes or less, and total time was nearly cut in half. Sumbler and Willows are the authors of one of the observational studies reported earlier. During these observations, they discovered that the JP lessons were intermingled with a variety of irrelevant language activities, reducing time still further. The impact of the slower delivery, plus the lack of focus, is shown by lower effect sizes on the Woodcock reading tests: word recognition (.52), word attack (.68), and spelling (.44). These results provide scientific support for Lloyd’s discovery that learning should be fast and intense for maximum effect.

Johnston and Watson carried out a study using a programme designed by Watson called Fast Phonics First (FPF). It is similar to JP, but departs from it on a number of levels. The study involved all beginning readers (337) in the county of Clackmannanshire in Scotland. Children in the FPF classes were more likely to be from impoverished families (more free lunches and clothing allowances). The remaining children constituted the control groups. In the FPF classes, phonemes were introduced in the same order as in JP, but at a slower rate (just under one per day). Lessons were shorter, more spread out, and took less total time (20 minutes daily for 16 weeks, a total of 26 hours). No action patterns were used. Lessons on phoneme identification (initial, middle, final position) were taught with Lloyd’s ‘Finger Phonics’ book. Segmenting and blending were taught with magnetic letters on a large board. Children gradually transitioned to writing over the course of the 16 weeks. No consonant blends (CCVC/CVCC words) or spelling alternatives were taught.

There were two control groups. Both received the typical Scottish ‘analytic phonics’ instruction, but with constraints. They learned the same letter-sound correspondences in the same order, and used the same practice words as the FPF groups, but at a much slower rate. Both control groups learned one phoneme per week (16 total) in word-initial position only. They had handwriting training from the beginning.  

The first control group (Analytic ONLY) did numerous activities during the week to reinforce memorization of the letter-sound pair, including colouring and drawing activities. The second control group (Analytic + PHON) split the 20 minute time period between learning the 16 letter-sound pairs and phonological exercises (auditory only). These included onset-rime and phoneme segmenting and blending. The colouring and game-type activities were dropped. 

In the comparisons between FPF and Analytic ONLY, effect sizes were large: BAS reading (.91), Schonell spelling (1.45). ES values were slightly lower in the comparison between FPF and the Analytic + PHON group: reading (.85), spelling (1.17). The children were retested one year later. By this time, the entire county had switched to FPF, and this eliminated the group differences in reading and comprehension. The original FPF group maintained an advantage in spelling.

It’s informative to compare the UK studies where possible, as they used similar tests reported in age-equivalent scores. (The Canadian researchers used different tests reported in standard scores.) I compared the JP and FPF programmes (Johnston and Watson 1997 and 2000) at post-testing, when children were 5½ years old. JP children scored 16 months above national norms in BAS reading. The FPF children scored 6 months above norms in reading and spelling (Schonell). The FPF children gained over the following year (age 6½), and now scored 10 months above norms in reading, 12 months in spelling, and 6 months in comprehension. There were no data for the JP children at this age. However, at the end of the third year (7½ years), JP children were 9 months ahead of norms in reading comprehension.

Unfortunately, Stuart reported the data mainly as raw scores. However, age-equivalent scores were reported for the final testing at age 6½. Recall that the children in the JP and Big Books classrooms were largely non-English speaking. For BAS reading, the JP classes were 7 months above national norms, and were 9 months ahead of the children in the Big Books classroom. On Schonell spelling, JP classes were 4 months above national norms, and 12 months ahead of controls.   

It is clear that the JP training at Lloyd’s school (Johnston and Watson 1997) produced much greater gains, but we don’t know why. Is this a function of the speed and intensity of the instruction, a function of curriculum variations, a function of different populations, or all three? These are important questions for future research.  

So far, no programme fits the Prototype in providing adequate and complete spelling instruction (mastery of the 134 additional common spellings for English phonemes and Latin suffixes). Despite this, the JP spelling scores were surprisingly high, certainly much higher than national norms. As norms are based on the current status quo, this shows us what a parlous state spelling instruction is in. Merely teaching the code the right way around, getting the logic straight, and adding 20 or so of the 134 spelling alternatives that need to be taught, makes an enormous difference.   

The three UK studies also measured phoneme awareness, revealing an enormous impact of the JP and FPF programmes on segmenting skills. The overall average ES value was 1.65. Johnston and Watson found that the impact of FPF was far greater than that of isolated phonological awareness training (the Analytic + PHON group). In this comparison, the effect size was .73 in favour of FPF.

This is a serious (fatal) blow to the advocates of supplemental phonological awareness programmes. And it is supported by the findings of the NRP in their survey of phoneme awareness training studies. The NRP located nearly 2,000 studies on ‘phoneme awareness.’ Only 52 studies passed the basic screening. They looked at several contrasts, with important results. Training larger phonological units has no impact on reading or spelling skills. Phoneme awareness training in the auditory mode only (no letters) is not much better (effect sizes around .35). If reading and spelling are measured by standardized tests, which is rarely the case in these studies, the effect sizes are similarly low. Of the studies I could locate which did use standardized reading and spelling tests, the impact of phoneme awareness training on reading was negligible (ES values ranging from .30 to .40).

More surprising, phoneme awareness training didn’t impact phoneme awareness skill any better than a good linguistic-phonics programme. The average effect size of phoneme awareness training on phoneme awareness was .85 across all studies. After subtracting the weaker studies from the database (remedial studies, foreign language, brief duration, small sample size, etc.), this value didn’t increase by much (ES = 1.10). This isn’t close to 1.65 (JP + FPF). 

This is unassailable evidence that a phoneme awareness training component (even a good one) provides no extra advantage over a good linguistic-phonics programme. Here’s the message: If phoneme analysis/synthesis is integrated into reading and spelling lessons from the outset, this has a much greater impact on phoneme awareness skill than if phoneme awareness is taught in isolation.   

I realize that, for some readers, I’m not telling you anything you didn’t already know. What I have tried to do is put this ‘knowing’ into a wider frame, one that comes as close as possible to being scientifically and logically unassailable. Because reading instruction has gone so far awry, and spelling instruction hasn’t even begun (there’s virtually no research on spelling instruction, and no section on ‘spelling’ in the NRP report), there’s a long, hard battle ahead, and we may not win it. We certainly won’t win it if we don’t understand precisely why some methods work and some do not.

Brief Biography

Diane McGuinness is Emeritus Professor of Psychology at the University of South Florida. She has conducted research on perceptual and cognitive development and on reading instruction and remediation. She has published over 100 research papers, theoretical papers, chapters, and books on these and other topics. She received her undergraduate and graduate degrees in psychology at the University of London: a B.Sc. at Birkbeck College (first class honours) and a Ph.D. at University College.