discussion: the upper limits of whole word memorisation

Please contribute past postings from the message board that you have found particularly inspiring.

Moderators: maizie, Lesley Drake

JAC
Posts: 517
Joined: Tue Nov 15, 2005 1:51 am

discussion: the upper limits of whole word memorisation

Post by JAC » Fri Sep 28, 2007 12:37 am

This is from a thread entitled 'Fluency' started on Sept 10th 2007.

We often wonder what the limits are for children memorising words as opposed to sounding all through the word. This 'classic' contains posts from the original thread, including one from Diane McGuinness explaining her reasons for suggesting 2,000 symbols as an upper limit, plus posts from Susan S. about individual cases where this limit is exceeded.
The whole thread is worth reading, IMHO!

http://www.rrf.org.uk/messageforum/viewtopic.php?t=3021

from Susan G.
Back in this thread, Susan S. wrote the following:
Remember that D. McGuinness gives a limit to the number of "abstract symbols" that can be memorized (nowhere in her books does she cite explicit references or specific evidence -- it may be a hypothesis of her own, not a verified fact), but she does NOT equate this hypothesized memory limit to a specific limit on the number of English WORDS that can be memorized, even though many assert that she says this. It has become a sort of urban legend, like alligators in the sewers.

I am not aware of any hard data on how many "sight words" in English a reader can memorize using primarily visual cues. The literature on teaching students with severe hearing loss, or with language disabilities such as autism, might shed light on this question.

The most outstanding example of this phenomenon that I have personally evaluated was a student aged 13-14 who had never heard of sounding out and blending, was not aware that English letters had a phonemic connection (she knew letter names), could not sound out even 2-letter combinations, and could relate a "sound" only to the letters in her initials (which may have been simply a fortunate coincidence) -- in short, she did not have any alphabet knowledge whatever. Yet according to extensive testing her sight vocabulary was in the 11-12 year range, estimated at between 8,000 and 10,000 words based on inventories from the Fry list. She did not present as a "reading delayed" student but was referred because of difficulties with written assignments (she turned out not to be able to encode any words at all). This student, who spoke a non-Western language, had been taught an exclusively visual approach to English reading. She did not even use the first letter of words as a phonological cue, because she was not aware of the phonological information carried by letters. Cognitive testing showed she had strengths in memory for patterns and visually-presented information.
The good news is that she was able to learn and apply phonemic decoding skills rapidly when taught to do so.
As Diane McGuinness's work was mentioned and several people seemed unclear about exactly what she meant, I decided to ask Diane McGuinness herself for clarification. Her reply is as follows:

I am a little confused by the writer's statement that there may be an upward memory limit for abstract symbols, but that there is NO upward limit for repeated sequences of random strings of symbols based on a 26-symbol set. The latter scenario is even more horrendous than memorizing 2,000 unique symbols, due to the enormous redundancy. Anyone seriously trying to learn to read like this would go mad in a short space of time.

In my own research, I have never found a reader who memorized whole words purely by sight. In a study on reading strategies, even the poorest readers had some minor success with the sound-symbol code for initial consonants, though they did not "know" (consciously) they were doing this: e.g. the word 'cat' might be read "cab," "come," "car," "cave," etc. -- but was very unlikely to be read "fish," "bag," "dog," "farm." I referred to these readers as "whole-word guessers" who cued every word off the initial consonant [they had much less success with words starting with vowels], then guessed the rest of the word based on word length and word "shape." These students, apart from the very few who were complete non-readers, were uniformly the worst readers in the class.

As for the student the writer refers to, who appears to have memorized about 8,000 words purely by sight -- this would take some looking into. She may have an 'unconscious' ability to decode parts of words and not be aware she is doing this, just as children pick up words automatically from conversations as they are learning to talk. Many children learn to read fluently without the slightest idea how they are doing it. I was one of them.

As for the evidence on the 2,000 symbol limit -- initially I took this from the writing systems themselves (past and present), observing that not one of them exceeded a 2,000 symbol ceiling, or, if it did, the system was changed quite dramatically. Interested readers should look at the additional data I review in Early Reading Instruction (MIT Press 2004), which lays out more precise documented evidence that writing systems fail when they reach about 2,000 symbols, and that other solutions are then sought. This is obvious in the archaeological records of Sumer, Egypt, and the Mayans in Central America -- all of which started with a whole word system and had to change course.

Of course, a 2,000 symbol maximum isn't remotely sufficient to constitute a writing system. There are 1,000,000 words in English (all accessible, I might add, to those who know the English alphabet code). A whole word writing system, whatever its structure, is totally unworkable in the face of this, and would be a logistical nightmare. It would require a national "writing symbol board" to create new symbols (or symbol sequences) for every new word entering the language, and for every surname in the phone book.

The upper limit of 2,000 symbols has been corroborated recently by Marr, the Sumerian scholar, who discovered that the Sumerian writing system was not logographic but basically a syllabary -- as is the Chinese writing system. There is living proof of the 2,000 limit in modern Japan, where it takes Japanese school children from an early age to the end of secondary school to memorize the 1,860 Kanji symbols for Chinese loan words they still use in the writing system. It takes a genuine scholar about 10 more years to memorize another 2,000 Kanji symbols. The main writing system in Japan consists of two (redundant) diphone systems, with symbols marking each CV unit in the language.

Finally, there is Aaron's research, which shows that the profoundly deaf -- without any ability to 'hear' phonemes -- reach a ceiling somewhere around the age 7-8 reading level, and even this is far from universal for this population.



Susan S.

Thanks, Jenny, for getting more background information on this question. I find all the research about what occurs in other languages fascinating. Certainly Dr. McGuinness makes a persuasive argument from analogy about the situation facing English readers. It's pretty convincing to me, as to (likely) most of us here who strongly favour a code-based approach to teaching beginning reading.

But -- it isn't the same thing as hard data. My observation, that we don't have hard data about what the memory limits are in learning English words, went uncontradicted. I think Dr. McGuinness did misunderstand my point (she seemed to believe I had an opinion or argument to make about what types of abstract symbols might be easier to remember -- in fact I have absolutely no opinion on the subject whatever!). It is a principle of empirical science that, if a result in a related field appears to show the same phenomenon as the one you are considering, you need to replicate the findings in your particular context to verify whether the conclusions apply. Someone needs to do research into the limits of word-memorization in English. We already know it's a very ineffective strategy for most children -- especially children who struggle with early reading -- and THAT is the main point. But we don't have any real information on what the upper limits are.

While Dr. McGuinness mentioned studies on the reading levels of the profoundly deaf (the specific study she cited is in line with others I know of, and with results here at our provincial and district schools for the deaf), those studies refer to average achievement. The fact is that some profoundly deaf individuals do become proficient adult-level readers, and they do not do so by the phonological route, although they do somehow activate that part of the brain. Are there any studies that investigate how proficient deaf readers become fluent? I am not aware of any, but that would be one possible area of investigation. The fisherman in the Indian Ocean who reeled in a single live coelacanth disproved the assertion that coelacanths were extinct. Similarly, even one fluent reader who is reading via word memorization disproves the assertion that no one can memorize more than 2,000 words (I realise Dr. McGuinness did not assert exactly this, but others have quoted her as saying so).

Dr. Richard Feynman, the Nobel-winning physicist, wrote about the need to empirically demonstrate what appear to be simple phenomena, rather than make the error of assuming we know what is occurring. One of his essays -- a humorous one, but with a serious point -- related a story about an experiment he and a colleague carried out in graduate school at Princeton, which conclusively demonstrated that the simple act of counting to 60 was a different mental procedure, with different operations and engagement of brain regions, for different people. It seemed simple and "the same thing" but was not. His caution about making assumptions and the need to verify what we believe to be occurring is one that we in education (and "soft sciences" generally, like psychology and sociology) need to heed and consider carefully.

My concern is similar to the one you have about the letter name/letter sound controversy. We have a "better mousetrap," and plenty of data to show that phonics fast and first is the best way to teach young children early reading skills. It compromises the integrity of our position to take an overly-dogmatic stand on matters that are not yet conclusively demonstrated by research. The letter names issue is one such, the issue of a specific memory limit in English is another. The theoretical underpinnings are strong. Related research is suggestive. But where are the controlled studies or comparison groups with hard evidence that we can cite to sceptics? It isn't there, yet.

I must say that the student I described was a shock to me, because I had believed (following Dr. McGuinness' reasoning) that it was impossible for a student to memorize so many words. I had to reconsider the question after evaluating this student (who, remember, was considered an average reader -- she did not come up on the radar when struggling readers were considered).

To rule out the possibility that the student was using code knowledge or syllable knowledge, we did three things. First, we assessed the student's code knowledge using the test that Dr. McGuinness recommends. The student was unable to give the sounds for any graphemes. She either had no response, or she gave the letter names. On two letters, she held the letter name slightly ("immmmm"?) in a tentative way, which made me wonder if she had some idea of the sound-symbol connection. I could not ask clarifying questions without compromising the validity of the testing, so I marked that item with a question mark. Apart from the two question marks, she scored zero on code knowledge.

Second, at another time, I showed her a chart with the letters and letter-groups representing the common GPCs, and asked her to point to the grapheme that represented the sounds I said. She was unable to do this. Again, when asked to write the letter(s) for sounds, she scored zero. She did not understand what to do, except to write letter names. Third, given common syllable patterns, like ap, ock, ing, she was unable to read any of them. This pretty well eliminated any deep understanding of the alphabetic principle.

Also, unlike the students Dr. McGuinness cites, this child did not make substitutions based on the first letter (or any letters). She substituted "school" for "academy," "apartment" for "building," "dog" for "pet." These were clearly meaning-based. Is it possible she learned words by linking them with some kind of visual, ideational representation? It would have made an interesting study.
She did do one other very surprising thing -- in a subtest of the Woodcock-Johnson, the Visual-Auditory learning subtest, the child has to memorize an abstract symbol for a word and read increasingly complex sentences and stories. I have a better than average visual memory, but I got lost in the middle. This child delivered a flawless, lightning-fast performance. She hit the ceiling with a perfect score, in the 100th percentile. Our psychologist told me this was quite unusual. It did indicate her symbol memory was far superior to most people's -- although her overall ability, assessed later, was average. She did not have any learning disabilities.

I think it might be difficult to find students in the UK or USA who have reached middle or secondary levels in reading and who show not even a cursory familiarity with the Alphabetic Code. However, once we started looking for them -- screening our intake of 11 and 12 year olds -- we found several similar cases every year. These would be children who were average to good readers, but who completely lacked decoding skills. This phenomenon, along with that of proficient deaf readers, and possibly of children with autism who become good readers through an exclusively visual route, is one that needs to be investigated empirically by those with the qualifications to do so. Unfortunately, I had to compromise my student's suitability as a subject for such research -- it was my job to teach her to really read, and quickly. We had only six months. She made rapid progress in that time, learning the code and how to blend and segment. She came back to visit last spring, now in Grade 11 at a fairly selective high school. Once she got the hang of how the code works, she was off and running.

So the real point here, I think, is twofold. First, we must continue to advocate for good teaching for children, and refer to concepts that are not empirically proven as open questions or matters of informed opinion. We can say it is likely that children learn best when letter names are postponed until letter sounds are firmly established, and it is likely that most children will find whole word learning an ineffective strategy, as studies in Japan and elsewhere show it takes many years to master even a few thousand words by sight alone. Secondly, we need to encourage researchers to study these issues in a controlled way, so that we can confirm or disconfirm our own perceptions. The "outliers" -- apparent exceptions, such as my student (and others I have seen, once I started looking for them), and proficient deaf readers -- need to be investigated in depth. As Dr. Feynman posited, it is possible to demonstrate what processes are going on in the brain so as to eliminate guessing and assumptions. We need to continue to do this in order to better understand the reading process and how to teach children most effectively.
