Prof. D.McG's analysis of the Project X CODE research report
Posted: Thu Oct 18, 2012 6:29 pm
The following report appeared a few days ago:
An independently analysed research trial of OUP's Project X CODE by Ros Fisher.
http://fds.oup.com/www.oup.com/pdf/oxed ... report.pdf
I sent it to Prof. Diane McGuinness for a thorough analysis and here is what she had to say:
''Well, it's hard to know where to begin. The so-called "research results" from this project are a sham. In a proper study, one needs to eliminate all sources of error (variance or variability) by restricting subjects to a uniform group in a specific (contained) environment. With such a small sample of children per school (4 to 6), spread across 13 different schools and taught by multiple teachers with no particular training for this task, any one of these uncontrolled variables will void the study. When your study contains multiple uncontrolled variables, you have chaos. No one really knows what is causing changes in reading skill, assuming these skills have actually changed. We don't know, for example, whether there is overlap between the Letters and Sounds instruction and the stories the children are asked to read. We have no idea, from reading this report, exactly how the children are taught. Do they learn to read isolated words, or words in sentences, or by guessing from the pictures, or all of the above? Nor do we know whether the words on the pretest differ from the words on the post-test of the Hodder reading assessment. To make matters worse, the Hodder test (PERA) is based on Letters and Sounds, which means the test measures some or all of what many children have already been taught.
To do this kind of study correctly, you have to eliminate (rule out) all possible contaminating variables. One needs a large sample of children in the same age range, taught by the same teacher/trainer throughout, one who is properly trained to teach reading. This simply isn't happening here. And there are other problems: there appear to be no strict timelines in this study (i.e. the exact number of hours taught per day and the number of days in the study).
The goal in research of this type is to isolate the "independent variable" (the only variable which is free to vary, i.e. the method of teaching reading). All other variables must be controlled (made constant across all conditions). The data per se (the dependent variable) are the reading test scores. When these variables are locked down, we can argue that a particular method improves learning a particular skill. Reading test scores should have mainly one cause: the method employed by a knowledgeable teacher. Instead, we have contamination from multiple teachers, teaching in different schools, in different locations (geographically, and classroom versus hallway), with no standard approach and no control over the exact hours, days, and weeks over which this teaching occurs.
As the early part of this instruction is tightly linked to Letters and Sounds, there is a strong possibility that the children have already been taught lessons using Letters and Sounds, and hence would be familiar with the corpus of words in the PERA assessment test. We know nothing about this connection. It would be easier to "teach to the test" if the test bears a strong relationship to what was taught in the classroom (Letters and Sounds). So there is a serious issue of confounding''.
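Prof. McGuinness's statistical point can be illustrated with a short simulation. The sketch below is my own (in Python, with made-up effect sizes; only the school count and group size echo the trial's 13 schools and 4 to 6 children per school). It assumes a method with zero real effect and shows how uncontrolled teacher and school variation can still produce an apparent average gain:

Code:
import random

random.seed(1)

# Illustrative numbers only -- none of these values come from the
# Fisher report; the school count and group size mirror the trial's
# 4 to 6 children per school across 13 schools.
n_schools = 13
n_per_school = 5
true_method_gain = 0.0   # assume the method itself does nothing

def school_gains():
    # Each school contributes its own uncontrolled effect: a different
    # teacher, a different setting (classroom or hallway), different
    # hours of teaching.
    school_effect = random.gauss(0, 3)
    return [true_method_gain + school_effect + random.gauss(0, 2)
            for _ in range(n_per_school)]

gains = []
for _ in range(n_schools):
    gains.extend(school_gains())

print("Mean 'gain' from a method with zero real effect: %.2f"
      % (sum(gains) / len(gains)))

Rerun it with different seeds and the headline "gain" wanders well away from zero in both directions. That wander is the uncontrolled variance she describes: with so few children per school, a handful of enthusiastic teachers or favourable settings is enough to move the average, and nothing in the score tells you whether the method contributed anything.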