Something important to keep in mind, and to continually remind others of when discussing reported results: always check the actual parameters and details of the study or the experimental treatment. Without knowing those details, you can very easily be misled (and the public often *is* misled) by the published summary.
A good example is the recent Haan Foundation "Power 4 Kids" study in the U.S., which compared four reading-intervention programs with virtual non-readers in grades 3-5 in Pennsylvania. The study was properly conducted, with randomized assignment, control groups, systematic teacher training, and monitoring for fidelity of implementation.
Sounds like that should tell you what programs work, or not. Right?
Wrong. A critical factor omitted from most reporting of the results is instructional time. (So far only preliminary reports have been published; the final data-crunching is supposed to be much more extensive and broken down into detailed analyses.)
Anyone who has taught seriously delayed older readers knows that it is a challenge to get them caught up to their same-age peers: not only do they have much new learning to do, they also have to unlearn bad habits or misrules, practice new skills over time to make them automatic, master different techniques for different types of text, and so forth. Pretty well all school-based data showing students closing this gap involve intensive daily reading tuition, usually double or triple the instructional time allocated to normally achieving pupils.
In the Haan study, the students received on average 20-25 minutes of group instruction daily -- to make up a 3-5 year gap. That's less than half the time allocated to average pupils, and less than a third of what has been shown effective with delayed readers. Say what?
Without knowing anything else about the study, one can confidently predict that few pupils will close the performance gap with so little instructional intensity. That amount of time is not enough to get through even one level of DI Corrective Reading (one of the four interventions studied, and one that does reliably "work" when implemented effectively over time -- usually 2-3 years, with instructional periods of 40-60 minutes). Moreover, the study required most of the programs to modify their lessons, leaving out critical components. Thus what was measured is not the "real" program but an abridged and bowdlerized version, taught for less than half the usual time (and less than a third of the time required for optimal outcomes). Yet published discussions of the results do not point out these facts.
As jenny points out, these reports on how SP "doesn't work" fail to emphasize details such as what kind of synthetic phonics teaching was introduced, how thoroughly, how consistently, and over what period of time (in both duration and instructional intensity). Ten minutes here and there is not going to get the job done. Also, as we all know, if SP is introduced alongside multi-cueing and guessing, it will be less effective.
In many reports of "results," however, what may well happen is that a district asks schools to report whether they are doing SP (or some other instructional initiative), and they answer Yes or No, without providing details or even any substantiation that the requested initiative is being implemented at all. I know well from experience that if teachers are told to do X but are not provided with time and resources, they will indeed reply that they are "doing" it, but the reality is likely very different. They know from past experience that, like Soviet Five-Year Plans, "this, too, shall pass away."
Details, details. Get the details! And constantly remind others to do the same.