16 April 2025
Getting the numbers right matters for evidence-based policy
Michael Sanders and Dimitris Vallis
New resources can help researchers conduct better trials and make more efficient use of public money

This is the third instalment of the School for Government’s comment series on the future of evidence-based policy. Look out for more contributions to the series in the coming weeks.
The last decade or so has seen a much-remarked surge in the use of evidence-based policy, and in particular in the UK government's efforts to find out “what works” through the creation of a network of “what works centres” and the Cabinet Office Evaluation Task Force. This effort has been substantially successful, with a dramatic increase in the number of randomised controlled trials carried out across the policy spectrum, from education to homelessness, from early intervention to ageing.
This increase is to be welcomed and, we hope, continued by the Labour government, which should view this inherited evidence infrastructure as an unalloyed good. The selection of Professor Becky Francis, Chief Executive of the Education Endowment Foundation, to lead its curriculum review is an excellent sign.
There are a number of ways in which the evidence-based policy revolution is incomplete, and this is a refrain to which we will return over the coming months in this comment series.
One of the nerdier aspects of this is exemplified by our new paper, out this month in the journal Widening Participation and Lifelong Learning. The paper looks at the basic parameters of research into higher education progression, and in particular those that go into designing randomised controlled trials in that area. It follows a similar paper on homelessness, published in the European Journal of Homelessness earlier this year, and relates to a 2020 paper on effect sizes in school-based education research.
As mentioned, these papers are nerdy, and heavy on tables of correlations and parameters. However, they allow us to design better trials. Often the most contested part of a trial’s design is the sample size calculation, which determines how large a sample you need to detect a plausible effect size, and hence how much work the people implementing the intervention must do and how much the trial will cost. Design a trial that’s too small, and you risk falsely concluding that something that works is ineffective. Design one that’s too big, and you waste public money and withhold a potentially beneficial intervention from participants for no reason.
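To make this concrete, here is a minimal sketch of the kind of power calculation these parameters feed into, written in Python using the statsmodels library. The effect sizes below are illustrative placeholders, not figures drawn from our papers.

```python
# Minimal sketch of a two-arm sample size calculation. The effect sizes
# and design parameters below are illustrative, not taken from the
# papers discussed above.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required sample size per arm to detect a standardised effect size
# (Cohen's d) of 0.2 with 80% power at the conventional 5% significance level.
n_per_arm = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 394

# Halving the assumed effect size roughly quadruples the required sample,
# which is why realistic effect-size benchmarks matter so much for cost.
n_smaller = analysis.solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"Per arm if the true effect is half as large: {n_smaller:.0f}")  # roughly 1571
```

In practice the calculation is rarely this simple: clustered designs, attrition, and gains from baseline covariates all shift the numbers, and it is precisely these quantities, such as correlations between baseline measures and outcomes, that papers like ours aim to pin down empirically.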
These figures in and of themselves are of little use to ministers and (understandably) hold little interest for journalists, but they are useful, even indispensable, resources for researchers conducting trials. And for a government that wants to make the most of every pound of public spending, sensible parameters for trial design are essential for maximising efficiency.
So far, the “what works” movement has been too slow to produce these boring-but-important papers, which can help not just the centres themselves but other researchers and funders to design studies better. As it stands, in many fields we are too dependent on rules of thumb and a kind of oral history when designing studies.
We are grateful to the UK Cabinet Office, the Centre for Homelessness Impact, and the Centre for Transforming Access and Student Outcomes in Higher Education for supporting this work to date, which we think represents an important step in the right direction.