How to get your money's worth out of programs
Each year thousands of patients miss their hospital appointments.
It costs money, contributes to backlogs and delays, and means that appointments cannot be allocated to others in need.
Some 15 per cent of outpatient appointments at Sydney’s St Vincent’s Hospital used to be missed each year, despite patients being sent SMS reminders.
St Vincent’s estimated that each missed appointment cost at least $125, which could add up to $500,000 a year.
That is money that could have been spent treating other patients.
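Using only the figures quoted above, a quick back-of-the-envelope calculation (an illustrative sketch, not the hospital's own accounting) shows the scale of the problem:

```python
# Figures quoted in the article (illustrative arithmetic only).
cost_per_missed = 125    # dollars, St Vincent's estimate per missed appointment
annual_cost = 500_000    # dollars, estimated annual cost of no-shows

# Number of missed appointments implied by those two figures
implied_missed = annual_cost / cost_per_missed
print(f"Implied missed appointments per year: {implied_missed:.0f}")

# A 19 per cent reduction in no-shows, as the trials later achieved,
# would save roughly:
savings = annual_cost * 0.19
print(f"Approximate annual saving: ${savings:,.0f}")
```

On these figures, the hospital was losing the equivalent of about 4,000 appointments a year, so even a modest percentage improvement translates into meaningful savings.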
So a group of researchers ran two randomised trials to work out if changing their existing text messages could make a difference.
Over 13 months, text messages were sent to nearly 7,500 patients, covering about 65 per cent of St Vincent’s outpatient appointments.
The messages told patients what it would cost the hospital if they did not turn up for their appointment and that the money could have been used to help others.
Based on the trial outcomes, the hospital adopted new text messaging techniques and reduced no-shows by 19 per cent.
Randomised trials have been shown over many decades to be one of the best ways of determining whether a program is effective, whether it needs modification, or whether it should be dropped altogether.
And experiments and trials can surprise us by revealing where interventions are not as effective as we had hoped.
They enable us to test new interventions against what would have happened if we had changed nothing. Randomised trials help us to understand causation, not just correlation.
Sometimes, we have strong expectations about what will work. But we still need to test ideas to confirm whether they do, because we can get surprising results that run counter to our predictions.
Take efforts to increase participation in the Adult Migrant English Program as an example.
The Department of Home Affairs runs the free language program which aims to increase social connectedness and improve employment outcomes.
Although there are about 50,000 migrants enrolled in the program at any one time, many migrants who are eligible for the program do not participate.
So a randomised trial tested whether sending letters, emails and text messages translated into the participant’s home language would boost take-up.
But, surprisingly, translating communications into someone’s home language did not increase engagement with the Adult Migrant English Program.
What this example demonstrates is that rigorous randomised trials always have something to teach us.
We should always be prepared to put the idea to the test to see if it works in practice.
In 2023, we established the Australian Centre for Evaluation within the Treasury.
The centre aims to provide leadership and make rigorous evaluation the norm for policy development.
Over time, it will improve the volume, quality, and impact of evaluations across the public service.
It will champion high quality impact evaluation and partner with other government agencies to initiate a small number of high-profile evaluations each year.
It will promote the use of evaluations and improve evaluation capabilities, practices, and culture across government.
It will put evidence at the heart of policy design and decision-making.
One such partnership, between the Australian Centre for Evaluation and the Department of Employment and Workplace Relations, is testing changes to online employment services.
Employment services are a significant investment for government and affect many people – 4.6 per cent of the Australian population aged 16 to 64 receive some form of unemployment support.
A series of five randomised trials will be conducted looking at various aspects of online employment services.
They will test variations of time spent in online services, improvements to communication methods, and support and tools for clients.
They will look at whether these changes improve employment outcomes.
The evidence generated will help improve and adapt online service delivery to meet the needs of the people using it.
Importantly, all these trials are subject to a robust ethical framework, consistent with the National Statement on Ethical Conduct in Human Research.
The trial outputs will inform the government’s response to the House Select Committee’s inquiry into Workforce Australia Employment Services.
At the heart of a randomised trial – in medicine or public policy – is chance. When people are allocated to the treatment or control group at random, any systematic difference we observe between the groups can be attributed to the impact of the treatment.
We’re often told by our parents ‘don’t leave it to chance’. But by deliberately using chance in the structure of an evaluation, we set ourselves up to succeed.
We’re not hoping for dumb luck. We’re using luck to determine causal impacts – just as pharmaceutical manufacturers do when testing whether a new treatment helps patients.
From the perspective of the trial participant, chance decides which group they fall into.
From the standpoint of the researcher, all those chance allocations add up to an approach that is as rigorous as possible.
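The logic of random allocation can be illustrated with a small simulation. Everything here is hypothetical – the patient numbers, baseline no-show risks and the assumed 20 per cent treatment effect are invented for illustration, not taken from the St Vincent’s trials – but the mechanism is the real one: because a coin flip decides who gets the new message, both groups contain the same mix of baseline risks, so the gap in their no-show rates estimates the causal effect of the message.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

n = 10_000
# Each simulated patient has an unobserved baseline no-show risk.
baseline_risk = [random.uniform(0.05, 0.25) for _ in range(n)]

# Chance decides allocation: a fair coin flip per patient.
treated = [random.random() < 0.5 for _ in range(n)]

# Assumed (hypothetical) effect: the new message multiplies each
# treated patient's no-show risk by 0.8, i.e. a 20 per cent cut.
EFFECT = 0.8
no_show = [
    random.random() < risk * (EFFECT if t else 1.0)
    for risk, t in zip(baseline_risk, treated)
]

treated_rate = sum(ns for ns, t in zip(no_show, treated) if t) / sum(treated)
control_rate = sum(ns for ns, t in zip(no_show, treated) if not t) / (n - sum(treated))

print(f"Control no-show rate: {control_rate:.3f}")
print(f"Treated no-show rate: {treated_rate:.3f}")
print(f"Estimated causal effect: {treated_rate - control_rate:+.3f}")
```

Because allocation is random, the control group’s rate is a valid estimate of what would have happened to the treated group without the new message: the counterfactual that a simple before-and-after comparison cannot provide.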
Making good public policy can be difficult.
We need to raise the bar by making sure claims about a program’s effectiveness are based on quality evidence.
We need to be working at the intersection of technology, policy, and people.
We need to better use data and technology to track our progress towards safe, fair and inclusive outcomes.
We know that we can achieve this by bringing together expertise from government, academia and industry.
We can connect people, resources, and opportunities to increase the benefits that rigorous evidence and data science can deliver.
By deploying chance in the service of policy change, we can shape the world for the better.
Originally published in The Canberra Times on 22 February 2024.