Four steps to designing an RCT
By: Dr Andres Fonseca
On: 12th September 2014
Organisation name: Virtually Free
Or how to find out if what you are doing really makes a difference
We are busy putting the finishing touches on Agoraphobia Free, an app we have created to deliver computerised cognitive behavioural therapy for the treatment of agoraphobia. When we first started the project we designed a double-blind randomised controlled trial (RCT for short) to find out if it actually treats the condition. We only gained ethical approval about two months ago and we have not yet officially started recruitment. As you can see, this is a lengthy process. Unfortunately it’s the only process that will really answer the question. An RCT is a type of experiment with a very specific design: it is a trial that is randomised, double-blind and controlled. All of those words are important and mean something very specific. I promise you will understand them all by the end of the post.
How to design your own home-grown RCT
Step 1: find a measurable outcome
Say that you want to know if your ‘getting youth back into work’ app really does get young people back into work. In this case you have a clear outcome: the number of young people you have managed to get back into work. This is important because, on many occasions, people in social ventures don’t have clear outcomes. You cannot do an experiment if you have no way of measuring the results. In Agoraphobia Free we use the Panic and Agoraphobia Scale as our main outcome measure. It is a validated symptom severity scale for the condition.
Step 2: controls
You have designed an app to curb violent street crime. So you launch it and measure violent street crime, right? The problem is that the crime rate may be going down by itself for other reasons, and you may think your app made all the difference when it changed nothing; it was always going to go down. Or say that the crime rate is skyrocketing and, unknown to you, your app is actually amazingly effective: if it had not been there, crime would have risen even higher.
How do you make sure it is your intervention that is changing things and that things are not just changing by themselves? The answer is controls.
In our case we will select a group of 500 people suffering from agoraphobia. Half will have the intervention and half will have a placebo intervention. We will see how their results compare at the end of the trial.
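At its simplest, that end-of-trial comparison means comparing the average improvement in the two groups. Here is a toy sketch in Python; the scores are invented for illustration and are not real Panic and Agoraphobia Scale data:

```python
from statistics import mean

def mean_improvement(before, after):
    """Mean drop in symptom score (a bigger drop means more improvement)."""
    return mean(b - a for b, a in zip(before, after))

# Invented scores for four participants per group (illustration only).
treatment_before = [32, 28, 35, 30]
treatment_after  = [18, 20, 22, 19]
control_before   = [31, 29, 34, 30]
control_after    = [27, 26, 30, 28]

print(mean_improvement(treatment_before, treatment_after))  # 11.5
print(mean_improvement(control_before, control_after))      # 3.25
```

In a real trial you would, of course, also test whether the difference between the two averages is bigger than chance alone could explain.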
Step 3: tossing coins
Imagine you are testing an app to reduce childhood obesity. You have carefully created two groups of kids to try it with. You have tried to make sure that the groups are comparable using a technique called matching. This is simply making sure that your controls are as similar as possible to your intervention group; particularly for those factors you know make a difference to the thing you are measuring. The percentage of boys to girls is the same, their ages are in the same range, they come from families with the same socio-economic background and they have the same family history of diabetes, etc.
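Matching can be sketched as a simple greedy procedure: for each child in the intervention group, pick the unused candidate who is most similar on the factors you care about. The records below are made up for illustration, and real matching would use many more factors than sex and age:

```python
def match_controls(intervention, pool):
    """Greedy matching: same sex exactly, then nearest age."""
    pool = list(pool)  # copy so the caller's list is untouched
    matched = []
    for child in intervention:
        same_sex = [c for c in pool if c["sex"] == child["sex"]]
        best = min(same_sex, key=lambda c: abs(c["age"] - child["age"]))
        matched.append(best)
        pool.remove(best)  # each control can only be used once
    return matched

kids = [{"sex": "F", "age": 9}, {"sex": "M", "age": 11}]
pool = [{"sex": "M", "age": 12}, {"sex": "F", "age": 9},
        {"sex": "F", "age": 14}, {"sex": "M", "age": 11}]
print(match_controls(kids, pool))
```

Each matched control ends up with the same sex and the closest available age to its intervention counterpart.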
Imagine now that one of your groups has lots of kids with a particular genetic mutation that affects childhood obesity. They have ended up in the same group because you matched kids from two schools, and it turns out all members of the extended family that has this genetic trait live in this particular area and send their children to the same school. There is also the opportunity to introduce bias when creating the groups: you might assign participants you think will benefit more to your intervention group and the ones you think are unlikely to benefit to the control group.
The solution is randomising. Instead of dividing the children by school for convenience, simply get someone independent to literally toss a coin, so that each participant has an equal chance of ending up in either the control or the intervention group. This way you can’t influence where they end up, and any factors you don’t know about will be distributed across your groups more or less equally by chance, provided your groups are big enough.
In our trial a simple algorithm randomly decides what app the participant will download without us even knowing about it. We use a particular form of block randomisation to ensure equal numbers but I’m not going to go into that here.
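Just to illustrate the idea (this is not our actual algorithm), here is a minimal sketch of permuted-block randomisation in Python, assuming a two-arm trial and a block size of four. Shuffling within small blocks keeps the two arms the same size throughout recruitment, while each individual assignment remains unpredictable:

```python
import random

def permuted_block_randomisation(n_participants, block_size=4):
    """Assign arms in shuffled blocks so group sizes stay balanced.

    Each block holds equal numbers of each arm in a random order,
    so after every complete block the counts are exactly equal.
    """
    assert block_size % 2 == 0, "block size must split evenly between two arms"
    assignments = []
    while len(assignments) < n_participants:
        block = ["treatment"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

arms = permuted_block_randomisation(500)
print(arms.count("treatment"), arms.count("placebo"))  # 250 250
```

Because 500 divides evenly into blocks of four, the two arms come out at exactly 250 each, whatever order the coin tosses land in.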
Step 4: Blinding
We humans are very good at finding patterns—so good we tend to fool ourselves into seeing patterns when there is nothing there. If we have a pet theory we will inevitably seek out evidence that proves we are right and ignore evidence that shows we are wrong. This is called confirmation bias and it affects all of us. Yes, even you. When there are incentives involved our ability to fool ourselves is nothing short of astonishing.
This is the reason placebos are used when trying to decide whether a medicine works: neither the person taking the tablet nor the doctor giving the tablet should know if it’s the ‘real’ one or the ‘dummy’ one. The doctor might unconsciously favour the treatment when measuring borderline results, or the person might try harder when they know they are on the ‘real’ treatment than when they are on placebo.
In our trial we use what is called an ‘active placebo’. If you are using an app that is clearly about agoraphobia, or you are playing Super Mario, you know whether you are on the treatment or not. We decided to use our existing app Stress Free, which has a very similar design and actually helps reduce anxiety, but does not specifically address agoraphobia.
So those are the basics of design. There is one other important concept: power, which tells you how many participants you need to detect a difference if there is one. This one is very technical and my advice is to get a good statistician who speaks human on board early.
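To give a flavour of what the statistician will do, here is a rough sketch of a textbook sample-size calculation for comparing two group means, using the standard normal approximation. The function name and defaults are mine for illustration, not the calculation from our trial; the ‘effect size’ is the difference you expect to find, divided by the standard deviation of the outcome:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate participants needed per group to compare two means.

    Uses the normal-approximation formula n = 2 * (z_alpha + z_beta)^2 / d^2,
    with a two-sided significance level alpha and the desired power.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # critical value for the chosen power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # a 'medium' effect needs roughly 63 per group
```

Note how quickly the numbers grow as the expected effect shrinks: halving the effect size quadruples the sample you need, which is exactly why you want that statistician involved before recruitment starts, not after.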
For an excellent guide on how to design an RCT, here’s a 2003 paper written by J Kendall in the Emergency Medicine Journal. It is medical, as RCTs are mostly used in medicine. Just ignore the medical bits: the methodology applies to almost anything where you think an intervention might bring about change.