How to Use Behavioral Experiments to Reduce Avoidance

Cognitive challenge alone doesn’t shift avoidance. Your client can have an excellent thought record on Wednesday, fully understand that the panic isn’t dangerous, and still cancel the train trip on Friday.
The reason is straightforward: avoidance is maintained by behaviour, and behaviour changes through doing, not knowing. Bennett-Levy and the Oxford group made this point in the early 2000s with a series of trials showing that behavioural experiments outperformed thought-record-only protocols by roughly a factor of two on avoidance-driven presentations. The finding has held up since.
A behavioural experiment is structurally simple. The client predicts what will happen if they approach the feared situation, takes the action, and records what actually happened. The outcome is the lever. The prediction is almost always more catastrophic than the reality, and watching that gap show up across five or six experiments produces something a thought record cannot: real evidence, in their own life, that the prediction system is miscalibrated.
The challenge with running behavioural experiments on paper is that the structure breaks. The client writes the prediction Sunday evening, the action happens Wednesday afternoon, and the outcome write-up doesn’t happen at all, because by then the page is in their bag and they’ve forgotten they were tracking it. You’re left with a partial record at the next session and a client who says “I think it was OK?”
In my-cbt, you can build the behavioural experiment as a three-section worksheet: prediction, action, outcome. Assign the prediction section due before the event. Assign the outcome section due immediately after. Each section is a separate small form that takes a minute. Your client opens both forms from their portal at the right time, with the personal message you wrote at the top of each one cueing when to fill it in.
What you get back is paired data with timestamps. The prediction was filed at 10pm Tuesday. The outcome was filed at 4:15pm Wednesday, twenty minutes after the train arrived. The gap between what they predicted (“I’ll have a panic attack and have to get off”) and what happened (“anxiety peaked at 6, came down by stop three”) is measurable in their own data, in their own words.
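The paired structure is easy to picture as data. Here is a minimal sketch in Python, with hypothetical field and class names (my-cbt’s actual export format may differ): each experiment pairs a timestamped prediction with a timestamped outcome, and the gap between the expected and actual anxiety peak falls out directly.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Prediction:
    filed_at: datetime
    text: str
    expected_peak: int   # anticipated anxiety peak, 0-10

@dataclass
class Outcome:
    filed_at: datetime
    text: str
    actual_peak: int     # anxiety peak actually reached, 0-10

@dataclass
class Experiment:
    situation: str
    prediction: Prediction
    outcome: Optional[Outcome] = None  # filled in after the event

    def gap(self) -> Optional[int]:
        """Prediction minus reality; positive means the threat was over-predicted."""
        if self.outcome is None:
            return None
        return self.prediction.expected_peak - self.outcome.actual_peak

# The train trip above, as paired data (dates and ratings are illustrative)
train = Experiment(
    situation="Train trip",
    prediction=Prediction(datetime(2024, 3, 5, 22, 0),
                          "I'll have a panic attack and have to get off", 10),
    outcome=Outcome(datetime(2024, 3, 6, 16, 15),
                    "Anxiety peaked at 6, came down by stop three", 6),
)
print(train.gap())  # → 4: predicted a 10, reality was a 6
```

An experiment with no outcome filed yet simply has `gap() == None`, which is itself useful: it flags the partial records that the paper version silently loses.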
Run five or six of these on related themes, and you have a body of evidence the cognitive work can sit on. The thought record review has a new ingredient: actual data from their own week showing the prediction was wrong. They no longer need you to talk them out of it.
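The body of evidence from a run of experiments can be summarised in one line of arithmetic. A hypothetical sketch, with made-up ratings for illustration:

```python
# Hypothetical predicted vs. actual anxiety peaks (0-10) across six experiments
predicted = [10, 9, 9, 8, 9, 7]
actual    = [6, 5, 6, 4, 3, 4]

gaps = [p - a for p, a in zip(predicted, actual)]
over_predicted = sum(g > 0 for g in gaps)

print(gaps)                             # [4, 4, 3, 4, 6, 3]
print(f"{over_predicted} of {len(gaps)} over-predicted")  # 6 of 6
print(sum(gaps) / len(gaps))            # mean gap of 4.0
```

Six out of six over-predictions, with a mean gap of four points: that is the review conversation in a single table, in the client’s own numbers.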
The work moves faster. Cancellation rates drop, because the avoidance has been replaced with experiments the client is doing. You spend less session time challenging their cognitions, because the data already shows the cognition was off.