Editor’s note: This is a Twitter thread from John Wilbanks, Sage’s chief commons officer.
New from Abhishek Pratap and a few more of us – Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants
A few thoughts on the paper:
- Hurrah for data that’s open enough to cross-compare.
- When someone shows you overall enrollment in a digital health study, ask about engagement % on day 2. It’s a way better metric.
- Over-recruit the under-represented with intent from the start or your sample won’t be anywhere close to diverse enough.
- Design your studies for broad, shallow engagement – your protocol and analytics will be better matched.
- Payment for participation and clinician involvement both make a huge difference. Follow @hollylynchez, who writes very clearly on the payment topic.
- Clinician engagement is going to need some COI norms because whew it’s easy to see where that can go sideways.
- When your study is flattened down to an app on a screen, the competition is savage for attention and you’ll get deleted really quickly if there isn’t some sense of value emerging from the study.
- Meta-conclusion: perhaps start with the question: how does this give value to the participant when the app is in airplane mode?
- On “pay to participate” – the first time I ever talked to @FearLoathingBTX, he immediately foresaw studies providing a “free” phone for participation, but cutting service off for low engagement. That is, sadly, definitely on track absent some intervention.
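The day-2 engagement metric mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the per-participant activity log structure (participant id mapped to the set of study days with any in-app activity, day 1 being enrollment) is a hypothetical assumption for the example.

```python
def day2_engagement_rate(activity_log):
    """Fraction of enrolled participants still active on study day 2.

    activity_log: dict mapping participant id -> set of study-day numbers
    (day 1 = enrollment day) on which the participant did any in-app task.
    This structure is hypothetical, chosen only to illustrate the metric.
    """
    if not activity_log:
        return 0.0
    # Count participants whose activity set includes day 2.
    active_day2 = sum(1 for days in activity_log.values() if 2 in days)
    return active_day2 / len(activity_log)

# Toy example: four enrollees, two of whom return on day 2.
log = {
    "p1": {1, 2, 3},  # engaged past enrollment
    "p2": {1},        # enrolled, never came back
    "p3": {1, 2},
    "p4": {1},
}
print(day2_engagement_rate(log))  # 0.5
```

Overall enrollment here would look like 4 participants, but the day-2 rate of 0.5 is the number that actually predicts retention.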
Related content and resources:
- Evaluation of Participation in Digital Health Studies by Abhishek Pratap
- A Framework for Ethical Payment to Research Participants
- The Influence of Risk and Monetary Payment on the Research Participation Decision Making Process