Discovering Keys That Could Unlock Better Personalized Treatments to Destroy Cancer

International neoantigen initiative Tumor Neoantigen Selection Alliance (TESLA) identifies parameters for cancer vaccine or cell therapy advancement


SAN FRANCISCO – Neoantigens, tiny markers that arise from cancer mutations, flag cells as cancerous and could be the key to unlocking a new generation of immunotherapies. Targeting the “right” neoantigens – in a cancer vaccine or a cell therapy – holds the promise of eliminating a patient’s cancer with minimal side effects. But hundreds of mutations can exist in a tumor, and only some give rise to neoantigens that can trigger an immune response against cancer. The question is, which ones?

Scientists from an initiative launched by the Parker Institute for Cancer Immunotherapy (PICI) and the Cancer Research Institute called the Tumor Neoantigen Selection Alliance (TESLA) have discovered parameters to better predict which neoantigens can stimulate a cancer-killing effect. TESLA brings together a constellation of 36 top biotech, pharma, university and scientific nonprofit research teams. Their findings were published online today in Cell and could spawn a new generation of more effective, personalized cancer immunotherapies. Read full release…


Kristen Dang and Justin Guinney led the Sage Bionetworks team that collaborated on this study and paper.

Uncovering Therapeutic Strategies for Neurofibromatosis Type 1

Drug discovery studies are challenging to conduct for rare diseases because there often isn’t enough relevant biological data. Small, or underpowered, datasets hinder the effectiveness of the statistical methods that researchers typically use to identify potential drug targets and to generate hypotheses for experimentation. But, in neurofibromatosis type 1 (NF1) research, there has been an effort among patients, researchers, clinicians, and funding partners to increase the availability and accessibility of data. In a recent article published in the journal Genes, we share how we were able to apply sophisticated computational methods to an aggregated group of small NF1 datasets to generate insights about potential drug targets.

Title: Integrative Analysis Identifies Candidate Tumor Microenvironment and Intracellular Signaling Pathways that Define Tumor Heterogeneity in NF1
Journal: Genes
Authors: Jineta Banerjee, Robert J. Allaway, Jaclyn N. Taroni, Aaron Baker, Xiaochun Zhang, Chang In Moon, Christine A. Pratilas, Jaishri O. Blakeley, Justin Guinney, Angela Hirbe, Casey S. Greene, and Sara J. C. Gosline
Link: https://www.mdpi.com/2073-4425/11/2/226/htm

NF1 is a rare genetic disorder that affects over 2.5 million people globally. The disease is a result of mutations in the NF1 gene and it can cause heterogeneous tumors, including cutaneous neurofibromas (cNFs), plexiform neurofibromas (pNFs), and malignant peripheral nerve sheath tumors (MPNSTs). While there is an incredible amount of research on NF1, there are very few safe and effective drugs to treat the various types of NF1 tumors. But, machine-learning methods can be powerful tools for accelerating drug discovery in NF1.

In the study, we relied on the NF community’s data-sharing efforts to identify important biological signatures in NF1 tumors. To do this, we applied machine learning methods that first learn biological patterns from large collections of data and then look for these patterns in different datasets, such as the ones that exist in NF1. We then characterized these patterns using statistical approaches and systems biology methods and were able to identify enrichment of signals related to immune cells as well as possible drug classes for follow-up in NF, pNF, and MPNST research. We further found that histone deacetylase (HDAC) inhibitors, which have been observed to work well in preclinical models of MPNSTs, may be worth exploring as a potential therapy for cNFs.

Our re-analysis of NF1 data in this study, enabled by access to data generated by NF community researchers and encouraged by research-forward funding partners like the Neurofibromatosis Therapeutic Accelerator Program (NTAP), showcases how data shared by various groups can together power sophisticated analyses that would otherwise not be possible for each dataset separately. For rare diseases, this approach is extremely valuable because patient data is sparse and precious. We hope that our efforts and the results showcased in this article will not only inform experimental researchers of probable hypotheses to test, but also encourage them to share their data more readily to power even more sophisticated analyses in the future.


Jineta Banerjee and Robert Allaway are co-lead authors on this study.



Sage Perspective: Retention in Remote Digital Health Studies

Editor’s note: This is a Twitter thread from John Wilbanks, Sage’s chief commons officer.


New from Abhishek Pratap and a few more of us – Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants

A few thoughts on the paper:

  1. Hurrah for data that’s open enough to cross-compare.
  2. When someone shows you overall enrollment in a digital health study, ask about engagement % on day 2. It’s a way better metric.
  3. Over-recruit the under-represented with intent from the start or your sample won’t be anywhere close to diverse enough.
  4. Design your studies for broad, shallow engagement – your protocol and analytics will be better matched.
  5. Pay for participation and clinician involvement make a huge difference. Follow @hollylynchez who writes very clearly on the payment topic.
  6. Clinician engagement is going to need some COI norms because whew it’s easy to see where that can go sideways.
  7. When your study is flattened down to an app on a screen, the competition is savage for attention and you’ll get deleted really quickly if there isn’t some sense of value emerging from the study.
  8. Meta-conclusion: perhaps start with the question: how does this give value to the participant when the app is in airplane mode?
  9. On “pay to participate” – the first time I ever talked to @FearLoathingBTX, he immediately foresaw studies providing a “free” phone for participation, but cutting service off for low engagement. That is, sadly, definitely on track absent some intervention.

Related content and resources:


Evaluation of Participation in Digital Health Studies

The widespread use of smartphones has offered a valuable opportunity to biomedical researchers. Using mobile apps, scientists are now able to design large-scale health research studies in a cost-effective way and, importantly, gather diverse real-world lived experiences of disease over time by recruiting participants from broader geographic regions – at least that is the hope. The real-world data (RWD) gathered through the health research apps also complements traditional research by capturing disease fluctuations at important moments that are often missed between periodic in-person clinic visits.

Title: Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants
Journal: npj Digital Medicine
Authors: Abhishek Pratap, Elias Chaibub Neto, Phil Snyder, Carl Stepnowsky, Noémie Elhadad, Daniel Grant, Matthew H. Mohebbi, Sean Mooney, Christine Suver, John Wilbanks, Lara Mangravite, Patrick J. Heagerty, Pat Areán, and Larsson Omberg
Link: https://www.nature.com/articles/s41746-020-0224-8

In the last five years, several digital health studies, including remote interventions and clinical trials, have been conducted using smartphone technology. But despite successes in enrolling thousands of research participants in a short amount of time, participant retention and long-term engagement in fully remote research remain a significant barrier to generating robust real-world evidence from RWD. In the study Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants, published in the journal npj Digital Medicine on Feb. 17, researchers pooled data from eight digital health studies spanning nearly 110,000 participants and identified key factors that affect participant retention.

To avoid collecting biased real-world data, there is an urgent need to assess the underlying patterns in how people participate in fully remote studies. If you can’t measure it, you can’t fix it.

[Image: a table summarizing data from the eight digital health studies analyzed in the paper Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants.]

The study compiled user-engagement data from eight digital health studies targeting diseases and conditions including asthma, endometriosis, heart disease, depression, sleep health, and neurological diseases. The resulting compilation of individual, user-level engagement data is one of the largest and most diverse user-engagement datasets to date and has been made publicly available to the broad research community. The analysis surfaced two key results: 1) half of the participants dropped out of studies within the first week, and 2) most studies ultimately were not able to recruit demographically or geographically representative participants.

Despite these limitations, several factors, such as partnerships with clinicians and fair compensation for research participants’ time in the study, could help researchers retain diverse participants in future digital health studies. Unsupervised analysis of the engagement data also revealed broadly consistent underlying patterns of participation in remote research: app-usage behavior fell into four clusters with distinct differences in both meaning and demographics.

The insights from this research have the potential to inform user enrollment and engagement strategies for improving retention and engagement in future digital health studies.

Related content:

New Papers: Remote Retention in Digital Health Studies, Machine Learning, Reproducible Benchmarking

Detecting the impact of subject characteristics on machine learning-based diagnostic applications

Journal: NPJ Digital Medicine

Authors: Elias Chaibub Neto, Abhishek Pratap, Thanneer M. Perumal, Meghasyam Tummalacherla, Phil Snyder, Brian M. Bot, Andrew D. Trister, Stephen H. Friend, Lara Mangravite and Larsson Omberg

Read the paper…


Indicators of retention in remote digital health studies: A cross-study evaluation of 100,000 participants

Preprint: arXiv:1910.01165 [stat.AP]

Authors: Abhishek Pratap, Elias Chaibub Neto, Phil Snyder, Carl Stepnowsky, Noémie Elhadad, Daniel Grant, Matthew H. Mohebbi, Sean Mooney, Christine Suver, John Wilbanks, Lara Mangravite, Patrick J. Heagerty, Pat Areán, and Larsson Omberg

Read the paper…


Reproducible biomedical benchmarking in the cloud: lessons from crowd-sourced data challenges

Journal: Genome Biology

Authors: Kyle Ellrott, Alex Buchanan, Allison Creason, Michael Mason, Thomas Schaffter, Bruce Hoff, James Eddy, John M. Chilton, Thomas Yu, Joshua M. Stuart, Julio Saez-Rodriguez, Gustavo Stolovitzky, Paul C. Boutros, Justin Guinney

Read the paper…

NEW PAPER: Are people willing to share digital data for biological research?

Question: Are people willing to participate in research advertised on the internet, and is willingness to participate associated with type of study sponsor?

Findings: This mixed-methods survey and qualitative study of 914 respondents indicated that they were more likely to participate and share their social media data with researchers in university-led research studies than in studies conducted by the US federal government or pharmaceutical companies. However, only 49.3% indicated they would share their social media data at all.

Meaning: These findings indicate that researchers may face challenges in recruiting representative samples when recruiting from internet platforms.

Read Paper: Contemporary Views of Research Participant Willingness to Participate and Share Digital Data in Biomedical Research