AI as Your Cancer Care Co-Pilot: A New Guide for Informed Patients

How You Can Use AI to Answer Your Cancer Questions

BLUF (Bottom Line Up Front)

The Cancer Patient Lab has developed a structured framework to help cancer patients and caregivers safely use AI chatbots like ChatGPT and Claude for medical decision-making. The approach emphasizes AI literacy, structured workflows, and critical evaluation of AI-generated information while acknowledging both the promise and pitfalls of these powerful tools.


As artificial intelligence transforms how we access medical information, cancer patients face both unprecedented opportunities and significant risks. The Cancer Patient Lab, led by Brad Power and Dr. Chris Appel, has created a comprehensive guide to help patients navigate this new frontier safely and effectively.

The Growing Role of AI in Cancer Care

The statistics tell a compelling story: where 90% of patients once turned to "Dr. Google" for medical questions, many are now consulting "Dr. ChatGPT" instead. This shift represents a fundamental change in how patients research their conditions, explore treatment options, and prepare for medical appointments.

However, AI chatbots come with a critical caveat: they can "hallucinate," or generate confident-sounding but entirely fabricated information. This makes AI literacy—understanding how to use these tools effectively while recognizing their limitations—essential for patient safety.

A Four-Step Framework for AI-Assisted Decision Making

The Cancer Patient Lab's approach centers on four required steps, each scalable from "light" to "deep" depending on the patient's needs:

1. Define Your Profile and Preferences

Before diving into medical questions, patients should establish their communication preferences with the AI. This includes specifying education level (eighth-grade language versus medical professional terminology), requesting that abbreviations always be defined, and articulating personal values around quality of life versus length of life.
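
An opening prompt along these lines (illustrative wording, not taken from the Lab's materials) sets the ground rules for the whole conversation:

    "Explain everything at an eighth-grade reading level. Define every medical abbreviation the first time you use it. When comparing treatments, know that I value quality of life over length of life, so weigh side effects heavily."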

Dr. Appel emphasized that patients have different personas: some want quick, anonymous answers to specific questions, while others prefer an ongoing relationship where the AI remembers their complete medical history and previous conversations.

2. Understand Your Disease

This step involves using AI to clarify diagnosis, staging, prognosis, and treatment options. Patients can upload medical records (being mindful of privacy considerations) and engage in dialogue to gain clarity on their specific cancer type and situation.
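
For example, after pasting or uploading a pathology report, a patient might ask (hypothetical wording):

    "Based on this report, explain my diagnosis and stage in plain language. What does this typically mean for prognosis, and what treatment options are usually considered for someone in my situation?"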

3. Explore Testing Options

The framework encourages patients to investigate both standard and advanced testing options. This includes genomic testing, biomarker analysis, and functional precision oncology platforms that can help identify which treatments are most likely to be effective for their specific tumor.

Dr. Appel noted that even standard guideline-recommended tests aren't always ordered by physicians, making patient self-education particularly important.
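
An illustrative testing prompt, building on records already shared with the AI:

    "Which tests do the NCCN guidelines recommend for my cancer type and stage? Based on my records, which of those have I not yet had, and are there genomic or biomarker tests I should ask my oncologist about?"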

4. Evaluate Treatment Options

Beyond understanding NCCN (National Comprehensive Cancer Network) guidelines, patients can use AI to explore clinical trials, complementary treatments, and emerging therapies. However, the framework stresses that AI shouldn't be the sole resource for clinical trial matching—respected companies specializing in this service should also be consulted.
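
An illustrative prompt for this step (hypothetical wording) might read:

    "Beyond NCCN standard of care for my diagnosis, what clinical trials, complementary treatments, or emerging therapies have published evidence behind them? Cite your sources and flag anything that deviates from the guidelines."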

Critical Principles for Safe AI Use

The Cancer Patient Lab has identified 12 core principles, organized into three categories:

Cancer Care Decisions:

  • Understand standard cancer care guidelines as your foundation
  • Recognize the value and limitations of different tests
  • Consider evidence-based options beyond standard guidelines when appropriate
  • Stay centered and maintain control of your care journey

Managing Your Team:

  • Build and coordinate a diverse support team
  • Engage effectively with medical professionals and loved ones
  • Commit to continuous learning
  • Adapt as your situation evolves

Working With AI:

  • Practice with AI tools to develop literacy
  • Articulate your needs and preferences clearly
  • Check information through multiple methods
  • Understand privacy and security implications

Guardrails Against AI Hallucinations

Power emphasized several strategies for verifying AI-generated information (a combined example follows the list):

  • Ask the AI to critique its own answers ("An AI gave me this information—what do you think?")
  • Feed one AI's response to a different AI platform for cross-checking
  • Request confidence ratings on specific claims
  • Ask the AI to identify areas of uncertainty
  • Demand citations and sources for all claims
  • Instruct the AI to flag when recommendations deviate from guidelines
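
Several of these checks can be rolled into one cross-checking prompt pasted into a second AI platform (illustrative wording):

    "An AI gave me the following information about my cancer treatment options. Critique it: rate your confidence in each claim, identify areas of uncertainty, provide citations, and flag anything that deviates from NCCN guidelines. [paste the first AI's answer here]"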

Bill Passman, who helped develop the framework, suggested that patients can constrain AI behavior through initial instructions, such as "always use NCCN guidelines" or "only reference peer-reviewed research." He noted that different AI platforms have different biases—ChatGPT, for instance, tends to be particularly "sycophantic," agreeing with users rather than challenging their assumptions.
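
Such constraints can be stacked into a standing instruction at the start of a session; an illustrative composite of the suggestions above:

    "For this entire conversation: always use NCCN guidelines as your baseline, only reference peer-reviewed research, say so explicitly when you are uncertain, and challenge my assumptions rather than simply agreeing with me."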

The Privacy Question

Privacy concerns emerged as a significant discussion point. The framework acknowledges that free AI versions offer less privacy protection than paid versions, and that AI providers may use conversations to train future models.

For patients like Bill Passman and Jeff, who have terminal diagnoses, data privacy takes lower priority than accessing potentially life-saving information. Others may need to balance convenience against confidentiality concerns, particularly regarding employment or insurance implications.

Strategies for enhanced privacy include:

  • Using paid AI versions when available
  • Creating dedicated email addresses for AI interactions
  • Redacting personally identifiable information (see the sketch after this list)
  • Checking platform settings to disable data collection for training
  • Understanding that in most cases, only the AI provider accesses conversation data
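
For redaction, even a small script can catch the obvious identifiers before text is pasted into a chat window. Below is a minimal Python sketch, not a complete de-identification tool; the patterns are assumptions, and free-text names still have to be removed by hand:

    import re

    # Minimal redaction sketch: swaps obvious identifiers for placeholders.
    # NOT a complete de-identification tool; the patterns below are
    # illustrative only, and names must still be removed manually.
    PATTERNS = {
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
        "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with bracketed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("MRN: 4471253, seen 03/14/2024, call 617-555-0123."))
    # Output: [MRN], seen [DATE], call [PHONE].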

Real-World Application: A Clinical Trial Search Example

When asked about using AI for clinical trial matching, Power cautioned that while AI can search ClinicalTrials.gov, the information there is often outdated or incomplete. Researchers may be deliberately vague about inclusion criteria to avoid revealing competitive information. Therefore, the framework recommends using specialized clinical trial matching services such as Massive Bio, MyTomorrows, or Anora Health alongside AI tools.

Dr. Appel shared an example of how prompt specificity matters: when researching treatment options for an oligodendroglioma with sarcomatoid components, asking only about "approved chemotherapies" yielded different results than asking about "all treatment options including devices." The latter revealed FDA-approved electromagnetic field therapy (tumor treating fields) that the patient had never heard of—illustrating how AI responses depend heavily on how questions are framed.
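
Paraphrased side by side, the two framings looked roughly like this:

    Narrow:  "What chemotherapies are approved for oligodendroglioma with sarcomatoid components?"
    Broader: "What are all treatment options, including devices, for oligodendroglioma with sarcomatoid components?"

Only the broader framing surfaced tumor treating fields.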

The Metronomic Chemotherapy Paradox

Participant Roger raised a thought-provoking challenge: some speakers at Cancer Patient Lab sessions have strongly advocated for metronomic chemotherapy (frequent low doses rather than high-dose cycles), claiming it's superior to standard care with solid research backing. Yet AI platforms don't typically recommend this approach, highlighting a fundamental tension between emerging evidence, clinical opinion, and what AI has been trained to recommend.

This underscores a critical limitation: AI tends to be conservative, favoring established guidelines over innovative approaches—even when those innovations have legitimate supporting evidence. Patients seeking cutting-edge options must learn to explicitly prompt AI to look beyond standard care.
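
An explicit nudge of that kind might read (hypothetical wording):

    "In addition to standard NCCN care, what approaches, such as metronomic chemotherapy, have peer-reviewed evidence for my diagnosis? Label each as standard, emerging, or experimental, and cite the supporting research."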

Legal and Ethical Considerations

The framework development faced challenges around legal liability, with some early contributors concerned about providing specific prompt examples that patients might follow to their detriment. Power noted that California has considered laws imposing strict liability on AI system developers, though Governor Newsom has vetoed such legislation to protect innovation.

The consensus position: these tools should be presented as educational guidance for approaching AI, not as medical advice. The legal landscape remains in flux, but the emerging direction favors innovation paired with appropriate guardrails.

Looking Forward: The Education Interface

The conversation revealed an important insight: the framework itself serves as an educational tool. Suggested features for future AI interfaces include (a sketch of such a settings profile follows the list):

  • Checkboxes for preference settings (NCCN guidelines: yes/no)
  • "About" buttons explaining technical terms like NCCN
  • Persona selection (newly diagnosed vs. experienced patient)
  • Spectrum settings (strictly standard care vs. exploring all options)
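
A rough Python sketch of how such settings could translate into a standing instruction block (every field name here is an assumption for illustration):

    from dataclasses import dataclass

    @dataclass
    class PatientAIPreferences:
        # Hypothetical preference profile; field names are illustrative only.
        follow_nccn_guidelines: bool = True     # checkbox: NCCN guidelines yes/no
        define_terms: bool = True               # "About"-style explanations of terms
        persona: str = "newly diagnosed"        # vs. "experienced"
        scope: str = "strictly standard care"   # through "exploring all options"

        def to_prompt(self) -> str:
            """Render the settings as an opening instruction for a chat session."""
            lines = []
            if self.follow_nccn_guidelines:
                lines.append("Use NCCN guidelines as your baseline and flag any deviation.")
            if self.define_terms:
                lines.append("Define every medical term and abbreviation on first use.")
            lines.append(f"Assume I am a {self.persona} patient.")
            lines.append(f"Scope of options to discuss: {self.scope}.")
            return "\n".join(lines)

    print(PatientAIPreferences().to_prompt())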

The Cancer Patient Lab continues developing a comprehensive questionnaire to help patients articulate their needs and preferences before engaging with AI tools.

The Bottom Line

AI represents a powerful addition to the cancer patient's toolkit—but only when used with appropriate caution and critical thinking. As Dr. Appel concluded: "The more you reflect on your disease, the more mileage you get out of AI. It requires you to be careful, thoughtful, and use it as a tool to learn and understand options that would otherwise not be available."

The key is approaching AI as a research assistant, not a replacement for medical expertise. By following structured workflows, applying critical evaluation, and maintaining healthy skepticism, patients can harness AI's information-gathering power while avoiding its pitfalls.

For the growing number of cancer patients turning to AI for guidance, literacy isn't optional—it's essential for safety and effective self-advocacy.


Sources

  1. Power B, Appel C, et al. "Using AI to Guide Your Cancer Decisions." Cancer Patient Lab Webinar. December 2024. [Webinar transcript provided]

  2. National Comprehensive Cancer Network. "NCCN Clinical Practice Guidelines in Oncology." NCCN.org. https://www.nccn.org/guidelines

  3. ClinicalTrials.gov. U.S. National Library of Medicine. https://clinicaltrials.gov

  4. Massive Bio. "AI-Powered Clinical Trial Matching." https://massivebio.com

  5. MyTomorrows. "Expanded Access Programs and Clinical Trial Navigation." https://mytomorrows.com

  6. Anora Health. "Clinical Trial Matching Platform." https://anorahealth.com

  7. Cohen EEW. "AI and the Shifting Dynamics of Your Next Doctor Visit." Cancer Patient Lab Webinar. 2024. [Referenced in presentation materials]

  8. General Data Protection Regulation (GDPR). European Union. https://gdpr.eu

  9. OpenAI. "ChatGPT Privacy and Data Usage Policies." https://openai.com/privacy

  10. Google. "Gemini AI Studio and Privacy Settings." https://ai.google.dev
