🔬 Research Prompt
Claude for Healthcare Journalists: Write a Technology Assessment That Non-Technical Readers Can Use to Make Procurement Decisions at Beginner Level
A complete Beginner-level prompt system for Healthcare Journalists writing technology assessments that improve research credibility with clinical and administrative audiences
The Prompt
You are a specialist healthcare technology research journalist with 9 years of experience writing technology assessments, policy briefs, and evidence summaries for healthcare publications whose readers (clinical staff, hospital administrators, and procurement teams) rely on the assessment to make purchasing decisions without reading the primary research themselves. Help me write a technology assessment that improves research credibility: a structured evaluation that a clinical director and a finance manager can both use as a decision reference, without needing to interpret technical specifications or academic sources independently.
My situation:
- Technology being assessed and the policy context: [e.g., "AI-assisted diagnostic imaging tools for radiology departments — policy brief context is an NHS England procurement guidance update expected in the next 12 months"]
- Primary audience for the assessment: [e.g., "radiology department leads and NHS trust CFOs — neither group has the time or technical background to read primary clinical trials but both need to make procurement recommendations within the next 6 months"]
- Research sources available: [e.g., "4 published clinical trials, 2 NICE technology appraisals, 1 MHRA safety alert, 3 vendor comparison reports from independent analysts, and 5 interviews with radiologists who have implemented the technology"]
- Unclear research question causing problems: [e.g., "the assessment currently tries to answer whether AI diagnostics are better than radiologists — which is the wrong question for a procurement context where the real question is whether AI tools improve workflow efficiency without compromising diagnostic accuracy"]
- Assessment length and format: [e.g., "maximum 2,500 words, structured for a publication that uses plain English throughout — no jargon, no statistical notation in the main body, all technical detail in a clearly labeled appendix"]
- The most contested claim in the current evidence base: [e.g., "vendors claim 94% diagnostic accuracy — the published trials show 78% to 91% depending on imaging modality and patient population, but the variation is unexplained in vendor marketing materials"]
- Credibility concern: [e.g., "two of the four clinical trials were funded by the vendors whose tools are being assessed — the assessment must address this without dismissing the trials entirely"]
Deliver:
1. A technology assessment structure with seven sections:
   - a procurement question statement that replaces the academic research question with the clinical and financial question the audience actually needs answered
   - a two-paragraph evidence summary that gives the assessment's overall finding before the evidence is presented
   - a technology description in plain English for a non-technical reader
   - an evidence review that addresses the accuracy variation and the vendor-funded trial credibility issue directly
   - a workflow efficiency finding separate from the diagnostic accuracy finding
   - a procurement recommendation with the three conditions that must be met before a trust should proceed
   - an appendix structure for the statistical tables and full citation list
2. A research question reframing guide — a process for converting the academic research question (is AI better than radiologists) into the procurement research question (under what conditions does AI improve radiology department throughput without increasing error rates) with the three sub-questions that structure the evidence review
3. A plain-English evidence translation template — a format for presenting each of the four clinical trials without statistical notation in the main body, covering the study population, the key finding in one sentence, the confidence in that finding rated as high, moderate, or low, and the relevance to the NHS trust procurement context
4. A contested claim resolution section — a structured paragraph for addressing the 94% accuracy claim versus the 78% to 91% trial range, explaining the variation in plain English without requiring the reader to understand statistical methodology, and naming the specific imaging modality where the highest and lowest accuracy results were recorded
5. A vendor-funded trial credibility statement — a two-sentence disclosure formula that acknowledges the funding source, names the specific bias risk it creates, and states what the assessment has done to mitigate that bias, written in language a CFO reads as due diligence rather than as a limitation of the assessment
6. A procurement recommendation framework — three conditions stated as specific, verifiable criteria that an NHS trust must confirm before proceeding with a procurement decision, each written as a question the trust can answer with a yes or no based on their specific operational context
7. A non-technical reader summary — a 300-word standalone section that gives a clinical director and a CFO each the one finding most relevant to their role, written so that neither reader needs to read the full assessment to make an informed contribution to the procurement discussion
8. A source credibility matrix — a table the journalist completes before writing the assessment that rates every available source (15 in the example source list above) on three criteria (independence from vendor funding, recency within the last 3 years, and applicability to the NHS context), producing a weight score used to determine which sources are cited in the main body versus referenced in the appendix
**Write every assessment section assuming the readers are intelligent and time-constrained but not research-literate — every claim must be stated in a sentence a clinical director and a CFO can quote in a procurement meeting without the journalist present to explain what it means.**
💡 How to use this prompt
- Complete the source credibility matrix from output item 8 before drafting any section of the assessment. Journalists who begin writing without weighting their sources almost always give equal prominence to vendor-funded trials and independent trials — which is the primary credibility problem the assessment is designed to solve. The matrix makes the source weighting visible before the writing begins.
- The most common mistake is writing the contested claim resolution section as a balanced "on one hand, on the other hand" discussion. Clinical directors and CFOs who read a balanced presentation of a contested claim conclude that the journalist does not know the answer. Name the most likely explanation for the accuracy variation, state the confidence level in that explanation, and move on — a clear imperfect answer is more useful to a procurement team than a thorough presentation of uncertainty.
- Claude outperforms ChatGPT on this task because it follows multi-step instructions more precisely and maintains consistent tone across long outputs. Use Claude for the full draft, then paste into ChatGPT if you need a faster, shorter variation.
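The weight score behind the source credibility matrix can be sketched in a few lines. This is a minimal illustration only: the prompt does not fix a rubric, so the 0–2 scale per criterion, the cutoff of 4, and the example source names below are all hypothetical assumptions, not part of the prompt itself.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """One row of the credibility matrix (hypothetical 0-2 scale per criterion)."""
    name: str
    independence: int   # 0 = vendor-funded, 1 = mixed funding, 2 = independent
    recency: int        # 0 = older than 3 years, 2 = within the last 3 years
    applicability: int  # 0 = non-NHS setting, 1 = partial, 2 = NHS context

    @property
    def weight(self) -> int:
        # Simple unweighted sum of the three criteria scores.
        return self.independence + self.recency + self.applicability

def placement(source: Source, threshold: int = 4) -> str:
    """Sources at or above the threshold are cited in the main body;
    the rest are referenced in the appendix."""
    return "main body" if source.weight >= threshold else "appendix"

# Illustrative sources, not real studies.
sources = [
    Source("Vendor-funded trial A", independence=0, recency=2, applicability=2),
    Source("NICE technology appraisal", independence=2, recency=2, applicability=2),
    Source("Older US-only trial", independence=2, recency=0, applicability=0),
]

for s in sorted(sources, key=lambda s: s.weight, reverse=True):
    print(f"{s.name}: weight {s.weight} -> {placement(s)}")
```

Completing a table like this before drafting makes the weighting decision explicit: in this sketch the vendor-funded trial still reaches the main body on recency and applicability, but its low independence score flags it for the disclosure statement in output item 5.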