🔬 Research Prompt
How Nonprofit Product Researchers Can Use ChatGPT to Fix the Literature Review That Takes Three Weeks and Still Does Not Tell Anyone What to Do Next
From literature reviews that take too long and produce no clear action — beginner techniques for nonprofit product researchers building a data analysis framework that reduces time to insight
The Prompt
You are a senior nonprofit research and evaluation specialist with 10 years of experience building data analysis frameworks, literature review systems, and research synthesis processes for nonprofit organizations. These organizations need evidence to support grant applications, program design decisions, and impact reporting, but lack the time and staff capacity to conduct the kind of exhaustive literature reviews that academic institutions produce. Help me create a data analysis framework that reduces time to insight and builds a repeatable literature review process a product researcher can complete in five days rather than three weeks, without sacrificing the quality of evidence needed to satisfy a foundation funder or a program officer.
My situation:
- Program area and research question: [e.g., "workforce development for formerly incarcerated adults — research question is what interventions have the strongest evidence base for improving 12-month employment retention in this population"]
- Funder or audience for the research output: [e.g., "a foundation program officer reviewing a $280,000 grant application — they expect literature-supported evidence for program design choices, not a comprehensive academic review"]
- Current literature review problem: [e.g., "researcher spends 3 weeks reading everything available, produces a 40-page document that program staff cannot use because it summarizes the literature rather than answering the program design question"]
- Data sources available: [e.g., "Google Scholar, ERIC, PubMed, several foundation gray literature databases, and one existing literature review from a peer organization completed 2 years ago"]
- Time and capacity constraint: [e.g., "one product researcher with 5 days available before the grant application deadline — cannot access a university library, cannot commission a systematic review"]
- Decision the literature must support: [e.g., "whether to include a peer mentorship component in the workforce program — the program director wants evidence that peer mentorship specifically improves retention, not just employment outcomes"]
- Quality threshold for the funder: [e.g., "the foundation accepts gray literature and practitioner research alongside peer-reviewed studies — they want a clear evidence summary, not a formal systematic review methodology"]
Deliver:
1. A data analysis framework for a 5-day literature review — a day-by-day protocol covering day one for source identification and priority ranking, day two for rapid evidence extraction from the highest-priority sources, day three for the peer mentorship evidence synthesis specifically, day four for the gap analysis and the evidence quality assessment, and day five for the grant-ready evidence summary
2. A source priority ranking system — a process for selecting the 12 to 15 most relevant sources from an initial set of 60 to 80 search results, using three criteria (recency within 5 years, population specificity to formerly incarcerated adults, and outcome relevance to 12-month employment retention) without reading each source in full before ranking
3. A rapid evidence extraction template — a five-field format applied to each of the 12 to 15 priority sources covering study population, intervention type, primary outcome, effect size or finding in plain English, and the one sentence that can be quoted in the grant application as evidence
4. A peer mentorship evidence synthesis — a structured analysis of all evidence specifically relevant to the peer mentorship decision, organized by evidence quality (strong, moderate, emerging), producing a three-paragraph summary the program director uses to make the design decision and the grant writer uses to justify the program component to the funder
5. An evidence gap identification brief — a one-paragraph statement of what the literature does not definitively answer about peer mentorship for this specific population, written as a grant application strength rather than a limitation, framed as the rationale for why this program will contribute new knowledge to the field
6. A grant-ready evidence summary template — a 400-word section formatted for direct inclusion in the grant application narrative, covering the evidence base for the program model, the specific peer mentorship evidence, the evidence gap the program addresses, and the evaluation plan that will generate new evidence — all cited in the funder's preferred format
7. A peer organization literature reuse brief — a structured process for extracting relevant evidence from the peer organization's 2-year-old existing review, identifying which sections are still current, which need updating with newer sources, and which can be cited directly in the grant application with appropriate attribution
8. A repeatable literature review checklist — a 12-step process the product researcher follows for every future literature review, built from the 5-day framework but generalized to any program area, reducing the setup time for the next review from the current 3 weeks to a target of 5 days
**Write every framework step and template assuming the product researcher is competent at reading research but untrained in systematic review methodology — every protocol must be specific enough to produce a literature review that satisfies a foundation program officer without requiring the researcher to learn academic review methods before the grant deadline.**
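If the researcher tracks search results in a spreadsheet, the day-one priority ranking (output item 2) and the five-field extraction template (output item 3) can be sketched as a short script. Everything here is an illustrative assumption, not part of the prompt itself: the 0–2 scoring scale, the field names, and the example entries are hypothetical, and the weights should be adjusted to the actual grant question.

```python
from dataclasses import dataclass

# Hypothetical day-one triage score: each criterion is rated 0-2 from
# the title and abstract alone, never from reading the full text.
def priority_score(pub_year, population_match, outcome_match, current_year=2024):
    # Recency: within 5 years = 2, within 10 years = 1, older = 0.
    age = current_year - pub_year
    recency = 2 if age <= 5 else (1 if age <= 10 else 0)
    return recency + population_match + outcome_match  # maximum 6

@dataclass
class EvidenceRecord:
    """Five-field rapid extraction template (output item 3),
    filled in immediately after reading each priority source."""
    study_population: str
    intervention_type: str
    primary_outcome: str
    finding_plain_english: str
    quotable_sentence: str

# Example: rank a candidate list and keep the top 12-15 for full reading.
candidates = [
    {"title": "Peer mentorship and reentry employment", "year": 2022,
     "population_match": 2, "outcome_match": 2},
    {"title": "General workforce training meta-analysis", "year": 2015,
     "population_match": 0, "outcome_match": 1},
]
ranked = sorted(
    candidates,
    key=lambda c: priority_score(c["year"], c["population_match"],
                                 c["outcome_match"]),
    reverse=True,
)
shortlist = ranked[:15]
```

A 60–80 item list scored this way takes an hour or two on day one, and the same `EvidenceRecord` fields can live as spreadsheet columns if no code is used at all.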
💡 How to use this prompt
- Apply the source priority ranking system from output item 2 on day one before reading a single full paper. Nonprofit researchers who begin literature reviews by reading comprehensively spend two weeks on sources that will not appear in the final summary. Ranking 60 to 80 search results using three criteria before reading in depth reduces the reading list to 12 to 15 sources that directly answer the grant question.
- The most common mistake is writing the evidence summary after completing the literature review rather than building it incrementally during the review. Researchers who read all 15 sources and then write a summary produce a document that reflects what they read most recently rather than the strongest evidence. Complete the rapid evidence extraction template from output item 3 for each source immediately after reading it — the summary then writes itself from the extraction records.
- ChatGPT handles this task well and responds faster than Claude on shorter outputs. For complex multi-constraint versions of this prompt, switch to Claude — it holds more instructions in context without drifting.