🖌️ Design Prompt

Why Startup Product Designers Struggle With Poor Mobile Responsiveness — Gemini Fixes It With a UX Research Report

Expert-level strategies for Startup Product Designers — solve poor mobile responsiveness with a UX research report that improves mobile usability
🔥 4.3K uses
🤖 Gemini
✅ Free to use
The Prompt
You are an expert mobile UX research strategist with 12 years of experience writing UX research reports and mobile responsiveness audit frameworks for startup product teams, where poor mobile responsiveness is diagnosed as a development problem but is actually a design brief problem: the mobile interaction was never properly specified, because the design brief assumed desktop-first behavior would transfer to mobile without modification.

Help me write a UX research report that improves mobile usability and gives the startup's product team a clear diagnosis of which mobile responsiveness failures are design specification failures, which are development implementation failures, and which are genuine mobile UX problems that require new interaction design.

My situation:

- Startup product type and mobile problem: [e.g., "a B2B project management SaaS with a 38% mobile session rate — mobile users complete 24% fewer tasks than desktop users across the same workflows, and the most abandoned mobile workflow is the task creation and assignment flow"]
- Design brief status: [e.g., "the product was designed in Figma with desktop as the primary frame — the mobile frames were created by scaling down the desktop layout rather than rethinking the interaction model for mobile context"]
- Research data available: [e.g., "FullStory session recordings of 40 mobile sessions, a UserZoom moderated usability test with 5 mobile participants on the task creation flow, and analytics showing the specific tap targets that produce the highest error rate on mobile"]
- Primary mobile usability failure identified: [e.g., "the task assignment dropdown is 32px tall on desktop — it renders at an effective touch target of 18px on mobile, producing a 44% tap error rate on the assignment field"]
- Report audience: [e.g., "the startup's product manager and engineering lead — the PM makes the prioritization decision, the engineering lead determines the implementation approach, and neither has a UX research background"]
- Mobile device range tested: [e.g., "iOS 16 on iPhone 14 Pro and iPhone SE (2nd generation), Android 13 on Pixel 7 — the iPhone SE represents the smallest viewport at 320px CSS width, which currently accounts for 12% of mobile sessions"]
- Startup's development constraint: [e.g., "two front-end engineers who have 3 sprint cycles available before the next funding milestone — the report must prioritize fixes by development complexity so the highest-impact changes ship in sprint one"]

Deliver:

1. A UX research report structure for a product manager and engineering lead — a six-section format covering an executive summary with the three prioritized fixes, a mobile session overview from the FullStory data, the usability test findings from the five UserZoom sessions, the tap error rate analysis from analytics, the root cause classification (design specification failure, development implementation failure, or genuine mobile UX problem) for each finding, and the sprint-by-sprint implementation priority.
2. A root cause classification framework — a decision tree for sorting each mobile usability finding into one of three categories, with the specific diagnostic question for each (did the design brief specify mobile behavior for this interaction; was the mobile specification implemented correctly; is the mobile context fundamentally different from desktop in a way that requires a new interaction model), plus one applied example using the task assignment dropdown finding.
3. A tap target audit brief — a systematic review of every interactive element in the task creation flow against the 44px minimum touch target recommended by Apple's Human Interface Guidelines (Google's Material Design recommends 48dp), producing a list of the failing elements with the current rendered size and the corrected size specification for the development team.
4. A usability test finding presentation format for non-research audiences — a three-part structure for each of the five participant findings (what the participant did, what they said or expressed, and what this means for the product), with each finding mapped to the specific screen and given its root cause classification.
5. A sprint prioritization matrix — a two-axis scoring of each finding by development effort (hours estimated by the engineering lead) and user impact (task completion rate improvement estimated from the usability test data), producing a sprint one, sprint two, and sprint three assignment for each fix, with the expected mobile task completion rate improvement at the end of sprint three.
6. A mobile design brief template for future feature development — a one-page brief the product designer completes before any mobile feature is designed, covering the touch target specification for each interactive element, the thumb zone mapping for the primary actions, the viewport range the interaction must support, and the mobile context assumptions (the user is moving, using one hand, and glancing rather than reading).
7. A FullStory session annotation guide — a process for the product manager to review the 40 session recordings and annotate each with a root cause classification, producing a quantified breakdown of what percentage of mobile usability failures are design specification failures versus development implementation failures versus genuine mobile UX problems, before the report findings are presented to the team.
8. A 30-day mobile improvement measurement plan — four metrics tracked weekly after the sprint one fixes are deployed (mobile task completion rate on the task creation flow, tap error rate on the assignment field, mobile session bounce rate on the task creation page, and mobile feature adoption rate for task assignment), with the threshold that confirms each fix is working before sprint two begins.

**Write every report section and prioritization framework assuming the PM and engineering lead will act on the report without a follow-up research presentation — every finding must include the development action required, not just the UX observation, and every priority classification must explain the business consequence of deferring the fix, so the PM can justify the sprint allocation to a founder who does not attend research debriefs.**
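The two-axis scoring in deliverable 5 can be sketched as a small helper: rank findings by estimated impact per engineering hour, then pack them into sprints by available capacity. Everything below is hypothetical — the finding names, hour estimates, impact points, and the 60-hour sprint capacity are illustrative placeholders, and a real matrix would use the engineering lead's actual estimates.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    name: str
    effort_hours: float  # engineering lead's development estimate
    impact_pts: float    # estimated task completion rate lift, in percentage points


def assign_sprints(findings, sprint_capacity_hours=60.0, sprints=3):
    """Greedy two-axis prioritization: highest impact-per-hour first,
    packed into the earliest sprint with enough capacity left."""
    ranked = sorted(findings, key=lambda f: f.impact_pts / f.effort_hours, reverse=True)
    plan = {s: [] for s in range(1, sprints + 1)}
    remaining = {s: sprint_capacity_hours for s in range(1, sprints + 1)}
    for f in ranked:
        for s in range(1, sprints + 1):
            if f.effort_hours <= remaining[s]:
                plan[s].append(f.name)
                remaining[s] -= f.effort_hours
                break
    return plan
```

With hypothetical inputs, a cheap high-impact fix like the dropdown touch target lands in sprint one, while a costly thumb-zone remap slips to sprint two once sprint one's capacity is spent. A greedy pack is a deliberate simplification; it mirrors how a PM would eyeball the matrix rather than solve it optimally.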

💡 How to use this prompt

  • Complete the tap target audit from output item 3 before the research report is written. The audit produces the specific quantitative data (18px rendered size versus 44px minimum, 44% tap error rate) that makes the report findings actionable rather than observational. A finding that says "the touch target is too small" requires a research debrief to act on. A finding that says "the assignment field renders at 18px on iPhone SE, causing a 44% error rate — fix requires changing the line height from 1.2 to 2.0" can be handed directly to the engineering lead.
  • The most common mistake is classifying every mobile usability problem as a design specification failure without checking whether the mobile specification exists and was implemented correctly. Some mobile failures are engineering implementation failures where the correct specification was ignored or misread — and a report that blames the design team for engineering failures damages the cross-functional relationship the product team needs to fix the problem together. The root cause classification framework from output item 2 must be applied rigorously before any finding is assigned a category.
  • Gemini's real-time web access gives it an edge when you need current mobile UX benchmarks, touch target specification standards, or recent research on mobile B2B task completion rates before building your research framework. For final report language and prioritization matrix structure, paste Gemini's research into Claude for cleaner professional output.
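The tap target arithmetic in the first tip can be sketched as a quick audit pass. This is a simplified model under stated assumptions: the effective target height is approximated as font-size × line-height plus vertical padding, the element names and pixel values are hypothetical, and a real audit would measure rendered boxes in browser devtools (e.g., `getBoundingClientRect`) rather than compute them from CSS values. The fix suggested per element is extra padding, though a line-height change can achieve the same rendered height.

```python
MIN_TARGET_PX = 44  # Apple HIG minimum; Material Design recommends 48dp


def rendered_target(font_size_px, line_height, padding_y_px=0):
    """Approximate effective tap-target height of a text element."""
    return font_size_px * line_height + 2 * padding_y_px


def audit(elements):
    """Flag elements below the minimum, with the vertical padding
    (per side) needed to reach it.  Each element is a tuple of
    (name, font_size_px, line_height, padding_y_px)."""
    report = []
    for name, font, lh, pad in elements:
        height = rendered_target(font, lh, pad)
        if height < MIN_TARGET_PX:
            needed_pad = (MIN_TARGET_PX - font * lh) / 2
            report.append((name, round(height, 1), round(needed_pad, 1)))
    return report
```

For instance, a hypothetical 15px font at line-height 1.2 with no padding renders at 18px, matching the failing-dropdown example above, and the audit reports the per-side padding that would bring it to 44px.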
Related Topics
#Gemini #Mobile UX Research #Startup Product Design

About This Design AI Prompt

This free Design prompt is designed for Gemini and works with any modern AI assistant, including ChatGPT and Claude. Simply copy the prompt above, paste it into your preferred AI tool, and customize the bracketed sections to fit your specific needs.

Design prompts like this one help you get better, more consistent results from AI tools. Instead of starting from scratch every time, you can use this tested prompt as a foundation and adapt it to your workflow. Browse more Design prompts →

