The AI Discovery Problem
How to Actually Find Where AI Belongs in Your Company
You've been in this meeting before. The conference room is filled with senior stakeholders discussing "AI transformation opportunities." Someone presents a framework with multiple dimensions and strategic pillars. The conversation touches on competitive advantages, data assets, and emerging capabilities. Everyone contributes thoughtful observations about potential use cases.
Yet six months later, the AI pilot projects are struggling with adoption, the ROI isn't materializing, and teams are quietly reverting to their old processes.
The issue isn't technological—it's methodological.
Most organizations approach AI adoption like they're designing solutions for theoretical problems rather than addressing real operational challenges.
The core insight: ask questions that reveal genuine problems rather than polite possibilities. This requires a systematic approach to discovery that prioritizes understanding current workflows over exploring future possibilities.
The Current State: Common AI Adoption Patterns
The Strategy-First Approach: Leadership identifies AI as a strategic priority and forms committees to develop implementation roadmaps. These groups often include subject matter experts who understand AI capabilities but may not deeply understand day-to-day operational challenges.
The Trend-Following Pattern: Organizations observe AI implementations at peer companies or industry conferences and seek to replicate similar approaches. This creates a bias toward visible, demonstrable solutions rather than impactful but less glamorous applications.
The Consensus-Building Problem: When teams do gather input from potential users, they often ask questions that encourage agreeable responses rather than revealing specific operational pain points. A question like "Would an AI assistant help with your reporting tasks?" typically yields positive responses regardless of actual need.
The Hierarchy Challenge: Input collection sometimes involves senior managers asking direct reports about AI needs, which can create pressure to identify opportunities even when current processes are working effectively.
A Different Approach: Problem-Focused Discovery
The alternative is to structure discovery conversations around specific problems rather than hypothetical solutions. This requires a systematic framework with three core principles:
The Three-Principle Framework
1. Focus on Current Workflows, Not Future Possibilities
Begin with understanding what exists today rather than exploring what could be possible tomorrow. Identify specific friction points in actual work patterns.
2. Ask About Past Behavior, Not Future Intentions
People are poor at predicting their own behavior but accurate when describing recent experiences. Focus on what they actually did rather than what they might do.
3. Identify Emotional Indicators
Pay attention to language that indicates frustration, time pressure, or repetitive annoyance. These emotional signals often point to high-impact opportunities.
Let's examine how this framework applies in practice:
Example 1: Report Generation
Standard Question: "Would you find value in an AI-powered reporting tool?"
Problem-Focused Alternative: "Walk me through how you created last month's performance summary. Which sections required the most time? What information was difficult to locate?"
The first question generates theoretical interest. The second reveals that Maria in operations spends three hours each month manually reformatting the same dataset because different stakeholders require different visualization formats. This points to a specific automation opportunity.
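If the underlying problem really is "same data, different formats," the fix may be a short script rather than an AI platform. Here is a minimal sketch of that idea in Python; the dataset, column names, and stakeholder views are invented for illustration:

```python
import pandas as pd

# Hypothetical monthly dataset; columns and values are invented for illustration.
df = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "team":    ["Sales", "Support", "Sales", "Support"],
    "revenue": [120_000, 45_000, 98_000, 61_000],
})

# Each stakeholder wants the same numbers, shaped differently.
views = {
    "finance": df.pivot_table(index="region", values="revenue", aggfunc="sum"),
    "ops": df.pivot_table(index="team", columns="region",
                          values="revenue", aggfunc="sum"),
    "exec": df.groupby("region")["revenue"].sum().nlargest(2).to_frame(),
}

# One loop replaces an afternoon of manual reformatting.
for stakeholder, view in views.items():
    view.to_csv(f"summary_{stakeholder}.csv")
```

The point isn't the code; it's that a precisely specified problem often has a small, testable solution, while "AI-powered reporting" has no obvious first step.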
Example 2: Customer Support
Standard Question: "How could AI improve our customer service capabilities?"
Problem-Focused Alternative: "Describe the last complex customer inquiry you handled. What information did you need to access? How long did resolution take?"
This shift from hypothetical to specific, from future-focused to past-focused, transforms conversations from strategic speculation to operational insight.
Example 3: Data Analysis
Standard Question: "What AI analytics tools would benefit your team?"
Problem-Focused Alternative: "What analysis did you complete last week that felt repetitive? What would have accelerated your work?"
Applying the Three Principles
Now let's examine each principle in detail:
1. Focus on Current Workflows, Not Future Possibilities
Organizations often begin AI discussions by exploring what could be possible rather than understanding what currently exists. This approach can lead to solutions that sound innovative but don't integrate well with actual work patterns.
Less Effective: "We should explore AI applications for knowledge management."
More Effective: "I noticed you've been working longer hours recently. What's been requiring additional time?"
Typical response: "I spend significant time searching for the right compliance documentation when customers have specific questions."
That response reveals a concrete problem worth addressing.
Begin conversations with current-state understanding. Identify specific friction points. If someone isn't experiencing notable frustration with their current process, AI may not be the appropriate solution.
2. Ask About Past Behavior, Not Future Intentions
People are generally poor at predicting their own behavior, particularly regarding new tool adoption. They're much more accurate when describing recent experiences.
Instead of: "Would you use an AI tool for data analysis?"
Ask: "Show me the last analysis you completed. What data sources did you use? What took the longest? What would have changed if you'd received results faster?"
This approach reveals actual workflows, genuine bottlenecks, and real time investments. It also helps distinguish between solving genuine problems and simply digitizing already-efficient processes.
3. Identify Emotional Indicators
The language people use when describing their work carries its own signal. Specific frustration, time pressure, and repetitive annoyance tend to mark the highest-impact opportunities, while polite generalities rarely do.
High-Impact Indicators:
"I spend too much time manually updating these systems"
"It's difficult to locate the right contract language quickly"
"I find myself answering similar questions repeatedly"
Lower-Impact Indicators:
"Additional automation would be helpful"
"More analytical insights could be valuable"
"Increased efficiency would be beneficial"
Focus on areas where people express specific frustration rather than general interest in improvement.
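At any real volume of interview notes, a crude keyword pass can help surface the high-signal language first, though it never replaces a careful human read. A toy sketch, with invented phrase lists:

```python
# Toy triage of interview notes: flag wording that suggests specific
# frustration rather than generic interest. Phrase lists are invented
# for illustration and should be grown from your own transcripts.
HIGH_SIGNAL = ["too much time", "manually", "repeatedly",
               "difficult to", "can't find", "every single"]
LOW_SIGNAL = ["would be helpful", "could be valuable", "would be beneficial"]

def triage(note: str) -> str:
    text = note.lower()
    if any(phrase in text for phrase in HIGH_SIGNAL):
        return "high-impact candidate"
    if any(phrase in text for phrase in LOW_SIGNAL):
        return "generic interest"
    return "needs human read"

print(triage("I spend too much time manually updating these systems"))  # high-impact candidate
print(triage("Additional automation would be helpful"))                 # generic interest
```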
Practical Implementation: Conversation Framework
Creating Appropriate Context
Effective discovery requires establishing environments where people feel comfortable providing honest feedback:
Separate from performance evaluation: Avoid having managers interview their direct reports about AI needs. People may hesitate to criticize current processes if their supervisor designed them.
Remove implementation pressure: Make clear that there's no expectation to identify AI opportunities. The goal is understanding current challenges.
Use operational language: Focus on work efficiency rather than technical capabilities. Avoid terms like "machine learning" or "digital transformation."
Question Categories
Current State Questions:
"Describe your typical workflow for [specific task]"
"What consumes the most time in your role?"
"What work feels least valuable to you?"
Specific Challenge Questions:
"How long does that process typically require?"
"What happens when you can't locate needed information?"
"Who becomes involved when this workflow breaks down?"
Impact Assessment Questions:
"How much time would you save if this were automated?"
"What would you do with that recovered time?"
"Who else experiences this same challenge?"
Recognition of Warning Signs
Overly Enthusiastic Responses: If someone immediately suggests that AI would improve everything they do, they may not be thinking specifically about individual use cases.
Vague Problem Statements: "Better insights would be helpful" isn't a specific problem. "I can't determine why our conversion rate decreased and I've spent six hours analyzing data" is.
Theoretical Solutions: "An AI that could predict customer behavior would be valuable" doesn't indicate whether they'd actually use such a tool or how it would fit their workflow.
From Discovery to Implementation
Prioritization Framework
Once you've gathered specific insights, prioritize opportunities based on:
Time Impact: How much time would this save per person per week?
Frustration Level: How much dissatisfaction exists with the current process?
Integration Ease: How smoothly would this fit existing workflows?
Scope: How many people experience this specific problem?
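One way to make the comparison concrete is a simple weighted score. The sketch below is illustrative rather than prescriptive: the weights, the 1-to-5 scales, and the example opportunities are assumptions you'd replace with your own discovery data.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    hours_saved_per_person_week: float  # Time Impact
    frustration: int                    # Frustration Level, 1 (mild) to 5 (acute)
    integration_ease: int               # Integration Ease, 1 (disruptive) to 5 (drop-in)
    people_affected: int                # Scope

    def score(self) -> float:
        # Weights are assumptions; calibrate them to your own priorities.
        weekly_hours = self.hours_saved_per_person_week * self.people_affected
        return weekly_hours + 2.0 * self.frustration + 1.5 * self.integration_ease

# Example opportunities drawn from the kinds of findings discussed above.
candidates = [
    Opportunity("Monthly report reformatting", 0.75, 5, 4, 3),
    Opportunity("Compliance document search", 1.5, 4, 2, 12),
    Opportunity("Common inquiry routing", 0.5, 3, 5, 25),
]

for opp in sorted(candidates, key=lambda o: o.score(), reverse=True):
    print(f"{opp.name}: {opp.score():.1f}")
```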
Start with Minimum Viable Solutions
Rather than building comprehensive AI platforms, begin with targeted automations that address specific pain points.
Instead of: "AI-powered customer service platform"
Consider: "Automatic routing of the five most common inquiry types"
Then measure actual usage, time savings, and user satisfaction. Let real behavior guide subsequent development.
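A first version of that routing automation can be deliberately simple. The sketch below assumes keyword matching is good enough for a first pass; the five categories and their keywords are placeholders to be replaced with patterns from your own inquiry history.

```python
# Minimal keyword-based router; categories and keywords are hypothetical.
ROUTES = {
    "billing":  ["invoice", "charge", "refund", "payment"],
    "access":   ["password", "login", "locked out"],
    "shipping": ["delivery", "tracking", "shipment"],
    "returns":  ["return", "exchange", "damaged"],
    "account":  ["cancel", "upgrade", "subscription"],
}

def route(inquiry: str) -> str:
    """Send the inquiry to the first matching queue, else to a person."""
    text = inquiry.lower()
    for queue, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return queue
    return "human_review"

print(route("I was charged twice on my last invoice"))  # billing
print(route("My package tracking hasn't updated"))      # shipping
```

If even this crude version demonstrably saves time, that's evidence worth building on; if nobody uses it, you've learned that cheaply.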
Develop Internal Advocates
The most successful AI implementations have internal champions who participated in the discovery process. These individuals understand the problem being solved and can advocate for adoption because they helped identify the need.
Building Sustainable AI Adoption
Effective AI discovery isn't a one-time activity. It requires building organizational capabilities for ongoing problem identification and solution development.
Regular Assessment: Make discovery conversations ongoing rather than project-based.
Cross-Functional Involvement: Include actual users in AI planning, not just technical teams.
Honest Measurement: Track whether AI tools actually save time or create new overhead (see the sketch after this list).
Iteration Mindset: Be willing to discontinue AI projects that don't deliver measurable value.
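To make "honest measurement" concrete, here's a hedged sketch of the bookkeeping it implies: weigh the time a tool saves against the overhead it creates. The tool names and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ToolReport:
    tool: str
    hours_saved_per_week: float     # measured or self-reported savings
    overhead_hours_per_week: float  # review, rework, prompt-fixing, upkeep

    @property
    def net_hours(self) -> float:
        return self.hours_saved_per_week - self.overhead_hours_per_week

# Invented numbers for illustration only.
reports = [
    ToolReport("report formatter", 3.0, 0.5),
    ToolReport("inquiry router", 6.0, 1.0),
    ToolReport("draft generator", 2.0, 2.5),  # net negative: a candidate to cut
]

for r in reports:
    verdict = "keep" if r.net_hours > 0 else "reconsider"
    print(f"{r.tool}: net {r.net_hours:+.1f} h/week -> {verdict}")
```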
Many organizations struggle with AI adoption because they're asking the wrong questions to the wrong people in the wrong context. They're building solutions for theoretical problems or addressing the wrong challenges entirely.
A problem-focused discovery approach provides a framework for having more productive conversations about where AI can actually add value. It's not about limiting innovation—it's about ensuring that innovation serves genuine operational needs.
The Framework Summary
Remember the three core principles:
Focus on Current Workflows, Not Future Possibilities: Start with understanding existing friction points rather than exploring hypothetical improvements
Ask About Past Behavior, Not Future Intentions: Ground conversations in recent experiences rather than predicted behaviors
Identify Emotional Indicators: Look for frustration and time pressure signals that indicate high-impact opportunities
The question isn't "How can we use AI?" but rather "What problems do we have that technology might solve?"
Your employees will provide more useful input. Your budget will be allocated more effectively. And your AI implementations will be more likely to achieve sustainable adoption.
The most successful AI strategy isn't about deploying the most advanced technology. It's about solving real problems for real people. And the only way to identify those problems is to ask better questions.

