AI for the Real World
Notes from Davenport & Ronanki — HBR, Jan 2018
You are building the wrong AI project because you are chasing a moon shot while the real money sits in automating the boring, existing drudgery. MD Anderson Cancer Center burned $62 million on an IBM Watson initiative designed to diagnose and recommend cancer treatments, a grand vision that was shelved in 2017 without ever treating a single patient. The same institution found success with unglamorous tools that recommended hotels and restaurants for patients' families, identified patients struggling with bills, and resolved staff IT tickets. The pattern holds across industries: the projects that sound impressive at a board meeting are the ones most likely to fail, while the mundane ones that automate repetitive tasks are the ones that ship and improve satisfaction scores.
What Organizations Actually Want
If you survey the 152 cognitive technology projects studied by Davenport and Ronanki, you will find that moon shots were far less likely to succeed than initiatives that enhanced existing business processes. A 2017 Deloitte survey of 250 executives reveals what companies actually want from this technology, and it is not the sci-fi replacement of humans: 51 percent want to enhance products, 36 percent aim to optimize internal operations, and another 36 percent seek to free up workers for higher-value work. Only 22 percent want to reduce headcount, so augmentation outranks headcount reduction by a wide margin.
Where the Bottleneck Is
What is actually blocking these organizations? Integration tops the list at 47 percent: plugging cognitive technology into legacy processes and systems is the real bottleneck. Cost comes second at 40 percent, followed by management understanding at 37 percent and talent scarcity at 35 percent. Only 18 percent cite oversold hype as the main obstacle. The number-one challenge, in other words, is organizational, and a better algorithm won't fix an integration problem.
Three Types of Cognitive Technology
The 152 projects fall into three types of cognitive technology, and most founders confuse them. Process automation uses robotic process automation (RPA) to handle digital and physical tasks: bots act like humans, inputting data and consuming information across multiple IT systems. It is the least expensive type, the easiest to implement, and the fastest to pay back, even though it is the least "smart" and is not programmed to learn. NASA launched four RPA pilots in accounts payable, IT, and HR, achieving 86 percent completion of HR transactions without human intervention before rolling the technology out enterprise-wide. Across all 71 RPA projects in the study, replacing employees was neither the primary objective nor a common outcome; most projects reduced outsourced work or absorbed volume growth without adding staff.
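To make the RPA pattern concrete, here is a minimal, hypothetical sketch in Python: a rule-based bot moves invoice records between two stand-in systems (plain dicts here; real RPA tools script actual UIs and APIs) and escalates anything its fixed rules can't handle. The function and field names are invented for illustration, not taken from any real RPA product.

```python
# Minimal sketch of a rule-based RPA-style task: read records from one
# system, key them into another, and flag anything the fixed rules reject.
# Both "systems" are stand-in dicts; real RPA tools drive actual UIs/APIs.

def transfer_invoices(source_records, target_system, exception_queue):
    """Copy invoice records across systems; route exceptions to a human queue."""
    completed = 0
    for rec in source_records:
        # Fixed rules, no learning: reject records missing required fields.
        if not rec.get("invoice_id") or rec.get("amount", 0) <= 0:
            exception_queue.append(rec)  # a human handles the exception
            continue
        target_system[rec["invoice_id"]] = {
            "amount": round(rec["amount"], 2),
            "vendor": rec.get("vendor", "UNKNOWN"),
        }
        completed += 1
    return completed

source = [
    {"invoice_id": "A1", "amount": 120.5, "vendor": "Acme"},
    {"invoice_id": "",   "amount": 40.0},   # bad record -> human queue
    {"invoice_id": "A2", "amount": 99.99},
]
target, exceptions = {}, []
done = transfer_invoices(source, target, exceptions)
print(done, len(exceptions))  # prints "2 1": two done without a human, one escalated
```

The point of the sketch is the ratio at the end: most transactions complete without human intervention and the exceptions land in a queue a person reviews, which is the shape of the NASA HR result.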
Cognitive insight uses algorithms to detect patterns in vast volumes of data, acting as analytics with memory: the models improve over time. GE integrated supplier data across databases and saved $80 million in the first year by eliminating redundancies and renegotiating contracts. Deloitte extracts terms from contracts automatically so auditors can address 100 percent of documents without reading each one manually.

Cognitive engagement uses natural language processing and chatbots to interact with humans, and it is the least common of the three types. Facebook's Messenger bots could not handle 70 percent of requests without human help, whereas Vanguard is piloting an intelligent agent that helps customer service staff answer FAQs, with a plan to let customers engage directly with the AI. The mature play combines all three types: an Italian insurer built a cognitive help desk using deep learning to search FAQs, NLP to interact in Italian, and smart routing to escalate complex problems.
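To make the cognitive-insight category concrete, here is an illustrative, rule-based sketch of the kind of redundancy detection behind supplier-data consolidation like GE's: group records from different databases under a normalized supplier key and surface near-duplicates. Production entity-resolution systems use learned models; the names and fields below are invented for the example.

```python
# Illustrative sketch of pattern detection over supplier data: records from
# separate databases that normalize to the same key are flagged as redundant.

def normalize(name):
    """Crude normalization: lowercase and strip everything non-alphanumeric."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def find_duplicates(records):
    """Group records by normalized supplier name; keep groups with >1 entry."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec["supplier"]), []).append(rec)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

suppliers = [
    {"supplier": "Acme Corp.", "db": "procurement"},
    {"supplier": "ACME Corp",  "db": "finance"},
    {"supplier": "Globex",     "db": "procurement"},
]
dupes = find_duplicates(suppliers)
print(list(dupes))  # ['acmecorp']
```

Each flagged group is a candidate for consolidation and contract renegotiation; a learned model would replace `normalize` with fuzzier matching, but the workflow around it stays the same.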
Start by understanding what each technology can do. Rule-based systems are transparent but fixed; deep learning learns but is a black box. Match the tech to the task — a neural network is overkill for data entry. The second step is building a portfolio of bets, assessing bottlenecks where knowledge exists but is siloed, scaling challenges where a process is too slow or expensive to expand, and firepower gaps where there is more data than humans can analyze. You must prioritize by business value and feasibility instead of chasing the flashiest capability.
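The portfolio step above can be sketched as a simple weighted scoring pass, with business value and feasibility rated on 1-10 scales. The projects, scores, and weights below are hypothetical; the takeaway is that a feasible, mid-value automation can outrank a high-value moon shot.

```python
# Hypothetical sketch of prioritizing a portfolio of AI bets by a weighted
# score of business value and feasibility (both on illustrative 1-10 scales).

def prioritize(projects, value_weight=0.6, feasibility_weight=0.4):
    """Return (name, score) pairs sorted from highest to lowest score."""
    scored = [
        (p["name"], value_weight * p["value"] + feasibility_weight * p["feasibility"])
        for p in projects
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    {"name": "moon-shot diagnosis engine", "value": 9, "feasibility": 2},
    {"name": "invoice-entry automation",   "value": 6, "feasibility": 9},
    {"name": "contract-term extraction",   "value": 7, "feasibility": 6},
]
for name, score in prioritize(candidates):
    print(f"{name}: {score:.1f}")
```

With these made-up numbers the automation project scores 7.2 and the moon shot 6.2, which is the MD Anderson lesson in arithmetic form.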
How to Deploy It
The third step is launching pilots, and the "injected" project is the trap to avoid: a senior executive hears about AI at a conference, tells the team to "do something cognitive," bypasses rigorous piloting, and the project fails publicly, setting back the entire AI program. Pfizer ran 60-plus projects through systematic assessment, many now in production, while Becton Dickinson uses end-to-end process maps and heat maps to identify which activities are most amenable to AI before touching anything. Scaling up is where change management becomes half the battle. A US retail chain piloted machine learning for inventory and merchandising, only to have buyers who felt threatened collectively ask the chief merchandising officer to kill the program. The fix required the executive to show results and then reframe the narrative: freed from routine tasks, buyers could focus on understanding customer desires and apparel trends. Educate, don't mandate.
Augment vs Replace
The distinction between augmenting and replacing matters more than most leaders acknowledge because cognitive systems perform tasks, not entire jobs. Most job losses in AI deployments came from attrition or reduced outsourcing — layoffs were rare. Vanguard's Personal Advisor Services manages over $80 billion in assets by dividing work clearly: AI generates financial plans, handles goals-based forecasting, rebalances portfolios, minimizes taxes, and tracks aggregated assets. Human advisers understand investment goals, customize implementation plans, provide behavioral coaching, handle estate planning, and monitor accountability. This hybrid model costs less than pure human advising and delivers higher satisfaction than pure robo-advising, turning human advisers into investing coaches and emotional circuit breakers.
Automating an existing workflow as-is gives you a fast, bad process. You must redesign the work around human and machine strengths: apply design thinking, involve the affected workers, and treat first drafts as experiments. Anthem Health used AI as the occasion for a major modernization, redesigning workflows from scratch, with the CIO framing the initiative as using cognitive tools to move the organization to the next level.

For Nyantrace, the Davenport framework applies directly: start with process automation, prove value fast, and build toward cognitive insight. The founders and teams I am building for are in the pilot phase of deploying AI agents, and they need the observability layer before they scale. The integration challenge Davenport identified in large organizations maps onto my market: the hardest part isn't building the agent, it's knowing what the agent did.
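A minimal sketch of that observability idea, assuming nothing about any real product's API: wrap each agent tool call in a decorator that records inputs, outputs, and errors as structured trace events. The `Trace` class, the event schema, and the `lookup_refund_policy` tool are all invented for illustration.

```python
# Toy agent-observability layer: every tool call an agent makes is recorded
# as a structured event, so a run leaves behind a complete action trace.
import json
import time
from functools import wraps

class Trace:
    def __init__(self):
        self.events = []

    def record(self, tool):
        """Decorator: log inputs, outputs, and errors of an agent tool call."""
        @wraps(tool)
        def wrapper(*args, **kwargs):
            event = {"tool": tool.__name__, "args": args, "ts": time.time()}
            try:
                event["result"] = tool(*args, **kwargs)
                event["status"] = "ok"
            except Exception as exc:
                event["status"], event["error"] = "error", repr(exc)
                raise
            finally:
                self.events.append(event)  # record success and failure alike
            return event["result"]
        return wrapper

trace = Trace()

@trace.record
def lookup_refund_policy(order_id):
    # Stand-in for a real tool the agent would call (API, database, etc.).
    return {"order": order_id, "refundable": True}

lookup_refund_policy("ORD-42")
print(json.dumps([{k: e[k] for k in ("tool", "status")} for e in trace.events]))
```

Even this toy answers the question above: after a run, `trace.events` is a record of what the agent did, in what order, and with what outcome, which is the data you need before scaling a pilot.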
Observability and governance for AI agent systems. If you're building with agents, I'd like to talk.
nyantrace.ai →