AI and Healthcare: Hype vs. Reality
Navigating the Promise and Perils of AI in Modern Medicine
My fascination with statistical learning and AI started in a college course on statistical quality control models for manufacturing processes. A few years later, I wrote a thesis on how technological change (in particular, the emergence of high-throughput screening and combinatorial chemistry) shaped the cooperative strategies of biopharmaceutical firms in the 1990s. While trying to understand the nature of screening techniques, I came across quantitative structure-activity relationship (QSAR) models. These models, used since at least the 1960s, employ regression techniques to predict the activity of chemical compounds from their molecular structures. Viewed as predictive models, they may well be classified as AI in chemistry and, by extension, AI in healthcare.
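The core idea behind those early QSAR models can be sketched in a few lines. The following is a minimal illustration, not any published model: it fits an ordinary least-squares regression from two hypothetical molecular descriptors (molecular weight and logP, with invented values) to an invented activity measure, then predicts the activity of a new compound.

```python
import numpy as np

# Hypothetical training data: each row is a compound described by two
# molecular descriptors (molecular weight in Da, logP).
# All values here are invented for illustration only.
X = np.array([
    [180.2, 1.2],
    [250.3, 2.8],
    [310.4, 3.5],
    [150.1, 0.5],
    [275.0, 2.1],
])
y = np.array([5.1, 6.9, 7.8, 4.3, 6.4])  # e.g., a pIC50-style activity

# Add an intercept column and fit ordinary least squares:
# activity ~ b0 + b1*MW + b2*logP
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict activity for a new (hypothetical) compound: MW=220.0, logP=1.8
new_compound = np.array([1.0, 220.0, 1.8])
predicted = new_compound @ coeffs
print(f"coefficients: {coeffs}")
print(f"predicted activity: {predicted:.2f}")
```

Modern QSAR work replaces the hand-picked descriptors and linear fit with learned molecular representations and deep networks, but the structure-to-activity mapping is the same in spirit.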
We have come a long way from those basic models to today's massive neural networks. Aided by falling computation costs, fast-evolving deep learning techniques are now the name of the game. Rarely does a day pass without "AI in healthcare" appearing in the headlines of major newspapers. These powerful tools are already making an impact and showing tremendous potential across many facets of healthcare. In Cincinnati, for example, all major hospital systems use AI in some capacity, from diagnosing pulmonary embolism, stroke, and breast cancer, to screening for and treating Alzheimer's disease, to automating insurance claims and billing.
AI is already saving doctors time on documenting patient records. To appreciate why this matters, consider a paper published in JAMA Internal Medicine estimating that U.S. physicians spent about 125 million hours outside office hours completing documentation in 2019. We have all heard stories of burnout driven by electronic health records.
Drugs developed with the assistance of AI models are already entering advanced stages of clinical trials. One example is a pulmonary fibrosis drug from Insilico Medicine that is now in Phase II clinical trials.
A large language model (LLM) developed by Google recently surpassed board-certified primary-care physicians in accurately diagnosing conditions related to respiratory and cardiovascular health, among others. In medical interviews, it gathered information comparable to human doctors and demonstrated greater empathy.
However, public opinion does not seem to reflect AI's potential. Recent surveys, like one conducted by Pew Research in 2023, indicate discomfort and mistrust among the public: about 60% of Americans say they would be uncomfortable with their own healthcare provider relying on AI. Why, despite AI's tangible benefits in healthcare, does a cloud of skepticism overshadow its potential?
In exploring the public’s apprehension towards AI in healthcare, I propose we examine the two extremes: the ‘narrative of gloom’ and the ‘narrative of bloom.’ The ‘narrative of gloom’ paints AI as a dystopian force, a harbinger of disruption and ethical quandaries, echoing the fears and uncertainties many feel towards rapid technological change.
The ‘narrative of gloom’ is exemplified by statements like Geoffrey Hinton’s in 2016, when he suggested we should stop training radiologists because AI would soon render their roles obsolete. That perspective assumes a radiologist’s job is solely to take and read images, neglecting its multifaceted nature. A deeper look into the U.S. Department of Labor’s occupational classification system (O*NET) reveals that radiologists, like other professionals, perform a wide range of distinct tasks, only some of which involve image interpretation. While AI can reshape some of these tasks, the notion that it will completely replace such complex jobs is oversimplified.
On the other hand, the ‘narrative of bloom’ (the other extreme) offers an overly optimistic view, portraying AI as a flawless solution poised to rectify all the ailments of our healthcare system. Both narratives are extremes, and both, I argue, are misleading. As usual, reality lies somewhere in between and is far more nuanced, interwoven with both incredible potential and complex challenges.
AI is a technological breakthrough. However, unleashing its potential in healthcare organizations requires two complementary innovations: innovation in how work is structured inside the organization, and innovation in how the broader institutional environment (human rights, competition, freedom of speech, the IP regime, privacy, etc.) is set up.
The first issue, organizational structure, can be broken down into several questions. For example, how should workflow be structured so that AI’s potential can be harnessed? Answering this requires an interdisciplinary approach drawing on organizational theory, economics, operations research, computer science, and, of course, healthcare. Such an approach is possible only through collaboration between academia and industry, and I believe business school academics and organizational theorists have much to offer.
The second issue, how the institutional environment is set up, is in my view significantly more challenging and requires engagement from all stakeholders in society. In the interest of brevity, I will focus only on the issue of bias, which is widely discussed these days.
In our current age, when AI’s data-processing abilities are exponentially greater, the risk of replicating biases at grand scale is more pronounced than ever. This presents a unique challenge: ensuring that AI, as it becomes increasingly integrated into healthcare, does not cloak prejudice as objectivity. We must approach the development of AI with a blend of ethical oversight and critical scrutiny, striving to ensure it serves as a tool for unearthing truths rather than creating deceptive illusions. And yes, ironically, statistics and the search for objective truth can themselves create an illusion of knowledge.
In the 1890s, a statistician working for the Prudential Insurance Company was tasked with refuting accusations that the company had discriminated against African Americans by denying them insurance. His report, whose egregious title I shall not repeat here, was published in 1896 by the American Economic Association and used statistical analysis to claim the inherent inferiority of Black people. The famous African American sociologist W. E. B. Du Bois contested this, arguing that the higher mortality rates and lower standards of living among Black Americans were not evidence of inferiority but the results of discrimination.
That actuarial analysis, underpinned by prejudiced assumptions, is a stark reminder of how bias can skew outcomes even under the guise of objective statistical analysis. It recalls an observation often attributed to Stephen Hawking: “The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge.”
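Du Bois’s rebuttal is, in modern statistical terms, an argument about an omitted confounder: a group difference in outcomes that disappears once you account for the conditions discrimination imposed. A toy simulation, with entirely invented numbers and no connection to any historical data, shows how easily a naive analysis manufactures a spurious “group effect”:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated world (all numbers invented): a binary group label, a
# "living conditions" variable that is systematically worse for one
# group (the footprint of discrimination), and an outcome driven
# ONLY by living conditions, never by group membership itself.
group = rng.integers(0, 2, size=n)                       # 0 or 1
conditions = rng.normal(loc=-1.0 * group, scale=1.0, size=n)
outcome = 2.0 * conditions + rng.normal(scale=1.0, size=n)

# Naive regression: outcome ~ group (confounder omitted)
A_naive = np.column_stack([np.ones(n), group])
b_naive, *_ = np.linalg.lstsq(A_naive, outcome, rcond=None)

# Adjusted regression: outcome ~ group + conditions
A_adj = np.column_stack([np.ones(n), group, conditions])
b_adj, *_ = np.linalg.lstsq(A_adj, outcome, rcond=None)

print(f"naive 'group effect':    {b_naive[1]:+.2f}")  # large and spurious
print(f"adjusted 'group effect': {b_adj[1]:+.2f}")    # near zero
```

The naive model confidently reports a group effect of about -2; controlling for conditions drives it to roughly zero. An AI system trained on the naive representation would learn, and reproduce at scale, exactly the kind of conclusion Du Bois dismantled.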
Our goal should be to harness AI not merely as a technological tool but as an ally in our quest for deeper understanding and better healthcare outcomes. This requires vigilance: we must continuously question and refine our approaches, ensuring that AI aids our pursuit of knowledge rather than becoming an unwitting adversary.
In essence, we stand at a crossroads in healthcare: a point where the blending of human insight and AI’s capabilities can lead us either to unprecedented advancements or down the path of reinforced biases. The choice, and the responsibility, lie with us: the creators, users, and regulators of this powerful technology.