Organizations adapting to the emergence of AI
By Hubert Saint-Onge

AI will have a transformative impact on how work is done in organizations. We are at the very beginning of a long trajectory for the development and adoption of AI in organizations. We all need to learn to ‘tame’ this technology, to collaborate with it, and to use it to add value. This technology is undeniably different from previous generations of digital tools: AI encompasses capabilities that touch nearly every corner of an organization.
This article serves as a compendium of research findings and ideas on AI adoption in organizations. The purpose is to provide an overview of how researchers and thinkers view the opportunities and dilemmas of AI adoption and its impact on organizations in this early phase. This is a fast-moving field, and it is important to understand AI’s trajectory: where it is now and where it is headed.
Hallucinations – the capability-reliability gap
Generative AI models, even the very latest ones, often get things wrong and “hallucinate”, which requires considerable human oversight to identify and correct. Sam Altman, OpenAI’s CEO, has openly admitted not only that “hallucinations” occur frequently, but also that very little can be done to avoid them with this technology. In support of this assertion, researchers at OpenAI concluded in a recent paper that it is not possible to fully prevent hallucinations with this technology.
This is eroding confidence in AI within organizations. IT consultants at Gartner attempted to quantify the problem and found that AI agents fail to complete tasks correctly around 70% of the time. Multiple studies have shown that, because of these hallucinations, AI is incapable of augmenting jobs without close supervision, let alone replacing human workers. In a different context, lawyers have reportedly been sanctioned for relying on AI to draft legal briefs that cite fictitious cases. As a result, the role of human discernment will become increasingly important as organizations deploy AI across their operations.
There is broad agreement that coding is the most compelling application of current AI technology. METR, for instance, has shown that the most advanced systems can code in a fraction of the time required by an accomplished human developer. However, the amount of human oversight required even for simple tasks undermines productivity gains. The frequency of these errors is such that it is often more productive not to use AI at all.
So, how could AI have made the developers in these experiments less productive? The best answer is the capability-reliability gap. While AI offers stunning capabilities, its lack of reliability in the real world creates risks that are difficult to accept. A recent METR study, for example, showed that AI could complete tasks reliably only about 50 percent of the time. This gap makes it challenging to apply AI in an organization's day-to-day work.
The question at the forefront of this discussion is: why are we encountering this issue when using such highly sophisticated technology? Given this technology's proven capabilities, why is reliability such an issue? Why can’t this be resolved?
These questions may explain why OpenAI recently published a research paper entitled “Why Language Models Hallucinate.” Here’s a quote from the paper:
“Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures… language models are optimized to be good test-takers and guessing when uncertain improves test performance.”
Essentially, hallucinations are an inherent part of AI technology; they can’t be fixed by simply adding more data to LLMs. AI is designed to find answers. When it cannot find one, it makes one up; according to Gartner research, this happens around 70% of the time. The implications of this issue are significant for the effectiveness and adoption of AI in organizations. Hallucinations pose substantial risks to the use of AI. Accuracy and consistency are required for organizations to use AI with confidence.
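To make the quoted argument concrete, here is a toy numerical sketch (my illustration, not taken from the OpenAI paper): under a benchmark that awards a point for a correct answer and nothing for either a wrong answer or an abstention, a model that guesses when uncertain always scores at least as well, in expectation, as one that admits it does not know.

```python
# Toy illustration of the "good test-taker" incentive described in the OpenAI quote.
# Assumption: the benchmark awards 1 point for a correct answer and 0 points for
# either a wrong answer or an "I don't know" - a common scoring scheme.

def expected_score(p_correct: float, guesses_when_uncertain: bool) -> float:
    """Expected score on a question the model is unsure about."""
    if guesses_when_uncertain:
        # Guessing: earns a point with probability p_correct, nothing otherwise.
        return p_correct
    # Abstaining ("I don't know") earns nothing under this scoring scheme.
    return 0.0

# Even a long-shot guess (10% chance of being right) beats abstaining in expectation,
# so optimizing against such benchmarks rewards confident fabrication.
print(expected_score(0.10, guesses_when_uncertain=True))   # 0.1
print(expected_score(0.10, guesses_when_uncertain=False))  # 0.0
```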
The most effective way to manage the risks created by hallucinations is through socio-technical mitigation, where people and technology come together to address the challenges engendered by AI’s propensity for erratic responses. Without this level of attention, hallucinations will undermine trust in AI and require considerable human oversight to correct. It is key that the necessary expertise be available to collaborate closely with AI. OpenAI’s admission that neither more data nor greater computing power can prevent hallucinations means that expertise must be allocated to identify and correct erroneous outputs, which will inevitably reduce AI’s contribution to productivity.
Trust
As illustrated by a recent case, the need to strike a well-calibrated balance between over-trusting and under-trusting AI becomes crucial given the potential impact of hallucinations. This case involves a well-known consulting firm that reportedly had to issue a refund to the Australian government for submitting a $440,000 report that contained serious AI-induced errors. These were not minor errors, such as grammatical mistakes: AI hallucinations had led to misinterpreted data and the fabrication of data to fill gaps, rendering the report potentially damaging.
When AI is implemented in an organization, it soon becomes deeply embedded in its operations – particularly agentic AI, which touches many aspects of the organization. This pervasiveness makes its contributions risky if the reliability of the platform is compromised. There must be a high level of trust in AI’s outputs across an organization. In a high-entropy environment, consistency becomes problematic, and trust can erode rapidly. This is why it is essential to implement mechanisms that ensure AI stability.
Measures must be in place to ensure a low-entropy environment in which consistency enables AI to function without resorting to hallucinations. Here’s an interesting case in point: when JPMorgan recently announced an expansion of its AI capability, it summarized the measures being taken to optimize the contribution of AI:
“The harder work lies in governance—deciding which teams can use AI under what conditions and with what oversight requires clear rules. Errors need defined escalation paths. Responsibility must be assigned when systems produce flawed outputs.”
This type of disciplined adaptation is what organizations that choose to leverage AI systematically need to enact. Placing AI within a well-defined organizational framework reduces entropy, removes the need to guess how to respond, and ensures greater reliability.
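As an illustration of what “clear rules” can look like in practice, here is a minimal, hypothetical sketch (the team names, fields and owners are mine, not JPMorgan’s) of governance expressed as an explicit policy: which teams may use AI, under what oversight, and where errors escalate.

```python
# Hypothetical sketch of AI governance expressed as explicit rules.
# Team names, oversight levels and owners are illustrative only.
AI_USE_POLICY = {
    "client_communications": {
        "human_review": "mandatory",            # every output checked before release
        "error_escalation": "compliance_desk",  # defined path when outputs are flawed
        "accountable_owner": "Head of Client Services",
    },
    "internal_research_summaries": {
        "human_review": "spot_check",
        "error_escalation": "team_lead",
        "accountable_owner": "Research Manager",
    },
}

def ai_use_permitted(team: str) -> bool:
    """AI use is allowed only where the policy assigns oversight, escalation and ownership."""
    rules = AI_USE_POLICY.get(team)
    return bool(rules and rules["error_escalation"] and rules["accountable_owner"])

print(ai_use_permitted("client_communications"))   # True
print(ai_use_permitted("marketing_experiments"))    # False: no policy entry, so no AI use
```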
NVIDIA CEO Jensen Huang’s exhortations to stop being negative about AI's potential contributions must give way to the need to remain alert to AI's propensity for error. Although AI is widely recognized as error-prone, opaque and prone to bias, organizations continue to integrate it into mission-critical processes. Over-trust leads to reputational risk when AI-generated errors go unchecked; under-trust results in missed efficiency opportunities. Even more damaging, shadow AI remains prevalent in many organizations and can often have more impact than formal pilots.
Trust in AI is ultimately trust in the humans who deploy, govern and oversee it. Organizations must prioritize governance, transparency and education to adopt AI safely and responsibly. Well-defined, practical governance - data rules, human review and clear accountability - is essential. Organizations cannot manually review every AI output. Instead, they must build capability, establish targeted trust checkpoints, create safe sandboxes, and promote transparency. Fast learning cycles and open sharing of experiments are key to scaling AI safely.
The fear of being left behind propels people to adopt AI. However, because hallucinations can emerge unexpectedly at any time, it is essential to put in place measures to verify AI outputs. Everything AI produces must be thoroughly verified. It is crucial to know when to trust and when to verify. For this reason, transparency about AI use is key. AI outputs must be clearly identified and verified, even though this requirement may reduce AI’s impact on organizational productivity.
Organizations must put in place processes that keep knowledgeable humans in the loop. Everyone must be transparent when using AI to generate work, and everyone must be held accountable for reducing AI-induced errors. Users in organizations must ensure that the appropriate expertise is in place to verify AI outputs. To optimize AI’s impact, those with the right competencies must be in the loop. This is where the socio-technical approach of embedding AI-supported work in ecosystems that ensure continuous integrity checks comes into play. The most significant risk of AI may be relying on it in a context where no specialists in the field being explored are available. Controls need to focus on those who could become vulnerable through its use.
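One way to operationalize “knowledgeable humans in the loop” is a simple routing rule, sketched below. This is a hypothetical illustration (the names, fields and the 0.8 threshold are assumptions, not drawn from any of the studies cited here): high-stakes or low-confidence outputs go to a named expert, while the rest are sampled for spot checks.

```python
# Hypothetical sketch of a "trust checkpoint" for AI-generated work.
# All names and the confidence threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    model_confidence: float  # e.g. a calibrated score between 0 and 1
    high_stakes: bool        # e.g. legal, financial or customer-facing output

def route_for_review(draft: Draft, confidence_floor: float = 0.8) -> str:
    """Decide whether a knowledgeable human must verify the AI output."""
    if draft.high_stakes or draft.model_confidence < confidence_floor:
        return "human_review"   # send to a named, accountable expert
    return "spot_check"         # sample-based verification for low-risk output

print(route_for_review(Draft("Quarterly summary...", 0.65, high_stakes=False)))   # human_review
print(route_for_review(Draft("Internal FAQ answer...", 0.92, high_stakes=False)))  # spot_check
```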
Effectiveness: the impact of AI on performance
A few independent studies, including one by METR, have concluded that AI tools significantly slow experts’ work, because they spend more time correcting AI-generated output than the tools save them. Moreover, other studies, such as those from Carnegie Mellon, found that experts also lose core skills and critical thinking abilities (i.e., judgment) when making sustained use of AI; several studies, including this one, have shown that extensive workplace use of AI attenuates critical thinking. Experts using AI as a “partner” in their field do not automatically get a “dividend” of extra time and sharper judgment; in fact, they experience significant skill atrophy when using AI as a shortcut. A new report from the Work AI Institute echoes similar findings. Interestingly, this research organization is run by Glean AI, a generative AI platform for businesses.
McKinsey, which tends to be positively biased towards the AI industry, found that 80% of AI pilots had no positive impact on the bottom line. MIT found that 95% of AI pilots failed to generate returns and that many actually reduced productivity. Forrester has predicted that a quarter of enterprises will delay their AI spending plans until 2027 as they look for ways in which AI can positively impact their bottom lines. Forrester also found that only 15% of AI decision-makers could report increases in AI-related income. Note that this is income, not profit, meaning the proportion of AI initiatives that improved the bottom line is even lower, as AI is often far more expensive than hiring human workers.
Although there are examples of highly successful applications, many AI programs have been reported to have stalled or failed. One explanation that has surfaced is that employees perceive the adoption of AI as an additional burden on their existing workload. Another obstacle identified is that some leaders tend to treat initial AI projects as akin to the launch of more conventional technologies. When a new technology platform was introduced, the traditional approach was to provide users with brief training and let them sort it out.
AI has broader organizational implications than previous technology and requires different, more extensive support to ensure its success. As a result, leaders more accustomed to the last generation of technology are predisposed to invest in AI but overlook how to engage the human beings involved. AI Champions must apply LLMs to real work to drive steep learning and tangible ROI. AI is fundamentally different from conventional technologies and requires a distinct approach to implementation.
It has become increasingly clear that AI adoption must be supported by a more intensive socio-technical approach, one that facilitates the changes required for effective collaboration between AI and the human stakeholders involved. None of this should be that surprising. After all, MIT’s report, which found that 95% of AI pilots failed to deliver meaningful results, is now famous. This report has since been backed up by other surveys, such as BCG’s, which found that only 5% of companies that deployed AI saw value from it.
Employment: A broad scope of early predictions
Geoffrey Hinton, often referred to as one of the “Godfathers of AI”, recently stated that
“the only way generative AI companies can reach profitability is by replacing human workers on a massive scale.”
In other words, the substantial investments in data centre infrastructure will be funded by displacing current employees and by the resulting productivity gains. However, AI’s impact on employment remains the subject of unresolved economic debate. There is a wide range of opinions on how many white-collar professionals will lose their jobs due to AI.
Technology executives have been saying that the labour market should brace for at least some disruptions in the months ahead. There are signs that the labour market is softening. Hiring levels have declined significantly, and some employers, including Amazon, Verizon, Target, and others, have announced substantial layoffs. Mark Penn, chief executive of Stagwell, a global marketing network, recently released the results of a survey of CEOs indicating that almost 70% of respondents expected AI to weaken the U.S. job market, even though they were optimistic that the technology would strengthen the economy. The perception that AI would boost the economy while simultaneously reducing employment levels points to a disconnect that underscores uncertainty for the foreseeable future.
Many predictions about the impact of AI on employment have been rather vague, if dire. We are still at an early stage in understanding how AI will affect organizations. The rapid emergence of this powerful technology is creating contradictory countercurrents. However, this context did not prevent Jared Kaplan, the chief scientist and co-founder of Anthropic, from offering relatively vague but controversial predictions in an interview with The Guardian. He explicitly stated that AI systems will
“handle most white-collar work within two to three years,”
clearly implying that these jobs would be eliminated within this relatively brief period.
At the other end of the spectrum, less than a year ago, Daron Acemoglu, a Nobel Prize recipient and highly regarded MIT economist, conducted a study that led him to conclude that AI would displace fewer than 5% of employees. More recently, financial executives have predicted that unemployment rates will exceed 30 percent over the next few years, despite significantly accelerated GDP growth. The juxtaposition of these predictions underscores the difficulty of assessing AI's potential impact on employment. Who is right? In this instance, it seems wise to turn to more quantitative research.
According to a recent report by Dr. Rebecca Hinds and Dr. Bob Sutton of the Work AI Institute, demand for AI fluency is growing faster than any other skill, increasing nearly sevenfold over the two years through mid-2025. Numerous predictions have been made that the application of AI in organizations will lead to significant job losses. Some researchers, however, have concluded, on the basis of thorough research, that AI may have a lesser impact than many have claimed. Hinds and Sutton, for instance, found that the net effect of AI on employment was close to zero, particularly for highly paid jobs.
Kellogg’s Bryan Seegmiller and a group of colleagues have similarly concluded that the expected AI tidal wave may not materialize. Although they recognize that AI is likely to eliminate some job responsibilities, their research shows that it also creates just as many opportunities. After analyzing 58 million LinkedIn profiles from 2014 to 2023 and a comprehensive job database, Seegmiller and his Kellogg colleagues found that the net effect of AI on employment has been close to zero to date. These findings lead them to conclude that:
“the more a job is exposed to AI, the more likely demand for that job will go down. However, workers in jobs with greater AI exposure often have more opportunities to redirect their attention to other, less AI-exposed tasks and perform better in those areas.”
To preserve their employment during the AI boom, the Seegmiller study suggests, workers may need to shift their responsibilities toward tasks that complement AI’s growing role in their occupations. People might, for example, spend more time on tasks that involve collaborating with AI to supplement their work. A recent McKinsey report reinforces these findings, predicting that the impact on white-collar employees will likely be mitigated: those who acquire AI skills and can collaborate with AI are less likely to be affected in the future. Given AI’s tendency toward hallucinations and inaccuracies, critical thinking will likely be in high demand.
Hinds and Sutton conclude that AI can create either a “cognitive dividend or a cognitive debt.” Essentially, they found that when a white-collar worker uses AI as a partner to supplement their expertise, it can free up time and sharpen judgment, creating a cognitive dividend. However, when AI is used as a shortcut to automate tasks, it leads to workforce reductions and erodes employees' cognitive skills.
In an article on this topic, McKinsey has reported that AI could, in theory, automate activities accounting for about 57 percent of US work hours today. McKinsey also estimates that more than 70 percent of today’s skills can be applied in both automatable and non-automatable work. The greater a job's exposure to AI, the more likely it is that the demand for that job will decline. At the same time, workers in jobs with greater AI exposure are often able to make more adjustments, redirecting their attention to other, less AI-exposed tasks and performing better in those areas. For instance, if AI replaces one of their rote tasks, workers might be able to spend more time on strategic planning or building essential business relationships. The McKinsey authors conclude that AI will not make most human skills obsolete, but it will change how those skills are used.
If a company uses AI extensively, it will likely increase its overall productivity and expand its workforce. When taking stock of these findings, McKinsey suggests that although people may be displaced from some work activities, many of their skills will remain essential. If a job involves both high- and low-AI-exposure tasks, this variance reduces the likelihood of displacement because workers have sufficient flexibility to adjust their responsibilities.
New forms of collaboration are emerging, creating skill partnerships between people and AI that raise demand for complementary human capabilities. This dynamic suggests that when an organization uses AI extensively, it tends to increase its overall productivity and expand its workforce, much like “a rising tide lifts all boats.” These skills could also be central to guiding and collaborating with AI, a change already redefining roles in many organizations.
Future work will involve partnerships among people, agents, and robots—all powered by AI. While recognizing that AI could theoretically automate more than half of current US work hours and profoundly transform the configuration of work in organizations, the authors of this McKinsey report explain that as adoption unfolds, they foresee that
“…some roles will shrink, others grow or shift, while new ones emerge—with work increasingly centred on collaboration between humans and intelligent machines.”
When both the Kellogg and McKinsey researchers accounted for these factors, and more specifically the built-in capability to collaborate with AI, they found that the net effect of AI on employment was close to zero, particularly for high-wage jobs.
As AI handles more routine tasks, people will apply their skills in new contexts. Workers will spend less time preparing documents and doing basic research, for example, and more time framing questions and interpreting results. Employers may increasingly prize skills that enhance AI. Employers are already adjusting.
A case in point on the impact of AI on employment comes from Salesforce, which decided to let go 4,000 of its 9,000 staff in the belief that AI could easily replace them. Senior executives have since publicly admitted that AI could not handle socially and technically complex work; it failed at complex issues and escalations. The company experienced a marked decline in service quality and a substantial increase in complaints, mainly due to incorrect AI-generated responses. Salesforce is not alone: a similar situation occurred at Amazon, where thousands of staff were let go at once, including members of the ‘outage team’ in AWS operations. A severe outage afterwards led to significant losses because experienced staff were unavailable to resolve it.
Organizations will obviously encounter challenges in managing their workforce levels given the increasing use of AI. The best quantitative research available on the topic concludes there is no basis to assume that AI will necessarily lead to productivity enhancement or the elimination of jobs. As it stands, AI will impact how work gets done, but it remains difficult to pinpoint the extent to which employment levels can be reduced without affecting organizational performance. If anything is done in this regard, it must be done gradually to optimize both human and machine capabilities.
Conclusion
There is no doubt that AI represents a transformative technology. Its ability to assemble information, detect and resolve issues quickly, and manage processes effectively is genuinely unprecedented. No organization can afford to ignore its potential benefits. However, the capability-reliability gap associated with this technology is an issue that organizations must carefully manage. Experience so far indicates that it is an issue that cannot be avoided: it must be given the appropriate level of attention and carefully addressed.
Jensen Huang, NVIDIA’s CEO, keeps telling us that being negative about AI undermines its potential benefits, but it is also essential to manage its significant flaws. As illustrated by companies like Mastercard, a well-structured governance approach is key. Adopting a socio-technical approach* to AI is also essential: the right expertise and accountability must be put in place to oversee the accuracy of AI outputs. AI has provided ample evidence that human monitoring needs to be systematically implemented. AI cannot be left alone without a well-developed framework that carefully establishes governance mechanisms to oversee its outputs and ensure accuracy.
AI must be positioned as augmenting the expertise of people who are held accountable. However capable AI may be, a tool cannot be held accountable for outcomes. AI must be seen as ‘augmenting’, not replacing, human expertise. Of course, this requirement has implications for employment – leaving AI unsupervised by humans is currently not a viable alternative and may never be.
*When I worked at Shell, I was actively involved in architecting new petrochemical plants based on a socio-technical approach geared to ensure superior effectiveness by optimizing interactions between the technology and the people operating it.


