There is no single universal human response to artificial intelligence (AI): individuals make markedly different choices based on identical AI inputs, according to an article published this week by MIT Sloan Management Review.

The analysis finds that these differences in AI-based decision-making have a direct financial effect on organizations. Depending on their decision-making style, some executives invest up to 18% more in important strategic initiatives when given the exact same AI advice.

“To champion AI in the boardroom, leaders must acknowledge human biases and decision-making styles,” said Philip Meissner, professor of strategy and decision-making at ESCP Business School in Berlin. “If we do not understand the human dimension, we will only comprehend half the equation when it comes to optimizing the interplay between AI and human judgment.”

Findings suggest that executives using AI to make strategic decisions fall into three archetypes based on their individual decision-making styles:

  • Skeptics do not follow AI-based recommendations, preferring to control the process themselves. When using AI, skeptics can fall prey to an illusion of control, leading them to overestimate their own judgment and underestimate the AI.
  • Interactors balance their own perception with the algorithm's advice. When AI-based analyses are available, interactors trust these recommendations and base their decisions on them.
  • Delegators largely transfer their decision-making authority to AI. They may misuse AI to reduce their perceived individual risk and avoid personal responsibility, treating its recommendations as a personal insurance policy in case something goes wrong.

“What's interesting is that the same behavioral patterns remain relevant whether or not AI is involved,” said Christoph Keding, research associate at ESCP Business School in Berlin. “In the era of AI-advised decision-making, executives' decision behavior is still shaped by their underlying decision-making styles.”

The article goes on to say that to “utilize AI's full potential, companies need a human-centered approach to address the cognitive dimension of human-machine interactions beyond automation.

“With the right balance of analytics and experience, AI-augmented decision processes can increase the quality of an organization's most critical choices and drive tremendous value for companies in an increasingly complex world.”

Titled “The Human Factor in AI-Based Decision-Making,” the article offers three recommendations for boards of directors and senior executives seeking to integrate AI into strategic decision-making processes successfully.

  • Create awareness: “Communicate with all executives who interact with AI-based systems about the impact of human judgment, which remains a decisive factor when augmenting the top management team. Executives should learn about the specific biases they have toward AI, which vary depending on their individual decision-making styles. This awareness is the crucial foundation for a successful integration of AI into organizations' decision-making processes.”
  • Avoid risk shift and illusion of control: “Emphasize that the ultimate decision authority stays with the executives, even if AI is involved. And explain the potential benefits of AI as well as what parameters and data the suggested course of action is based upon.”
  • Embrace team-based decisions: “Balance the predominant tendencies of the three decision-making archetypes in teams to overcome choices that are overly risky or risk-averse. Different perspectives and multiple options improve human decision-making processes, whether or not AI is involved.”