Blog

AI Hysteria is Missing the Point

  • Fewer than 20 percent of people in Western countries support the growing use of AI, a reluctance driven largely by the fear of losing human agency. We believe this worry is misplaced.
  • The pace of LLM adoption has led to overestimating broader AI capability and underestimating the human ability to adapt and harness new technologies.
  • LLMs excel in verbal tasks but are prone to errors in detail-oriented, precision work.
  • Human subjectivity will continue to drive the complex process of organizational knowledge creation, something that AI cannot replicate.
  • The application of co-intelligence tools (such as ModuleQ’s Unprompted AI) can unlock both human flourishing and organizational outperformance.

Across Western societies, citizens are broadly opposed to Artificial Intelligence (AI). This is the conclusion of the 2024 Edelman Trust Barometer. In the US, the UK, Germany, and France, fewer than 20 percent of the population “embrace the growing use of AI,” while 50 percent or more “reject the growing use of AI.” Given AI's transformative potential, this is shocking. Why the pessimism? At the top of people’s concerns is the loss of human agency: people's capacity to think and act for themselves and to exercise power over their own circumstances.

The primary concern about agency centers on work. While past technologies have generally boosted worker productivity and satisfaction, AI is perceived differently: many fear that AI will displace humans. This fear is fed by rampant speculation about AI’s capabilities. Business and tech media amplify it, with prominent voices predicting that large language model (LLM)-driven AIs will soon surpass human intellect.

In a recent white paper published at Elevandi, David Brunner PhD, CEO and Founder of ModuleQ, and Anupriya Ankolekar PhD, Co-Founder and Principal Scientist, lay out why this fear is unfounded. In short: the rapid introduction of LLMs has led to an overestimation of AI's abilities and an underestimation of humanity’s capacity to adapt and thrive, especially when empowered by new technologies.

LLMs have achieved superhuman mastery in skills such as verbal composition, persuasion, and rhetoric. Spend 15 minutes on ChatGPT, and you may come to the same conclusion. This gives the impression of near omniscience. But that impression may be misleading.

Overestimating AIs

We know that LLMs can make mistakes. They struggle with hallucination, multi-step tasks, and providing reliable information in narrow or specialized areas. For tasks requiring absolute accuracy, they are far from omniscient. A Harvard study found that management consultants using ChatGPT completed data analysis tasks faster than peers without the tool, but also made more mistakes, largely because they relied too heavily on the AI and neglected to engage deeply with the data themselves.

These blind spots highlight the limitations of using LLMs for detail-oriented tasks, which are increasingly common in modern knowledge work. ModuleQ’s analysis has shown that LLMs fail to deliver accurate data retrieval in financial services workflows, such as identifying the correct date of a corporate action—a basic task for first-year investment banking analysts.

Underestimating Humans

Similarly, we often underestimate human potential in the workplace. Corporate culture thrives on human subjectivity, which is crucial for strong operational performance. Foundational organizational research—Nonaka and Takeuchi's work on knowledge creation—established the importance of human subjectivity in an organization’s ability to develop deep knowledge. This research describes the process as SECI: socialization, externalization, combination, and internalization. This entirely subjective loop occurs in every thriving organization, and it isn’t a phenomenon LLMs are likely to replicate anytime soon.


Co-Intelligence

To harness AI's potential while preserving human agency, we should explore “co-intelligence,” the idea that AI complements human capabilities. Research shows that generative AI can enhance productivity when used as a tool by human workers. For example, customer service agents increased their productivity by roughly 14 percent when leveraging AI-generated suggestions that pointed them toward relevant technical documentation for specific customer questions. This co-intelligence approach allowed agents to develop a deeper understanding of those inquiries, delivering a better customer experience and stronger internal knowledge creation around edge cases.

ModuleQ’s Approach to Co-Intelligence

At ModuleQ, we focus on co-intelligent solutions that empower financial professionals to be more productive. Our Unprompted AI solution helps workers socialize, externalize, combine, and internalize knowledge, equipping them with the information they need to accurately tackle complex problems. We believe in a future where AI enhances organizational excellence and enriches the way people work.

While concerns about AI’s potential to spread inaccurate knowledge or displace jobs should not be swept aside, these issues should be viewed within the framework of any new technology. When applied judiciously, AI can lead to more fulfilling work, faster learning, and higher functioning organizations. It is our responsibility in the technology and business community to demonstrate these benefits to society. At ModuleQ, we are excited to step up and lead the way to this better future.
