In the late 1820s, breakthroughs in railroad technology began to fuel dreams of connectivity in the United States. While a network of new canals connected certain cities, their routes were often long and circuitous. The 1830s saw an explosion of railroad inventions, and by the 1860s, nearly 9,000 miles of track spanned the US, mostly east of the Mississippi. The severe financial panic of 1873 upended further investment, breeding skepticism about the need to expand the railroads further west. During this period of uncertainty, railroads found utility in transporting the petroleum that would eventually light and power the nation and substantiate the need to extend their reach.
In the 1990s, advances in fiber optic technology accelerated the promise of the internet. By the late '90s, capital expenditure on fiber optics had surged to roughly $500 billion. The bursting of the NASDAQ bubble in the early 2000s led to a severe reduction in capital expenditure ("Cap Ex") and broad skepticism about the internet's potential to deliver a return on the existing investment. Today, that very infrastructure has supported the expansion of the world's largest companies, driving us deeper into the digital age.
This pattern is broadly known as the Hype Cycle, a term coined by Gartner, the IT research and consulting firm. Emerging technologies initially gain explosive visibility and exuberance, only for enthusiasm to crest and fall into a trough of pessimism. With most promising technologies, the mood eventually climbs back as productization finds real utility. We are beginning to witness a similar pattern with Generative AI.
Recall the explosive growth of public attention toward LLMs in late 2022, beginning with the unveiling of ChatGPT. This naturally led to overextrapolation of what LLMs could do and how quickly they would do it. Remember this petition to halt AI development for fear of an omniscient AI imminently overtaking human civilization? That was a mere 15 months ago. While fears over existential risk have given way to largely sensible regulatory frameworks, the pace and build-out of enterprise-focused Cap Ex has exploded. With that investment have come significant expectations of return. It is still too early to tell, but the initial results have been mixed.
On the positive side, several "killer features" have already come to market. These include coding copilots that have had a dramatic impact on enterprise productivity. According to Marco Argenti, CTO of Goldman Sachs, these copilots have increased productivity by 10-40% across the firm's 12,000 software-adjacent workers. Because copilots can be packaged as features within existing coding environments, generating a return on investment is relatively straightforward and requires no significant changes to workflows.
Advertising has been another area of strong ROI. This makes sense: many major machine learning advancements in the enterprise began at mega-cap technology firms focused on superior recommendations and advertising monetization. The proverbial tracks had already been laid for companies such as Meta and Alphabet, and Meta posted stunning results this quarter.
Other facets of the enterprise rollout have been more haphazard. Businesses were promised AI infusion into their workflows but have found adoption challenging. Common obstacles include data privacy, characterization, the need for accuracy over fluency, and the allure of general-purpose flexibility.
To start with data privacy: companies have been stymied in adopting LLMs for features such as internal meeting transcription, a killer use case, because of concerns about how that data is processed, stored, and used. This is a tough balancing act, not a condemnation of the technology. It speaks to the need for the foundations of privacy to settle before the use case can take root.
Characterization is another issue, especially when deterministic results are required. In a mission-critical function such as a global bank's risk management, the consensus has been that output that cannot be traced or verified for accuracy poses too great a risk to hand over to highly probabilistic models.
The same goes for knowledge retrieval. Gen AI models show greater variance on niche, sparsely documented requests than on common consumer questions or facts widely cited across the internet. This poses a challenge for enterprise use: prompting a purely deterministic request in natural language (e.g., "what were the mergers that Microsoft closed in 2013?") and receiving variable responses will lead enterprise workers to abandon those workflows in favor of slower but tried-and-true approaches.
These problems are exacerbated in highly regulated industries, which are often a half step behind their peers in adopting innovation (for obvious reasons). These are the firms that stand to benefit the most by leapfrogging into the Age of AI, but they must navigate regulatory and compliance hurdles to do so. Both are topics generally eschewed by many Gen AI startups that proverbially want to move fast and break things.
Have we reached peak pessimism yet? Probably not. The Magnificent 7 companies continue to guide their optimism and direct investment dollars into the space. Venture-backed funding remains robust, and the public's imagination remains captured. The cracks lie beneath the surface, chiefly the question of whether enterprise revenues can keep pace with forecasts and spending. If additional use cases flounder, expect the pessimism to pick up.
While the early enthusiasm is not quite at the fever pitch of 12 months ago, we haven't seen a significant deterioration in investment, focus, or energy. It stands to reason that we have a way to go before we reach the Plateau of Productivity. What will guide us through to that phase?
It will likely be product managers and entrepreneurs working through the idea maze, a concept framed by former venture capitalist Balaji Srinivasan. The idea maze encapsulates what entrepreneurs working with a nascent technology do: navigate the many dead ends of an idea or technology to ultimately translate a vision into an actual product.
At ModuleQ, our journey through the idea maze is driven by our hands-on experience working with highly regulated global financial institutions. For us, the bar for driving productivity enhancement through AI starts with a belief in human centricity and a focus on the problems specific to investment banking and private wealth management.
For our clients, productivity is all about finding greater efficiency by accessing the information they need, when they need it. Investment bankers need information that catalyzes revenue generation. After all, if you are better plugged into what's happening across your institution, if you gain the right information to spark a prospect dialogue, and if you know where the organizational pulse on relationships is beating fastest, you will be more effective at drumming up business.
Right now, companies at the forefront of knowledge work struggle with information curation, a problem that will only become more acute as information intensity increases. Traditionally, this was viewed as a headwind of organizational complexity, weathered with more meetings, more management, and more bureaucracy.
Our investment banking clients use ModuleQ alerts within their secure Microsoft Teams environment to save time preparing for meetings, highlight important activities across their firms’ global footprints, and stay abreast of important changes across companies and markets.
As Clay Shirky observed, "It's not information overload. It's filter failure." We believe information filtration is one of the key productivity enhancements that will power knowledge workers through the current hype cycle, and delivering it to investment bankers and private wealth managers is our focus. This is our contribution to AI's journey through the hype cycle.
Want to learn how ModuleQ drives efficiency with Investment Bankers? Reach out to learn more.