
Generative AI (GenAI)

Written by ModuleQ | Oct 18, 2024 3:38:21 PM

Generative AI: The Transformative Power of Predictive Creation

What is Generative AI?

Generative AI has taken the world by storm. Since the introduction of ChatGPT, the public's imagination has been reshaped by the ability of AI tools to generate content: text, images, sound, and video, produced on demand. That act of creation has changed our understanding of many things, principally the use of AI and the role that computers play in our daily lives. Whereas AI was previously used to analyze vast reams of data and return a classification or prediction, Generative AI creates new content. It leverages a prediction engine powerful enough to take in human-written prompts and produce high-quality responses, and it does so by predicting the next fragment of a word (a token), the next pixel in an image, or the next frame of a video.
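
To make "generation by prediction" concrete, here is a minimal sketch of that loop. The `next_token_probs` function is a hypothetical stand-in for a trained model; a real system would compute these probabilities from billions of learned parameters rather than a hand-written lookup.

```python
import random

# Hypothetical stand-in for a trained model: maps the tokens so far to a
# probability distribution over possible next tokens. A real model derives
# these probabilities from billions of learned parameters.
def next_token_probs(tokens: list[str]) -> dict[str, float]:
    if tokens[-1] == "crossing":
        return {"the": 0.9, "a": 0.1}
    if tokens[-1] == "the":
        return {"river": 0.7, "bridge": 0.3}
    return {"crossing": 0.6, "the": 0.4}

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token in proportion to its predicted probability,
        # then feed the extended sequence back in: generation is just
        # repeated next-token prediction.
        tokens.append(random.choices(list(probs), weights=probs.values())[0])
    return tokens

print(" ".join(generate(["after", "crossing"], 2)))
```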

Content Creation Through Prediction?

Yes, exactly! It works something like this: collect a humongous corpus of information. By humongous, we mean as much of the internet as viable! Then use a deep learning architecture called the Transformer, the backbone of today's Generative AI, to learn from this data. The Transformer works through all of this information and distills it into mathematical form. A process known as "self-attention" turns patterns within text into vectors, and the relationships among those patterns allow the model to probabilistically predict ensuing words (or fragments of words), which are built up into phrases, sentences, and ultimately well-reasoned content.
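
As a toy illustration of "turning text into mathematical vectors," the sketch below uses made-up three-dimensional embeddings; real models learn vectors with thousands of dimensions from their training corpus. Words that appear in related patterns end up pointing in similar directions, which is what the model's predictions build on.

```python
import numpy as np

# Made-up 3-dimensional embeddings for illustration only; learned
# embeddings in a real model are far higher-dimensional.
embeddings = {
    "river": np.array([0.9, 0.1, 0.2]),
    "bank":  np.array([0.7, 0.3, 0.3]),
    "loan":  np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors pointing in similar directions encode related patterns of use.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["bank"], embeddings["river"]))  # high
print(cosine_similarity(embeddings["bank"], embeddings["loan"]))   # lower
```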

The Transformer approach leapfrogged prior machine learning techniques directed at understanding context and meaning in text, accelerating the trajectory of machine-based reasoning and output. As Ethan Mollick explains in his book Co-Intelligence, "The Transformer solved...issues by utilizing an 'attention mechanism.' This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human."

As explained on the NVIDIA blog:

[Take the sentence] “I arrived at the bank after crossing the river”...to determine that the word “bank” refers to the shore of a river and not a financial institution, the Transformer can learn to immediately attend to the word “river” and make this decision in a single step....

More specifically, to compute the next representation for a given word - “bank” for example - the Transformer compares it to every other word in the sentence. The result of these comparisons is an attention score for every other word in the sentence. These attention scores determine how much each of the other words should contribute to the next representation of “bank”. In the example, the disambiguating “river” could receive a high attention score when computing a new representation for “bank”. The attention scores are then used as weights for a weighted average of all words’ representations which is fed into a fully-connected network to generate a new representation for “bank”, reflecting that the sentence is talking about a river bank.
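
The sketch below mirrors that computation for the word "bank": scaled dot-product attention scores against every word in the sentence, softmaxed into weights, then a weighted average of value vectors. The query, key, and value vectors here are random stand-ins; in a trained Transformer they come from learned projection matrices, which is what would actually push the weight on "river" high.

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["I", "arrived", "at", "the", "bank", "after", "crossing", "the", "river"]
d = 8  # toy embedding dimension

# Random stand-ins for the learned query/key/value representations.
queries = rng.normal(size=(len(words), d))
keys    = rng.normal(size=(len(words), d))
values  = rng.normal(size=(len(words), d))

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

# Compare "bank" to every other word: one attention score apiece.
i = words.index("bank")
scores = queries[i] @ keys.T / np.sqrt(d)  # scaled dot products
weights = softmax(scores)                  # how much each word contributes

# New representation of "bank": a weighted average of all value vectors,
# which a fully connected layer would then transform further.
new_bank = weights @ values

for word, w in zip(words, weights):
    print(f"{word:>9s}  {w:.3f}")
```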

ModuleQ's Approach Towards Generative AI:

ModuleQ is an AI company, but not one geared solely towards Generative AI. As such, we take a measured approach to Generative AI, treating it as one tool in the tool chest. As shiny and powerful as that tool may be, sometimes it is the wrong approach for a particular job. And so, we incorporate Generative AI in areas where we deem it the right tool, where its superior output outweighs its downsides. For all its promise and potential, Generative AI currently has real limitations and drawbacks, among them hallucinations, false confidence, data governance, entitlements, and data security. Each of these problems is acute in the regulated verticals we serve (e.g., financial services), and so the technology must be applied in a thoughtfully metered way.

Internal Thought Leadership on Generative AI: 

AI Governance: The Challenge of Language Models (September 2024)

Where is Generative AI in the Hype Cycle? (September 2024)

The Future is Now: AI in Banking (August 2024)

Foundational External Papers & Resources: 

Attention is All You Need (Vaswani et al., 2017)

On the Opportunities and Risks of Foundation Models (Bommasani et al., 2022)

Transformer: A Novel Neural Network Architecture for Language Understanding (Google Blog, 2017)

Defined by others as: 

IBM: Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.

McKinsey: "Until recently, machine learning was largely limited to predictive models, used to observe and classify patterns in content. For example, a classic machine learning problem is to start with an image or several images of, say, adorable cats. The program would then identify patterns among the images, and then scrutinize random images for ones that would match the adorable cat pattern. Generative AI was a breakthrough. Rather than simply perceive and classify a photo of a cat, machine learning is now able to create an image or text description of a cat on demand."

MIT: Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.