
Defining Generative AI in Simple Terms

Article Summary

In this blog, I’m excited to take you through the basics of Generative AI. We’ll break it down into simple terms that everyone can understand. Let’s dive in and discover together just how this incredible technology works and what it means for all of us.

 

Generative AI is a branch of artificial intelligence designed to create new content autonomously, ranging from never-before-seen images to original written works. Unlike traditional AI, which primarily assists with decision-making and pattern recognition, generative AI adopts a creative role. It leverages a variety of inputs, such as text, images, sounds, and animations, to produce outputs that mirror the inventiveness of human artists and writers. This technology allows users to generate unique and diverse content at a fast pace, drawing inspiration from existing data, and opening up new possibilities in many fields.

Generative AI has become a prominent field in its own right, largely because of advances in Large Language Models.

So what then is a Large Language Model?

Large Language Models, or LLMs, are artificial intelligence models that learn from existing text to generate new written content. When we train a very large AI model with billions of parameters on vast amounts of data (hundreds of billions of words), we get a Large Language Model like the one behind ChatGPT. If you are wondering what a model’s parameters are, think of them as internal settings or dials that can be adjusted during training to improve how the model turns input text into new text.
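To make “parameters” more concrete, here is a minimal sketch in plain Python that counts the adjustable weights in a few tiny stacked layers. The layer sizes are made up for illustration; real LLMs stack thousands of much larger layers to reach billions of parameters.

```python
# A "parameter" is just an adjustable number inside the model.
# A single dense layer mapping 4 inputs to 3 outputs has a 4x3
# weight matrix plus 3 biases: 4*3 + 3 = 15 parameters.

def count_layer_params(n_inputs: int, n_outputs: int) -> int:
    """Parameters in one dense layer: weights plus biases."""
    return n_inputs * n_outputs + n_outputs

# Toy model: three stacked layers with invented sizes.
layer_sizes = [4, 8, 8, 3]
total = sum(count_layer_params(a, b)
            for a, b in zip(layer_sizes, layer_sizes[1:]))

print(total)  # 40 + 72 + 27 = 139 adjustable "dials"
```

Training is the process of nudging each of those dials until the model’s outputs match the training data well.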


How do LLMs generate new text?

To generate text, LLMs must first translate words into a language they understand. Take, for example, this piece of text:


First, a block of text is broken into tokens: basic units representing words, parts of words, or punctuation, formatted so that an LLM can use them efficiently and effectively.

Models typically represent tokens as fractions of words. For simplicity, in our example we’ll use each full word as a token.
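A rough sketch of this word-level simplification is below. Real LLM tokenizers (such as byte-pair encoding) split text into subword pieces instead, and the example sentence here is invented for illustration.

```python
import re

def simple_tokenize(text: str) -> list[str]:
    """Word-level tokenization: each word or punctuation mark is one token.
    Real tokenizers use learned subword pieces instead."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = simple_tokenize("The chef learned to cook.")
print(tokens)  # ['the', 'chef', 'learned', 'to', 'cook', '.']
```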


To grasp a word’s meaning, “cook” in our example, LLMs first observe its context, learning from enormous sets of training data and noting the words that appear adjacent to “cook” in text.

The datasets LLMs learn from are mostly collections of text published on the internet, such as Wikipedia. Private and offline data can also be used to train LLMs.

Eventually, we end up with an enormous set of the words found alongside “cook” in the training data, as well as those that weren’t found near it.


As the model processes this set of words, it produces a list of numerical values known as a vector, adjusting each value based on the word’s proximity to “cook” in the training data. This helps the model determine how close in meaning two words are to each other.
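Here is a small sketch of how closeness between word vectors can be measured, using cosine similarity. The three-dimensional vectors are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
import math

# Toy word vectors (made-up numbers). Words used in similar
# contexts end up with vectors pointing in similar directions.
embeddings = {
    "cook": [0.9, 0.8, 0.1],
    "bake": [0.85, 0.75, 0.2],   # appears in similar contexts
    "galaxy": [0.1, 0.05, 0.9],  # appears in very different contexts
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """1.0 means the vectors point the same way; near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

print(cosine_similarity(embeddings["cook"], embeddings["bake"]))
print(cosine_similarity(embeddings["cook"], embeddings["galaxy"]))
```

As expected, “cook” scores much closer to “bake” than to “galaxy”.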


This allows the model to spot clusters of related words, such as pronouns or types of food, and forms the basis for generating new text.


However, this alone is not enough to generate new content.

What truly enhanced these models’ ability to generate highly comprehensible text was the introduction of a technology called the Transformer. Transformers process entire sequences, such as sentences, paragraphs, or whole articles, analyzing the elements together rather than focusing on individual words. This gives LLMs a far better grasp of how nearby words relate and of the significance of word combinations within a particular context, enabling them to generate sentences that carry greater meaning and context and convey a sense of intelligence.
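The core trick inside a Transformer, attention, can be sketched in a few lines: each token builds its representation by weighting every token in the sequence by relevance, rather than only looking at its neighbour. The two-dimensional token vectors below are invented for illustration.

```python
import math

def softmax(scores: list[float]) -> list[float]:
    """Turn raw relevance scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Blend all value vectors, weighted by how well each key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Three toy token vectors standing in for a short sentence.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# The last token attends to the whole sequence at once.
out = attention(tokens[2], tokens, tokens)
print(out)
```

Real Transformers add learned projections, many attention heads, and stacked layers on top of this idea, but the “look at everything at once, weighted by relevance” principle is the same.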

This ability to predict motifs and patterns extends beyond language and is how these models are able to generate images by predicting pixels.

Limitations of LLMs or Generative AI

Hallucinations

LLMs are simply predicting the next word based on large numbers of previously seen examples. While the generated text appears relevant, its factual accuracy may be questionable. As a result, LLMs are prone to a phenomenon called “hallucination”, where they generate fabricated information. To mitigate this, LLMs are continually updated through a mechanism called Reinforcement Learning from Human Feedback (RLHF), in which hallucinations are corrected through retraining. This doesn’t completely prevent models from hallucinating, however, because the tendency is inherent to the technology.
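The sketch below illustrates why this happens: generation samples a statistically likely next token, with no built-in fact check. The candidate tokens and their probabilities are invented for illustration (imagine completing “The capital of France is …”).

```python
import random

# Invented next-token probabilities: all three cities are
# statistically plausible continuations, but only one is correct.
next_token_probs = {
    "Paris": 0.6,       # plausible and true
    "Lyon": 0.3,        # plausible but false
    "Marseille": 0.1,   # plausible but false
}

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Pick a continuation at random, weighted by probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the run is reproducible
completions = [sample_next_token(next_token_probs, rng) for _ in range(10)]
print(completions)  # mostly "Paris", but wrong cities appear too
```

Nothing in this process checks truth; plausibility is all the model optimizes for, which is exactly the opening for hallucination.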

Privacy and Originality

The ethics of Generative AI, particularly regarding originality, privacy, and the potential for misuse, remains an open question. While most places have no legal requirement yet to disclose the use of AI in the products we consume, regulation is heading that way. The EU AI Act passed in committee late in 2023, and member-state ambassadors endorsed the text on February 2, 2024; it is widely expected to set a benchmark for AI regulation worldwide.

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
(Source: European Parliament)

Conclusion

Generative AI represents one of the most exciting developments in technology today. Built on language, something foundational to being human, Generative AI can be transformational. Whether you’re an artist, a product developer, a researcher, or simply an enthusiast, Generative AI can enhance your creativity, efficiency, and imagination. By understanding its mechanisms, potential applications, and, importantly, its limitations, we can adopt it to the degree we feel comfortable.
