How ChatGPT Works: The Model Behind The Bot

Molly Ruby

Exploring the world through data
A brief introduction to the intuition and methodology behind the chatbot you can’t stop hearing about.

Large Language Models

Arbitrary example of next-token prediction and masked language modeling, generated by the author.

Earlier sequential models, such as LSTMs, were trained on these objectives but suffered from two key limitations:
  1. The model is unable to weight some of the surrounding words more heavily than others. In the example above, while ‘reading’ may most often be associated with ‘hates’, in the dataset ‘Jacob’ may be such an avid reader that the model should give more weight to ‘Jacob’ than to ‘reading’ and choose ‘loves’ instead of ‘hates’.
  2. The input data is processed individually and sequentially rather than as a whole corpus. When an LSTM is trained, its window of context is therefore fixed, extending only a few steps beyond an individual input in the sequence. This limits the complexity of the relationships between words, and of the meanings, that can be derived.
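To make the two pretraining objectives concrete, here is a toy Python sketch built around the Jacob example above. The candidate words and probabilities are invented for illustration; a real model learns distributions like these from an enormous training corpus.

```python
# Toy illustration of the two pretraining objectives; all probabilities
# below are made up for the example, not learned from data.

# Next-token prediction: score candidates using only the LEFT context.
# "Jacob hates ___"
next_token_probs = {"reading": 0.55, "mondays": 0.30, "waiting": 0.15}

# Masked language modeling: fill a blank using context on BOTH sides.
# "Jacob [MASK] reading"
mask_probs = {"hates": 0.45, "loves": 0.40, "is": 0.15}

def predict(probs: dict[str, float]) -> str:
    """Return the highest-probability candidate, i.e. the model's guess."""
    return max(probs, key=probs.get)

print(predict(next_token_probs))  # -> 'reading'
print(predict(mask_probs))        # -> 'hates', unless 'Jacob' is weighted more
```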

GPT and Self-Attention

Comparison of GPT-2 (left) and GPT-3 (right). Generated by the author.
The self-attention mechanism converts each token into a vector that reflects its importance within the sequence. To do this, the model (see the sketch after this list):
  1. Creates a query, key, and value vector for each token in the input sequence.
  2. Calculates the similarity between the query vector from step 1 and the key vector of every other token by taking the dot product of the two vectors.
  3. Generates normalized weights by feeding the output of step 2 into a softmax function.
  4. Generates a final vector for each token, representing its importance within the sequence, by multiplying the weights from step 3 by each token’s value vector and summing the results.
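These four steps map directly onto a few lines of NumPy. This is a minimal single-head sketch with random weights and made-up dimensions, not production code; real models add many attention heads, learned projections, and masking, and the division by √d below is the standard scaling used in transformer implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8           # toy sizes, chosen for illustration

X = rng.normal(size=(seq_len, d_model))       # one embedding per input token
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv              # step 1: query/key/value vectors
scores = Q @ K.T / np.sqrt(d_head)            # step 2: dot-product similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # step 3: softmax normalization
output = weights @ V                          # step 4: weighted sum of values

print(output.shape)  # (4, 8): one context-aware vector per token
```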
Screenshot from ChatGPT generated by the author.
Models trained this way are often misaligned with user intent, in that they:
  • Lack helpfulness: they do not follow the user’s explicit instructions.
  • Contain hallucinations: they assert non-existent or incorrect facts.
  • Lack interpretability: it is difficult for humans to understand how the model arrived at a particular decision or prediction.
  • Include toxic or biased content: they produce harmful or offensive output that spreads misinformation.

ChatGPT

Step 1: Supervised Fine Tuning (SFT) Model

Labelers were asked to write three kinds of prompts:
  • Plain prompts: any arbitrary ask.
  • Few-shot prompts: instructions that contain multiple query/response pairs.
  • User-based prompts: prompts corresponding to a specific use case that was requested for the OpenAI API.
For example:
  1. Direct: “Tell me about…”
  2. Few-shot: “Given these two examples of a story, write another story on the same topic.”
  3. Continuation: “Given the start of a story, finish it.”
Image (left) from Training language models to follow instructions with human feedback (Ouyang et al., OpenAI, 2022), https://arxiv.org/pdf/2203.02155.pdf. Additional context added in red (right) by the author.
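Mechanically, SFT is ordinary next-token training on (prompt, human demonstration) pairs. Below is a minimal PyTorch sketch of one update step; the tiny embedding-plus-linear “model” and the token ids are stand-ins for a pretrained GPT-series model and a real tokenizer.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained GPT-style network: embedding + linear head.
vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One (prompt + human demonstration) sequence of token ids; values made up.
tokens = torch.tensor([[5, 17, 42, 8, 99, 3]])
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: predict next

optimizer.zero_grad()
logits = model(inputs)                                   # (1, 5, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()                                         # one supervised update
```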

Step 2: Reward Model

Example of response ranking combinations. Generated by the author.
Image (left) from Training language models to follow instructions with human feedback (Ouyang et al., OpenAI, 2022), https://arxiv.org/pdf/2203.02155.pdf. Additional context added in red (right) by the author.
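The reward model turns a labeler’s ranking of K responses into C(K, 2) pairwise comparisons and trains on the loss from the paper cited above, -log σ(r_better − r_worse). Here is a small PyTorch sketch of that loss; the function name and the toy scores are illustrative.

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(rewards: torch.Tensor, ranking: list[int]) -> torch.Tensor:
    """Average -log sigmoid(r_better - r_worse) over all C(K, 2) response pairs.

    rewards: K scalar reward-model scores, one per response.
    ranking: response indices ordered best-to-worst by the labeler.
    """
    losses = []
    for i in range(len(ranking)):
        for j in range(i + 1, len(ranking)):
            better, worse = rewards[ranking[i]], rewards[ranking[j]]
            losses.append(-F.logsigmoid(better - worse))
    return torch.stack(losses).mean()

# Toy example: 4 responses (indices 0-3) scored by the reward model,
# ranked 3 > 2 > 0 > 1 by a labeler -> 6 pairwise comparisons.
scores = torch.tensor([0.2, -0.5, 0.9, 1.3], requires_grad=True)
loss = reward_ranking_loss(scores, ranking=[3, 2, 0, 1])
loss.backward()  # pushes better-ranked scores above worse-ranked ones
```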

Step 3: Reinforcement Learning Model

Image (left) from Training language models to follow instructions with human feedback (Ouyang et al., OpenAI, 2022), https://arxiv.org/pdf/2203.02155.pdf. Additional context added in red (right) by the author.
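Step 3 fine-tunes the SFT model with Proximal Policy Optimization (PPO) against the reward model. Two pieces are worth sketching: the per-token reward, which subtracts a KL-style penalty keeping the policy close to the SFT model, and PPO’s clipped surrogate objective. The β and clip values below are illustrative defaults, not the ones OpenAI used.

```python
import torch

def per_token_reward(rm_score, logp_policy, logp_sft, beta=0.02):
    """Reward-model score minus a KL-style penalty toward the SFT model."""
    return rm_score - beta * (logp_policy - logp_sft)

def ppo_clipped_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective (maximized during policy updates)."""
    ratio = torch.exp(logp_new - logp_old)          # policy probability ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return torch.min(ratio * advantages, clipped * advantages).mean()
```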
Screenshot from ChatGPT generated by the author.

Evaluation of the Model

In one evaluation of toxicity, for example, the model’s outputs were compared to GPT-3’s under three prompting conditions:
  1. Instructed to provide respectful responses: a significant decrease in toxic responses.
  2. Instructed to provide responses without any setting for respectfulness: no significant change in toxicity.
  3. Instructed to provide toxic responses: the responses were in fact significantly more toxic than those of the GPT-3 model.
Screenshot from ChatGPT generated by the author.

Sources

  1. https://openai.com/blog/chatgpt/
  2. https://arxiv.org/pdf/2203.02155.pdf
  3. https://deepai.org/machine-learning-glossary-and-terms/softmax-layer
  4. https://www.assemblyai.com/blog/how-chatgpt-actually-works/
  5. https://towardsdatascience.com/proximal-policy-optimization-ppo-explained-abed1952457b
 
