When EleutherAI released its new language model, GPT-J-6B, it was clear the model would fill a significant gap in the lineup of available language models. While some discounted it because of its comparatively small parameter count next to the 175B+ parameter models available from OpenAI and AI21, it has proven to offer advantages over its larger counterparts.
Some of these advantages are obvious: GPT-J is open source, so you have complete control over the model and can deploy it to dedicated replicas. Others aren’t evident until you can experiment with GPT-J freely. While there are playgrounds available for people to get a feel for what GPT-J is about, none offer an experience comparable to what you’d expect with GPT-3.
So today, we’re excited to announce the launch of our free, public GPT-J playground, with all of the standard parameters you’d expect from other GPT alternatives and a set of fine-tuned models that will be available soon. The rest of this article is a tutorial covering the key concepts for using our GPT-J playground.
First, select the model you’d like to use. We currently offer the standard GPT-J model and will be adding fine-tuned models so people can experience what fine-tuning makes possible.
Write your prompt
Next, write a prompt for the response you’d like to receive from the model. It is best to tell the model what you want and to show it an example.
Adjust the parameters
Once your prompt is complete, you can customize the model parameters based on the task you’re giving the model. More information on parameters is provided later in this guide.
Finally, press ‘Submit’ to generate a response from the model.
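For reference, the same select-prompt-adjust-submit flow can be reproduced outside the playground with the open-source weights. Below is a minimal sketch using the Hugging Face transformers library; the model id (EleutherAI/gpt-j-6B) and the parameter values are assumptions for illustration, not the playground’s implementation.

```python
# Minimal sketch, assuming the open-source GPT-J weights on the Hugging Face Hub
# ("EleutherAI/gpt-j-6B") and enough memory to load a 6B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "Give me a list of fiction books. Here's an example list:"
inputs = tokenizer(prompt, return_tensors="pt")

# The playground's settings map onto standard sampling arguments:
# response length -> max_new_tokens, plus temperature and top_p.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=1.0,
    max_new_tokens=64,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```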
The prompt is how you “program” the model to achieve the response you’d like. GPT-J can do everything from writing original stories to generating code. Because of its wide array of capabilities, you have to be explicit in showing it what you want. Telling and showing is the secret to a good prompt.
GPT-J tries to guess what you want from the prompt. If you write the prompt “Give me a list of fiction books”, the model won’t necessarily assume you’re asking for a list of books. For all it knows, you could be asking it to continue a sentence that starts with “Give me a list of fiction books” and goes on to say “and I’ll tell you my favorite.”
There are three basic tips for creating prompts:
1. Check your settings
The temperature and top_p parameters are the ones you’ll typically adjust based on the task. They control how deterministic the model is when generating a response. A common mistake is assuming these parameters control “creativity”. If you’re looking for a response that isn’t obvious, you might set them higher; if you’re asking for a response where there’s only one right answer, set them lower. More on GPT-J parameters later.
2. Show and tell
Make it clear what you want through a combination of instructions and examples. Going back to our previous example, instead of:
“Give me a list of fiction books”
try:
“Give me a list of fiction books. Here’s an example list: Harry Potter, Game of Thrones, Lord of the Rings.”
3. Provide quality data
If you’re trying to classify text or get the model to follow a pattern, make sure there are enough examples. The examples should also be proofread for spelling and grammatical errors; while the model can usually see through simple mistakes, it may assume they are intentional. A sketch of such a prompt follows below.
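To make “show and tell” and “provide quality data” concrete, here is one hypothetical way a few-shot classification prompt could be assembled before pasting it into the playground; the task, labels, and reviews are made up for illustration.

```python
# Hypothetical few-shot prompt: an explicit instruction followed by
# proofread examples, ending exactly where the model should continue.
instruction = "Classify the sentiment of each review as Positive or Negative."
examples = [
    ("The plot kept me hooked until the last page.", "Positive"),
    ("The characters felt flat and the pacing dragged.", "Negative"),
]

prompt = instruction + "\n\n"
for review, label in examples:
    prompt += f"Review: {review}\nSentiment: {label}\n\n"

# End at the colon with no trailing whitespace, so the model's next
# tokens are the answer itself.
prompt += "Review: A heartfelt story with a satisfying ending.\nSentiment:"

print(prompt)
```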
Whitespace, the characters produced when you press the spacebar, can be one token or several depending on context. Make sure never to leave trailing whitespace at the end of your prompt, or it can have unintended effects on the model’s response.
GPT-J understands and processes text by breaking it down into tokens. As a rough rule of thumb, 1 token is approximately 4 characters. For example, the word “television” gets broken up into the tokens “tele”, “vis” and “ion”, while a short and common word like “dog” is a single token. Tokens matter because GPT-J, like other language models, has a maximum context length of 2048 tokens, or roughly 1,500 words. The context length includes both the text prompt and the generated response.
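If you want to check how a prompt tokenizes and whether it fits within the 2048-token context, one option (outside the playground, and assuming the EleutherAI/gpt-j-6B tokenizer published on the Hugging Face Hub) is to count the tokens locally:

```python
# Sketch: count tokens with the GPT-J tokenizer (assumed Hub id "EleutherAI/gpt-j-6B").
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "Give me a list of fiction books. Here's an example list: Harry Potter, Game of Thrones, Lord of the Rings."
token_ids = tokenizer.encode(prompt)

print(f"{len(token_ids)} tokens")                  # counts toward the 2048-token context
print(tokenizer.convert_ids_to_tokens(token_ids))  # see how the words were split
```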
Parameters are different settings that control the way in which GPT-J responds. Becoming familiar with the following parameters will allow you to apply GPT-J to a number of different tasks.
Response length is the length, in tokens, of the generated text you’d like based on your prompt. A token is roughly 4 characters, including alphanumeric and special characters.
Note that the prompt and generated response together cannot exceed GPT-J’s 2048-token context.
Temperature controls the randomness of the generated text. A value of 0 makes the engine deterministic, which means that it will always generate the same output for a given input text. A value of 1 makes the engine take the most risks.
As a frame of reference, it is common for story completion or idea generation to use temperature values between 0.7 and 0.9.
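In most samplers, temperature is applied by dividing the model’s logits before the softmax: values near 0 concentrate probability on the most likely token, while higher values flatten the distribution. Here is a generic sketch of that mechanic (not GPT-J-specific code):

```python
import numpy as np

def apply_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Convert raw logits into sampling probabilities at a given temperature."""
    scaled = logits / max(temperature, 1e-8)  # temperature -> 0 approaches argmax
    scaled -= scaled.max()                    # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = np.array([4.0, 3.5, 1.0])
print(apply_temperature(logits, 0.2))  # sharply peaked on the top token
print(apply_temperature(logits, 0.9))  # flatter, riskier distribution
```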
Top-P is an alternative way of controlling the randomness of the generated text. We recommend adjusting only one of Temperature and Top-P at a time; when using one of them, make sure the other is set to 1.
A rough rule of thumb is that Top-P provides better control for applications in which GPT-J is expected to generate text with accuracy and correctness, while Temperature works best for those applications in which original, creative or even amusing responses are sought.
Top-K sampling sorts the tokens by probability and zeroes out everything below the k-th most likely token. A lower value improves quality by removing the long tail and making the model less likely to go off topic.
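A common way the Top-P and Top-K filters are implemented (a generic sketch, not necessarily the playground’s exact code) is to zero out probability mass outside the allowed set and renormalize:

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k most likely tokens, zero the rest, and renormalize."""
    keep = np.argsort(probs)[-k:]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]              # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # number of tokens to keep
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.55, 0.25, 0.12, 0.05, 0.03])
print(top_k_filter(probs, k=2))    # only the two most likely tokens survive
print(top_p_filter(probs, p=0.9))  # the low-probability tail is removed
```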
Repetition penalty works by lowering the chances of a word being selected again the more times that word has already been used. In other words, it works to prevent repetitive word usage.
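One widely used formulation of repetition penalty (the CTRL-style variant; the article does not specify exactly which one GPT-J’s sampler uses) divides the logits of tokens that have already appeared by the penalty, making them less likely to be picked again:

```python
import numpy as np

def apply_repetition_penalty(logits, generated_ids, penalty):
    """Discourage tokens that already appear in the generated text (penalty > 1)."""
    adjusted = np.array(logits, dtype=float)
    for token_id in set(generated_ids):
        if adjusted[token_id] > 0:
            adjusted[token_id] /= penalty  # shrink positive logits
        else:
            adjusted[token_id] *= penalty  # push negative logits further down
    return adjusted

logits = [2.0, 1.5, -0.5]
print(apply_repetition_penalty(logits, generated_ids=[0, 2], penalty=1.2))
```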
Stop sequences allow you to define one or more sequences that, when generated, force GPT-J to stop.
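Stop sequences are typically enforced by cutting the output at the first occurrence of any configured sequence. A simple post-processing sketch (an illustration of the idea, not the playground’s implementation):

```python
def truncate_at_stop_sequence(text, stop_sequences):
    """Return the text up to (but not including) the earliest stop sequence found."""
    cut = len(text)
    for stop in stop_sequences:
        index = text.find(stop)
        if index != -1:
            cut = min(cut, index)
    return text[:cut]

generated = "Q: Name a fiction book.\nA: Harry Potter\nQ: Name another one."
print(truncate_at_stop_sequence(generated, stop_sequences=["\nQ:"]))
# -> Q: Name a fiction book.
#    A: Harry Potter
```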
That covers the key concepts you need to begin using our free GPT-J playground. As you experiment and generate interesting or funny responses worth sharing, feel free to tweet them to us!
If you have a use case for fine-tuning or want API access, get in touch with our team.
Increase throughput, fine-tune for free, and save up to 33% on inference costs. Try GPT-J on Forefront today.