Forefront Blog

GPT-J-6B: An Introduction to the Largest Open Source GPT Model

Forefront Team
October 14, 2021

The newest GPT model, GPT-J, is making the rounds in the NLP community and raising some questions along the way. The purpose of this article is to answer the most basic one: What is GPT-J?

GPT-J-6B is an open source, autoregressive language model created by a group of researchers called EleutherAI. It's one of the most advanced alternatives to OpenAI's GPT-3 and performs well on a wide array of natural language tasks such as chat, summarization, and question answering, to name a few.

For a deeper dive, GPT-J is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT" is short for generative pre-trained transformer, "J" distinguishes this model from other GPT models, and "6B" represents the 6 billion trainable parameters. Transformers are increasingly the model of choice for NLP problems, replacing recurrent neural network (RNN) models such as long short-term memory (LSTM). Their additional training parallelization allows training on larger datasets than was once possible.

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
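
To make those numbers concrete, here's a quick sketch (Python, purely illustrative) collecting the hyperparameters above; note that 16 heads of dimension 256 recover the 4096 model dimension.

    # Hyperparameters of GPT-J-6B as described above (illustrative only).
    gptj_config = {
        "n_layers": 28,          # transformer layers
        "d_model": 4096,         # model (hidden) dimension
        "d_ff": 16384,           # feedforward dimension
        "n_heads": 16,           # attention heads
        "d_head": 256,           # dimension per head
        "rotary_dims": 64,       # dimensions per head that receive RoPE
        "vocab_size": 50257,     # same BPE vocabulary as GPT-2/GPT-3
        "context_length": 2048,  # maximum tokens per sequence
    }

    assert gptj_config["n_heads"] * gptj_config["d_head"] == gptj_config["d_model"]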

Training data

GPT-J was trained on the Pile, a large-scale curated dataset created by EleutherAI.

Training procedure

GPT-J was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
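
As a rough illustration of that objective (a minimal PyTorch-style sketch, not EleutherAI's actual Mesh Transformer JAX training code), the loss for a batch of token sequences can be computed like this:

    import torch
    import torch.nn.functional as F

    def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        """Autoregressive cross-entropy: predict token t+1 from positions <= t.

        logits: (batch, seq_len, vocab_size) model outputs
        tokens: (batch, seq_len) input token ids
        """
        shifted_logits = logits[:, :-1, :]   # predictions for every position except the last
        targets = tokens[:, 1:]              # the tokens that actually follow each position
        return F.cross_entropy(
            shifted_logits.reshape(-1, shifted_logits.size(-1)),
            targets.reshape(-1),
        )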

Intended Use

GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The core functionality of GPT-J is to take a string of text and predict the next token, which makes the model best at generating text from a prompt. When prompting GPT-J, it is important to remember that the completion you get is often simply the statistically most likely continuation of your prompt.

Use cases

GPT-J can perform various language processing tasks without any further training, including tasks it was never explicitly trained for. It can be applied to many different use cases such as language translation, code completion, chat, blog post writing, and many more. Through fine-tuning (discussed later), GPT-J can be further specialized on any task to significantly increase performance.

Let's look at some example tasks:

Chat

Open-ended conversations with an AI support agent.

Q&A

Create question + answer structure for answering questions based on existing knowledge.

English to French

Translate English text into French.

Parse unstructured data

Create tables from long form text by specifying a structure and supplying some examples.

Translate SQL

Translate natural language to SQL queries.

Python to natural language

Explain a piece of Python code in human understandable language.

As you can tell, the standard GPT-J model adapts and performs well on a number of different NLP tasks. However, things get more interesting when you explore fine-tuning.

Fine-tuning GPT-J

While the standard GPT-J model is proficient at performing many different tasks, the model's capabilities improve significantly when fine-tuned. Fine-tuning refers to the practice of further training GPT-J on a dataset for a specific task. While scaling parameters of transformer models consistently yields performance improvements, the contribution of additional examples of a specific task can greatly improve performance beyond what additional parameters can provide. Especially for use cases like classification, extractive question answering, and multiple choice, collecting a few hundred examples is often "worth" billions of parameters.

To see what fine-tuning looks like, here's a demo (2m 33s) on how to fine-tune GPT-J on Forefront. There are two variables in fine-tuning that, when handled correctly, can lead to GPT-J outperforming GPT-3 Davinci (175B parameters) on a variety of tasks: the dataset and the training duration.

Dataset

For a comprehensive tutorial on preparing a dataset to fine-tune GPT-J, check out our guide.

At a high level, the following best practices should be considered regardless of your task:

  • Your data must be in a single text file.
  • You should provide at least one hundred high-quality examples, ideally vetted by a human knowledgeable in the given task.
  • You should use some kind of separator at the end of each prompt + completion in your dataset to make it clear to the model where each training example begins and ends. A simple separator which works well in almost all cases is " <|endoftext|> " (see the example after this list).
  • Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator.
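
For example, a single training example in your text file (hypothetical prompt and completion, using the separator above) might look like:

    Prompt: Summarize the customer's issue in one sentence.
    Customer message: I was charged twice for my subscription this month.
    Summary: The customer was double-billed for their monthly subscription. <|endoftext|>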

Let's look at some example datasets:

Classification

Classify customer support messages by topic.

Sentiment Analysis

Analyze sentiment for product reviews.

Idea Generation

Generate blog ideas given a company's name and product description.

Training duration

The duration you should fine-tune for largely depends on your task and number of training examples in your dataset. For smaller datasets, fine-tuning 5-10 minutes for every 100kb is a good place to start. For larger datasets, fine-tuning 45-60 minutes for every 10MB is recommended. These are rough rules of thumb and more complex tasks will require longer training durations.
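
If it helps, those rules of thumb can be expressed as a quick calculation (a rough sketch, not an exact formula):

    def suggested_training_minutes(dataset_bytes: int) -> tuple:
        """Return a (low, high) range of minutes based on the rough rules of thumb above."""
        kb = dataset_bytes / 1024
        if kb < 1024:                    # smaller datasets: 5-10 minutes per 100kb
            return (kb / 100 * 5, kb / 100 * 10)
        mb = kb / 1024                   # larger datasets: 45-60 minutes per 10MB
        return (mb / 10 * 45, mb / 10 * 60)

    # e.g. a 150kb dataset suggests roughly 7.5-15 minutes of fine-tuning
    print(suggested_training_minutes(150 * 1024))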

Deploying GPT-J

GPT-J is notoriously difficult and expensive to deploy in production. When considering deployment options, there are two things to keep in mind: cost and response speed. The most common hardware for deploying GPT-J is a T4, V100, or TPU, all of which come with less than ideal tradeoffs. At Forefront, we experienced these undesirable tradeoffs and started experimenting to see what we could do about them. Several low-level machine-code optimizations later, we built a one-click GPT-J deployment offering the best cost, performance, and throughput available. Here's a quadrant comparing the different deployment methods by cost and response speed:

Large transformer language models like GPT-J are increasingly being used for a variety of tasks and further experimentation will inevitably lead to more use cases that these models prove to be effective at. At Forefront, we believe providing a simple experience to fine-tune and deploy GPT-J can help companies easily enhance their products with minimal work required. To start using GPT-J, get in touch with our team.


How to Use Our Free GPT-J Playground

Forefront Team
October 1, 2021

After EleutherAI released their new language model, GPT-J-6B, it was clear that it would fill a much-needed gap in available language models. While some discounted the model due to its seemingly insignificant number of parameters compared to the 175B+ parameter models available from OpenAI and AI21, it has proven to offer advantages compared to its larger predecessors.


Some of these advantages are clear, like the fact that it is an open-source model, which gives you complete control over the model and the ability to deploy it to dedicated replicas. Others aren’t evident until you can fully experiment with GPT-J. While there are playgrounds available for people to get a feel for what GPT-J is about, none offer an experience on par with what you’d expect from GPT-3.


So today, we’re excited to announce the launch of our free, public GPT-J playground with all of the standard parameters you’d expect from other GPT alternatives and a list of fine-tuned models that will be available soon. With that said, this article will provide a tutorial with key concepts for our GPT-J playground.


Tutorial


Select model

First, select the model you’d like to use. We currently offer the standard GPT-J model, but we will be adding different fine-tuned models for people to experience the capabilities of fine-tuning.


Write your prompt

Next, write a prompt for the response you’d like to receive from the chosen model. It is best to tell the model what you would like to receive and to show an example.


Adjust the parameters

Once your prompt is complete, you can customize the model parameters based on the task you are giving the model. More information on parameters is provided later in this guide.


Generate response

Finally, press ‘Submit’ to generate a response from the model.


Key concepts


Prompts

The prompt is how you “program” the model to achieve the response you’d like. GPT-J can do everything from writing original stories to generating code. Because of its wide array of capabilities, you have to be explicit in showing it what you want. Telling and showing is the secret to a good prompt.


GPT-J tries to guess what you want from the prompt. If you write the prompt “Give me a list of fiction books”, the model may not automatically assume you’re asking for a list of books. As far as the model knows, you could be asking it to continue a sentence that starts with “Give me a list of fiction books” and goes on to say “and I’ll tell you my favorite.”


There are three basic tips to creating prompts:


1. Check your settings

The temperature and top_p parameters are what you will typically be configuring based on the task. These parameters control how deterministic the model is in generating a response. A common mistake is assuming these parameters control “creativity”. For instance, if you're looking for a response that's not obvious, then you might want to set them higher. If you're asking it for a response where there's only one right answer, then you'd want to set them lower. More on GPT-J parameters later.


2. Show and tell

Make it clear what you want through a combination of instructions and examples. Back to our previous example, instead of:


“Give me a list of fiction books”


Do:


“Give me a list of fiction books. Here’s an example list: Harry Potter, Game of Thrones, Lord of the Rings.”



3. Provide quality data

If you’re trying to classify text or get the model to follow a pattern, make sure that there are enough examples. Not only is providing sufficient examples important, but the examples should be proofread for spelling or grammatical errors. While the model is usually capable of seeing through simple errors, it may believe they are intentional.


Whitespace

Whitespace, or what happens when you press the spacebar, can be one token or several tokens depending on context. Make sure never to leave trailing whitespace at the end of your prompt, as it can have unintended effects on the model’s response.


Tokens

GPT-J understands and processes text by breaking it down into tokens. As a rough rule of thumb, 1 token is approximately 4 characters. For example, a long or uncommon word may get broken into several sub-word tokens, while a short and common word like “dog” is a single token. Tokens are important to understand because GPT-J, like other language models, has a maximum context length of 2048 tokens, or roughly 1500 words. The context length includes both the text prompt and the generated response.
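
If you want to check how many tokens a prompt uses, one way to count them (assuming the Hugging Face transformers library and its GPT-J tokenizer) is:

    from transformers import AutoTokenizer

    # GPT-J uses the same BPE vocabulary as GPT-2/GPT-3.
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

    prompt = "Give me a list of fiction books."
    token_ids = tokenizer(prompt)["input_ids"]
    print(len(token_ids), "tokens")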


Parameters

Parameters are different settings that control the way in which GPT-J responds. Becoming familiar with the following parameters will allow you to apply GPT-J to a number of different tasks.


Response length

Response length is the length, in tokens, of the text you’d like the model to generate from your prompt. A token is roughly 4 characters, including alphanumerics and special characters.


Note that the prompt and response combined cannot exceed GPT-J’s maximum context length of 2048 tokens.


Temperature

Temperature controls the randomness of the generated text. A value of 0 makes the engine deterministic, which means that it will always generate the same output for a given input text. A value of 1 makes the engine take the most risks.


As a frame of reference, it is common for story completion or idea generation to see temperature values between 0.7 to 0.9.


Top-P

Top-P is an alternative way of controlling the randomness of the generated text. We recommend using only one of Temperature and Top-P; when adjusting one of them, make sure the other is set to 1.


A rough rule of thumb is that Top-P provides better control for applications in which GPT-J is expected to generate text with accuracy and correctness, while Temperature works best for those applications in which original, creative or even amusing responses are sought.


Top-K

Top-K sampling sorts the tokens by probability and zeroes out the probabilities of everything below the k-th most likely token. A lower value can improve quality by removing the tail of the distribution and making the model less likely to go off topic.
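
To make Temperature, Top-P, and Top-K less abstract, here's a simplified sketch of how a next token could be sampled from the model's probabilities. This illustrates the general technique only; it is not GPT-J's or Forefront's exact implementation.

    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=0, top_p=1.0, seed=None):
        """Illustrative sampling over a vocabulary-sized vector of logits."""
        rng = np.random.default_rng(seed)
        if temperature == 0:                 # fully deterministic: always pick the top token
            return int(np.argmax(logits))
        probs = np.exp((logits - np.max(logits)) / temperature)
        probs /= probs.sum()
        if top_k > 0:                        # Top-K: drop everything below the k-th most likely token
            kth_prob = np.sort(probs)[-top_k]
            probs = np.where(probs >= kth_prob, probs, 0.0)
        if top_p < 1.0:                      # Top-P: keep the smallest set of tokens covering probability p
            order = np.argsort(probs)[::-1]
            cumulative = np.cumsum(probs[order])
            keep = order[cumulative <= top_p]
            if keep.size == 0:               # always keep at least the most likely token
                keep = order[:1]
            mask = np.zeros_like(probs)
            mask[keep] = probs[keep]
            probs = mask
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))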


Repetition penalty

Repetition penalty works by lowering the chances of a word being selected again the more times that word has already been used. In other words, it works to prevent repetitive word usage.


Stop sequences

Stop sequences allow you to define one or more sequences that, when generated, force GPT-J to stop generating further text.



This provides a basic understanding of the key concepts to begin using our free GPT-J playground. As you begin to experiment and generate interesting or funny responses worth sharing, feel free to tweet them to us!



If you have a use case for fine-tuning or want API access, get in touch with our team.


Preparing a Dataset to Fine-tune GPT-J

Forefront Team
September 23, 2021

Fine-tuning is a powerful technique to create a new GPT-J model that is specific to your use case. When done correctly, fine-tuning GPT-J can achieve performance that exceeds significantly larger, general models like OpenAI's GPT-3 Davinci.

To fine-tune GPT-J on Forefront, all you need is a set of training examples formatted in a single text file with each example generally consisting of a single input example and its associated output. Fine-tuning can solve a variety of problems, and the optimal way to format your dataset will depend on your specific use case. Below, we'll list the most common use cases for fine-tuning GPT-J, corresponding guidelines, and example text files.

Before diving into the most common use cases, there are a few best practices that should be followed regardless of the specific use case:

  • Your data must be in a single text file.
  • You should provide at least one hundred high-quality examples, ideally vetted by a human knowledgeable in the given task.
  • You should use some kind of separator at the end of each prompt + completion in your dataset to make it clear to the model where each training example begins and ends. A simple separator which works well in almost all cases is " <|endoftext|> ".
  • Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator.

Classification

Classification is the process of categorizing text into organized groups. In classification problems, each input in the prompt should be classified into one of your predefined classes.

Choose classes that map to a single token. At inference time, set the response length parameter to 1, since you only need the first token for classification.

Let's say you'd like to organize your customer support messages by topic. You may want to fine-tune GPT-J to categorize incoming support messages so they can be routed appropriately.

The dataset might look like the following:
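
For instance, with hypothetical support messages and the made-up categories Billing, Technical, and Shipping, a few examples might read:

    Classify the following customer support message into one of these categories: Billing, Technical, Shipping.
    Message: I was charged twice for my order last week.
    Category: Billing <|endoftext|>
    Classify the following customer support message into one of these categories: Billing, Technical, Shipping.
    Message: The app crashes every time I open the settings page.
    Category: Technical <|endoftext|>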

In the example above, we provided instructions for the model followed by the input containing the customer support message and the output to classify the message to the corresponding category. As a separator we used " <|endoftext|> " which clearly separated the different examples. The advantage of using " <|endoftext|> " as the separator is that the model natively uses it to indicate the end of a completion. It does not need to be set as a stop sequence either because the model automatically stops a completion before outputting " <|endoftext|> ".

Now we can query our model by making a Completion request.
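
As a rough sketch of what such a request could look like in Python (the endpoint URL, headers, and field names below are placeholders rather than Forefront's exact API; see our docs for the real request format), note the response length of 1 as recommended above:

    import requests

    # Placeholder endpoint and field names for illustration only.
    response = requests.post(
        "https://your-deployment-url.example.com/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "prompt": (
                "Classify the following customer support message into one of these "
                "categories: Billing, Technical, Shipping.\n"
                "Message: My package never arrived.\n"
                "Category:"
            ),
            "length": 1,       # only the first token is needed for classification
            "temperature": 0,  # deterministic output for classification
        },
    )
    print(response.json())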

Sentiment Analysis

Sentiment Analysis is the act of identifying and extracting opinions within a given text across blogs, reviews, social media, forums, news, etc. Let's say you'd like to gauge the degree to which a particular product review is positive or negative.

The dataset might look like the following:
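
With hypothetical product reviews, the file might contain examples like:

    Review: The headphones broke after two days and the sound was muffled.
    Sentiment: Negative <|endoftext|>
    Review: Fantastic battery life and the setup took less than a minute.
    Sentiment: Positive <|endoftext|>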

Now we can query our model by making a Completion request.


Chatbot

The purpose of a chatbot is to simulate human-like conversations with users via text message or chat. You could fine-tune GPT-J to imitate a specific person, or to respond in certain ways given the context of a conversation for use in a customer support situation. First, let's look at getting GPT-J to imitate Elon Musk.

The dataset might look like the following:
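
A hypothetical excerpt of such a transcript-style file, with no separators and just continuous dialogue, could look like:

    User: What do you think is the biggest bottleneck for getting to Mars?
    Elon Musk: It really comes down to the cost per ton to orbit. If the rocket is fully and rapidly reusable, the economics start to work.
    User: How do you think about risk when making those decisions?
    Elon Musk: If something is important enough, you do it even if the odds are not in your favor.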

Here we purposely left out the separators that would divide specific examples. When attempting to imitate a specific person, you can instead opt for compiling long-form conversations, since the goal is to capture a wide variety of responses in an open-ended format.

You could query the model by making a Completion request.


Notice that we provide "User:" and "Elon Musk:" as stop sequences. It's important to anticipate how the model may continue to provide completions beyond the desired output and use stop sequences to stop the model from continuing. Given the pattern of the dataset, where the User says something followed by Elon Musk, it makes sense to use "User:" and "Elon Musk:" as the stop sequences.

A similar but different chatbot use case would be that of a customer support bot. Here, we'll go back to providing specific examples with separators so the model can identify how to respond in different situations. Depending on your customer support needs, this use case could require a few thousand examples, as it will likely deal with different types of requests and customer issues.

The dataset might look like the following:
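
A hypothetical excerpt, using "#####" as the separator between exchanges (and an "Agent:" label standing in for whatever role name you choose), might look like:

    Customer: I can't log in to my account, it says my password is invalid.
    Agent: Sorry about that! You can reset your password from the login page by clicking "Forgot password"; the reset email usually arrives within a few minutes.
    #####
    Customer: How do I change the shipping address on my order?
    Agent: If the order hasn't shipped yet, you can update the address under Orders > Edit shipping details. Otherwise, reply here and we'll contact the carrier for you.
    #####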

An optional improvement that could be made to the above dataset would be to provide more context and exchanges leading up to the resolution for each example. However, this depends on the role you're hoping to fill with your customer support chatbot.

Now we can query our model by making a Completion request.


As with the previous example, we're using "Customer:" and "#####" as stop sequences so the model stops after providing the relevant completion.

Entity Extraction

The main purpose of entity extraction is to extract information from given text to understand the subject, theme, or other pieces of information like names, places, etc. Let's say you'd like to extract names from provided text.

The dataset might look like the following:
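
With hypothetical sentences and the names they contain, examples might look like:

    Text: Marie Curie met with Albert Einstein at the 1927 Solvay Conference in Brussels.
    Names: Marie Curie, Albert Einstein <|endoftext|>
    Text: The report was co-authored by Dr. Priya Patel and reviewed by James O'Connor.
    Names: Priya Patel, James O'Connor <|endoftext|>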

Now we can query our model by making a Completion request.


Idea Generation

A common use case is to use GPT-J to generate ideas provided specific information. Whether it's copy for ads or websites, blog ideas, or products, generating ideas is a useful task for GPT-J. Let's look at the aforementioned use case of generating blog ideas.

The dataset might look like the following:
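
Using made-up companies for illustration, examples might look like:

    Company: Acme Analytics
    Product description: A dashboard that turns raw e-commerce data into plain-English weekly reports.
    Blog ideas: 5 metrics every online store should track; How to turn sales data into a weekly growth ritual; Why plain-English reporting beats spreadsheets <|endoftext|>
    Company: TrailKit
    Product description: Subscription boxes with curated gear for weekend hikers.
    Blog ideas: What to pack for your first overnight hike; 7 underrated trails within two hours of a major city; How to build a hiking habit that sticks <|endoftext|>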

Now we can query our model by making a Completion request.


Following the above examples when preparing a dataset for your use case should lead to well-performing fine-tuned GPT-J models. If performance is not up to par, it can be helpful to also provide explicit instructions with each example, as in the following dataset for blog idea generation.
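
A hypothetical version of the blog idea dataset with an explicit instruction added to each example could look like:

    Generate three blog post ideas for the company described below.
    Company: Acme Analytics
    Product description: A dashboard that turns raw e-commerce data into plain-English weekly reports.
    Blog ideas: 5 metrics every online store should track; How to turn sales data into a weekly growth ritual; Why plain-English reporting beats spreadsheets <|endoftext|>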

If you need custom assistance or support with preparing a dataset for your use case, get in touch with our team.


How You Can Use GPT-J

Forefront Team
September 15, 2021

Generative Pre-trained Transformer (GPT) models, the likes of which GPT-J and GPT-3 belong to, have taken the NLP community by storm. These powerful language models excel at performing various NLP tasks like question-answering, entity extraction, categorization, and summarization without any supervised training. They require very few to no examples to understand a given task and can match or even outperform state-of-the-art models trained in a supervised fashion.


GPT-J is a 6-billion-parameter transformer-based language model released by a group of AI researchers called EleutherAI in June 2021. The goal of the group since forming in July of 2020 has been to open-source a family of models designed to replicate those developed by OpenAI. Their current focus is on the replication of the 175-billion-parameter language model, GPT-3. But don’t let the difference in parameter size fool you. GPT-J outperforms GPT-3 in code generation tasks and, through fine-tuning, can outperform GPT-3 on a number of common natural language processing (NLP) tasks. The purpose of this article is to outline an array of use cases that GPT-J can be applied to and excel at. For information on how to fine-tune GPT-J for any of the use cases below, check out our fine-tuning tutorial.



Use cases


Code generation

The most natural use case for GPT-J is generating code. GPT-J was trained on a dataset called the Pile, an 825GB collection of 22 smaller datasets—including academic sources (e.g., Arxiv, PubMed), communities (StackExchange, Wikipedia), code repositories (GitHub), and more. The inclusion of GitHub data has led to GPT-J outperforming GPT-3 on a variety of code generation tasks. While “vanilla” GPT-J is proficient at this task, it becomes even more capable when one fine-tunes the model on any given programming language.


To get started fine-tuning GPT-J for code generation, check out the CodeSearchNet corpus on Hugging Face, containing 2 million comment/code pairs from open-source libraries hosted on GitHub for Go, Java, JavaScript, PHP, Python, and Ruby.
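
As a starting point (assuming the Hugging Face datasets library and the code_search_net dataset identifier; check the dataset card for the exact name and fields), loading the Python portion might look like:

    from datasets import load_dataset

    # Each record pairs a function's code with its accompanying documentation/comment.
    code_search_net = load_dataset("code_search_net", "python")
    print(code_search_net["train"][0])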


Input:

GPT-J code generation input

Output:

GPT-J code generation output


Chat bot

An increasingly common NLP use case is to build a chatbot. A chatbot is software that simulates human-like conversations with users via text message or chat. Since their main commercial use case is to help users by providing answers to their questions, chatbots are commonly used in a variety of customer support scenarios. However, chatbots can also be used to imitate specific people like Kanye West.


Regardless of your reason for using a chatbot, it is recommended to fine-tune GPT-J by providing transcripts of the specific task. For instance, let’s say you want a custom chatbot to assist with customer support requests. A simple method to curate a fine-tuning dataset would be to record transcripts of typical customer support exchanges between your team and customers. On the order of one hundred examples would be enough for GPT-J to become proficient at your company’s specific customer support tasks.


GPT-J Kanye West chatbot


Story writing

Story writing is simply the craft of producing a work of fiction written with clear grammatical structure and a natural flow of speech.


Story writing with GPT-J becomes interesting as one could fine-tune to a particular author’s writing style or book series. Imagine having a Stephen King writing bot or a bot that could help generate books 6 and 7 of Game of Thrones because, let’s be honest, George R.R. Martin is dragging his feet at this point.


Here’s an example of the beginning of a fictitious piece written by GPT-J-6B:


Story written by GPT-J


Entity extraction

The main purpose of entity extraction is to extract information from given text to understand the subject, theme, or other pieces of information like names, places, etc. Some interesting use cases for entity extraction include:


  • Financial market analysis: Extract key figures from financial news articles or documents to use as signals for trading algorithms or market intelligence
  • Email inbox optimization: Notify users of flight times, meeting locations, and credit card charges without having to open emails
  • Content recommendation: Extract information from articles and media to recommend content based on entity similarity and user preferences


GPT-J shines new light on entity extraction, providing a model that is adaptive to both general text and specialized documents through few-shot learning.

GPT-J entity extraction example


Summarization

Summarization is the process of summarizing information in given text for quicker consumption without losing its original meaning. GPT-J is quite proficient out-of-the-box at summarization. What follows is an example of taking a snippet of the Wikipedia article for Earth and tasking GPT-J to provide a short summary.


Input:



Article before being summarized by GPT-J


Output:

Article after being summarized by GPT-J




Paraphrasing

Not to be confused with summarization, paraphrasing is the process of rewriting a passage without changing the meaning of the original text. Where summarization attempts to condense information, paraphrasing rewords the given information. While GPT-J is capable of summarization out-of-the-box, paraphrasing with GPT-J is best achieved through fine-tuning. Here is an example of paraphrasing a shorter snippet from the same Earth Wikipedia article in the previous summarization example after training on hand-written paraphrasing examples.


Input:

Article before being paraphrased by GPT-J


Output:

Article after being paraphrased by GPT-J


Copywriting

A widely used commercial use case for GPT-J and other transformer-based language models is copywriting for websites, ads, and general marketing. Copywriting is a crucial marketing process to increase website, ad, and other conversion rates. Through fine-tuning GPT-J on a given company’s voice or previously successful ad campaigns, GPT-J can automatically provide effective copy at a fraction of the cost of hiring a copywriter.

Input:

Company description for GPT-J

Output:

Ad text from GPT-J


Classification

Text classification is the process of categorizing text into organized groups. Unstructured text is everywhere, such as emails, text conversations, websites, and social media, and the first step in extracting value from this data is to categorize it into organized groups. This is another use case where fine-tuning GPT-J will lead to the best performance. By providing one hundred examples or more of your given classification task, GPT-J can perform as well as or better than the largest language models available, like OpenAI’s GPT-3 Davinci.

GPT-J Classification


Sentiment Analysis

Sentiment analysis is the act of identifying and extracting opinions within given text like blogs, reviews, social media, forums, news, etc. Perhaps you’d like to automatically analyze thousands of reviews about your products to discover if customers are happy about your pricing plans, or gauge brand sentiment on social media in real time so you can detect disgruntled customers and immediately respond. The applications of sentiment analysis are endless and applicable to any type of business.

GPT-J Sentiment Analysis


Given the infancy of large transformer-based language models, further experimentation will inevitably lead to more use cases that these models prove to be effective at. As you may have noticed, a number of the use cases are the result of fine-tuning GPT-J. At Forefront, we believe the discovery of more use cases will not only come from increased usage of these models, but by providing a simple experience to fine-tune that allows for easy experimentation and quick feedback loops. For a tutorial on easily fine-tuning GPT-J on Forefront, check out our recent tutorial.


How to Fine-Tune GPT-J

Forefront Team
September 7, 2021

Recent research in Natural Language Processing (NLP) has led to the release of multiple large transformer-based language models like OpenAI’s GPT-[2,3], EleutherAI’s GPT-[Neo, J], and Google’s T5. For those not impressed by the leap to billions of tunable parameters, the ease with which these models can perform a never-before-seen task without training a single epoch is something to behold. While it has become evident that the more parameters a model has, the better it will generally perform, an exception to this rule appears when one explores fine-tuning. Fine-tuning refers to the practice of further training transformer-based language models on a dataset for a specific task. This practice has led to the 6-billion-parameter GPT-J outperforming the 175-billion-parameter GPT-3 Davinci on a number of specific tasks. As such, fine-tuning will continue to be the modus operandi when using language models in practice, and, consequently, fine-tuning is the main focus of this post. Specifically, how to fine-tune the open-source GPT-J-6B.


Curate a dataset

The first step in fine-tuning GPT-J is to curate a dataset for your specific task. The specific task for this tutorial will be to imitate Elon Musk. To accomplish this, we compiled podcast transcripts of Elon’s appearances on the Joe Rogan Experience and Lex Fridman Podcast into a single text file. Here’s the text file for reference. Note that the size of the file is only 150kb. When curating a dataset for fine-tuning, the main focus should be to encapsulate an evenly-distributed sample of the given task instead of prioritizing raw size of the data. In our case, these podcast appearances of Elon were great as they encompass multiple hours of him speaking on a variety of different topics.
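
For reference, compiling several transcript files into one training file can be as simple as the following sketch (the folder and file names are placeholders):

    from pathlib import Path

    # Hypothetical folder containing one text file per podcast transcript.
    transcripts = sorted(Path("elon_transcripts").glob("*.txt"))
    combined = "\n".join(path.read_text() for path in transcripts)
    Path("elon_dataset.txt").write_text(combined)
    print(f"Dataset size: {len(combined.encode('utf-8')) / 1024:.0f} kB")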

If you plan on fine-tuning on a dataset of 100MB or greater, get in touch with our team before beginning. For more information on preparing your dataset, check out our guide.



Fine-tuning GPT-J on Forefront

Believe it or not, once you have your dataset, the hard part is done since Forefront abstracts all of the actual fine-tuning complexity away. Let’s go over the remaining steps to train your fine-tuned model.


Create deployment

Once logged in, click “New deployment”.

Create deployment


Select Fine-tuned GPT-J

From here, we’ll add a name and optional description for the deployment then select "Fine-tuned GPT-J".



Select fine-tuned GPT-J


Upload dataset

Then, we’ll upload our dataset in the form of a single text file. Again, if the dataset is 100MB or greater, get in touch with our team.

Upload dataset


Set training duration

A good rule of thumb for smaller datasets is to train 5-10 minutes every 100kb. For text files in the order of megabytes, you’ll want to train 45-60 minutes for every 10MB.

Set training duration

Set number of checkpoints

A checkpoint is a saved model version that you can deploy. You’ll want to set a number of checkpoints that evenly divides the training duration.

Set number of checkpoints


Add test prompts

Test prompts are prompts that every checkpoint will automatically provide completions for so you can compare the performance of the different models. Test prompts should be pieces of text that are not found in your training text file. This allows you to see how good the model is at understanding your topic and prevents the model from regurgitating information it has seen in your training set.

You can also customize model parameters for your specific task.

Add test prompts


Once your test prompts are set, you can press 'Fine-tune' and your fine-tuned model will begin training. You may notice the estimated completion time is longer than your specified training time. This is because it takes time to load the base weights prior to training.

View test prompts

As checkpoints begin to appear, you can press 'View test prompts' to start comparing performance between your different checkpoints.


View test prompts button
View test prompts


Deploy to Playground and integrate in application

Now for the fun part: deploying your best-performing checkpoint(s) for further testing in the Playground or integration into your app.

Deploy checkpoint


To see how simple it is to use the Playground and integrate your GPT-J deployment into your app, check out our tutorial on deploying standard GPT-J.


Fine-tuning GPT-J by yourself

Using Forefront isn’t the only way to fine-tune GPT-J. For a tutorial on fine-tuning GPT-J by yourself, check out Eleuther’s guide. However, it’s important to note that not only do you save time by fine-tuning on Forefront, but it’s absolutely free—saving you $8 per hour of training. Also, when you go to deploy your fine-tuned model you save up to 33% on inference costs with increased throughput by deploying on Forefront.

Helpful Tips

  1. Prioritize quality samples of the given task over a large dataset when curating your dataset.
  2. Train 5-10 minutes per 100kb or 45-60 minutes per 10MB of your dataset.
  3. Save a number of checkpoints that evenly divides the number of minutes you're training. Saving more than 10-15 checkpoints returns diminishing value and makes assessing quality difficult.
  4. Set test prompts that are not included in your dataset.
  5. You can deploy multiple checkpoints and conduct further testing in our Playground. Deployed checkpoints are pro-rated according to time deployed.
  6. For more detailed information on fine-tuning and preparing your dataset, refer to our docs.

These tips are meant as loose guidelines and experimentation is encouraged.


At Forefront, we believe building a simple experience for fine-tuning can increase experimentation with quicker feedback loops so companies and individuals can apply language models to a myriad of problems. If you have any ideas on how we can further improve the fine-tuning experience, please get in touch with our team.


How to Deploy GPT-J

The Forefront Team
August 30, 2021

More than one year has passed since the public release of OpenAI's API for GPT-3. Since then, thousands of developers and hundreds of companies have started building on the platform to apply the transformer-based language model to a variety of NLP problems.

In its wake, EleutherAI, a team of AI researchers open-sourcing their work, released their first implementation of a GPT-like system, the 2.7B parameter GPT-Neo, and most recently, the 6B parameter GPT-J. Before getting into GPT-J deployments, let's understand why a company or developer would use GPT-J in the first place.

So why would one prefer to use the open-source 6B parameter GPT-J over the 175B parameter GPT-3 Davinci? The answer comes down to cost and performance.

First, let's talk about cost. With GPT-3, you pay per 1000 tokens. For the unacquainted, you can think of tokens as pieces of words, where 1000 tokens are about 750 words. So with GPT-3, your costs scale directly with usage. On the other hand, the open-source GPT-J can be deployed to cloud infrastructure, enabling you to get effectively unlimited usage while only incurring the cost of the cloud hardware hosting the model.

Now let's talk about performance. "Bigger is better" has become an adage for a reason, and transformer-based language models are no exception. While a 100B-parameter transformer model will generally outperform a 10B-parameter one, the keyword is generally. Unless you're trying to solve general artificial intelligence, you probably have a specific use case in mind. This is where fine-tuning GPT-J, or specializing the model on a dataset for a specific task, can lead to better performance than GPT-3 Davinci.

Now that we've discussed why one would use GPT-J over GPT-3 to lower costs at scale and achieve better performance on specific tasks, we'll discuss how to deploy GPT-J.

How to deploy GPT-J on Forefront

For this tutorial, we'll be deploying the standard GPT-J-6B.

Create deployment

Once logged in, you can click "New deployment".

Select Vanilla GPT-J

From here, add a name and optional description for your deployment then select "Vanilla GPT-J".

Select Vanilla GPT-J

Press "Deploy"

Navigate to your newly created deployment, and press "Deploy" to deploy your Vanilla GPT-J model.

Deploy Vanilla GPT-J

Replica count

From your deployment, you can control the replica count for your deployments as usage increases to maintain fast response speeds at scale.

GPT-J Replica Count

Inferencing

To begin inferencing, copy the URL under the deployment name and refer to our docs for a full set of instructions on passing requests and receiving responses.

Inferencing GPT-J

You can expect all the parameters you'd typically use with GPT-3 like response length, temperature, top P, top K, repetition penalty, and stop sequences.
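
For illustration, a request using the full set of parameters might look like the following sketch (the URL and field names are placeholders rather than our exact API; refer to our docs for the real format):

    import requests

    # Placeholder URL and field names for illustration only.
    response = requests.post(
        "https://your-deployment-url.example.com/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "prompt": "Give me a list of fiction books. Here's an example list: Harry Potter, Game of Thrones, Lord of the Rings.",
            "length": 100,              # response length in tokens
            "temperature": 0.8,
            "top_p": 1,                 # leave at 1 when using temperature
            "top_k": 40,
            "repetition_penalty": 1.2,
            "stop_sequences": ["\n\n"],
        },
    )
    print(response.json())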

Playground

You can also navigate to Playground to experiment with your new GPT-J deployment without needing to use Postman or implement any code.

Deploying GPT-J on Forefront takes only a few minutes. On top of the simplicity we bring to the deployment process, we've made several low-level machine code optimizations enabling your models to run at a fraction of the cost compared to deploying on Google's TPU v2 with no loss in throughput. If you're ready to get started deploying GPT-J, get in touch with our team.
