Serve 2x higher throughput with fewer replicas and no TPU blocking.
Specialize GPT-J by providing examples of your task, and the resulting model can often outperform GPT-3 Davinci.
Prepare and upload a text file with samples of the task you’d like to fine-tune GPT-J on.
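As a rough illustration of the step above, the snippet below builds a plain-text training file of task samples. The filename, the sentiment-classification examples, and the one-sample-per-line layout are assumptions for the sketch, not the service’s documented format.

```python
# Hypothetical sketch: assembling a plain-text file of task samples
# to upload for fine-tuning. Filename and layout are assumptions.
samples = [
    "Review: The food was great. Sentiment: positive",
    "Review: Service was slow. Sentiment: negative",
]

with open("train.txt", "w", encoding="utf-8") as f:
    # One sample per line; a real dataset would contain many more examples.
    f.write("\n".join(samples))
```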
Define the number of minutes GPT-J should be trained on your provided dataset and how many checkpoints should be saved.
Set custom prompts to auto-test your checkpoints, and deploy any checkpoint to instantly retrieve an endpoint for inference.
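To sketch what calling a deployed checkpoint might look like, the snippet below builds a JSON inference request. The endpoint URL, header, and payload field names are illustrative assumptions, not the service’s actual API.

```python
import json

# Hypothetical sketch: the URL, auth header, and payload fields below are
# assumptions for illustration, not a documented API.
ENDPOINT = "https://example.com/v1/inference"  # stand-in for a deployed checkpoint's endpoint

payload = {
    "prompt": "Review: The coffee was cold. Sentiment:",
    "max_tokens": 5,
    "temperature": 0.0,
}
body = json.dumps(payload).encode("utf-8")

# An actual call would send `body` with an HTTP client, e.g.:
# req = urllib.request.Request(ENDPOINT, data=body,
#                              headers={"Authorization": "Bearer <API_KEY>"})
print(body.decode("utf-8"))
```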
Instantly experiment with all your GPT-J deployments in a single playground.
“My fine-tuned GPT-J model significantly outperformed GPT-3 Davinci.”