Stanford’s Alpaca shows that OpenAI may have a problem



Summary

Researchers fine-tune a language model from Meta with text generated by OpenAI’s GPT-3.5 for less than $600 – and achieve similar performance.

Training large language models is expensive, and powerful models remain the monopoly of large technology companies – right?

Perhaps not.

Researchers at Stanford used 52,000 instruction-following demonstrations generated by OpenAI’s GPT-3.5 (text-davinci-003) to fine-tune a seven-billion-parameter variant of Meta’s recently announced LLaMA model.
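
The data pipeline follows the self-instruct approach: a handful of human-written seed tasks are shown to text-davinci-003, which is prompted to invent new instruction-following examples. A minimal sketch of that loop, assuming the legacy openai-python (pre-1.0) Completions API – the seed task and prompt template here are simplified placeholders, not Stanford’s actual ones:

```python
# Sketch of self-instruct-style data generation, assuming the legacy
# openai-python (<1.0) Completions API. Seed task and prompt template
# are simplified placeholders, not Stanford's actual ones.
import json
import openai

openai.api_key = "sk-..."  # your OpenAI API key

PROMPT_TEMPLATE = """Come up with a new instruction-following task.

Here are some examples:
{examples}

Write one new task as JSON with the keys "instruction", "input" and "output":"""

seed_tasks = [
    {"instruction": "Name three primary colors.", "input": "",
     "output": "Red, blue, and yellow."},
]

def generate_demonstration(seeds):
    examples = "\n".join(json.dumps(task) for task in seeds)
    response = openai.Completion.create(
        model="text-davinci-003",  # the GPT-3.5 model Stanford used
        prompt=PROMPT_TEMPLATE.format(examples=examples),
        max_tokens=256,
        temperature=1.0,  # high temperature encourages task diversity
    )
    return response["choices"][0]["text"]

# Repeating this call, then parsing, deduplicating, and filtering the
# outputs, yields a dataset like Alpaca's 52,000 demonstrations.
```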


Instruction training is one of the key techniques that make GPT-3.5 superior to the original GPT-3, and the instruction data OpenAI used for it is proprietary.

While reinforcement learning from human feedback (RLHF) is critical for tuning models like ChatGPT or even GPT-4, the essential capabilities of these models come from their original training – which already includes instruction training.
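
Conceptually, the fine-tuning step itself is plain supervised learning: each demonstration is rendered into a prompt-plus-response string and the model is trained with the usual causal language modeling loss. A sketch of that step, assuming the Hugging Face transformers LLaMA classes and an Alpaca-style JSON file – paths and hyperparameters are illustrative, not Stanford’s exact configuration:

```python
# Sketch of supervised instruction fine-tuning on Alpaca-style records
# ({"instruction": ..., "output": ...}), assuming the Hugging Face
# transformers LLaMA classes. Paths and hyperparameters are illustrative.
import json
import torch
from torch.utils.data import Dataset
from transformers import (
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

# Prompt format modeled on the template in Stanford's released code.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

class InstructionDataset(Dataset):
    def __init__(self, path, tokenizer, max_len=512):
        with open(path) as f:
            self.records = json.load(f)
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.records)

    def __getitem__(self, i):
        record = self.records[i]
        text = PROMPT.format(instruction=record["instruction"]) + record["output"]
        enc = self.tokenizer(
            text,
            truncation=True,
            padding="max_length",
            max_length=self.max_len,
            return_tensors="pt",
        )
        ids = enc.input_ids[0]
        labels = ids.clone()
        # Don't compute loss on padding (a fuller version would also
        # mask the prompt tokens in the labels).
        labels[ids == self.tokenizer.pad_token_id] = -100
        return {
            "input_ids": ids,
            "attention_mask": enc.attention_mask[0],
            "labels": labels,
        }

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b")
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = LlamaForCausalLM.from_pretrained("path/to/llama-7b")

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-7b",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=InstructionDataset("alpaca_data.json", tokenizer),
)
trainer.train()
```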

Stanford’s Alpaca trains with OpenAI output

In their work, the Stanford group used the AI-generated instructions to train Alpaca 7B, a language model that the researchers say exhibits many GPT-3.5-like behaviors. In a blind test using input from the Self-Instruct Evaluation Set, both models performed comparably, the team says.

Alpaca has problems common to other language models, such as hallucinations, toxicity, and stereotyping. In particular, hallucinations occur more frequently than in the OpenAI model.

The team is releasing an interactive demo, the training dataset, and the training code. They have also asked Meta for permission to release the model. With the release, the team hopes to enable research on language models trained with instructions. To prevent misuse, they have included a content filter via the OpenAI API and a watermark in the demo.
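
The announcement doesn’t detail the filter, but a content check of this kind can be built on OpenAI’s Moderation endpoint. A minimal sketch, assuming the legacy openai-python client – the refusal message is a hypothetical placeholder:

```python
# Minimal sketch of a content filter like the demo's, assuming the
# legacy openai-python (<1.0) Moderation endpoint.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def is_flagged(text: str) -> bool:
    """Return True if OpenAI's moderation model flags the text."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

reply = "some model output"  # placeholder for Alpaca's generated text
if is_flagged(reply):
    reply = "I can't help with that."  # hypothetical refusal message
```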


You can try Stanford’s Alpaca 7B for free.


