Microsoft seeks Plan B for more cost-effective AI, sidestepping OpenAI’s GPT-4



Summary

Microsoft owns just under half of OpenAI, but is reportedly still working on a Plan B: AI that is more efficient to run.

The reason, according to The Information, is that OpenAI’s AI models, such as GPT-4, are expensive to run. The outlet cites two sources: a current Microsoft employee and one who recently left the company.

Microsoft research chief Peter Lee has tasked “many” of the company’s 1,500 researchers with developing conversational AI systems that are smaller and cheaper to run, though they are likely to be less capable than GPT-4.

Microsoft is currently building AI capabilities into nearly all of its products, even releasing a chat-based Copilot for Windows. With potentially more than a billion users, the cost of running AI could explode, wiping out the economic benefits of broader and more intensive use. Google faces a similar challenge as it introduces generative AI to Google Search.

Good AI is expensive

The shift is reportedly still in its early stages. However, Microsoft’s product teams are allegedly already testing in-house AI models in products such as Bing Chat.

According to Mikhail Parakhin, Microsoft’s head of search, Bing Chat currently uses 100 percent GPT-4 in Creative and Precision modes. In Balanced mode, which is probably the mode most users choose, Microsoft uses its “Prometheus model,” a “collection of skills and techniques,” as well as its Turing language models as a supplement.

The Turing models are less powerful than GPT-4. They are designed to recognize and answer simple questions themselves and to pass more difficult questions on to GPT-4, a hand-off that may not always work reliably.
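The hand-off described above is a model cascade: a cheap model answers easy queries, and only uncertain cases are escalated to the expensive model. A minimal sketch, assuming a confidence-based router; all names and thresholds here are illustrative, not Microsoft's actual implementation:

```python
# Hypothetical sketch of a model cascade: the small model answers when it
# is confident, otherwise the query escalates to the large (GPT-4-class)
# model. `small_model` returns (answer, confidence); `large_model` returns
# an answer. Both are stand-ins, not real Microsoft or OpenAI APIs.

def route(query, small_model, large_model, confidence_threshold=0.8):
    answer, confidence = small_model(query)
    if confidence >= confidence_threshold:
        return answer            # cheap path: small model is confident
    return large_model(query)    # expensive fallback for hard questions
```

The economics depend entirely on how often the cheap path is taken and how well the small model judges its own confidence, which is presumably why such routing "may not always work well."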

Microsoft also recently introduced the 1.3-billion-parameter Phi-1, a small and efficient code model that was trained on textbook-quality data. It achieves the code performance of much larger models in Python, but falls far short of GPT-4.

In addition, Microsoft is experimenting with models such as Orca, which is based on Meta’s Llama 2 and is said to partially reach the level of GPT-3.5 and GPT-4, although it is smaller and more efficient. However, benchmark results and real-world user experience can be quite different.
