MADRID, Nov. 21 (Portaltic/EP) –
Microsoft has presented Orca 2, a small language model that achieves reasoning capabilities comparable to those of large models, the result of strategic training with tailored synthetic data.
The technology company is working on ways to teach smaller language models, those with around 10 billion parameters or fewer, to reason. It first did so with Orca, a 13-billion-parameter model introduced in June that mimicked the reasoning process of large models.
Now it does so with the next iteration, Orca 2, which is available in 7-billion- and 13-billion-parameter versions. It is built on the Llama 2 base model, which Microsoft has developed together with Meta, and trained on custom synthetic data.
Large models, such as GPT-4 or PaLM, demonstrate their ability to reason by "answering complex questions, generating explanations, and even solving problems that require multi-step reasoning"; a capacity that, according to Microsoft, "has not been observed in smaller language models", as stated in its research blog.
The technology company has trained Orca 2 under the premise that the solution strategies used by large models may not be the best option for a smaller one. For this reason, it has used a "carefully filtered" synthetic data set with which it taught Orca 2 various reasoning techniques and different strategies for solving different tasks.
After evaluating the performance of this model on complex tasks, Microsoft states that "Orca 2 significantly outperforms similarly sized models (including the original Orca model) and achieves similar or better performance levels than models five to ten times larger."
"As larger models continue to excel, our work with Orca 2 marks a significant step in diversifying the applications and deployment options for language models," the company concludes.