July 21, 2024 • roman

The company tested Codestral Mamba on in-context retrieval capabilities at up to 256k tokens, twice the context length supported by OpenAI's GPT-4o, and found that the 7B version outperformed comparable open-source models on several coding benchmarks, including HumanEval, MBPP, Spider, and CruxEval.
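
The article does not describe Mistral's exact methodology, but long-context retrieval is commonly probed with a "needle in a haystack" style test: bury one fact deep inside a long filler context and check whether the model can recall it. The sketch below illustrates the idea; `query_model` is a hypothetical stand-in for whatever inference endpoint you use, not an API from the article.

```python
# Generic "needle in a haystack" probe of in-context retrieval.
# This is an illustrative sketch, not Mistral's actual test harness;
# `query_model` is a hypothetical callable that maps prompt -> completion.
import random


def build_haystack(needle: str, filler: str, n_fillers: int) -> str:
    """Bury one fact (the needle) at a random spot in repeated filler text."""
    lines = [filler] * n_fillers
    lines.insert(random.randrange(n_fillers), needle)
    return "\n".join(lines)


def retrieval_probe(query_model, n_fillers: int = 10_000) -> bool:
    """Return True if the model recalls the buried fact from the long context."""
    needle = "The secret deployment key is 47-ALPHA."
    context = build_haystack(needle, "The sky was clear over the harbor.", n_fillers)
    prompt = f"{context}\n\nQuestion: What is the secret deployment key?\nAnswer:"
    return "47-ALPHA" in query_model(prompt)
```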

The larger 22B-parameter version of the new model also performed significantly better than CodeLlama-34B on every benchmark except CruxEval.

While the 7B version is available under the permissive Apache 2.0 license, the larger 22B version is offered under a commercial license for self-deployment or a community license for testing purposes.
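
For teams considering self-deployment of the Apache 2.0-licensed 7B model, a minimal sketch using the Hugging Face transformers library might look like the following. The repo id `mistralai/Mamba-Codestral-7B-v0.1` and the generation settings are assumptions based on the public model listing, not details from the article, and loading it requires a transformers build with Mamba-2 support.

```python
# Minimal self-deployment sketch (assumptions: the model is published on the
# Hugging Face Hub as "mistralai/Mamba-Codestral-7B-v0.1", and your installed
# transformers version supports Mamba-2 architectures).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mamba-Codestral-7B-v0.1"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask the code model to continue a function definition.
prompt = "def fibonacci(n: int) -> int:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```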


