• July 24, 2024
  • roman

Meta said it had partnered with Accenture, AWS, AMD, Anyscale, Cloudflare, Databricks, Dell, Deloitte, Fireworks.ai, Google Cloud, Groq, Hugging Face, IBM watsonx, Infosys, Intel, Kaggle, Microsoft Azure, Nvidia DGX Cloud, OctoAI, Oracle Cloud, PwC, Replicate, Sarvam AI, Scale.AI, SNCF, Snowflake, Together AI, and the UC Berkeley vLLM Project to make the Llama 3.1 family of models available and simpler to use.

While cloud service providers such as AWS and Oracle will offer the latest models, partners such as Groq, Dell, and Nvidia will let developers apply synthetic data generation and advanced retrieval-augmented generation (RAG) techniques, Meta said. It added that Groq has optimized low-latency inference for cloud deployments, and that Dell has achieved similar optimizations for on-premises systems.
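To make the RAG idea concrete: the pattern retrieves documents relevant to a user's question and prepends them to the prompt before the model answers. The sketch below is a minimal, self-contained illustration using a toy bag-of-words similarity over a hypothetical mini-corpus; production systems from the partners above would use neural embeddings, a vector database, and an actual model call, none of which are shown here.

```python
import math
import re
from collections import Counter

# Hypothetical mini-corpus standing in for an indexed document store.
DOCS = [
    "Llama 3.1 models come in several sizes for different workloads.",
    "Groq optimizes low-latency inference for cloud deployments.",
    "Dell provides similar optimizations for on-premises systems.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses neural encoders."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the prompt with retrieved context before calling a model."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does Groq handle inference?"))
```

The augmented prompt, rather than the bare question, is what gets sent to the model, which is how retrieved facts end up grounding the generated answer.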

Other large models, such as Claude, Gemini, and GPT-4o, are also served via APIs.


