
On April 18th local time, Meta released its latest open-source large model, Llama 3, providing pre-trained and instruction-fine-tuned versions at 8B and 70B parameters.
According to Meta's official introduction, Llama 3 was trained on more than 15T tokens of data using two custom-built 24K-GPU clusters; the training dataset is seven times larger than the one used for Llama 2 and contains four times as much code. Llama 3 also supports an 8K context length, double that of Llama 2. In addition, Meta released benchmark comparisons between the two Llama 3 versions and competitors such as Google's Gemma and Gemini, Mistral, and Anthropic's Claude 3.
Meta CEO Mark Zuckerberg said that Meta AI will be integrated into the search boxes at the top of Meta's major products, including WhatsApp, Instagram, Facebook, and Messenger, and that a dedicated website, meta.ai, will make it easier to use.
Meta Chief Scientist Yann LeCun stated that more Llama versions will be released in the coming months, along with a research paper on Llama 3. Jim Fan, a senior scientist at Nvidia, believes that a future release of Llama 3 at 400B parameters or above would be a "watershed" moment, giving the open-source community access to GPT-4-level models.