Google opens up lightweight large model Gemma: is the era of general-purpose AI arriving?
hughmini
Published 2024-2-22 11:24:19
Google released a new artificial intelligence "open model," Gemma, on February 21. Opening up a large model means external developers can build it into their own products. With this move, Google becomes another major technology company, after Meta, to take the open route with large models and accelerate the arrival of general-purpose AI.
Google stated that Gemma is a family of "lightweight" advanced open models built with the same research and technology used to create the Gemini models. Developers can use the Gemma open models to build artificial intelligence software for free. The company said it is publicly releasing key technical assets, such as the model weights.
Google CEO Sundar Pichai said, "Gemma demonstrates strong performance, and starting today it will be available globally, able to run on a laptop or on Google Cloud."
Market analysts suggest that opening up its large models may attract software engineers to build on Google's technology stack and encourage use of its newly profitable cloud division. Google noted that the models have also been optimized for Google Cloud.
However, Gemma is not fully open source: the company can still set terms of use and retain ownership of the model.
Compared with the Gemini models Google released earlier, the Gemma models are smaller, available in 2-billion- and 7-billion-parameter versions. Google has not disclosed the parameter count of its largest Gemini model.
Google stated, "Gemini is the largest and most capable AI model widely available today. The Gemma models share technology and infrastructure components with Gemini, and can run directly on a developer's laptop or desktop."
The company also emphasized that Gemma outperforms models with larger parameter counts on key benchmarks, while adhering to strict standards for safe and responsible output.
By comparison, Meta's open Llama 2 models top out at 70 billion parameters, while OpenAI's GPT-3 has 175 billion.
In a technical report, Google compared its 7-billion-parameter Gemma model against several others, including the 7-billion- and 13-billion-parameter versions of Llama 2 and the 7-billion-parameter Mistral model, across multiple dimensions. Gemma outperformed these competitors on benchmarks covering question answering, reasoning, mathematics/science, and code.
At Gemma's release, Nvidia said it had partnered with Google to ensure the model runs smoothly on its chips, and that it would soon release chatbot software designed to work with Gemma.
Opening up smaller-parameter AI models is also a business strategy for Google. Previously, iFlytek likewise chose to open-source its smaller models.
Liu Qingfeng, Chairman of iFlytek, explained to a reporter from First Financial News: "For general-purpose large models, the key is whose performance is best; open-sourcing large models is about building an ecosystem. So from a technical standpoint, open-source large models are generally slightly behind the general-purpose flagship models."
"We have also observed that many companies may hold back their largest models, hoping to preserve a barrier for commercialization," a researcher working on large AI models told a reporter from First Financial.
Views on open-sourcing large models remain divided. Some experts worry that open AI models could be abused, while others support the open approach, arguing that it accelerates technological development and broadens who benefits.
CandyLake.com is an information publishing platform and only provides information storage space services.
Disclaimer: The views expressed in this article are those of the author only, this article does not represent the position of CandyLake.com, and does not constitute advice, please treat with caution.