
It has been almost a week since Google launched Gemini, its most powerful model to date, and many domestic AI companies are trying to gauge the power of this large model.
Unlike many large models previously launched in the industry, Google's Gemini goes beyond text alone, relying directly on vision and sound to understand the world, even though its demo has been accused of being staged and of exaggerating the model's capabilities.
Gemini's demonstration videos led many users to mistakenly believe that Gemini can read video in real time and answer user questions based on that understanding. In reality, Google employees generated these responses through prompts. Image source: Google
To understand the impact of Gemini's emergence on OpenAI and other AI companies, Interface News recently interviewed business leaders and developers at several top generative AI companies. They believe Gemini's biggest distinguishing feature is that it is a "native" multimodal large model.
"In theory, native multimodal models are more effective than 'concatenated' multimodal models, because the latter tend to hit bottlenecks during the training phase," Chen Yujun, AI manager at Recurrent Intelligence, told Interface News, adding that since Gemini has not yet been used in depth, its actual advantages remain to be seen.
Several large-model startup developers said that even though Ultra, the largest model in the Gemini series, has not yet officially launched, Gemini has already demonstrated text abilities on par with GPT-4.
In the benchmark results released by Google, Gemini Ultra outperforms GPT-4 on most text tests and outperforms GPT-4V on almost all multimodal tasks. Under GPT-4's own testing conditions, Gemini Ultra scores lower than GPT-4 on MMLU but still outperforms other mainstream large models. Image source: Gemini technical report; CITIC Construction Investment research report
In Gemini's demonstration video, the model appears to observe human behavior in real time and give feedback: it can describe the process of drawing a duck from sketch to coloring; track the ball in a cup-shuffling game and help solve math and physics problems; and recognize gestures, play interactive classroom games, and rearrange sketches of the planets.
Developers generally believe that, setting aside the staged portions, Gemini has demonstrated strong abilities in understanding, reasoning, creation, and real-time interaction, comprehensively surpassing OpenAI's multimodal model GPT-4V. Google's response has also been broadly accepted by the industry: "All user prompts and outputs are genuine, only shortened for brevity."
GPT-4V, which OpenAI quietly released three months ago, can perform multimodal tasks such as image understanding, but its results are mediocre, and its key reasoning steps rely on cooperating with other models. Abstract reasoning is itself the most critical capability of a large model.
Image source: CITIC Construction Investment
Yin Bohao explained to Interface News that GPT-4V and Gemini are built on two completely different training logics. "GPT-4V is like a nearsighted person who cannot see clearly, so its performance suffers; it is a typical patchwork scheme. Gemini trains multiple modalities together."
But in the view of an algorithm manager at a multimodal large-model company, Gemini has not completely surpassed GPT-4. "In the evaluations, GPT-4 and Gemini were not compared on fully equal terms in text generation."
Many users who have tested Gemini Pro report that its accuracy in identifying objects in images surpasses GPT-4's. Liu Yunfeng of Zhuiyi Technology attributes this to Google's search business, which naturally yields data aligning text with other modalities, a genuine advantage for training native multimodal large models.
Gemini can correctly recognize a student's handwritten answer and check the reasoning in a physics problem. Image source: Gemini technical report
Any major move by Google in artificial intelligence opens up new directions for the market to explore, but even before Gemini's release, the trend toward fully multimodal AI models had become increasingly clear.
As early as the release of GPT-4 in March, OpenAI said multimodality would be added in that iteration. Since September, star companies such as Runway, Midjourney, Adobe, and Stability AI have launched a series of multimodal products.
On the domestic side, Baidu's Wenxin large model 4.0 has made significant progress in cross-modal text-to-image generation. Zhipu AI, the Chinese large-model startup with the highest disclosed financing, offers a generative AI assistant, Zhipu Qingyan, with notable strengths in the visual domain.
Multiple developers told Interface News that multimodal large models are a recognized development direction in the industry, one that did not need Google's move to bring it to attention; the arrival of Gemini will, however, spur domestic companies to accelerate research and development. The algorithm manager at the aforementioned multimodal large-model company also pointed out Gemini's limitations: "Its ability in image generation, and its reference value for video and image generation, are limited."
At present it is hard to conclude that Gemini has completely surpassed GPT-4, but it is undeniable that Google has become OpenAI's strongest rival. With Gemini, Google has also demonstrated a point: a multimodal large model must still rely on the training process of a large language model to achieve truly multimodal AI.