Robin Li: Large models have basically eliminated hallucination

"If you want to ask me, what is the biggest change for the industry in the past 24 months? My answer must be that the big model has basically eliminated illusion, and its accuracy of answering questions has been greatly improved." Robin Lee, chairman of Baidu, said at the Baidu World Conference on November 12. Behind it, Enhanced Retrieval (RAG) technology plays an indispensable role, as large models utilize the retrieved information to guide the generation of text or answers.
Comment: RAG is not a new technology; AI search companies such as Perplexity already use it. An AI researcher told reporters that adopting a RAG-based AI search solution is not difficult in itself and is available to every vendor. RAG can indeed feed more accurate information into a large model's answers, but it cannot fundamentally improve the model's own capabilities. In tests by First Finance, ERNIE Bot still failed to accurately understand intent and produced garbled images. The capabilities of large models themselves are also improving, however, such as the long chain-of-thought reasoning introduced by OpenAI's o1 series, which can raise the quality of model responses.
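To make the mechanism concrete, below is a minimal sketch of the RAG pattern described above: retrieve the passages most relevant to a question, then prepend them to the prompt so the model answers from evidence rather than from memory alone. This is an illustrative toy, not Baidu's or Perplexity's actual pipeline; the TF-IDF retriever, the sample documents, and the `call_llm` placeholder are all assumptions introduced for the example.

```python
# Minimal RAG sketch: TF-IDF retrieval + prompt assembly.
# Not any vendor's real pipeline; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The Baidu World Conference was held on November 12.",
    "Retrieval-augmented generation grounds model answers in retrieved text.",
    "ERNIE Bot is Baidu's conversational AI product.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity and return the top k."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model answers from evidence,
    which is how RAG reduces hallucination."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("When was the Baidu World Conference held?"))
# The assembled prompt would then be sent to a large model, e.g.:
# answer = call_llm(build_prompt(query))  # hypothetical LLM call
```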
Yang Zhilin says Kimi's monthly active users exceed 36 million
On November 16th, Yang Zhilin, founder of Moonshot AI (Dark Side of the Moon), revealed at a press communication meeting that its AI assistant Kimi had more than 36 million monthly active users in October and is still growing at an accelerating pace. He emphasized that improving retention is currently Kimi's core goal. On the same day, Moonshot AI also released its latest math model, k0-math, which will launch within the next week or two and is benchmarked against the two publicly available models in OpenAI's o1 series, o1-mini and o1-preview; it has strong deep-reasoning capabilities. Yang Zhilin said the key capability for the future of AI products, and of AI technology, should be deeper reasoning, which can turn today's simple, short-chain question answering into combinations of longer-chain task operations.
Comment: At the meeting, Yang Zhilin also shared some views on the state of the industry. Recently, Ilya Sutskever, former chief scientist of OpenAI, publicly stated that the gains from scaling up pre-training have plateaued. In Yang Zhilin's view, the development of AI is like riding a swing, switching back and forth between two states: "one is that the algorithms and data are ready, but the computing power is not enough." He believes that from the birth of the Transformer architecture to the emergence of GPT-4, the main tension lay in how to scale up; there was no fundamental problem with algorithms or data. Today, however, scale has reached a certain level, and adding more computing power no longer necessarily solves the problem, because the core issue is the shortage of high-quality data. What is needed now is to change the algorithms and break through the bottleneck, and "good algorithms can unleash the potential of scaling and keep making the model better."
xAI to raise $6 billion and plans to purchase another 100,000 NVIDIA GPUs
On November 16th, it was reported that xAI, Elon Musk's AI company, plans to raise up to $6 billion at a valuation of $50 billion to purchase 100,000 NVIDIA GPUs and build a data center in Memphis. This is likely to have a significant impact on Tesla's progress toward full self-driving capability. xAI is reportedly set to complete the new funding round next week, with $5 billion coming from Middle Eastern sovereign funds and $1 billion from other investors.
Comment: Musk's AI startup xAI announced its founding in July 2023 and, according to its website, aims to "understand the true nature of the universe." xAI previously took only 122 days to build Colossus, the world's largest AI cluster, and the company plans to double it with another 100,000 GPUs, including 50,000 of the more advanced H200.
Musk accuses OpenAI of attempting to monopolize the market for generative artificial intelligence
On November 15th, it was reported that Elon Musk's feud with OpenAI CEO Sam Altman has escalated. In a court filing, he accused OpenAI of attempting to monopolize the generative artificial intelligence market and of sacrificing safety to gain a competitive edge. Late last Thursday, Musk's lawyers wrote in an amended complaint filed with the federal court in Oakland, California: "Microsoft and OpenAI are clearly not content with their monopoly (or near monopoly) in generative artificial intelligence, and are now actively trying to eliminate competitors such as xAI by making investors promise not to fund them."
Comment: OpenAI has not yet responded. Musk founded xAI last year, making it a competitor to OpenAI, and has publicly challenged OpenAI several times since. To support his claim of anti-competitive behavior, Musk's filing also points out that OpenAI is poaching AI talent with high salaries to drain competitors' talent pools; its salary bill for just 1,500 employees is expected to reach $1.5 billion.
OpenAI co-founder returns
On November 13th, OpenAI co-founder Greg Brockman announced in a social media post that he has returned to the artificial intelligence startup after a three-month absence from his role as president. He posted: "The longest vacation of my life is over; back to building OpenAI." In August of this year, Greg Brockman had announced he would take leave until the end of the year, which sparked speculation that he would resign from OpenAI.
Comment: Before Greg Brockman's return, several OpenAI executives had resigned one after another, including former chief technology officer Mira Murati, co-founder John Schulman, and co-founder Ilya Sutskever. For now, though, no clear fallout from the string of departures is visible, and Greg Brockman's return strengthens the management team.
Moore Threads initiates listing process
On November 12th, Moore Threads Intelligent Technology (Beijing) Co., Ltd. (hereinafter "Moore Threads") completed registration of its listing-guidance filing with the Beijing Securities Regulatory Bureau, starting the A-share listing process; the sponsoring institution is CITIC Securities. Moore Threads' AI computing chips include the MTT S2000, MTT S3000, and MTT S4000, with FP32 compute of 10.6 TFLOPS, 15.2 TFLOPS, and 25 TFLOPS respectively; the MTT S4000 is the latest, released in December 2023.
Comment: AI chip companies Suiyuan Technology (Enflame) and Biren Technology have also previously filed for IPO guidance. Moore Threads was founded in 2020 and is the youngest of the chip companies preparing for IPO, but may have the highest valuation: on the Hurun Global Unicorn List 2024, Moore Threads ranks 261st with a valuation of 25.5 billion yuan, while Suiyuan and Biren rank 482nd and 495th respectively. By comparison, the FP32 compute of Moore Threads' AI chips remains below that of Nvidia's A100 and H100. On GPU clusters, Moore Threads is catching up with the world's leading players, moving from thousand-card clusters to ten-thousand-card clusters.
OpenAI shares AI data center construction plan
It is reported that OpenAI has shared with US government officials a plan for building an artificial intelligence data center expected to consume 5 gigawatts of electricity (1 gigawatt equals 1 million kilowatts), five times the size of the data centers currently under development. OpenAI is also calling for expanded energy capacity for data centers to secure its lead in artificial intelligence, and recommends that the government accelerate the approval process for AI data centers.
Comment: OpenAI's concept is somewhat similar to Stargate. Earlier this year, it was reported that OpenAI was in talks over a massive global data center project worth up to $100 billion, including an AI supercomputer called Stargate, with the project expected to start in 2028. OpenAI has been working hard to expand its computing power and secure more energy.
Tencent management claims that AI brings tangible benefits to the company
On November 13th, Tencent Holdings released its financial report for the third quarter of 2024. According to the report, Tencent's third-quarter revenue was 167.193 billion yuan, up 8% year-on-year, and its operating profit (non-IFRS) was 61.274 billion yuan, up 19% year-on-year. On the earnings call, management said AI has brought tangible benefits to the company, and a major focus now is using AI for better content recommendation and ad targeting; improving targeting accuracy has great potential and directly benefits revenue growth. Management also noted, however, that compared with the AI revenue of some American companies, Tencent still has room to grow in AI: "There is not yet a very large enterprise market in China, while many overseas companies have already started applying AI, for example to improve business efficiency. Our AI revenue is much lower than that of some American cloud service providers."
Comment: Beyond domestic companies adopting AI more slowly than their American counterparts, Tencent management also said that few AI startups in China are buying large amounts of computing power, so growing the cloud business will not be easy and has to be done step by step. Tencent has previously signaled that expanding the application of large models requires patience. In September, Tang Daosheng, Senior Executive Vice President of Tencent Group and CEO of the Cloud and Smart Industries Group, said that many people initially believed models would quickly change the world and some later turned pessimistic; in fact, neither "overestimating progress in the short term" nor "underestimating impact in the long term" is advisable.
Amazon invests $110 million to promote AI research on Trainium chips
On November 13th, Amazon Web Services (AWS) announced a major investment program called "Build on Trainium", which will provide its latest AI computing power to researchers free of charge, signaling the company's intention to compete directly with Nvidia in artificial intelligence. The move aims to attract more researchers to computing power based on Amazon's Trainium chips, challenging Nvidia's current dominance of the market. Amazon added that any AI advances produced within the program will be released as open source, so researchers and developers can continue to build on them.
Comment: Amazon also attaches great importance to AI; in August it announced a $4 billion investment in OpenAI competitor Anthropic. AWS Trainium is reportedly a custom machine learning chip designed specifically for deep learning training and inference. As part of the "Build on Trainium" initiative, Amazon has built a research UltraCluster of up to 40,000 Trainium chips, optimized for AI's distinctive workloads and computing structures.
The Beatles use AI to restore "Now And Then", earning two Grammy nominations
On November 9th, the nominations for the 67th Grammy Awards were announced. More than 50 years after disbanding, the legendary band The Beatles earned nominations for their final song "Now and Then" in the Record of the Year and Best Rock Performance categories with the help of AI, making it the first AI-assisted song to receive a Grammy nomination.
This widely acclaimed track was released at the end of last year, and the story of its creation is quite remarkable. "Now And Then" was originally a demo recorded by John Lennon in the late 1970s but was never fully completed. In 2022, director Peter Jackson and his sound engineers used machine learning algorithms to separate John Lennon's voice from the original recording and restored and enhanced it with artificial intelligence, allowing the other members of the band to take part in completing the song.
Comment: Although "Now And Then" was completed with the help of machine learning, it still complies with the Grammys' AI rules. The current guidelines stipulate that only human creators are eligible to submit for consideration, nomination, or awards, but works containing elements of artificial intelligence are eligible in applicable categories.
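For readers curious how this kind of voice separation works in principle, below is a minimal sketch using classical spectral masking (in the style of REPET-SIM) with librosa. It is emphatically not the neural-network pipeline Peter Jackson's team actually used; the input file name "demo.wav", the 2-second filter width, and the masking margin are illustrative assumptions, and the point is only to show the separate-then-reconstruct idea.

```python
# Minimal vocal/accompaniment separation sketch via spectral masking.
# Illustrative only; requires librosa, numpy, and soundfile.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("demo.wav", sr=None)          # hypothetical input file
S_full, phase = librosa.magphase(librosa.stft(y))  # magnitude and phase

# Estimate the repeating (instrumental) part by nearest-neighbor filtering:
# spectrogram frames that recur elsewhere in the song are treated as
# accompaniment, while non-repeating energy is treated as the voice.
S_bg = librosa.decompose.nn_filter(
    S_full,
    aggregate=np.median,
    metric="cosine",
    width=int(librosa.time_to_frames(2, sr=sr)),
)
S_bg = np.minimum(S_full, S_bg)

# Soft mask: energy not explained by the repeating background is vocals.
mask_vocals = librosa.util.softmask(S_full - S_bg, 10 * S_bg, power=2)
vocals = librosa.istft(mask_vocals * S_full * phase)

sf.write("vocals_estimate.wav", vocals, sr)
```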