Companies Want Human Moderators to Keep AI Applications in Check
安全到达彼岸依
Published on 2023-10-26 07:57:16
Enterprises weighing the risks and benefits of generative artificial intelligence (AI) face a challenge that social media platforms have long grappled with: preventing the technology from being maliciously exploited.
Drawing on the experience of those platforms, business technology leaders are starting to combine software-based "guardrails" with human moderators to keep these tools within prescribed limits.
AI models such as OpenAI's GPT-4 are trained on vast amounts of internet content. With the right prompts, large language models can generate toxic content inspired by the darkest corners of the internet, which means content moderation has to happen both at the source, when AI models are trained, and on the large volumes of output they generate.
TurboTax software developer Intuit Inc. (INTU), headquartered in Mountain View, California, recently released Intuit Assist, a generative AI-based assistant that provides financial advice to customers. For now the assistant is available only to a limited number of users; it relies on large language models trained on internet data and fine-tuned on Intuit's own data.
Intuit Chief Information Security Officer Atticus Tysen
The company's chief information security officer, Atticus Tysen, said Intuit plans to form a team of eight full-time moderators to review the content flowing into and out of this large-language-model-driven system, in part to help prevent employees from leaking sensitive company data.
Tysen said, "When we try to provide truly meaningful and specific answers around financial issues, we don't know how effective these models are. So for us, adding manpower to this loop is very important
Tysen said Intuit's in-house content-review system, which uses a separate large language model to automatically flag content it deems offensive, such as profanity, is still in its early stages. He said the system will also automatically block a customer who, for example, asks questions unrelated to financial guidance or attempts a prompt-injection attack. Such attacks can include trying to coax the chatbot into disclosing customer data or revealing how it operates.
A human moderator is then alerted to review the text and can pass it along to the model-building team, improving the system's ability to block or identify harmful content. Intuit customers can also notify the company if they believe their prompts have been incorrectly flagged or that the AI assistant has generated inappropriate content.
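To make the pattern concrete, here is a minimal, hypothetical Python sketch of this kind of two-stage pipeline. The names (moderate, answer, ReviewQueue) and the label set are illustrative assumptions, not Intuit's actual system: a separate classifier model screens each prompt and each reply, and anything blocked is queued for a human moderator whose decisions go back to the model-building team.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical labels a separate "moderation" model might return.
ALLOWED = "allowed"
OFF_TOPIC = "off_topic"
OFFENSIVE = "offensive"
PROMPT_INJECTION = "prompt_injection"

@dataclass
class ReviewQueue:
    """Holds blocked items until a human moderator reviews them."""
    items: List[dict] = field(default_factory=list)

    def add(self, text: str, label: str) -> None:
        self.items.append({"text": text, "label": label, "reviewed": False})

def moderate(text: str, classify: Callable[[str], str], queue: ReviewQueue) -> bool:
    """Return True if the text may pass; otherwise block it and escalate to humans."""
    label = classify(text)      # e.g. a call to a separate classifier LLM
    if label == ALLOWED:
        return True
    queue.add(text, label)      # a human moderator is alerted to review this item
    return False

def answer(prompt: str, generate: Callable[[str], str],
           classify: Callable[[str], str], queue: ReviewQueue) -> str:
    """Screen the incoming prompt, then screen the model's reply as well."""
    if not moderate(prompt, classify, queue):
        return "Sorry, this request can't be processed."
    reply = generate(prompt)
    if not moderate(reply, classify, queue):
        return "Sorry, no suitable answer is available."
    return reply
```

The design choice worth noting is that the classifier only labels text; the blocking and escalation rules live in ordinary application code, which is what lets the queued items and human reviewers' decisions feed back into improving the classifier.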
Although no company yet specializes in AI content moderation, Intuit is supplementing its workforce with contractors trained in social media content moderation. Like so-called prompt engineers, AI content reviewers may become part of a new class of jobs created by AI.
Tysen said Intuit's ultimate goal is to have its AI moderation model handle most of the content-review work for its AI assistants, reducing the amount of harmful content humans are exposed to. But for now, he said, generative AI is not good enough to fully replace human moderators.
Social media companies such as Meta, the parent of Facebook and Instagram, have long relied on outsourced human moderators to review and filter offensive posts on their platforms, offering both best practices and cautionary lessons for the future of AI moderation.
In recent years, AI companies such as OpenAI have recruited people to review and classify harmful text found online or generated by AI itself. These labeled passages are used to build AI safety filters for ChatGPT, preventing users of the chatbot from encountering similar content.
OpenAI has also worked with its partner and biggest backer, Microsoft, on what Microsoft calls the Azure AI Content Safety service, which uses AI to automatically detect unsafe images and text across categories including hate, violence, sexual, and self-harm content. Microsoft uses the safety service to keep harmful content out of its own generative AI tools, including GitHub Copilot and Copilot for its Office software.
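For illustration only, a minimal sketch of how an application might pre-screen text with such a service is shown below. It assumes the publicly available azure-ai-contentsafety Python package; the endpoint, key, and severity threshold are placeholders, and exact response field names may differ between SDK versions.

```python
# Rough sketch of pre-screening text with Azure AI Content Safety, assuming the
# azure-ai-contentsafety Python package (pip install azure-ai-contentsafety).
# Endpoint and key are placeholders; field names may vary by SDK version.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Reject text if any category (hate, violence, sexual, self-harm) exceeds the threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

# Example: only forward prompts (and, similarly, model outputs) that pass the check.
if is_safe("some user prompt"):
    pass  # send the prompt on to the generative model
```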
Eric Boyd, corporate vice president of Microsoft's AI Platform, said, "These AI systems are quite powerful. With the right instructions, they can do all kinds of things."
Other technology leaders are exploring human moderation or investing in third-party software such as Microsoft's. Analysts say content-safety filters will soon become a prerequisite for businesses signing up to use generative AI tools from any vendor.
Larry Pickett, chief information and digital officer of Syneos Health, a biopharmaceutical services company based in Morrisville, North Carolina, said the company will consider hiring content moderators at some point next year. In the meantime, the training data used by its AI models is being reviewed item by item with human feedback.
Pickett said, "We will do this in a precise surgical manner, but in a broader sense, some level of review and supervision has many benefits
Brandon Purcell, a Forrester analyst who focuses on responsible and ethical AI use, said interest in "responsible AI" is growing, with the aim of making AI algorithms more transparent and auditable and reducing AI's unintended negative consequences.
He said, "Everyone is interested in this because they realize that if they don't do it well, they will face reputational, regulatory, and revenue risks