첫 페이지 Stocks Forefront 본문

Enterprises weighing the risks and benefits of generative artificial intelligence (AI) technology are facing the challenge that social media platforms have long been striving to address: preventing technology from being maliciously exploited.
Drawing on the experience of these platforms, business technology leaders are starting to combine software based "guardrails" with manual auditors to limit their use within prescribed limits.
AI models such as OpenAI's GPT-4 have been trained through extensive internet content. With the correct prompts, large language models can generate a large amount of toxic content inspired by the darkest corners of the internet. This means that content auditing needs to occur at the source (i.e., when AI models are trained) and their heavily generated outputs.
TurboTax software developer Intuit Inc. (INTU), headquartered in Mountain View, California, recently released Intuit Assist, a generative AI based assistant that provides financial advice to customers. At present, this assistant is only available for a limited number of users, relying on large language models trained on internet data and fine-tuning models based on Intuit's own data.
Intuit Chief Information Security Officer Atticus Tysen
The company's Chief Information Security Officer, Atticus Tysen, stated that the company is currently planning to form a team of eight full-time auditors to review the content entering and exiting this large language model driven system, including helping to prevent employees from leaking sensitive company data.
Tysen said, "When we try to provide truly meaningful and specific answers around financial issues, we don't know how effective these models are. So for us, adding manpower to this loop is very important
Tysen stated that Intuit's self-developed content review system uses a separate large language model to automatically label content it deems offensive, such as profanity, and is currently in its early stages. He said, for example, if a customer asks questions unrelated to financial guidelines or attempts to launch a prompt injection attack, the customer will also be automatically banned by the system. These attacks may include inducing chatbots to disclose customer data or the way they operate.
Then, the manual auditor will be reminded to review the text and be able to send it to the model building team, thereby improving the system's ability to block or identify harmful content. If Intuit's customers believe that their prompt words have been incorrectly marked, or if they believe that the AI assistant has generated inappropriate content, they can also notify the company.
Although there is currently no company specializing in AI content auditing, Intuit is supplementing its workforce with contractors trained in social media content auditing. Like the so-called prompt word engineer, AI content reviewers may become part of a new type of job opportunity created by AI.
Tysen said that the ultimate goal of Caijie Group is to have its AI audit model complete most of the content audit work for its AI assistants, thereby reducing the harmful content that humans may come into contact with. But he said that at present, generative AI is not enough to completely replace manual auditors.
Social media companies such as Facebook and Instagram's parent company Meta have long relied on outsourced human auditors to review and filter offensive posts on their platforms, providing best practices and warnings for the future development path of AI auditing.
In recent years, AI companies like OpenAI have recruited personnel to review and classify harmful text obtained online and generated by AI itself. These classified paragraphs are used to create AI security filters for ChatGPT, preventing users of the chat robot from accessing similar content.
OpenAI also collaborated with its partner and biggest supporter Microsoft to develop what Microsoft calls Azure AI Content Safety service, which utilizes AI to automatically detect unsafe images and text, including hate, violence, sexual, and self harm content. Microsoft is using its security services to prevent harmful content from appearing in its generative AI tools, including GitHub Copilot and Copilot for Office software.
Eric Boyd, Vice President of Microsoft AI Platform Enterprise, said, "These AI systems are indeed quite powerful. With the right instructions, they can do various things
Other technology leaders are studying the potential of manual auditing or investing in third-party software like Microsoft. Analysts say that content security filters will soon become a necessary condition for businesses to register and use generative AI tools sold by any supplier.
Syneos Health, the Chief Information and Digital Officer of Syneos Health, a biopharmaceutical services company located in Morrisville, North Carolina, has stated that the company will consider hiring content auditors at some point next year. During this period, the training data used by the AI model will be reviewed one by one through manual feedback.
Pickett said, "We will do this in a precise surgical manner, but in a broader sense, some level of review and supervision has many benefits
Forrester analyst Brandon Purcell focuses on responsible and ethical AI usage issues, stating that people are increasingly interested in "responsible AI", with the aim of making AI algorithms more transparent or auditable and reducing the unintended negative consequences of AI.
He said, "Everyone is interested in this because they realize that if they don't do it well, they will face reputational, regulatory, and revenue risks
您需要登录后才可以回帖 登录 | Sign Up

本版积分规则

安全到达彼岸依 新手上路
  • Follow

    0

  • Following

    0

  • Articles

    1