
CoinTelegraph 2025-05-05 03:30:34

OpenAI ignored experts when it released overly agreeable ChatGPT

OpenAI says it ignored the concerns of its expert testers when it rolled out an update to its flagship ChatGPT artificial intelligence model that made it excessively agreeable.

The company released an update to its GPT-4o model on April 25 that made it "noticeably more sycophantic," which it then rolled back three days later due to safety concerns, OpenAI said in a May 2 postmortem blog post.

The ChatGPT maker said its new models undergo safety and behavior checks, and its "internal experts spend significant time interacting with each new model before launch" to catch issues missed by other tests.

During the latest model's review process before it went public, OpenAI said that "some expert testers had indicated that the model's behavior 'felt' slightly off," but it decided to launch anyway "due to the positive signals from the users who tried out the model."

"Unfortunately, this was the wrong call," the company admitted. "The qualitative assessments were hinting at something important, and we should've paid closer attention. They were picking up on a blind spot in our other evals and metrics."

OpenAI CEO Sam Altman said on April 27 that the company was working to roll back the changes that made ChatGPT too agreeable. Source: Sam Altman

Broadly, text-based AI models are trained by being rewarded for giving responses that are accurate or rated highly by their trainers. Some rewards are given a heavier weighting, affecting how the model responds.

OpenAI said introducing a user feedback reward signal weakened the model's "primary reward signal, which had been holding sycophancy in check," tipping it toward being more obliging (a toy illustration of this reward mixing appears at the end of this article). "User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw," it added.

OpenAI is now checking for suck-up answers

After the updated AI model rolled out, ChatGPT users complained online about its tendency to shower praise on any idea it was presented with, no matter how bad, which led OpenAI to concede in an April 29 blog post that it "was overly flattering or agreeable."

For example, one user told ChatGPT they wanted to start a business selling ice over the internet, which amounted to selling plain old water for customers to refreeze. Source: Tim Leckemby

In its latest postmortem, OpenAI said such behavior from its AI could pose a risk, especially concerning issues such as mental health. "People have started to use ChatGPT for deeply personal advice — something we didn't see as much even a year ago," OpenAI said. "As AI and society have co-evolved, it's become clear that we need to treat this use case with great care."

Related: Crypto users cool with AI dabbling with their portfolios: Survey

The company said it had discussed sycophancy risks "for a while," but the issue hadn't been explicitly flagged in internal testing, and it didn't have specific ways to track sycophancy. Now, it will look to add "sycophancy evaluations" by adjusting its safety review process to "formally consider behavior issues," and it will block a model's launch if such issues appear (a sketch of such a gate also appears at the end of this article).

OpenAI also admitted that it didn't announce the update because it expected it "to be a fairly subtle update," a practice it has vowed to change. "There's no such thing as a 'small' launch," the company wrote. "We'll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."

AI Eye: Crypto AI tokens surge 34%, why ChatGPT is such a kiss-ass
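The reward-mixing dynamic described above can be pictured with a toy example. The sketch below is not OpenAI's code: the signal names, weights, and scores are all hypothetical. It only shows how putting weight on a raw user-feedback signal can let an agreeable-but-wrong response outscore a more measured one.

```python
# Toy sketch (hypothetical, not OpenAI's implementation): a candidate
# response is scored as a weighted sum of reward signals. Increasing the
# weight on user feedback can flip which response "wins" training updates.

def combined_reward(primary: float, user_feedback: float,
                    w_primary: float = 1.0, w_feedback: float = 0.0) -> float:
    """Weighted sum of reward signals for a candidate response."""
    return w_primary * primary + w_feedback * user_feedback

# Illustrative scores: the primary signal penalizes the sycophantic answer,
# but user feedback (e.g., thumbs-up rates) tends to favor it.
sycophantic = {"primary": -0.2, "user_feedback": 0.9}
measured = {"primary": 0.6, "user_feedback": 0.0}

for w_fb in (0.0, 0.5, 1.0):
    syc = combined_reward(**sycophantic, w_feedback=w_fb)
    mea = combined_reward(**measured, w_feedback=w_fb)
    winner = "sycophantic" if syc > mea else "measured"
    print(f"feedback weight {w_fb:.1f}: "
          f"sycophantic={syc:+.2f}, measured={mea:+.2f} -> {winner} wins")
```

With zero feedback weight the measured answer scores higher, but at full weight the sycophantic one pulls ahead, which mirrors the shift OpenAI described when the new signal weakened the primary one.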
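The launch-blocking process can likewise be pictured as a simple gate: run behavior evaluations, compare scores against thresholds, and hold the release if any fail. Everything below, including the eval name, scores, and threshold, is a hypothetical stand-in; OpenAI has not published how its sycophancy evaluations work.

```python
# Toy sketch of a launch gate that blocks a release when a behavior eval
# fails. All values here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str
    score: float      # 0.0 = never flatters bad ideas, 1.0 = always agrees
    threshold: float  # maximum acceptable score

    @property
    def passed(self) -> bool:
        return self.score <= self.threshold

def launch_gate(results: list[EvalResult]) -> bool:
    """Return False (block launch) if any behavior eval exceeds its threshold."""
    failures = [r for r in results if not r.passed]
    for r in failures:
        print(f"BLOCKED by {r.name}: score {r.score:.2f} > threshold {r.threshold:.2f}")
    return not failures

# Example: a model that flatters 70% of deliberately bad ideas fails the gate.
results = [EvalResult("sycophancy_eval", score=0.70, threshold=0.25)]
print("launch approved" if launch_gate(results) else "launch held")
```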
