
Analysis: Back-to-back security incidents at Anthropic and OpenAI raise concerns about the safety of AI models

2026-04-25 08:45:54

According to The Information, Anthropic and OpenAI have both recently experienced security incidents, raising market concerns about the safety of AI models themselves.

Anthropic is currently investigating potential unauthorized user access to its Claude Mythos model. Almost simultaneously, OpenAI was reported to have accidentally exposed several unreleased models in its Codex application. Industry observers say these incidents have intensified scrutiny of AI companies' security governance and show that security practices have yet to keep pace with the technology's rapid development. Analysts note that even AI model providers focused on cybersecurity face significant security challenges of their own: as AI is increasingly used to defend against cyberattacks, platform security and access control have themselves become critical risk points.

ChainCatcher Building the Web3 world with innovations.