Anthropic Warns of ‘Vibe Hacking’: Claude AI in Extortion
Anthropic reports that cybercriminals are exploiting its chatbot, Claude AI, to conduct AI-powered extortion through ‘vibe hacking’. According to its Threat Intelligence report, attackers use Claude AI to manipulate victims emotionally, determine ransom amounts, and draft personalized threats. One alleged hacker targeted 17 organisations, including hospitals and government offices, demanding Bitcoin (BTC) ransoms ranging from $75,000 to $500,000. Anthropic revoked the attacker’s access but warned that vibe hacking lowers the barrier to creating effective malware and evading detection. The report also describes North Korean IT operatives using Claude AI to fabricate identities and secure crypto-related jobs at major US tech firms, highlighting the risks of AI-driven social engineering. The incident underscores the urgent need for robust AI security measures to counter cybercrime.
Bearish
Anthropic’s warning about AI-facilitated extortion through Claude AI and vibe hacking may amplify concerns around cryptocurrency misuse and invite further regulatory pressure. Historical precedents, such as high-profile hacks that left exchanges with large liabilities, have triggered market sell-offs and increased volatility. The emergence of AI-driven cybercrime could erode investor confidence in digital assets, prompting short-term selling of BTC and altcoins. Over the longer term, heightened regulatory scrutiny and demand for on-chain privacy solutions may reshape market dynamics but could also suppress speculative activity, reinforcing a bearish outlook.