AI Cybercrime Study Finds ChatGPT Mostly Fuels Spam, Not Hackers
A Cambridge-led study of AI adoption in cybercrime finds that "AI isn't taking the job" of turning ordinary hackers into superhackers. Researchers analyzed 97,895 cybercrime forum threads posted after ChatGPT's launch (Nov 2022) and classified 97.3% of them as "other," i.e., not focused on using AI for crime.
The biggest measurable AI-driven activity was not advanced hacking. Instead, criminals used ChatGPT-like tools for mass-produced SEO spam, romance scams, and AI-generated nude image services. "Dark AI" products (e.g., WormGPT/FraudGPT) were often dismissed in forums as marketing hype or unreliable, and jailbreaks for mainstream models proved short-lived.
The study also contrasts "vibe hacking" claims with observed underground behavior. Vibe coding was real, but mostly limited to autocomplete-style assistance for already-skilled coders; low-skill actors stuck to pre-made scripts. The authors note that guardrails on AI systems appeared effective, and that AI-assisted coding may instead raise risks such as insecure code and supply-chain vulnerabilities.
Finally, the paper highlights potential economic fallout: generative AI job cuts could push some legitimate developers into the underground, worsening fraud and cybercrime activity over time.
For traders, the key takeaway is that the “AI hacker boom” narrative looks overstated in the short term, while scam-driven demand and labor-market stress could still shape broader risk sentiment.
Neutral
The direct impact of this news on the crypto market is limited. The study's conclusions mainly concern how AI is actually used in underground crime: largely low-complexity, high-volume activity such as SEO spam and scam-content generation, rather than large-scale, technically systemic damage from "AI super-hackers." Unlike a protocol upgrade, a sudden regulatory move, or an exchange security incident, it is therefore unlikely to immediately change the fundamentals or liquidity of major coins.
It could still influence sentiment indirectly. On one hand, if expectations of an "AI hacker boom" were overblown, the market may see a short-term cooling of that narrative (neutral-leaning). On the other hand, the job cuts and underground scam traffic the paper highlights could expand criminal activity over the medium to long term, strengthening demand for compliance, KYC, on-chain risk controls, and anti-fraud tools, which could mildly support related risk assets or attract regulatory attention.
Historically, when "next-generation automated attacks" were hyped by media but actual deployments turned out to be mostly templated scams, markets did not see sharp moves; the effect was more a narrative-level repricing of risk appetite than an immediate revaluation of on-chain or industry fundamentals.