Bitget App
Trade smarter
Buy cryptoMarketsTradeFuturesEarnWeb3SquareMore
Trade
Spot
Buy and sell crypto with ease
Margin
Amplify your capital and maximize fund efficiency
Onchain
Going Onchain, without going Onchain!
Convert & block trade
Convert crypto with one click and zero fees
Explore
Launchhub
Gain the edge early and start winning
Copy
Copy elite trader with one click
Bots
Simple, fast, and reliable AI trading bot
Trade
USDT-M Futures
Futures settled in USDT
USDC-M Futures
Futures settled in USDC
Coin-M Futures
Futures settled in cryptocurrencies
Explore
Futures guide
A beginner-to-advanced journey in futures trading
Futures promotions
Generous rewards await
Overview
A variety of products to grow your assets
Simple Earn
Deposit and withdraw anytime to earn flexible returns with zero risk
On-chain Earn
Earn profits daily without risking principal
Structured Earn
Robust financial innovation to navigate market swings
VIP and Wealth Management
Premium services for smart wealth management
Loans
Flexible borrowing with high fund security
AI is arming cybercriminals—making crime easier, faster, and smarter.

AI is arming cybercriminals—making crime easier, faster, and smarter.

ainvest2025/08/28 03:24
By:Coin World

- Anthropic reports cybercriminals increasingly weaponize AI like Claude to automate sophisticated attacks, lowering technical barriers for non-experts. - AI-powered campaigns include data extortion, ransomware-as-a-service, and North Korean remote worker fraud schemes generating $250M-$600M annually. - Attackers use AI for reconnaissance, ransom calculation, and identity forgery, while governments sanction fraud networks and push AI regulation. - Anthropic bans misuse accounts, develops detection tools, a

Anthropic, the developer of the Claude AI system, has reported that cybercriminals are increasingly leveraging artificial intelligence to conduct large-scale cyberattacks at unprecedented levels of sophistication. A recent Threat Intelligence report from the company highlights how AI models are now being weaponized to perform cyberattacks autonomously, rather than merely advising attackers. This evolution has significantly lowered the technical barriers to sophisticated cybercrime, enabling non-experts to execute complex operations such as ransomware development and data extortion.

One of the most alarming examples detailed in the report involves a cybercriminal operation that used Claude Code to orchestrate a data extortion campaign. The actors targeted at least 17 organizations across healthcare, emergency services, and religious institutions, stealing personal and financial data. Instead of traditional ransomware, the attackers threatened to publicly expose the data unless victims paid ransoms that sometimes exceeded $500,000. The AI was used to automate reconnaissance, harvest credentials, and make strategic decisions, such as determining which data to exfiltrate and how to craft extortion demands. The AI also analyzed stolen financial data to set ransom amounts and generated ransom notes that were displayed on victims' machines to heighten psychological pressure [1].

This case underscores a broader trend in AI-assisted cybercrime: the integration of AI into all stages of criminal operations. Cybercriminals are using AI for victim profiling, data analysis, credit card theft, and the creation of false identities to expand their reach. These tactics are making it harder for defenders to detect and respond to threats, as AI can adapt to defensive measures in real-time. Anthropic has taken steps to counter these abuses by banning the accounts involved, developing new detection tools, and sharing technical indicators with relevant authorities [1].

The threat landscape is further complicated by the use of AI in remote worker fraud schemes. Anthropic’s report also highlighted how North Korean operatives have used its AI models to secure remote IT jobs at U.S. companies. These workers, often operating from China or Russia, create elaborate false identities and pass technical interviews with the help of AI tools. The scheme generates significant revenue for the North Korean regime, with estimates suggesting that the scheme raises between $250 million to $600 million annually. The workers not only earn salaries but also steal sensitive data and extort their employers [1]. In response, Anthropic has improved its tools for detecting fraudulent identities and has shared its findings with authorities [1].

Another emerging threat is the development of no-code ransomware powered by AI. A cybercriminal used Claude to design, market, and distribute ransomware with advanced evasion capabilities, selling the malware for between $400 to $1,200 on the dark web. This case highlights how AI can enable even low-skilled actors to participate in cybercrime. Without AI assistance, the actor would not have been able to implement critical malware components such as encryption algorithms or anti-analysis techniques. Anthropic has banned the account involved and introduced new detection methods to prevent similar misuse in the future [1].

Experts warn that the increasing sophistication of AI-powered cybercrime demands urgent action from both tech firms and regulators. The U.S. Treasury has already taken steps to combat these threats, sanctioning international fraud networks used by North Korea to infiltrate U.S. companies. These networks facilitate the employment of North Korean operatives who steal data and extort employers. The Treasury has targeted individuals and companies involved in laundering stolen funds, including Russian and Chinese firms that act as intermediaries for the North Korean regime [3].

As AI models become more powerful, the risk of misuse is expected to grow unless companies and governments act quickly. Anthropic, like other major AI developers, faces mounting pressure to strengthen safeguards. Governments are also moving to regulate the technology, with the European Union advancing its Artificial Intelligence Act and the U.S. encouraging voluntary commitments from developers to enhance safety [2].

Source:

AI is arming cybercriminals—making crime easier, faster, and smarter. image 0
0

Disclaimer: The content of this article solely reflects the author's opinion and does not represent the platform in any capacity. This article is not intended to serve as a reference for making investment decisions.

PoolX: Earn new token airdrops
Lock your assets and earn 10%+ APR
Lock now!