
OpenAI Bans ChatGPT Accounts Used by Nation-State Hackers

  • Rakshit Sethi
  • Jun 20
  • 3 min read

“The misuse of AI for cyber offense is real — but so is the defense.” — OpenAI Threat Intelligence Team


Introduction

In a decisive move to protect the integrity of its platform, OpenAI has banned ChatGPT accounts linked to hackers from Russia, China, Iran, North Korea, and other countries after discovering their involvement in cyber operations and disinformation campaigns.


These malicious actors used the AI tool to support malware development, automate social engineering, conduct reconnaissance, and even influence global political narratives.

Let’s break down what happened, the methods used, and why this matters for cybersecurity professionals today.

Russian Threat Actor: “ScopeCreep” Malware Campaign

A Russian-speaking threat group used ChatGPT to assist in developing and refining malware as part of a campaign dubbed ScopeCreep.

Objectives

  • Develop and debug Windows-based malware

  • Set up command-and-control (C2) infrastructure

  • Evade detection via PowerShell scripting

 

Techniques Used

  • Temporary Email Accounts: created one-time-use accounts to make single queries

  • Go-based Malware: written in Go and refined incrementally using AI

  • AI Assistance: debugged HTTPS requests, modified Windows Defender exclusions, and integrated the Telegram API

“The malware used ShellExecuteW to escalate privileges and inserted timing delays to avoid detection.”
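
For defenders, one concrete takeaway is auditing Windows Defender exclusions, since ScopeCreep reportedly modified them to stay hidden. Below is a minimal Python sketch that lists configured exclusion paths from the standard registry location; reading the key may require elevation, and this is an illustrative audit script, not tooling from OpenAI's report.

```python
import winreg

# Standard registry key where Microsoft Defender stores path exclusions;
# each value *name* under this key is an excluded path.
EXCLUSIONS_KEY = r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths"

def list_defender_path_exclusions():
    """Return configured exclusion paths, or [] if none are readable."""
    paths = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, EXCLUSIONS_KEY) as key:
            index = 0
            while True:
                try:
                    name, _value, _type = winreg.EnumValue(key, index)
                    paths.append(name)
                    index += 1
                except OSError:  # no more values to enumerate
                    break
    except FileNotFoundError:
        pass  # key absent: no path exclusions configured
    return paths

if __name__ == "__main__":
    for path in list_defender_path_exclusions():
        print("Defender exclusion:", path)  # review anything unexpected
```

Any exclusion you did not create yourself, especially one covering a temp or download directory, deserves a closer look.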


Delivery Mechanism

  • Distribution: delivered as a fake version of Crosshair X, a gaming utility

  • Execution: retrieved additional payloads, established persistence, harvested credentials and cookies, and sent alerts via Telegram
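
Because the dropper posed as a legitimate gaming utility, verifying a download's checksum before running it blunts this delivery tactic. Here is a minimal Python sketch; the KNOWN_GOOD placeholder is hypothetical and would come from the publisher's published checksum.

```python
import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large installers need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: substitute the checksum published by the software vendor.
KNOWN_GOOD = "<publisher-provided sha256>"

if __name__ == "__main__":
    actual = sha256_of(sys.argv[1])
    print(actual)
    print("OK" if actual == KNOWN_GOOD else "MISMATCH: do not run this file")
```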

Chinese APTs: APT5 and APT15 Use AI for Cyber Infrastructure

Two China-linked groups — APT5 and APT15 — were also identified using ChatGPT.

Observed Activities

  • Building Linux packages for offline use

  • Modifying scripts for network infrastructure

  • Creating brute-force tools for FTP servers (a defensive detection sketch follows this list)

  • Managing Android fleets for automated social posting
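
On the defensive side of that brute-forcing, counting failed logins per source address in FTP server logs surfaces such tools quickly. The sketch below assumes vsftpd-style log lines; the regex and the alert threshold are illustrative and should be tuned to your environment.

```python
import re
from collections import Counter

# Matches vsftpd-style failure lines, e.g.:
#   ... [ftpuser] FAIL LOGIN: Client "203.0.113.5"
FAIL_RE = re.compile(r'FAIL LOGIN: Client "(?P<ip>[\d.]+)"')

def failed_logins_by_ip(log_path):
    """Count failed FTP login attempts per source IP."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAIL_RE.search(line)
            if match:
                counts[match.group("ip")] += 1
    return counts

if __name__ == "__main__":
    for ip, n in failed_logins_by_ip("/var/log/vsftpd.log").most_common():
        if n >= 10:  # illustrative threshold for "likely brute force"
            print(f"{ip}: {n} failed logins")
```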


Influence Capabilities

Created scripts to auto-post content on:

  • TikTok

  • Facebook

  • X (formerly Twitter)

  • Instagram


“Some of the requests included modifying firewalls, troubleshooting system configs, and performing recon on satellite communication systems.”
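
Platform defenders counter this kind of automation partly by hunting for near-identical text posted across many accounts. The toy Python sketch below is not any platform's real detection pipeline; the sample corpus and threshold are invented purely for illustration.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy corpus standing in for posts harvested across suspect accounts.
posts = [
    ("account_a", "Great news for the region, everyone should read this!"),
    ("account_b", "Great news for the region! Everyone should read this."),
    ("account_c", "Completely unrelated post about cooking."),
]

SIMILARITY_THRESHOLD = 0.9  # tune on real data

for (acct1, text1), (acct2, text2) in combinations(posts, 2):
    ratio = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if ratio >= SIMILARITY_THRESHOLD:
        print(f"possible coordination: {acct1} / {acct2} ({ratio:.2f})")
```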

Iranian Influence Operations: Storm-2035

The Iran-affiliated cluster Storm-2035 used ChatGPT to generate politically charged content in English and Spanish, promoting narratives such as:

  • Support for Palestinian rights

  • Advocacy for Scottish independence

  • Praise for Iran’s military and diplomacy

 

These messages were distributed by inauthentic personas on platforms like X, targeting audiences in the U.S., U.K., Ireland, and Venezuela.

Other Identified Malicious Clusters

OpenAI flagged a variety of other coordinated campaigns using ChatGPT:


  • Sneer Review (China): bulk social media post generation in English, Chinese, and Urdu

  • Operation High Five (Philippines): political comments in English and Taglish on TikTok and Facebook

  • Operation VAGue Focus (China): AI-generated posts from fake journalists

  • Operation Uncle Spam (China): content supporting both sides of U.S. political issues

  • Operation Helgoland Bite (Russia): Russian-language content targeting German elections

  • Operation Wrong Number (Cambodia, likely China-linked): recruitment messages for task scams in 6+ languages

“Task scams used ChatGPT to lure victims with offers of high pay for simple tasks, then demanded upfront fees — a hallmark of fraud.”

OpenAI’s Response: Responsible AI and Proactive Defense

OpenAI emphasized that most malicious queries were low-complexity, involving tasks like debugging or minor script rewriting. However, the risks of LLMs being weaponized for cyber and influence ops are clear.

To counter this, OpenAI:

  • Banned identified accounts

  • Collaborated with Microsoft Threat Intelligence

  • Issued public reports to foster transparency and community defense

 

“Our models helped catch bad actors as much as they helped those actors attempt bad things.” — OpenAI

Conclusion

This crackdown reinforces that AI security is not just about the technology — it’s about the ecosystem.

As LLMs grow in power and accessibility, security teams must:

  • Monitor abuse patterns

  • Understand how AI can be misused

  • Collaborate on threat intelligence

  • Apply strong usage policies for AI in development environments (a minimal policy-gate sketch follows below)
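
On that last point, even a lightweight gate in front of internal AI usage creates an audit trail. The sketch below is hypothetical: guarded_prompt, the deny-list patterns, and the log format are invented for illustration and are not an OpenAI API.

```python
import logging
import re

logging.basicConfig(filename="llm_usage.log", level=logging.INFO)

# Illustrative deny-list; a real policy would be far more nuanced.
BLOCKED_PATTERNS = [
    re.compile(r"disable (windows )?defender", re.I),
    re.compile(r"bypass (edr|antivirus)", re.I),
]

def guarded_prompt(user, prompt, send_to_model):
    """Log every prompt and refuse ones matching the deny-list."""
    logging.info("user=%s prompt=%r", user, prompt)
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        logging.warning("blocked out-of-policy prompt from %s", user)
        return "Request blocked by usage policy."
    return send_to_model(prompt)  # e.g., your model client call

if __name__ == "__main__":
    echo_model = lambda p: f"(model reply to: {p})"
    print(guarded_prompt("dev1", "Summarize this PowerShell script", echo_model))
    print(guarded_prompt("dev2", "How do I bypass EDR hooks?", echo_model))
```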

 

 

