The Dark Side of Conversational AI: How Attackers Are Exploiting ChatGPT and Similar Tools for Violence

In a sobering development that highlights the double-edged nature of artificial intelligence, law enforcement agencies have identified the first documented cases of attackers using popular AI chatbots like ChatGPT to plan and execute violent attacks on U.S. soil. This emerging threat raises critical questions about AI safety, user privacy, and the responsibilities of technology companies in monitoring potentially harmful uses of their platforms.


Two Landmark Cases Define a New Threat Vector

The Palm Springs Fertility Clinic Bombing (May 2025)

In the most recent incident, federal authorities revealed that two men suspected in last month's bombing of a Palm Springs, California fertility clinic used a generative artificial intelligence chat program to help plan the attack.

According to federal court documents, records from an AI chat application show that Guy Edward Bartkus, the primary suspect in the bombing, researched how to make powerful explosions using ammonium nitrate and fuel three days before his accomplice arrived at his house. The attack killed one person and injured four, marking a tragic milestone in the exploitation of AI technology for terrorist purposes.

The Las Vegas Cybertruck Explosion (January 2025)

Just months earlier, Las Vegas police revealed that the highly decorated soldier who detonated a Tesla Cybertruck outside the Trump International Hotel in Las Vegas had used generative AI, including ChatGPT, to help plan the attack. U.S. Army Special Forces Master Sgt. Matthew Alan Livelsberger looked into how much explosive material he would need, where to buy fireworks, and how to buy a phone without providing identifying information.

"This is the first incident that I'm aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device," Sheriff Kevin McMahill stated, calling it "a concerning moment."

The Privacy and Safety Paradox

These incidents illuminate a fundamental challenge facing AI companies: how to balance user privacy with the need to detect and prevent harmful uses of their platforms. The cases raise several critical questions that extend far beyond traditional cybersecurity concerns.

What AI Companies Know About User Queries

When asked whether law enforcement agencies should have been aware of Livelsberger's ChatGPT queries, McMahill said he did not know whether the capability to track how someone uses artificial intelligence exists yet. Las Vegas and New York law enforcement officials have told NBC News that they do not yet have cooperation from AI services that would alert them when someone begins asking what one would need to conduct an attack or build an explosive.

This gap reveals a concerning lack of coordination between AI platforms and law enforcement agencies. Unlike traditional internet searches, which can be monitored through various legal mechanisms, AI chat sessions exist in a relative privacy vacuum that may inadvertently protect bad actors.

The Technical Challenge of Harmful Content Detection

OpenAI, the creator of ChatGPT, issued a statement saying, "We are saddened by this incident and committed to seeing AI tools used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities."

This response highlights a critical limitation: AI systems can provide harmful information while technically following their safety guidelines, as long as the information is publicly available elsewhere on the internet. The challenge lies in distinguishing between legitimate educational queries and those with malicious intent.
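To see why this is hard, consider a deliberately naive example. The keyword list and `screen_query` function below are hypothetical, not a description of any vendor's actual moderation pipeline; they simply illustrate that surface-level matching on publicly available topics cannot separate a student's question from an attacker's.

```python
# Illustrative only: a naive keyword screen, not any vendor's real moderation system.
# It shows why surface-level matching fails to capture intent.

RISKY_TERMS = {
    "ammonium nitrate": 3,
    "detonator": 4,
    "explosive yield": 4,
    "untraceable phone": 2,
}

def screen_query(text: str) -> int:
    """Return a crude risk score based on keyword matches alone."""
    lowered = text.lower()
    return sum(score for term, score in RISKY_TERMS.items() if term in lowered)

# Both queries trigger the same keyword, yet only one is plainly malicious.
student = "Why is ammonium nitrate regulated as a fertilizer in the US?"
attacker = "How much ammonium nitrate and fuel do I need for a powerful explosion?"

print(screen_query(student), screen_query(attacker))  # 3 and 3: identical scores, very different intent
```

Any realistic system would need to weigh conversational context, query sequences, and account signals, which is exactly where the privacy trade-offs discussed below begin.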

The Broader Landscape of AI-Enabled Threats

Growing Sophistication in Terrorist AI Adoption

According to terrorism researchers, multiple domestic and international terrorist organizations and extremist groups across the ideological spectrum have explicitly issued guidance on how to leverage generative AI securely and effectively for content creation. In 2023, the Islamic State released a guide on how to use generative AI securely.

Tech Against Terrorism identified a post on a far-right message board that included a guide to memetic warfare and the use of AI to produce propaganda memes. In April, it became apparent that Islamic State supporters were expressing interest in using AI to further boost the scale and scope of their public content.

The Scale of Potential Impact

Experts warn that "the imminent danger is clear: terrorists will be able to produce more convincing content at scale, in a way they have never been able to do before. Moreover, in a record year of elections taking place around the world, this opportunistic threat overlaps with the spread of disinformation which is already being amplified thanks to Generative AI."

Privacy Implications: Data Retention and Law Enforcement Access

Current Data Practices Create Vulnerabilities

For services like ChatGPT, OpenAI states in its privacy policies that conversations, text inputs, and other user data may be used to improve models, train algorithms, and more. When you input information into ChatGPT, that data is processed and may be stored on OpenAI's servers. The standard version of ChatGPT may use this data for model improvements, though OpenAI has data retention policies in place.

Conversations with ChatGPT bots are stored for a maximum of 30 days, creating a window during which harmful planning activities could theoretically be detected and investigated.
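As a rough illustration of how such a retention window might be enforced, the sketch below purges records older than 30 days from a hypothetical chat-log table. The schema, table name, and scheduling approach are assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical retention job: purge chat records older than 30 days.
# Table and column names are assumed for illustration; this is not OpenAI's implementation.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30

def purge_expired_conversations(db_path: str) -> int:
    """Delete conversation rows whose created_at timestamp is past the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            "DELETE FROM conversations WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cursor.rowcount  # number of purged conversations

# Run daily (e.g., from a scheduler) so nothing lingers past the 30-day window,
# which is also the window in which any investigative value disappears.
```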

The Enterprise Privacy Advantage

For OpenAI API services offered to corporate customers under a commercial agreement, OpenAI specifies they will not use customer data to train or improve any AI systems unless the business customer explicitly opts in to data sharing.

This distinction creates a troubling scenario where bad actors with access to enterprise accounts might operate with greater privacy protections than individuals using consumer versions of AI tools.

Technical Challenges in Detecting Harmful Intent

The Information Availability Problem

Paul Keener, a Cyber Security Strategist with GuidePoint Security, explains that the technology "is designed to provide the best response that it can generate, not always factual but will provide a response that will be most logical." When information about explosives or weapons is readily available on the public internet, AI systems struggle to differentiate between academic curiosity and criminal intent.

Current Safety Measures Fall Short

Research from the Combating Terrorism Center at West Point warns that "terrorists and violent extremists could use these tools to enhance their operations online and in the real world. Large language models have the potential to enable terrorists to learn, plan, and propagate their activities with greater efficiency."

The study notes that despite safety measures, determined actors can often find ways to circumvent content filters and safety guidelines.

The Authentication and Anonymity Challenge

Anonymous Access Enables Abuse

One concerning aspect revealed in the Las Vegas case was that Livelsberger researched "how to buy a phone without providing identifying information," suggesting sophisticated operational security awareness among attackers.

The ease of creating anonymous accounts or using AI services without robust identity verification creates opportunities for bad actors to operate undetected.

Account Credential Compromises

Recent research revealed that more than 225,000 OpenAI credentials were exposed on the dark web, stolen by various strains of infostealer malware. When unauthorized users gain access to ChatGPT accounts, they can view complete chat histories, including any sensitive business data shared with the AI tool.

This highlights how account compromises can expose not just personal information, but potentially criminal planning activities that could aid law enforcement investigations or, conversely, alert criminal networks to ongoing investigations.

Regulatory and Policy Implications

The Compliance Landscape

New AI regulations like the EU AI Act are creating compliance requirements with penalties up to €35 million, with key provisions taking effect in 2025. However, current regulatory frameworks don't adequately address the challenge of balancing user privacy with public safety in AI applications.

The Need for Industry Standards

Privacy experts note that removing PII from training data is "feasible but challenging," specifically highlighting the lack of clear standards for AI privacy protection. The absence of standardized approaches to harmful content detection and user privacy protection creates inconsistencies across platforms.
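The scale of that gap is easy to demonstrate. The sketch below redacts a few common PII patterns from text with regular expressions; the patterns and placeholder tags are assumptions chosen for illustration, and the identifiers they miss (names, addresses, indirect references) are precisely where clearer standards are needed.

```python
# Illustrative PII scrubbing with regexes: feasible for obvious patterns,
# but names, addresses, and indirect identifiers slip through without clearer standards.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known PII patterns with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact John Doe at john.doe@example.com or 555-123-4567."
print(redact(sample))
# -> "Contact John Doe at [EMAIL] or [PHONE]."  The name is untouched: the hard part remains.
```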

Recommendations for Enhanced Safety and Privacy

For AI Companies

  1. Implement Contextual Monitoring: Develop systems that can identify patterns of concerning queries without compromising individual privacy (a minimal sketch follows this list)
  2. Enhanced Verification: Consider requiring verified identities for access to certain types of information
  3. Improved Flagging Systems: Create mechanisms to flag potentially harmful query patterns to appropriate authorities while preserving privacy for legitimate users
  4. Transparency Reports: Publish regular reports on harmful use detection and prevention efforts
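As a rough sketch of what contextual monitoring could look like, the example below scores sequences of query categories per pseudonymized session rather than inspecting raw text tied to an identity. The categories, weights, threshold, and hashing scheme are all hypothetical; the point is only that escalation patterns can be flagged for human review without storing who asked what.

```python
# Hypothetical contextual-monitoring sketch: flag escalating query patterns per
# pseudonymized session without retaining raw text tied to a user identity.
import hashlib
from collections import defaultdict

# Assumed category weights; a real system would rely on trained classifiers, not hand-written labels.
CATEGORY_WEIGHTS = {
    "explosive_precursors": 5,
    "anonymity_tradecraft": 3,
    "target_reconnaissance": 4,
    "general_chemistry": 1,
}
ALERT_THRESHOLD = 10

session_scores = defaultdict(int)

def pseudonymize(session_id: str) -> str:
    """Hash the session ID so reviewers see patterns, not identities."""
    return hashlib.sha256(session_id.encode()).hexdigest()[:16]

def record_query(session_id: str, category: str) -> bool:
    """Accumulate risk per session; return True when the pattern warrants human review."""
    key = pseudonymize(session_id)
    session_scores[key] += CATEGORY_WEIGHTS.get(category, 0)
    return session_scores[key] >= ALERT_THRESHOLD

# A single chemistry question stays quiet; a sequence of escalating queries trips review.
for category in ["general_chemistry", "explosive_precursors", "anonymity_tradecraft",
                 "target_reconnaissance"]:
    flagged = record_query("user-session-42", category)
print("escalation flagged:", flagged)  # True once the combined score crosses the threshold
```

Hashing the session identifier is only a stand-in for stronger privacy-preserving techniques; any production system would also need audit trails and human review before anything reaches authorities.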

For Users and Organizations

  1. Enterprise Solutions: Consider using enterprise versions of AI tools that offer enhanced security features, such as OpenAI's enterprise tier or Microsoft's Azure OpenAI Service, which provide more robust data handling guarantees and administrative controls.
  2. Policy Development: Implement mandatory review procedures for ChatGPT-generated content before it's used in critical applications, customer communications, or decision-making.
  3. Training and Awareness: Educate users about the privacy implications and potential security risks of sharing sensitive information with AI systems.

For Policymakers

  1. Balanced Regulation: Develop frameworks that protect user privacy while enabling appropriate oversight of potentially harmful AI uses
  2. Law Enforcement Coordination: Establish clear protocols for AI companies to report concerning activities while protecting user rights
  3. International Cooperation: Coordinate global responses to AI-enabled threats while respecting diverse privacy regulations

The Path Forward: Balancing Innovation and Safety

The emergence of AI-assisted attack planning represents a new frontier in both cybersecurity and privacy protection. As research warns, "With the arrival and rapid adoption of sophisticated deep-learning models such as ChatGPT, there is growing concern that terrorists and violent extremists could use these tools to enhance their operations online and in the real world."

Key Considerations for the Future

  1. Privacy by Design: AI safety measures must be built with privacy protection as a core principle, not an afterthought
  2. Transparent Governance: Companies must be clear about their monitoring capabilities and limitations
  3. User Education: Individuals need to understand both the capabilities and risks of AI systems
  4. Continuous Adaptation: Safety measures must evolve as quickly as the threats they're designed to address

Conclusion: A New Era of Digital Responsibility

The use of AI tools in planning violent attacks marks a concerning evolution in the threat landscape. While these incidents represent a small fraction of the billions of AI interactions that occur daily, they underscore the need for a more nuanced approach to AI safety that doesn't sacrifice user privacy for security theater.

As critics note, the question is not just about technical capabilities, but about trust: "Former employees and others say the company should not be trusted with governing itself." This sentiment extends beyond any single company to the entire AI industry.

The challenge ahead is developing systems that can identify and prevent harmful uses of AI while preserving the privacy, innovation, and beneficial applications that make these technologies valuable to society. Success will require unprecedented cooperation between technology companies, law enforcement agencies, policymakers, and civil rights advocates.

As we navigate this new landscape, the goal must be creating AI systems that are both safe and respectful of user privacy—not because we can't achieve one without sacrificing the other, but because both are essential for a technology that increasingly shapes our daily lives.

The Palm Springs and Las Vegas incidents serve as stark reminders that the AI revolution brings both tremendous promise and sobering responsibility. How we respond to these challenges will determine whether AI remains a tool for human flourishing or becomes a vector for harm.


This analysis is based on publicly available law enforcement reports, court documents, and industry research. The privacy implications discussed reflect current understanding of AI company practices and may evolve as technologies and policies develop.
