The lack of AI regulation may increase PTSD rates

Artificial Intelligence (AI) is transforming industries and changing how people live and work. However, AI’s disruptive impact could lead to unintended psychological consequences without appropriate regulation and oversight. Unregulated AI could create environments that contribute to heightened stress, trauma, and ultimately, Post-Traumatic Stress Disorder (PTSD) and Complex PTSD (C-PTSD).

The risks are significant, as prolonged exposure to high-stress environments and uncertainty is a well-documented contributor to these conditions. From job insecurity due to automation to AI-driven cyberbullying, understanding the risks of unmanaged AI can help society mitigate its psychological impacts.

Job insecurity and the risk of trauma

One of the primary sources of trauma linked to AI disruption is job insecurity. With AI increasingly taking over manufacturing, customer service, and even some white-collar jobs, workers face the looming threat of displacement.

According to the World Economic Forum, automation could displace millions of jobs by 2030. This uncertainty creates a persistent, high-stress environment, especially for workers in vulnerable industries. The prospect of sudden job loss, particularly where no option to upskill exists, can be a traumatic experience, potentially leading to symptoms of PTSD and C-PTSD, including anxiety, depression, and a diminished sense of self-worth.

The psychological impact of job insecurity isn’t just a theory; studies have shown that job instability correlates with increased mental health issues. For example, the closure of coal mines in the U.S. and U.K. led to economic instability and severe mental health consequences in those regions.

With AI capable of transforming multiple sectors, the fear of losing one’s livelihood can become pervasive, leaving workers vulnerable to trauma and chronic stress, which may exacerbate PTSD-related symptoms.

AI and cyberbullying

AI-driven algorithms on social media platforms can intensify instances of cyberbullying, especially among younger users. Algorithms designed to maximise engagement often push controversial or inflammatory content, making online harassment and abuse more visible and more challenging to avoid.

The rise of AI-generated deepfakes, doctored images or videos that appear genuine, has already fuelled online harassment and identity manipulation. Deepfake technology can harm individuals’ reputations, making them feel helpless and leading to feelings of isolation, anxiety, and hypervigilance, all of which are associated with C-PTSD.

In 2023, reports of cyberbullying using deepfakes surfaced globally, with cases of students and professionals suffering emotional distress due to these doctored videos. As AI advances, these tools become more accessible, raising the risk of more people falling victim to trauma induced by cyber harassment, false accusations, or character defamation. Without regulation, the ease of creating and sharing harmful AI-generated content could increase the prevalence of trauma-induced psychological conditions.

Continuous surveillance and loss of privacy

AI’s role in surveillance is another source of potential trauma. Facial recognition and predictive policing have been implemented in various cities worldwide, often sparking concerns about civil liberties and privacy. Individuals constantly monitored by AI may experience hypervigilance, feeling they have no private space.

This “always-watched” feeling can create a sense of paranoia and stress. In China, where AI-driven surveillance is heavily employed, there have been concerns over the psychological toll this level of monitoring takes on citizens. When people feel they cannot escape surveillance, they may experience ongoing stress, which could contribute to PTSD and C-PTSD symptoms.

AI in warfare and increased trauma in conflict zones

Unregulated AI in military applications raises significant concerns about trauma, particularly among civilians and soldiers in conflict zones. AI-driven drones and autonomous weapon systems introduce new types of warfare, where attacks may be unpredictable and highly destructive.

The fear and psychological strain associated with AI-driven weapons may lead to trauma that mirrors or even exacerbates traditional forms of war-induced PTSD. Soldiers exposed to unpredictable AI-driven combat scenarios are likely to face higher stress levels and greater risk of trauma because such technologies are difficult to anticipate or defend against. In 2021, the use of autonomous drones in Libya raised ethical concerns and underscored the psychological impact of such warfare.
While AI has the potential to bring substantial benefits, unregulated or unmanaged AI could disrupt lives in ways that increase the risk of PTSD and C-PTSD. Job insecurity, cyberbullying, invasive surveillance, and the use of AI in warfare each pose psychological risks that could amplify stress and trauma in society. Regulations to monitor and manage AI technology can mitigate some risks, ensuring AI benefits society without sacrificing individuals’ mental health. Addressing these issues is critical in preventing a rise in trauma-related disorders as AI continues to reshape our world.