More than 80 percent of teenagers and adults in South Korea expressed concern over AI cyber abuse, including deepfakes, according to a nationwide poll released in Seoul on March 30.
The survey, conducted by the Korea Media and Communications Commission between September and November, highlights growing anxiety about how generative AI tools are being misused. Findings show widespread awareness of cyber violence risks, with both younger and older populations reporting increased exposure and fear of long-term harm caused by manipulated digital content.
Widespread concern across age groups
The poll reveals that 89.4 percent of teenagers and 87.6 percent of adults believe AI cyber abuse is a serious issue. These concerns stem largely from the rapid rise of tools capable of creating realistic fake images, videos, and text. Teenagers pointed to the ease of content creation as a major risk, while adults emphasized the lasting impact of harmful material that can be repeatedly shared online.
The study surveyed 9,296 students ranging from elementary to high school and 7,521 adults aged 19 to 69. Data reported by Yonhap News Agency shows that exposure to cyber abuse remains significant despite slight shifts across demographics. Among teenagers, 42.3 percent reported experiencing some form of cyber abuse in 2025, marking a marginal decline from the previous year.
In contrast, adult exposure increased to 15.8 percent, rising by 2.3 percentage points. This suggests that while awareness among younger users may be improving, risks for adults are growing as digital platforms expand and AI tools become more widely accessible.
Platforms and perpetrators driving abuse
The report identifies text messages and online gaming platforms as the primary channels where teenagers encounter AI cyber abuse. Adults reported similar experiences but were more likely to face abuse through text messaging and social media platforms. These findings reflect how communication tools remain central to both connection and conflict in digital spaces.
Strangers were identified as the most common perpetrators for both groups, followed by friends. This pattern highlights the unpredictable nature of online interactions, where anonymity and distance often enable harmful behavior. The rise of AI-generated content further complicates the issue, as perpetrators can create convincing false narratives or images without direct interaction.
Officials have raised concerns about the psychological and social impact of AI cyber abuse. Kim Jong-cheol stated that cyber abuse extends beyond ethical concerns, damaging personal dignity and infringing on fundamental rights. His remarks underline the broader implications of AI misuse, which now affects not only individuals but also trust in digital ecosystems.
In response, the government has pledged to promote healthier use of digital platforms and strengthen awareness around responsible AI usage. Efforts are expected to focus on education, regulation, and improved monitoring systems to curb the spread of AI cyber abuse.
The findings reflect a critical moment as societies adapt to rapidly evolving technologies. While AI offers significant benefits, its misuse poses new challenges that require coordinated action from policymakers, platforms, and users.