South Korea Launches Comprehensive AI Laws Amid Backlash from Tech Startups and Civil Society Groups
The South Korean government has unveiled what it claims is the world's first comprehensive set of artificial intelligence (AI) laws, designed to regulate the use of AI technology in various sectors. However, the new legislation has already faced criticism from local tech startups and civil society groups, who argue that the rules are too strict or don't go far enough.
The AI Basic Act, which took effect last Thursday, requires companies providing AI services to label AI-generated content and conduct risk assessments for high-impact AI systems used in areas such as medical diagnosis, hiring, and loan approvals. The law also stipulates that extremely powerful AI models must have safety reports, although the threshold is set so high that no models worldwide currently meet it.
The legislation has been hailed by government officials as a model for other countries to follow, but tech startups and civil society groups have expressed frustration with the rules. Many argue that they will create uncertainty and stifle innovation, particularly since companies must self-determine whether their systems qualify as high-impact AI.
One major concern is competitive imbalance: all Korean companies fall under the rules regardless of size, whereas among foreign firms only those meeting certain thresholds – such as Google and OpenAI – are exempt. This has prompted warnings of a potential "technological arms race" in Korea.
Critics also argue that the law does not provide sufficient protection for people harmed by AI systems. Four organizations, including Minbyun, a collective of human rights lawyers, have issued joint statements highlighting the law's shortcomings and calling for clearer definitions of high-impact AI and exemptions for certain types of AI systems.
Despite the pushback, government officials maintain that the law is 80-90% focused on promoting industry rather than restricting it. The Ministry of Science and ICT has promised to clarify the rules through revised guidelines and expects the law to "remove legal uncertainty" and build a healthy and safe domestic AI ecosystem.
Experts note that South Korea has opted for a more flexible approach to AI governance, centered on trust-based promotion and regulation, which may serve as a useful reference point in global AI governance discussions. However, the country's distinctive path may also pose challenges for enforcement and for the law's effectiveness in practice.