Technology & Science
South Korea’s AI Basic Act Enters Force, Mandating Deep-Fake Labels and Human Oversight
On 22 Jan 2026, South Korea began enforcing its 2024 AI Basic Act, the first fully implemented national law obliging companies to watermark deepfakes, notify users when they are interacting with generative AI, and place human oversight over “high-impact” systems.
Focusing Facts
- Violations can draw fines up to ₩30 million (≈US$20,400) after a one-year grace period.
- The statute names 10 sensitive sectors—nuclear power, criminal investigations, loan screening, education, medical care, drinking water, transport, etc.—that trigger extra transparency and safety duties.
- Firms with ≥₩1 trillion global revenue or ≥1 million daily Korean users (e.g., OpenAI, Google) must appoint a local representative under the Act.
Perspectives in this article
- International mainstream outlets
- Tech industry and startup-focused tech press
- US local/state media pressing for child-safety regulation