Technology & Science

South Korea’s AI Basic Act Enters Force, Mandating Deep-Fake Labels and Human Oversight

On 22 Jan 2026, South Korea began enforcing its 2024 AI Basic Act, the first fully implemented national AI law that obliges companies to watermark deepfakes, notify users when they are interacting with generative AI, and place human monitors over “high-impact” systems.

By Priya Castellano

Focusing Facts

  1. Violations can draw fines up to ₩30 million (≈US$20,400) after a one-year grace period.
  2. The statute names 10 sensitive sectors—nuclear power, criminal investigations, loan screening, education, medical care, drinking water, transport, etc.—that trigger extra transparency and safety duties.
  3. Firms with ≥₩1 trillion global revenue or ≥1 million daily Korean users (e.g., OpenAI, Google) must appoint a local representative under the Act.

Perspectives in this article

  • International mainstream outlets
  • Tech industry / startup–focused tech press
  • US local/state media pressing for child-safety regulation