Amid the recent political turmoil following the impeachment of the President, the National Assembly successfully passed the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act) on December 26, 2024. With this achievement, the Republic of Korea (Korea) becomes the second jurisdiction worldwide, after the European Union, to adopt a comprehensive AI framework law, balancing regulatory requirements with the goal of fostering AI industry growth.
I. Key Provisions of the AI Framework Act
1. Definition of Key Concepts
- Artificial Intelligence (AI): Systems that, for given objectives, generate outputs such as predictions, recommendations, or decisions that influence real or virtual environments, with varying degrees of autonomy and adaptability (Article 2(ii)).
- High-Impact AI: AI systems that significantly affect human life, safety, or fundamental rights, and are used in sectors specified under the AI Framework Act and its Enforcement Decree such as energy, healthcare, nuclear operations, biometric data analysis, public decision-making, and education (Article 2(iv)).
- Generative AI: Systems producing text, images, videos, or other outputs based on the structure and characteristics of the input data (Article 2(v)).
- AI Business: Entities engaged in business related to the AI industry, including “AI development businesses”, which develop and provide AI systems, and “AI utilization businesses”, which offer products or services utilizing AI systems provided by AI development businesses (Article 2(vii)).
2. The Scope of Application
The AI Framework Act applies to activities conducted abroad if they impact Korea’s domestic market or users. However, the Act does not apply to AI systems developed and used exclusively for national defense or security purposes, as designated by Presidential Decree (Article 4).
3. Requirements for AI Safety and Trustworthiness
a. High-Impact AI
AI businesses must evaluate whether their AI systems qualify as high-impact AI before launching related products or services. The Ministry of Science and ICT (MSIT) may provide guidelines for identifying high-impact AI and, upon request, confirm whether a given AI system qualifies (Article 33(1) – (3)). When providing products or services utilizing high-impact AI, AI businesses must notify users in advance that high-impact AI is being used (Article 31(1)).
Additionally, AI businesses are required to implement the following measures to ensure the safety and trustworthiness of high-impact AI:
- Develop and operate risk management plans.
- To the extent technically feasible, establish and operate a plan to provide explanations of AI-generated outputs, including the criteria used to derive such outputs and the training data utilized to develop and use the AI.
- Establish and operate user protection measures.
- Ensure human oversight of AI systems.
- Maintain documentation demonstrating the measures taken to ensure the safety and trustworthiness of AI.
- Implement any additional measures for ensuring safety and trustworthiness determined by the National AI Committee (Article 34(1) – (3)).
When providing products or services utilizing high-impact AI, AI businesses should also endeavor to obtain prior verification and certification regarding high-impact AI systems and assess their potential impact on fundamental rights (Articles 30(3) and 35).
b. Generative AI
When providing products or services utilizing generative AI, AI businesses must notify users in advance that generative AI is being used. AI businesses must also clearly label the outputs of such products or services as AI-generated, particularly when the outputs mimic real-world sounds, images, or videos. For artistic or creative expressions, this obligation may be fulfilled in a manner that does not interfere with the display or appreciation of the work (Article 31(1) – (3)).
c. Other Requirements
For AI systems exceeding computational thresholds set by Presidential Decree, relevant AI businesses are required to:
(i) Identify, evaluate, and mitigate risks throughout the AI lifecycle.
(ii) Implement a risk management system for monitoring and responding to AI safety incidents.
(iii) Submit the results of (i) and (ii) to the MSIT (Article 32(1) – (3)).
Foreign AI businesses without a place of business in Korea that meet certain thresholds for user numbers or sales (to be specified in the Enforcement Decree) must appoint a local representative with a Korean address or office. The local representative is responsible for:
- Submitting the results of the implementation of safety measures for AI systems.
- Applying for the confirmation of high-impact AI by the MSIT.
- Supporting the implementation of safety and trustworthiness measures.
If the local representative fails to comply with the above obligations, the foreign AI business is held liable for the violation (Article 36(1) – (3)).
4. Regulatory Investigations and Sanctions
The MSIT may conduct investigations into suspected violations of the AI Framework Act and issue correction or cease-and-desist orders upon confirming violations (Article 40(1) – (3)). Investigations may address potential violations of the following obligations:
- Notification and labeling requirements for generative AI outputs.
- Implementation of safety measures and submission of compliance results for AI systems exceeding computational thresholds set by Presidential Decree.
- Adherence to safety and trustworthiness measures for high-impact AI systems.
Non-compliance with correction or cease-and-desist orders, failure to fulfill notification obligations for high-impact or generative AI, or failure to designate a local representative may result in administrative fines of up to KRW 30 million (Article 43).
5. AI Development and Industry Promotion
The AI Framework Act requires the MSIT to implement measures for the production, collection, management, distribution, and utilization of AI training data. These measures include selecting and supporting projects that produce and provide training data. The MSIT is also required to establish an integrated system for managing and providing training data to the private sector (Article 15).
The Act also provides a legal basis for establishing and promoting AI Data Centers through administrative and financial support for their construction and operation. It further emphasizes fostering AI-related expertise by attracting international talent and supporting domestic employment opportunities (Article 25).
Key provisions in the Act also promote:
- The development and safe application of AI technology.
- Standardization of AI technologies.
- Support for small and medium-sized enterprises (SMEs) in adopting AI technologies.
- Promotion of AI start-ups and convergence initiatives.
- Facilitation of international cooperation and entry into global markets.
- Establishment of a framework for AI testing and verification to support industry development and technological innovation.
These initiatives collectively aim to establish a strong foundation for advancing AI technology and ensuring its safe implementation in Korea.
II. Implications
The AI Framework Act aims to balance the establishment of essential baseline regulations for AI development and use while avoiding the imposition of onerous administrative penalties or criminal sanctions that could stifle innovation. The Act also emphasizes and prioritizes supportive measures to foster and advance AI technology and industry growth, including provisions for AI training data, the establishment of AI Data Centers, and workforce expansion. It provides a legal foundation for the creation of the National AI Committee and the AI Safety Research Institute to ensure consistent policy implementation and oversight of AI safety.
According to legislative procedures, bills passed by the National Assembly must be promulgated by the President (or Acting President) within 15 days of being forwarded to the government. The AI Framework Act specifies in its Annex that it will take effect one year after its promulgation, with the effective date anticipated in January 2026. Key details, including the definition of high-impact AI, computational thresholds, and required safety measures, will be further clarified through Presidential Decrees or MSIT notifications, and companies are encouraged to stay informed of these developments. Additionally, given that implementing the measures required by the Act, such as those to ensure the safety and trustworthiness of high-impact AI, is expected to take significant time, companies should consider conducting a preliminary evaluation to determine whether their products or services may involve high-impact or generative AI and implementing the compliance measures outlined in the AI Framework Act to the extent possible, so that they are ready by the enforcement date.
If you have any questions regarding this article, please contact the authors listed below:
Hwan Kyoung KO (hwankyoung.ko@leeko.com)
Sunghee CHAE (sunghee.chae@leeko.com)
Kyung Min SON (kyungmin.son@leeko.com)
Il Shin LEE (ilshin.lee@leeko.com)
For more information, please visit our website: www.leeko.com