Artificial Intelligence in South Korea
In this guide, a global panel of legal experts analyse the key trends and developments in the fast-evolving world of artificial intelligence. Through a series of engaging interviews, they discuss
the most important legislative, regulatory and policy initiatives affecting AI developers and look at what the future may hold in this exciting field.
What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?
South Korea has yet to enact comprehensive legislation regulating artificial intelligence (AI) comparable to the EU AI Act. Instead, various regulatory authorities have issued guidelines addressing specific aspects of AI. For example, the Personal Information Protection Commission (the PIPC) has provided guidelines on privacy and AI, while the Ministry of Culture, Sports and Tourism has focused on copyright issues related to AI. Other bodies, including the Ministry of Science and ICT (the MoICT) and the Ministry of Education (the MoE), have also released AI-related guidelines.
Currently, several bills are pending in the National Assembly. While these bills vary in specifics, their overall aim is to promote the development of AI technology and industry rather than impose stringent regulations. Consequently, South Korea's regulatory approach appears less stringent than that of the EU AI Act, which imposes strong sanctions on high-risk AI applications.
However, various laws and regulations incorporate the term 'artificial intelligence' and mandate its introduction and use across different sectors. For example:
- General Act on Public Administration: This law outlines the principles of public administration and explicitly allows fully automated systems (including those utilising AI) to be used to issue administrative dispositions, promoting digitalisation in line with Industry 4.0. However, it also stipulates that decisions made by such systems must be legally grounded, ensuring that AI does not infringe upon fundamental rights.
- Personal Information Protection Act (amended 14 March 2023; the PIPA): This act grants individuals the right to object to automated decisions that significantly affect their rights or obligations. Upon request, data controllers must either refrain from applying the automated decision or take appropriate actions, such as providing an explanation.
- Public Official Election Act (amended 29 December 2023): This law prohibits the use of AI-generated deepfake images or videos that are difficult to distinguish from real footage in election campaigns. As AI technology, particularly deepfakes, raises growing social concerns, this provision indicates a potential area for future regulatory developments.
We will provide a detailed explanation of the currently proposed bills later.
Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?
In 2019, the South Korean government introduced the National Strategy for Artificial Intelligence, outlining its vision and strategy for the emerging era of AI. Developed with input from all ministries, including the MoICT, the strategy focuses on several key areas:
- Investments in Semiconductor Technologies: Enhancing Korea's competitiveness in AI;
- System Development: Equipping citizens with foundational AI skills;
- Public Service Enhancement: Promoting AI-driven digital government; and
- Ethical Guidelines: Fostering a people-centered AI era through established ethical standards.
In addition to this overarching strategy, individual ministries have implemented AI-related policies relevant to their sectors. For example, in 2021, the Financial Services Commission issued the Guidelines for AI in the Financial Sector to ensure the stability of the financial industry and markets. In 2023, the PIPC unveiled the Policy Direction for the Safe Use of Personal Information in the AI Era, aimed at minimising the risks of personal data misuse while facilitating the safe use of data for AI innovation.
Furthermore, South Korea has designated the National Information Society Agency (NIA) as a public data utilisation centre under the Act on Promotion of the Provision and Use of Public Data. The NIA is responsible for expanding the availability of public data and encouraging its use by the private sector. It also provides databases for AI training in fields such as law, patents, and healthcare, supporting the enhancement of AI performance and the development of AI-driven services.
What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?
In 2020, the South Korean government announced the AI Ethical Standards, a key priority of the National Strategy for Artificial Intelligence. The AI Ethical Standards lay out three fundamental principles and ten core requirements aimed at promoting 'humanity' as the highest value in human-centered AI. The main elements of the standards are as follows:
- Three Basic Principles: To uphold humanity, AI development and use must follow (1) the principle of human dignity, (2) the principle of societal public interest, and (3) the principle of technological appropriateness.
- Ten Core Requirements: To ensure the practical application of these principles throughout AI development and utilisation, the following requirements must be met: (1) human rights; (2) privacy; (3) diversity; (4) non-infringement; (5) public interest; (6) solidarity; (7) data governance; (8) accountability; (9) safety; and (10) transparency.
Since the introduction of the AI Ethical Standards, individual ministries have also released guidelines tailored to their specific responsibilities. On 11 April 2022, the National Human Rights Commission issued the Human Rights Guidelines for AI Development and Utilization, setting standards that include transparency, the duty to explain AI decisions and the prohibition of discrimination. The guidelines also call for a risk rating system and a legal framework to enable regulatory oversight of AI according to risk levels.
Similarly, on 11 August 2022, the MoE introduced the Ethical Principles for AI in Education, which emphasise equal opportunity, fairness in education and transparency. Lastly, on 30 December 2020, the Ministry of Land, Infrastructure, and Transport unveiled the Ethical Guidelines for Autonomous Vehicles, outlining general principles for the development of AI-powered autonomous vehicles and specifying the responsibilities of key stakeholders, including designers and manufacturers.