Written by Vaibhav Saxena
On 10 December 2025, the National Assembly of Vietnam passed Law No. 134/2025/QH15 on Artificial Intelligence (the “AI Law”). This is Vietnam’s first dedicated statute governing artificial intelligence activities. The AI Law will enter into force on 1 March 2026 (“Effective Date”) and establishes a unified legal framework regulating the development, supply, deployment, and use of AI systems in Vietnam.
Scope
The AI Law governs:
- Activities relating to research, development, provision, deployment, and use of artificial intelligence systems;
- Vietnamese and foreign organizations and individuals participating in AI-related activities in Vietnam.
The AI Law does not apply to AI systems used exclusively for national defense, security, or cryptographic purposes, which remain governed by specialized legislation.
Risk Classification
The AI Law adopts a risk-based management approach, classifying AI systems into three categories:
- High-risk AI systems;
- Medium-risk AI systems;
- Low-risk AI systems.
Risk classification is based on criteria including the potential impact on human life, health, lawful rights and interests, public interests, and social order, with detailed criteria to be prescribed by the Government.
Classification and Notification Obligations
- Organizations and individuals providing AI systems are responsible for classifying AI systems before putting them into use;
- Medium-risk and high-risk AI systems must be notified to the competent authority via the national AI information system before deployment;
- Re-classification is required where changes affect the system’s risk level.
Transparency and User Information
The AI Law imposes general transparency obligations, including:
- Informing users when they are interacting with an AI system;
- Disclosing AI-generated content, including audio, images, and video, in accordance with law;
- Applying transparency measures appropriate to the risk level and context of use.
High-Risk AI Systems: Safety and Control Requirements
High-risk AI systems are subject to enhanced regulatory controls, including:
- Risk management and safety assurance measures throughout the lifecycle;
- Human oversight mechanisms;
- Technical documentation and record-keeping;
- Compliance with applicable standards and technical regulations.
Conformity assessment requirements apply to high-risk AI systems in accordance with detailed regulations issued by competent authorities.
Incident Handling
Entities involved in AI activities must:
- Ensure AI systems operate safely and securely;
- Promptly report incidents or risks causing serious harm through the prescribed information system.
Where damage occurs, civil liability is determined in accordance with civil law, and rights of recourse between involved parties are preserved.
Prohibited Acts
The AI Law expressly prohibits, among others:
- Using AI systems for unlawful purposes or to infringe the lawful rights and interests of organizations or individuals;
- Using AI systems to threaten national security, social order, or public safety;
- Circumventing safety control or human supervision mechanisms.
State Management and National AI Infrastructure
The AI Law assigns unified state management of AI to the Government, with the Ministry of Science and Technology acting as the principal coordinating authority. The Prime Minister will issue the National Strategy on Artificial Intelligence, which is to be updated at least once every three years.
The Law also provides for:
- Development of national AI infrastructure and databases;
- Promotion of domestic AI capacity, data resources, and human capital.
Innovation Support and Testing
The AI Law supports innovation through controlled testing (sandbox) mechanisms for AI applications, and provides for financial and policy support measures for AI research, enterprises, and startups.
Specific conditions and incentives will be implemented through subsequent Government regulations.
Transition
AI systems operating before the Effective Date must be brought into compliance within transition periods running from the Effective Date: 18 months for systems in the fields of healthcare, education, and finance, and 12 months for all other such systems.
