
As generative AI rapidly evolves, its potential to drive innovation is undeniable, but so are its risks. From deepfake scandals to AI-driven misinformation, the technology poses serious ethical challenges.
To ensure AI serves humanity rather than harming it, we must prioritise responsible AI development. Dr Sam Goundar, Senior Lecturer in IT at RMIT Vietnam, shares insights on how Vietnam can lead the way in ethical AI adoption.
The promise and perils of generative AI
Generative AI is transforming industries such as media, education, healthcare, and finance by generating new content from existing data. In healthcare, it enables AI-driven medical imaging, personalised treatments, and drug discovery. In education, it supports AI-assisted learning, plagiarism detection, and virtual tutors. In business and marketing, it powers automated content creation, customer support, and synthetic influencers.
However, these applications raise significant ethical concerns, as AI models can amplify bias, spread misinformation, and promote deepfakes, posing threats to data privacy, security, and workforce stability. These concerns are echoed by industry leaders; for example, Sam Altman, CEO of OpenAI, has repeatedly highlighted at major forums like Davos 2024 and TED’s ReThinking with Adam Grant series that although generative AI is redefining creativity, it requires human oversight to effectively address these ethical risks.
Recent incidents underscore the urgent need for responsible AI development and robust ethical guidelines. Cases such as AI-generated explicit images of Taylor Swift, AI robocalls impersonating President Joe Biden, unauthorised AI use in legal proceedings, and academic dishonesty highlight growing concerns.
More alarming examples include chatbot-induced suicides, AI-generated child sexual abuse material, chatbot-encouraged assassination attempts, biased hiring algorithms, and exploitation of AI vulnerabilities. These issues reinforce the need for human-centric approaches to mitigate AI misuse and protect individuals and society.
AI trends for 2025: Responsible AI at the forefront
As artificial intelligence continues to evolve, the focus in 2025 is shifting towards responsible and human-centric AI, ensuring transparency, accountability, and trust in AI-driven systems. With growing concerns over bias, misinformation, and ethical risks, explainable AI (XAI) is becoming a priority, enabling users to understand how AI makes decisions.
Several countries, including the US, Canada, Brazil, the EU, the UK, Australia, China, India, and Japan, have already implemented AI regulations, and more are expected to follow in 2025. These policies aim to govern AI applications, ensuring ethical deployment across industries.
Rather than replacing human roles, AI is increasingly being designed to enhance and complement human capabilities. The rise of hybrid AI, which promotes collaboration between humans and AI-driven systems, is expected to gain traction.
Additionally, AI will see expanded applications in cybersecurity, improving threat intelligence and risk mitigation. In the realm of sustainable development, AI-driven solutions will play a key role in addressing climate challenges and advancing green technologies. As AI adoption accelerates, ensuring responsible governance will be essential to maximising its benefits while minimising potential risks.
What should Vietnam do to develop responsible AI?
Vietnam is emerging as a leader in AI innovation, but ensuring ethical AI development is critical to prevent bias, privacy risks, and loss of public trust. A strong ethical foundation and cross-sector collaboration are essential. To align AI with ethical principles, Vietnam should prioritise the following initiatives:
- Invest in AI ethics research and collaborate with universities to establish frameworks for responsible AI deployment.
- Integrate AI ethics into university curricula to equip students with knowledge of fairness, transparency, and governance alongside technical skills.
- Expand AI literacy programs to prepare professionals across sectors – executives, educators, and policymakers – to navigate AI challenges.
- Raise public awareness to help individuals and businesses understand AI’s impact on privacy, security, and decision-making.
- Encourage AI for social good to drive AI innovations in healthcare, climate solutions, and education, ensuring societal benefits beyond profit.
Beyond education and awareness, clear regulatory frameworks are essential to maintaining ethical AI development. Countries with strong AI governance, workforce readiness, and oversight will shape the global AI landscape. To position itself as a leader in Southeast Asia, Vietnam should take decisive action:
- Implement robust AI regulations to enforce data privacy laws, ethical guidelines, and protections against misinformation and bias.
- Strengthen AI legislation to safeguard users and address risks arising from rapid AI integration.
- Align with international AI governance models to adopt best practices while tailoring policies to Vietnam’s socio-economic context.
- Introduce an Ethical AI Certification to ensure organisations meet transparency, fairness, and security standards.
- Develop an AI Risk Classification & Auditing System to assess AI applications based on potential harm and mandate audits for high-risk systems.
Strategic policymaking will give Vietnam a competitive edge while ensuring AI remains ethical and accountable. As the country aims to become a regional AI powerhouse by 2030, its success will depend not only on technological advancements but also on strong regulations, ethical AI investment, and public awareness to safeguard against potential risks.
Story: Dr Sam Goundar, Senior Lecturer in IT, School of Science, Engineering & Technology, RMIT University Vietnam
Source: Vietnam Insider