Overview
Artificial intelligence (AI) is changing the way we live, and it’s rapidly making its way into nursing education. Its integration is beginning to change how we teach and learn in nursing, making it possible to personalize learning, create more realistic clinical simulations, and track student progress in new ways (Booth et al., 2021). AI refers to computer programs or machines designed to handle certain thinking tasks, such as problem-solving or analyzing information, that we typically associate with people (Yakusheva et al., 2024). While AI has been utilized in higher education since the 1950s (De Gagne, 2023), today’s generative AI is far more advanced: it can recognize patterns and generate text, data, images, and other outputs based on learned information (Marr, 2023). Beyond nursing education, a significant portion of the healthcare industry is now leveraging AI, with over 85% of healthcare leaders either investing in or planning to invest in AI technologies over the next three years (Phillips Editorial Team, 2024). Ninety-four percent of healthcare companies and two out of three physicians reported using AI in some capacity within the past year (Henry, 2025). Although the extent of its use in nursing education is less well documented, preparing future nurses to interact safely, ethically, and effectively with AI is critical given the rate at which this technology is spreading across healthcare settings.
The rapid integration of AI into nursing practice and education introduces complex ethical dilemmas that demand careful consideration (Wei et al., 2025). Key concerns include algorithmic bias (Yakusheva et al., 2024), data privacy risks (Henry, 2025; Wei et al., 2025), accountability in AI-driven decision-making (Cardona et al., 2023), and the potential erosion of humanistic care competencies (O’Connor, 2023). As nursing programs increasingly adopt AI tools, establishing an ethical framework for implementation becomes imperative to ensure responsible use (American Nurses Association, 2022). Given the commitment of the Colorado Nurses Association (CNA) to advancing nursing excellence, this summary presents key ethical challenges, regulatory considerations, recommendations, and implications for AI adoption in nursing education to better prepare both educators and students to engage with this technology.
Ethical Challenges in AI Integration
The following factors should be considered in nursing education to uphold the ethical standards inherent to the profession.
Algorithmic Bias and Educational Equity
AI systems in nursing education may inadvertently perpetuate biases present in their training data, leading to inequitable outcomes for marginalized student populations (Couper, 2024; Glauberman et al., 2023; Lifshits & Rosenberg, 2024). For example, voice recognition software used in virtual simulations may exhibit lower accuracy for non-native English speakers, potentially disadvantaging international students (Park, 2024). Similarly, facial recognition tools employed in proctored examinations have demonstrated higher error rates for individuals with darker skin tones (Yoder-Himes et al., 2022). These technical limitations raise serious questions about the fairness of AI-assisted assessments and the potential reinforcement of existing educational and healthcare disparities.
Academic Integrity in the Age of Large Language Models
The rapid adoption of large language models (LLMs) has created novel challenges for maintaining academic standards. Studies indicate that LLMs can generate sophisticated responses to exam questions and clinical case studies, creating new opportunities for academic dishonesty (Abd-Alrazaq et al., 2023). More concerning is the potential for nursing students to develop overreliance on these tools, which may compromise the development of critical thinking and clinical judgment skills essential for safe patient care (Glauberman et al., 2023; Pechacek & Austin, 2023). Additional concerns with LLMs in health sciences education include data protection and privacy risks, bias and discrimination, poor transparency, potential inaccuracies, ineffective integration into the curriculum or workflow, and lack of training among educators and users (Abd-Alrazaq et al., 2023).
Data Privacy and Security Concerns
The data-intensive nature of AI and modeling technologies presents significant privacy concerns for nursing education. These systems frequently process sensitive student performance data and simulated patient information, creating potential vulnerabilities under both the Family Educational Rights and Privacy Act (FERPA) (Couper, 2024; Glauberman et al., 2023) and the Health Insurance Portability and Accountability Act (HIPAA) (Murdoch, 2021; Wei et al., 2025). Of particular concern is the insufficient anonymization of data in many AI-powered educational platforms (Ibrahim et al., 2025; Yakusheva et al., 2024), which could expose confidential student or patient information and erode trust in training systems.
Accountability in AI-Driven Assessments
The opaque nature of many AI algorithms presents challenges in evaluating student performance (Booth et al., 2021; O’Connor, 2023). When automated systems generate assessments, questions arise regarding recourse mechanisms for students who dispute outcomes. Current nursing education frameworks may lack provisions to address these concerns, necessitating updates to competency evaluation standards. The usability, technical performance, and feedback mechanisms of AI systems used to evaluate nursing student learning remain significant concerns (Lifshits & Rosenberg, 2024). Clear guidelines and frameworks are needed to ensure responsible development and deployment of AI in nursing education, especially during testing and evaluation of the competencies deemed necessary for program completion, entry into practice, or advancement of scope.
Regulatory Considerations
As AI adoption in nursing education expands, regulatory oversight at international, national, and state levels is rapidly evolving to address ethical and legal concerns.
International Landscape
Global AI regulations in education remain fragmented, with countries like the United Kingdom and Canada developing frameworks focused on transparency, fairness, and bias mitigation. While the World Health Organization recognizes the potential benefits of AI in healthcare education, comprehensive global guidelines have yet to be established. The General Data Protection Regulation, a European Union regulation on information privacy, was recently updated to include the impact of AI on protecting individuals’ privacy (Henry, 2025). Training of healthcare students and providers on these protections is underway and may serve as a blueprint for other regions once completed.
National Landscape: United States
The U.S. Office of Science and Technology Policy issued a Blueprint for an AI Bill of Rights in October 2022, emphasizing algorithmic transparency, data privacy, and human oversight. The next year, the U.S. Department of Education released a report entitled AI and the Future of Teaching and Learning, advocating for increased educator involvement in AI decision-making. Although the American Nurses Association issued a position statement on the use of AI in practice in 2022, there has been no consistent guidance for applications in the educational setting. National organizations focused on education, such as the American Association of Colleges of Nursing or the National League for Nursing, may want to consider developing a set of advocacy principles to address the development, deployment, and use of AI in education, as the American Medical Association did in 2024 (Henry, 2025). Byrne (2024) recently suggested the five rights of AI in teaching and learning in nursing: the right purpose, platforms, placement, protections, and preparations. Embedding a set of consistent guiding principles into education programs would better prepare future nurses to engage in the development and implementation of AI in the workplace (Yakusheva et al., 2024). Ensuring that data protection regulations such as HIPAA keep pace with AI is of concern to nurses and patients alike (Wei et al., 2025), and these developments need to be rapidly integrated into the curriculum.
State Landscape: Colorado
Colorado has recently emerged as a policy leader in AI. In 2024, the state legislature passed the Colorado Artificial Intelligence Act (CAIA), which, if implemented, would be the first comprehensive AI law in the U.S. As of mid-2025, the law is under active review, and its implementation, originally planned for 2026, is facing calls for delay and revision from industry stakeholders. Despite this uncertainty, the CAIA signals a growing legislative focus on the ethical governance of AI technologies. Among other things, this legislation mandates:
- Bias mitigation audits to prevent algorithmic discrimination.
- Transparency in AI decision-making, including assurance that students or employees can challenge AI-generated assessments.
- Mandatory annual risk assessments of AI systems.
- Consumer protections, including disclosures about use of AI systems.
The CAIA is especially concerned with “high-risk” AI systems that make or substantially influence consequential decisions, including those related to healthcare. For example, both AI developers and healthcare providers have specific responsibilities under the act, such as ensuring fair evaluations (of patients, employees, and trainees) and equitable treatment decisions. As licensed providers, nurse educators must adhere to the CAIA, and education programs would be responsible for ensuring student awareness of and compliance with the act during training. Given these regulatory developments, nursing programs in Colorado must adapt to ethically and intentionally integrate AI into nursing education. Establishing a unified approach that aligns with global, national, and state-level requirements, as well as our overarching professional and ethical guidelines in nursing, is essential.
Recommendations for Ethical Implementation of AI in Nursing Education
In addition to the five rights from Byrne (2024) mentioned above, the CNA Research Advisory and Networking Team recommends the following:
- Conduct Regular Audits: Nursing programs should conduct regular AI system audits and utilize pre-audited tools to minimize bias, particularly for student evaluations and assessments.
- Protect Privacy and Data Security: Implement robust protocols for data anonymization and secure storage to protect privacy.
- Ensure Transparency and Accountability: Establish transparent accountability frameworks that include clear processes for reviewing AI-generated assessments. Appeal procedures should be in place, and both educators and students should be involved in oversight efforts.
- Integrate AI Thoughtfully into the Curriculum: Design curricula in which AI tools complement, rather than replace, critical learning experiences to preserve essential humanistic competencies.
- Promote Human-Informed AI Systems: Both nurse educators and nursing students should be involved with the development, testing, implementation, and evaluation of AI technologies in the nursing education and practice settings.
- Collaborate for Systemic Change: Schools of nursing and other organizations should partner with accreditors, legislators, and clinical partners to develop ethical AI guidelines, provide AI ethics training for educators and students, and ensure AI use aligns with patient safety, equity standards, and state regulations such as the CAIA.
Implications for CNA and Nurse Educators
To support the ethical use of AI in nursing education, CNA should:
- Collaborate with accrediting bodies to establish best practices for AI integration in nursing programs and to develop ethical AI guidelines.
- Advocate for audited and bias-mitigated AI tools, especially those used in student assessment and evaluation and in patient care.
- Provide nurse educators with professional development opportunities on AI ethics and responsible utilization.
- Work with legislators and regulatory bodies to ensure AI in nursing education aligns with patient safety and equity standards, including compliance with recent legislation such as the CAIA.
Conclusion
As AI use expands across healthcare, nursing programs must prepare students to engage ethically, safely, and effectively. Key ethical concerns include algorithmic bias, data privacy violations, academic integrity risks from large language models, and unclear accountability in AI-driven assessments. International, national, and state regulations, such as Colorado’s AI legislation, are beginning to address these issues, requiring that nurse educators prepare students to ethically engage with technology during training and in the workplace. To ensure responsible use, recommendations include regular AI bias audits, strong data governance practices, clear accountability processes, and the preservation of humanistic competencies in curriculum design. Along with global and national nursing organizations, CNA can take action to support clinical nurses and educators in developing best practices, advocating for fairness in AI applications, and providing targeted and relevant education about AI to uphold nursing standards in a rapidly evolving technological landscape. Establishing standards and best practices for AI integration in nursing education is essential to ensure consistency and quality. This includes developing guidelines for using AI in educational settings, evaluating the effectiveness of AI tools, and sharing best practices across institutions.
PREPARED BY:
THE COLORADO NURSES ASSOCIATION RESEARCH ADVISORY AND NETWORKING TEAM
Mona Hebeshy, University of Northern Colorado, School of Nursing
Natalie Pool, University of Northern Colorado, School of Nursing
Mavis Mesi, Co-Chair of GAPP, Colorado Nurses Association
Kenneth Oja, University of Colorado, College of Nursing and Denver Health