The International Baccalaureate (IB) has published draft design principles to support schools in their adoption of artificial intelligence, as the organisation acknowledges the “real pressure” leaders are under to grapple with the impact of technology.
The five draft principles - developed by a small group of external advisers working alongside IB staff - are designed to guide schools “by providing a clear, values-led foundation that schools can adapt to their own context”.
The draft principles (in full below) acknowledge that “both inaction and recklessness fail…learners” and urge schools to pursue “responsible adoption” of AI, prioritising learners’ privacy and their emotional, social and cognitive development.
Adopting artificial intelligence in schools
The draft AI principles are as follows:
1. Caring and balanced: AI for human flourishing
AI must protect and nurture learners’ emotional, social and cognitive development. AI in the IB ecosystem must work equitably across the global community and support the human relationships central to an IB education.
2. Inquiry-driven: AI that deepens learning
AI should strengthen inquiry, critical thinking, creativity and collaborative problem solving - enabling learners to become active explorers of their own education through the thinking, effort and growth that define an IB education.
3. Educator agency: guided by capability, grounded in responsibility
Educators shape how AI is used in learning. This requires professional knowledge, openness to new approaches, school-level governance and clear accountability for outcomes.
4. Safe and transparent: AI that is accountable
Every learner’s data rights and privacy are non-negotiable. AI must meet strong safeguarding standards, and schools must be able to understand what AI tools do, how they work and where they fall short. High-stakes decisions require human oversight.
5. Continuously adapting: evidence-led, always improving
AI must demonstrate pedagogical value through evidence, reflection and honest evaluation. Both inaction and recklessness fail learners - responsible adoption means committing to learn and improve as we go. This is mindful innovation.
Boundaries of AI use
The IB also said the principles could help inform clear “red lines” and “non-negotiable” boundaries on how AI is used, to ensure that any innovations remain safe.
The organisation gives the example: “Our second principle states that AI should deepen learning. A related red line could be that high-stakes decisions about grading, progression, and well-being must never be fully automated.”
The draft principles will now be discussed at forthcoming IB conferences in Mumbai later this month and in Johannesburg in April. The IB is also inviting schools to complete a survey to help refine the principles.