Rethinking Knowledge in the Age of AI
Here, we believe that innovation is made not only of tools, but above all of questions. And it was from a single question that the redesign of the TOK programme for our Diploma students began: what does it mean to know in a world where artificial intelligence filters, suggests, classifies, and anticipates everything we see?
Theory of Knowledge (TOK) is one of the three Core elements of the International Baccalaureate (IB) Diploma Programme, alongside Creativity, Activity, Service (CAS) and the Extended Essay. It is a compulsory course for all IB students, with a very clear purpose: prompting them to reflect on the nature of knowledge, its origins, its limits, and the responsibilities that come with knowing.
TOK does not teach “what to know,” but rather “how we think we know what we know.”
In recent years, the very concept of knowledge has changed. Not because we’ve stopped thinking, but because we increasingly think less on our own. The information that reaches our eyes, or our feed, is selected by complex systems: algorithms that evaluate the likelihood we will interact with certain content, predictive models that decide what might interest us, filters that shape the informational environment around us. This invisible mediation has become so natural that we hardly notice it anymore.
And this is precisely where TOK becomes more crucial than ever.
Its original purpose, to question the nature of knowledge, now carries a new, almost civic urgency. Because questioning knowledge inevitably means questioning the technology that transmits it, modifies it, and often creates it.
For this reason, our TOK has been completely reimagined: not as a theoretical subject, but as a highly contemporary inquiry, interwoven with the Human+ programme and with our broader vision of education.
A New TOK, Rooted in a New Awareness: We No Longer Know Alone
From the very beginning, students confront an idea that overturns the traditional model of the “knower”: between us and the world there is always a filter. It is not simply a technological interface, but an epistemic actor with its own priorities, models, and values.
It is as if our relationship with knowledge were no longer a straight line, but a triangle: us, the world, and the algorithm deciding how the world appears to us.
This makes it essential to develop learners who can not only interpret, but interpret how interpretations are constructed. It is a deeper, more complex level of awareness, but also a necessary one if we want students to grow as autonomous citizens in an increasingly automated society.
From Theory to Practice: Immersion in the Algorithmic Society
The new TOK unfolds as a journey through the big questions of knowledge, explored through the lens of AI: What does it mean to know? By what methods? From which perspectives? With what responsibility?
Understanding What We See and What We Don’t
Students examine how AI expands our access to information while simultaneously narrowing it through personalised filters. Here TOK becomes almost autobiographical: students analyse their own feeds and uncover how their digital identity is shaped by content that mirrors them rather than broadening them.
Unexpected questions emerge:
Can I really say I chose what I know?
Can I call something “knowledge” if it was served to me simply because I am a good target?
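The narrowing that students observe in their own feeds can be made concrete with a toy simulation (purely illustrative, with made-up numbers; no real platform works this simply): a recommender that always shows the topic with the highest estimated engagement, and updates that estimate from clicks, ends up showing almost nothing else, even when the user likes every topic equally.

```python
import random

random.seed(0)

# Toy sketch: the user genuinely likes all five topics equally,
# but the recommender only ever shows its current top-scoring one.
topics = ["sport", "politics", "science", "music", "travel"]
score = {t: 1.0 for t in topics}          # estimated engagement per topic
true_interest = {t: 0.5 for t in topics}  # equal real interest everywhere

shown_counts = {t: 0 for t in topics}
for step in range(200):
    pick = max(topics, key=lambda t: score[t])  # exploit only, no exploration
    shown_counts[pick] += 1
    clicked = random.random() < true_interest[pick]
    # reinforce the estimate from observed behaviour
    score[pick] += 1.0 if clicked else -0.1

# One topic typically ends up dominating the feed, despite equal interests.
print(shown_counts)
```

The point of the sketch is the feedback loop, not the numbers: whichever topic gets a lucky early click is shown again, collects more clicks, and locks in — exactly the "good target" dynamic the students' question describes.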
Understanding How We Build Knowledge
Next comes methodology. Students compare three tools they use daily: the book, the search engine, and the language model. This is where they grasp the difference between a correct answer and a reliable process.
A book reveals its reasoning.
A search engine points to sources.
A language model often shows neither.
The most revealing moment? Realising that an AI model can give an answer that sounds plausible yet is epistemically empty: Where is the evidence? Why should I trust this? How do I verify it?
Understanding the Perspectives Through Which We View the World
Another module shifts the focus to perspective. AI is not a neutral “mind”: its data reflects what society chooses to measure and what it chooses not to measure. This opens discussions about bias, inequality, and excluded voices. Students discover that the real issue is not only “what AI does,” but the stories it has learned from.
Understanding Who Holds Responsibility
Finally, the journey becomes ethical. Students analyse real cases in which predictive systems are used in sensitive fields such as security, justice, and medicine. Here TOK becomes sharply contemporary: if an algorithm makes a decision, who is accountable? Can a machine know fairness, or only calculate it?
When Theory Meets the Real World: Students as Researchers
The most compelling aspect of the new TOK emerges in how students become active investigators. Their analyses are not just school assignments, but genuine critical explorations of our digital society.
Predictive Policing: When an Algorithm Creates Labels
One group examined predictive policing systems in the United States, discovering how some technologies have turned statistical patterns into lists of suspects. They focused on cases in which individuals, often young people or members of minority groups, were monitored not for what they had done, but for what a model considered “likely.”
Their insight was razor sharp: if past data is distorted, the predicted future is inevitably unjust. And above all: when a probability becomes a label, who pays the price of error?
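The students' insight — distorted past data yields an unjust predicted future — can be sketched as a toy feedback loop (hypothetical numbers, not any real policing system): two districts with the same underlying crime rate, but a biased history in which one was over-recorded. Allocating patrols in proportion to recorded crime keeps the distortion alive.

```python
import random

random.seed(1)

# Toy sketch: districts A and B have the SAME true crime rate,
# but historical records over-count A because it was patrolled more.
true_rate = {"A": 0.3, "B": 0.3}
recorded = {"A": 60, "B": 30}  # biased history

for year in range(20):
    total = recorded["A"] + recorded["B"]
    # "predictive" allocation: 100 patrols split by past recorded crime
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    for d in recorded:
        # crime is only recorded where patrols look for it
        detections = sum(random.random() < true_rate[d]
                         for _ in range(patrols[d]))
        recorded[d] += detections

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
# Despite identical true rates, A's share of recorded crime stays inflated,
# so the model keeps sending most patrols there.
print(round(share_A, 2))
```

Because each year's new records are proportional to where patrols were sent, the initial 2:1 distortion is reproduced indefinitely — the prediction does not correct the bias, it launders it into the future.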
AI in Courts: The Transparency That Isn’t There
Another group investigated risk assessment algorithms used in some U.S. courts to evaluate defendants’ likelihood of reoffending. They uncovered a paradox: an algorithm can influence a person’s sentence even though neither judge nor defendant knows the criteria behind the evaluation.
Students captured the heart of the issue: can a society consider legitimate a decision that cannot be explained? And further: can justice be delegated to a system incapable of understanding the complexity of human lives?
AI in Medicine: A Future That Heals, but Under What Conditions?
Other students focused on medical AI, especially diagnostics and telesurgery. They examined a real case in which a surgeon operated from 7,000 km away using an advanced robotic system. The technological potential was striking, but so were the risks: if something goes wrong, who is responsible? The remote doctor, the software engineers, or the hospital?
They also raised a crucial issue: such advanced technologies risk being accessible only to an elite, widening the gap between those who can be treated and those who cannot.
Technology Should Empower Humanity, Not Replace It
For us at H-FARM International School, the principle is clear: technology should amplify what makes us human, not overshadow it.
And the renewed TOK moves exactly in this direction. It helps students develop an awareness that goes beyond understanding technology; it helps them understand themselves in relation to technology: their biases, their methods, their values, their responsibility in constructing knowledge.
Because the future will not be shaped by those with access to the most powerful algorithms, but by those who can interpret them with wisdom.