What universities are getting wrong about AI and equity

Artificial intelligence is moving faster than most universities can respond.

Much of the current conversation has focused on academic integrity, assessment design and the risks of students using generative tools. These are important concerns, but they are not the whole picture.

The greater risk is that AI is being adopted without a clear understanding of its implications for equity, governance and student trust.

The problem is not just academic integrity

When AI is introduced without a justice lens, it can:

  • reproduce and amplify bias

  • create new forms of inequity in access and assessment

  • undermine trust between students and institutions

  • expose gaps in governance and accountability

The challenge is not simply how to manage AI use, but how to ensure that its adoption aligns with institutional values and responsibilities.

A question of readiness

What we are seeing across the sector is not a lack of interest in AI, but a lack of clarity about readiness.

Institutions are asking:

  • Are our assessments still fair?

  • Who is accountable for AI decisions?

  • Are we protecting student trust?

  • Do staff feel confident using AI responsibly?

These are not isolated questions. They are interconnected, and they require a structured response.

A justice-centred approach

This is the context in which we developed the Justice-Readiness for Artificial Intelligence (JR-AI©) framework.

JR-AI© is designed to help higher education institutions understand whether they are genuinely ready to adopt AI in ways that are ethical, equitable and aligned with their values.

It moves beyond narrow conversations about tools and policy to focus on the wider institutional conditions that shape how AI is used in practice.

Six questions institutions need to answer

At its core, JR-AI© asks six leadership questions:

  • Are our curriculum and assessments safe and fair?

  • Do we have clear, ethical governance for AI?

  • Are we protecting equity and student trust?

  • Are our data practices transparent and accountable?

  • Are staff confident and supported?

  • Are we acting responsibly in our civic context?

Together, these form a leadership “spine” for AI readiness, linking pedagogy, governance, equity and institutional responsibility.

From complexity to clarity

One of the challenges institutions face is that AI can quickly become a large, diffuse issue.

JR-AI© is intentionally designed to be focused and practical.

Through a light-touch diagnostic process—combining document review, interviews and leadership discussion—it provides:

  • a clear readiness overview

  • the top risks and opportunities

  • short, actionable recommendations

The aim is not to create another large project, but to provide clarity and direction.

Acting early, acting responsibly

There is a window of opportunity.

Institutions that approach AI through a justice-centred lens now have the potential to:

  • avoid embedding inequity into new systems

  • build trust with students and staff

  • align innovation with institutional values

Those that do not act risk repeating familiar patterns, where new technologies reproduce existing inequalities at scale.

Where next?

The question is no longer whether AI will shape higher education.

It is how.

The institutions that respond most effectively will be those that treat AI not only as a technical challenge, but as a question of equity, governance and responsibility.

We are currently working with institutions to explore these questions in practice. If you would like to discuss how JR-AI© could support your institution, please get in touch at hello@liberateus.co.uk.
