16 December 2025

Responsible AI: Thinking better, designing smarter, leading fairly

What happens when AI moves faster than our thinking?

A recent event at Eden Smith brought together researchers, designers and practitioners to explore a more human way forward – one where AI strengthens cognition, improves experiences and reflects the values we actually want to scale.

Across three sessions, one theme kept surfacing: AI works best when people stay firmly in the loop.

AI should sharpen thinking, not replace it

The day opened with a challenge to one of the most common assumptions about generative AI: that efficiency always equals progress.

Drawing on research into cognition and memory, the first session with speaker Roseanne explored how over-reliance on large language models can weaken recall, critical thinking and creativity. The insight was stark – when AI does the thinking for us, the thinking often disappears altogether.

But this wasn’t a call to step back from AI. Quite the opposite. Used intentionally, AI can become a powerful sparring partner – helping people test ideas, surface blind spots and strengthen judgment.

The takeaway was simple and practical: offload low-value, repetitive tasks to AI, and use it to think alongside you on the work that really matters.

Human-centred AI is designed around outcomes, not automation

Few technologies divide opinion quite like chatbots, and for good reason.

Through real-world examples, the second session with speaker Sarah Wyer, Corndel Coach, unpacked why so many automated customer experiences fall short. The issue isn’t the technology itself, but what it’s optimised for. When success is measured by deflection and containment, friction doesn’t disappear, it’s just pushed onto people.

A more effective approach starts with a different question:
“What does success feel like for the human on the other side?”

From sentiment detection and escalation routes to systems thinking and context awareness, the session highlighted how human-centred design leads to better outcomes for customers and organisations. Automation should remove complexity from the whole system, not quietly increase it.

Responsible AI starts with the choices we make

The final session brought the focus firmly onto responsibility, bias and fairness, and the reminder that AI is never neutral.

From gender bias in language models to real-world consequences in hiring, credit and risk assessment, AI reflects human decisions at every stage. The data we choose. The teams we build. The questions we fail to ask.

Rather than abstract principles, the session offered a practical framework organisations can apply immediately – from building diverse teams and testing outputs, to creating cultures where AI decisions can be challenged openly.

What connected every conversation

Across thinking, design and ethics, the same ideas kept reappearing:

  • AI is powerful, but context is everything
  • Human oversight isn’t a limitation, it’s a strength
  • Success should be measured in human outcomes, not system efficiency
  • Used well, AI doesn’t shrink capability, it expands it

At Corndel, that balance between human intelligence and technical capability sits at the heart of everything we do. Because the future of AI won’t be shaped by tools alone, but by the people confident enough to question, guide and use them well.