The next decade will be shaped by intelligent systems and the people who choose how they are built and governed.

BASELINE is a long-form conversation series with founders, academics, policymakers, and operators working at the frontier of AI and digital systems.

These conversations surface judgement, intent, and lived experience - helping you orient yourself for what comes next.

**NEW** For leadership teams, that same work also informs a fixed-scope BASELINE Execution Sprint, designed to turn organisational insight into clear, executable decisions.

Watch. Listen. Engage with intent.

info@baselinepodcast.com

The AI Wild West is Dead: The Multi-Million Dollar Blindspot No One Sees - Martin Gibson

From 2026, the EU AI Act changes what accountability means for AI systems. For high-risk AI use cases, organisations will be expected to demonstrate traceability, record-keeping, and oversight. In practice, this means being able to show what an AI system did, why it did it, and who was responsible. Logs alone are no longer enough. In this episode of BASELINE, Martin Gibson explains why most organisations will not fail AI compliance because they lack data, but because they cannot reconstruct a defensible story across systems and time. We explore: 

• Why logs are not the same as evidence

• What the EU AI Act actually expects in practice

• Record-keeping, retention, and traceability for AI systems

• Why AI compliance is a systems and infrastructure problem

• How organisations get trapped by fragmentation and vendor lock-in

• What it means to build AI systems you can defend under audit or investigation

Subscribe to BASELINE for long-form conversations exploring AI, creativity, and human intelligence.

What Makes Stories Human in the Age of AI - Archie Brooksbank

In this episode of BASELINE, filmmaker Archie Brooksbank reflects on storytelling, music, and the human connections that shape how we feel, remember, and act. Drawing on his work across film, sport, and culture, including projects involving figures such as Lionel Messi, Jamie Redknapp, and Lewis Hamilton, as well as collaborations with teams and brands including Red Bull Racing, Aston Martin, and McLaren, Archie explores why interaction, trust, and presence matter more than perfection. 

As AI and automation accelerate across the creative industries, this conversation asks a deeper question: what happens if we replace human experience with efficiency? This is not a discussion about tools or technology trends. It is a reflection on why storytelling has always been a shared human act, and why some things cannot be automated. 

Subscribe to BASELINE for long-form conversations exploring AI, creativity, and human intelligence.

Why AI Decisions Are Slipping Out of Leadership Control - Jonny Williams

Artificial intelligence is already shaping organisations, governments, and public services. Many of the most important AI decisions are being made by default rather than by design. In this episode of BASELINE, Ian speaks with Jonny Williams, Chief Digital Adviser to the UK Public Sector at Red Hat, about why AI feels confusing, why value feels elusive, and why leadership matters more than models. 

This conversation explores the real architecture behind AI systems, how technology stacks align with national interests, and why transparency and openness are now prerequisites for trust. Jonny explains why AI is not primarily a technical challenge, but a leadership and operating model challenge, closer to the role of a COO than a CTO. 

Topics include AI sovereignty, Wardley mapping, open versus closed models, national infrastructure, mechanised government, digital identity, and why organisations risk losing agency if they allow AI decisions to drift to technologists alone.

Artificial Intelligence: Utopia or dystopia?

How Your Values and Identity Hold Up in an AI-Influenced World - Professor Kevin Money

AI is influencing how we work, how we communicate and how we make decisions. But the deeper change is personal. It affects our values, our identity and the stories we tell ourselves about who we are. In this episode, Professor Kevin Money explores how identity is formed, why it feels fragile during rapid change and what helps people stay grounded in an AI-influenced world. 

Kevin is a behavioural scientist and Co-Director of the John Madejski Centre for Reputation at Henley Business School. His research focuses on identity, reputation, motivation, trust and responsible leadership. He advises the UK Government, global businesses and nonprofit organisations on how people think, feel and act. 

This conversation looks at the psychology behind identity, emotional labour, belonging and the impact of trauma. It also considers how AI amplifies uncertainty and what we can do to stay anchored to our values while the world around us changes.

Why Low Earth Orbit Is Becoming Dangerous - Bianca Cefalo

Low Earth Orbit is getting crowded, chaotic and harder to manage. In this episode, Space DOTS founder Bianca Cefalo explains why satellites are failing, why 90 percent of orbital anomalies have no known cause, and how space weather and hidden threats are reshaping the environment above Earth. 

We explore the reality of Kessler syndrome, rapid material degradation, non-kinetic attacks and why current data is no longer enough. This is the intelligence layer we need for the next decade of space operations.

https://www.space-dots.com

But Risk Has Never Stopped Progress - Dr Magda Ramada

In The Age of Imperfect Machines, global InsurTech leader Dr Magda Ramada Sarasola (WTW) joins BASELINE to explore one of the most overlooked questions in the AI revolution: What happens when AI fails, and how do we insure against it? From the fear of the first elevators to today’s non-deterministic AI systems, this conversation uncovers why innovation doesn’t come from eliminating risk, but from learning how to live with it. 

Magda explains how the insurance industry has always enabled progress by absorbing uncertainty so society can keep moving forward, and how that same principle now underpins AI assurance, accountability, and agentic systems.

Ex-Google Engineer: Why Digital ID Will Decide AI’s Future - Jacoby Thwaites

Digital identity is no longer just about logging in; it’s about who and what we trust in a world where AI agents are multiplying faster than humans. In this episode of BASELINE, I speak with Jacoby Thwaites, ex-Google technologist and now CTO of Magic ID. We dive into the UK’s plans for mandatory digital ID by 2029, Switzerland’s national e-ID referendum, and why identity has become the #1 attack vector in cyber security. 

Jacoby explains why humans today are “ghosts in the digital world,” why bots already outnumber humans online, and how trusted communities could give people and AI equal standing in digital society.

The Algorithmic Leap | AI, Innovation, and the Rules of the Game - Dr John Fletcher

In BASELINE052, Ian Smith sits down with Dr John Fletcher, an Oxford-, Imperial-, and Cambridge-trained physicist and Chief Scientist at The Innovation Game. John is not just a world-class academic - he’s a rare innovator who bridges the gap between research and real-world impact.

At The Innovation Game, he is building technology that creates genuine competitive advantage, not just another layer of AI hype. His work shows how algorithms can solve critical problems in logistics, medicine, and science, demonstrating that the future of AI is about application, not buzzwords.

https://www.tig.foundation
