Predictions for 2026 (or: What I'm Actually Thinking About)

The predictions for 2026 that no one asked for. Some of them are real, some are more just hope, most of them are firmly tongue in cheek. I also set out eight ideas I'm thinking about when it comes to tech and AI infrastructure.

A glass ball in the palm of a hand at sunset
Photo by Drew Beamer / Unsplash

It's December 2025, and honestly I'm tired. Not in a bad way. Though my running has gone to shit recently, I quit drinking and found no noticeable benefit, it's dark, and I just spent two days in a Microsoft environment trying to just do, well, anything. Ok, maybe in a bad way...

I'm thinking about a lot of things recently. The cheese toastie van, obviously. But I'm also considering where I want to put my energy next year and yes I'm thinking about the world and technology and AI. Because of course I am. Everyone is.

But before we get into the bigger ideas, here are my totally unasked-for predictions for 2026. Some of them are real, some are more just hope, most of them are firmly tongue in cheek.

The Predictions Nobody Needs

Prediction 1: Someone, probably a senior leader or a trustee, will ask you about quantum computing. I fucking guarantee it. And you should ignore them. Unless obviously you are working on something so specific that only quantum computing will help you, like climate change modelling.

Prediction 2: Community will matter more than audience. The era of trying to reach a million people is dying. In 2026, you'll realise that having a group chat of 15 people who actually give a shit about you is worth more than 10k LinkedIn followers who are just bots autofarming engagement.

Prediction 3: Pizza will still be my favourite food and I will perfect it (I won't).

Prediction 4: The best technical solution you implement will be something incredibly boring like "we standardised our file naming convention" and it will save more time than any AI tool.

Prediction 5: Authentic voice will begin to rise up again as everyone just gets fucking sick of AI slop. People want to feel something. They want to throw their computer out the window and make a Yaz record. They want to read things written by humans who are tired, who aren't sure, who don't sanitise. Not perfectly optimised, SEO-friendly, AI-generated content that says nothing in 1000 words.

Prediction 6: There will be some major AI discrimination/shitstorm in the UK related to public services. The organisation won't have kept proper logs of how decisions were made, because the model isn't open so they can't, and there was nothing in place to properly evaluate the decisions its agents made. This will trigger a wave of "oh shit, we need evaluation infrastructure" panic.

Prediction 7: Microsoft will release a new set of guidance and documentation that will not make any sense. In fact, hard as it is to believe, it will make less sense than before.

Prediction 8: People will lean more and more into early 2000s tech. iPods. Things that are offline but still digital. Because we're all exhausted by being constantly connected to everything, because they sound better and because maybe, just maybe we want quality rather than convenience.

Prediction 9: Someone will write a think piece about how AI will solve poverty/homelessness/inequality. The actual solution will remain "give people money and stable housing".

Prediction 10: Interest in data trusts or place-based cooperatives will return. At least two UK pilot projects will launch, probably in health or local government or something place-based...sorry, neighbourhood-based. They'll struggle with governance more than technology.

Prediction 11: You will wonder where Tom and his cheese toastie van is because you know it makes sense.

Prediction 12: Smaller, specialised models will start outperforming general-purpose LLMs for specific social sector use cases. The "foundation model" hype will start to crack. Local deployment will become viable.

Prediction 13: GDPR enforcement will finally catch up with AI. At least one organisation will get properly fined for AI-related data processing violations.

Prediction 14: Someone will build a proper open-source alternative to Microsoft Forms that doesn't make you hate life. Please. It'll get traction in the social sector...who am I kidding, it won't, but please do it anyway.

Prediction 15: The North East will take steps to establish itself as a Public Interest AI & Tech hub. The pieces are there. I can hope, can't I?

Prediction 16: I won't do a running race, but I will run more and explore more. I will try to set another FKT, which will probably be soundly beaten again.

What I Started Thinking About

Ok, enough of that. I actually did start thinking a bit more deeply about what's actually happening with AI.

Not the hype, or the next big model, and certainly not AGI. But the gaps. The infrastructure that's missing.

I've been thinking about what's missing in the AI ecosystem, the infrastructure gaps between "look what AI can do" and "AI that actually serves people fairly and safely." At the moment everything seems to be about building capability or adoption, without building the infrastructure that makes capability usable, safe, accountable, fair.

We're busy pushing adoption, deploying AI agents without oversight. We're training or using models we can't edit or correct. We're fragmenting services that need connection. We're trapping people's context in silos.

And we're deploying AI without infrastructure to know if it's actually helping. We're training models on data scraped or bought without consent. We're optimising for efficiency without planning for inevitable failure.

Infrastructure > Capability

Here's what I think is happening: we're making the same mistake we always make with technology.

We're focused on the capability: what can it do? But we're ignoring the infrastructure and the governance: how we adapt it, fail safely, learn from it, make it accountable.

I see this pattern everywhere:

  • Build the data warehouse, ignore the data governance
  • Deploy the new system, skip the people bit
  • Launch the platform, forget the community building
  • Adopt the AI, miss the evaluation infrastructure

I get it. Infrastructure is boring.

What I'm Actually Considering for 2026

So if I was going to build something in 2026 (and I might), it wouldn't be another AI model. It wouldn't be another platform. It wouldn't be another SaaS tool.

It would be infrastructure.

The boring, essential, democratically necessary infrastructure that the social sector needs if AI is going to serve people rather than extract from them.

Eight (at time of writing) specific gaps, to be precise:

The Manager: Runtime supervision for AI. Not "trust the model" but "build the oversight layer that makes trust possible."

Unlearning models: Model editing and unlearning. Because democracy runs on mutable systems, not immutable ones, and we need to be able to fix AI when we get things wrong.

The Citizen AI Network: Coordination infrastructure for fragmented services. Not centralised mega-systems, but protocols that let diverse organisations and agents work together.

Just In Time Interface: Component libraries for just-in-time software. Not platforms, but building blocks the sector can assemble into exactly what each organisation needs.

My Memory: Citizen-owned context that's portable across services. Your data, your control, your services.

The Witness: Evaluation infrastructure that tracks outcomes over time. Not "did the AI work?" but "did this help people?"

The Library: Community-governed training data commons. Because ethical AI needs ethical data, and ethical data needs consent, governance, and representation.

The Safety Net: Fail-safe infrastructure for when AI inevitably fails. Design for resilience, not efficiency.
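To make the first of those gaps, "The Manager", slightly more concrete: here's a minimal sketch (entirely hypothetical — the names, the risk scores, and the threshold are all invented for illustration) of what a runtime supervision layer might look like. The idea isn't clever, and that's the point: every output gets logged, and anything above a risk threshold is held for a human instead of being acted on automatically.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Decision:
    """A single model output plus the audit trail around it."""
    prompt: str
    output: str
    risk: float  # 0.0 (benign) to 1.0 (high stakes) — however you score it
    approved: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class Manager:
    """Oversight layer: every decision is logged, and risky ones are
    held for a human rather than acted on automatically."""

    def __init__(self, risk_threshold: float = 0.5):
        self.risk_threshold = risk_threshold
        self.log: list[Decision] = []

    def supervise(self, prompt: str, output: str, risk: float) -> Decision:
        decision = Decision(prompt, output, risk)
        # Auto-approve only below the threshold; everything else waits.
        decision.approved = risk < self.risk_threshold
        self.log.append(decision)  # nothing leaves without a record
        return decision


manager = Manager(risk_threshold=0.5)
ok = manager.supervise("summarise this leaflet", "Here's a summary...", risk=0.1)
held = manager.supervise("should we stop this benefit claim?", "Yes", risk=0.9)
print(ok.approved, held.approved)  # True False — the high-stakes call waits for a human
```

Twenty-odd lines, no model in sight. The interesting (hard) bit is everything this sketch hand-waves away: who sets the threshold, who reviews the held decisions, and who gets to read the log.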

What Happens Next

Over the next...umm few weeks(?) I'm going to explore each of these eight gaps in a bit more depth. (I'll update the links as I go)

Not because I have all the answers. I'm pretty sure I don't, but I think these are the right questions. The infrastructure questions that get ignored while everyone chases capability.

And maybe if enough people start asking these questions, someone, maybe even me, will build the answers.

One Last Prediction

Prediction 5001: In 2026, most AI conversations in the social sector will still be about capabilities and use cases. "Can AI do X?" "Should we use AI for Y?"

But a few conversations, maybe just a few, will be about infrastructure. About governance, evaluation, training data, fail-safes, coordination protocols, citizen data sovereignty.

Not the loudest. Not the most exciting. Not the ones that get all the funding.

And those will be the important conversations, the ones I want to have.