A few weeks ago, I was debugging my demo Open Badge generator with Claude Code. I’d added support for the latest version of the standard, which is built on Verifiable Credentials, but the badges wouldn’t validate. I was frustrated. Then, entirely unprompted, Claude announced that my test suite was broken and that, actually, the badges did validate. It knew this because it had created a test badge, uploaded it to my Cred.Scot backpack and confirmed acceptance. All by itself. I hadn’t asked it to do any of that.
I stopped typing and stared at the screen. The AI had walked across a bridge I’d spent a decade building and it had done so as though there was nothing to it.

In 2016, I stood up at an event and pitched the idea that Open Badges should carry rich reflective evidence. That over time, this growing portfolio of evidence would become more valuable than the badges themselves. A heckler asked the obvious question: “Who’s got the time to read all of that?”
Fair point. My answer — that computers would, one day, and they’d be good at it — earned eye rolls from the front row.
After the session, one of the sceptics told me over coffee that what mattered was institutional trust. A badge from a good provider was enough. The evidence was beside the point, because examining it at scale was impossible.
They were right about the problem but wrong about the conclusion. Consider a hundred people holding the same badge from the same trusted institution. The badge tells you they all met the threshold. Only the evidence tells you what each of them actually did, how they reflected on it and how they applied it. The sceptic assumed this limitation was permanent. I assumed it was temporary and focused on making sure the evidence was worth reading when something finally could.
I should be honest: what I expected was clever keyword matching through natural language processing. The AI capabilities we have today blow what I envisioned then out of the water.
I spent the years that followed building the evidence layer, both through my day job delivering badges and through my personal project, the Cred.Scot backpack (a platform where people upload and manage their awarded badges with all the reflective evidence, endorsements and notes that come with them). In 2024, I started experimenting with AI analysis: when a badge is uploaded, the backpack scrapes the criteria and evidence URLs, feeds everything to an AI and returns guidance to the recipient. It recommends what additional evidence to add, who to ask for endorsement and how to keep building on what the badge represents. The badge becomes the start of a process, not the end.
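To make that flow concrete, here is a minimal sketch of the analysis step. The function and field names are my illustration for this article, not the backpack's actual code, and the real pipeline scrapes the criteria and evidence from their URLs first; here they are passed in as plain text.

```python
from dataclasses import dataclass, field


@dataclass
class Badge:
    name: str
    criteria: str                     # text scraped from the criteria URL
    evidence: list = field(default_factory=list)  # text from each evidence URL


def build_analysis_prompt(badge: Badge) -> str:
    """Assemble the context an AI model is given when a badge is uploaded.

    The model sees the criteria alongside every piece of reflective
    evidence, then is asked for forward-looking guidance.
    """
    evidence_block = "\n\n".join(
        f"Evidence item {i + 1}:\n{text}"
        for i, text in enumerate(badge.evidence)
    )
    return (
        f"Badge: {badge.name}\n\n"
        f"Criteria:\n{badge.criteria}\n\n"
        f"{evidence_block}\n\n"
        "Suggest additional evidence the recipient could add, who might "
        "endorse this badge, and how to keep building on what it represents."
    )
```

The point of the design is that the AI is never asked to judge the badge in isolation: it always gets the criteria and the evidence together, which is what lets its guidance be specific rather than generic.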
Then came Model Context Protocol (MCP), a standard that lets AI systems interact with external tools, created by Anthropic and now adopted by OpenAI and Google. I built an MCP server for the backpack and suddenly the AI wasn’t confined to the platform. It could be in any conversation, anywhere.
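At its core, an MCP server is a catalogue of tools the AI can discover and call. The real protocol runs over JSON-RPC with a handshake and schema discovery; this sketch strips all of that away to show just the idea, and the tool names and data shapes are hypothetical, not the backpack server's actual interface.

```python
import json

# Hypothetical in-memory store standing in for the backpack database.
BADGES = [
    {"id": "b1", "name": "Peer Mentoring", "evidence": ["Reflective log, March"]},
]


def list_badges(params: dict) -> list:
    """Tool: return every badge in the holder's backpack."""
    return BADGES


def add_note(params: dict) -> dict:
    """Tool: attach a free-text note to a badge."""
    badge = next(b for b in BADGES if b["id"] == params["badge_id"])
    badge.setdefault("notes", []).append(params["note"])
    return badge


# The registry of tools the server advertises to connected AI clients.
TOOLS = {"list_badges": list_badges, "add_note": add_note}


def handle_call(request_json: str) -> str:
    """Dispatch a tool call of the form {"tool": ..., "params": {...}}."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](req.get("params", {}))
    return json.dumps(result)
```

Once tools like these are exposed, any MCP-capable assistant can read from and write to the backpack mid-conversation, which is exactly what lets it leave the platform behind.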
That matters, because here’s the quiet irony of the AI age: everyone has access to a brilliantly articulate assistant, but few are giving it anything real to work with. People paste a CV into a chatbot and ask for help with an application. The AI produces something polished and almost entirely hollow. Not because it’s bad at its job, but because it has nothing verified to draw from.
Open Badges stored in an MCP-capable backpack change this. When an AI connects to a backpack, it stops guessing. It reads the criteria, the reflective evidence, the endorsements and draws inferences. When I asked Claude to find my badges related to critical thinking, none mentioned those words. It found them anyway, inferring the skill from what the badges actually described.
And it’s not just one way. The AI can interact with the backpack and add to it. Through a voice conversation, attach a note to a badge about how you applied the learning in practice. Have it build a collection for an upcoming supervision, with a narrative for each badge explaining its relevance. Need an endorsement? It pulls the link, drafts the email and sends it through your connected account. The backpack becomes something you rarely open directly; your AI is already in there.
Where does this lead? Recruiters swamped with hollow AI applications will eventually want better data, not better sifting. People will connect their backpacks to recruitment systems the way they once sent speculative CVs. AI agents will assess candidates from verified evidence. Individuals, freed from endlessly rewording applications, will spend their time doing the work: practising, reflecting, gathering endorsements. Building something real.
It’s possible that someone will be sought out, their backpack reviewed and an interview invitation issued. All while they sleep. I know how that sounds. I’ve seen the eye rolls before.
But the heckler’s question from 2016 has been answered. Who’s got the time to read all of that evidence? Not a person. An AI. One that treats the task as routine, reaches for the tools it needs without being asked and doesn’t even pause to admire the bridge it just walked across.
Try it: Create a free account at bp.cred.scot, upload a badge (or make a demo at badges.dgty.uk) and connect your AI via MCP at bp.cred.scot/mcp. Then ask it about your badges.
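The connection step looks something like this, though the exact mechanics depend on your AI client. The `mcpServers` shape below follows the convention used in Claude's configuration files, and the server name `credscot` is just a label I've chosen:

```json
{
  "mcpServers": {
    "credscot": {
      "type": "http",
      "url": "https://bp.cred.scot/mcp"
    }
  }
}
```

Other MCP-capable clients offer an equivalent "add a remote server by URL" option; check your client's documentation for the specifics.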
Who’s got time to read all that? © 2026 by Rob Stewart is licensed under Creative Commons Attribution 4.0 International