The Single ‘Most Useful Thing’ Cultural Organisations Should Learn About AI
Putting AI in its place with Jocelyn Burnham
AI - eh?
What’s it all about? On the one hand, we hear that AI is a game-changer that's going to take away our jobs.
On the other hand, every time I've used ChatGPT the results have been quite pants.
So where's the disconnect coming from?
Really pleased to have AI expert and all-round superwoman Jo Burnham as guest poster this week to speak to this. I first came across Jo when she was presenting at last year's AMA conference on AI.
Why should we be bothering with AI at all? (I hear some of you ask)
When I talked to Jo about writing this piece, she explained that the moment we're at with AI now is like the early days of the internet. Right now, it's all to play for. Today, internet discoverability is largely mediated by closed, for-profit proprietary systems. Facebook. Google. App Stores. But in the early days, it was a much more open playing field. Creatives and technologists can ride the adoption curve of AI now and, like the early days of the internet, become the creators who matter.
Over to Jo...
The most powerful AI tool for your organisation isn't what you're buying — it's what you already have.
Over the past year, I’ve had the pleasure of working with organisations like Arts Council England, Shakespeare’s Globe, Tate and Kew to develop AI training resources for their staff, and to learn what front-line cultural colleagues are actually using AI for. It’s been eye-opening.
While I’ve learned that developing a productive mindset for self-learning, innovation and collaboration is an essential practice for teams investing in their AI futures, there are still some fundamental AI misconceptions within the sector.
The key to unlocking AI’s real abilities
If there’s a single ‘thing’ most people in the culture sector would benefit from understanding about AI, it’s this: if you’re not using your own data with an AI, you aren’t going to see its true abilities. These abilities include drawing incredible insights from huge swathes of data, sharpening the accessibility of your published content and becoming a ‘critical friend’ to bounce ideas off during various stages of planning and development.
Before we go further, here’s a fundamental reminder: never upload sensitive or confidential data to an online AI without sign-off from your organisation and assurance that you are operating in compliance with GDPR. Unless confirmed otherwise, assume nothing you upload to these tools is kept private.
Large Language Models like ChatGPT, Claude and Gemini have mostly learned about the world from what information they’ve been able to download from the internet (with perhaps a few extra bits thrown in from books, newspapers and TV archives). That means, when taken ‘fresh out of the box’, they can only make broad guesses based on that data.
And here’s the thing: that data is pretty rubbish. There’s a lot of it, sure – and that allows for some clever emergent abilities when it’s all combined and processed – but the resulting model still doesn’t know much about the context of your sector (let alone your organisation, and let alone your organisation’s internal goals and strategy).
So it probably shouldn’t be surprising that when you ask these models to do something potentially useful for your team – like suggesting blog post titles, drafting short copy or providing feedback on reports – their suggestions often feel bland, repetitive and mediocre. That isn’t because the models themselves are simply ‘bad’; it’s because this isn’t how they’re built to work effectively.
Suggestion: Reframe how you work with these tools
Think of it this way: you wouldn’t expect somebody you’ve just hired to speak articulately about your organisation’s projects and strategy until you’ve given them a few coffees and a few days to catch up on your year-end reports and internal onboarding documents. The same is true of an AI: to be meaningfully effective at any task, an AI model needs to be primed with your data. It needs to learn more about you than its training data has given it, because that training data was incredibly broad and shallow.
But what about that warning that we shouldn’t upload sensitive data to an AI? It stands, absolutely, but I’d also invite you to reflect on the data and documents your organisation holds which are, subject to approval and GDPR compliance, reasonably harmless to share with these systems.
Often these are documents you already have online; I mentioned year-end reports earlier, and these are a brilliant start. They’re usually detailed, well written and summarise the intentions and activities of your organisation clearly, in a form which has already had buy-in from stakeholders. If nothing else, experiment with uploading your most recent year-end report to your AI of choice, ask it to read the document to better understand your organisation and its aims, and then compare its output with what it produced before. You can expect to see an improvement right away.
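If you have a technically minded colleague, this ‘prime it first’ step can also be scripted through an API rather than a chat window. Below is a minimal sketch, assuming the OpenAI Python client and a plain-text export of the report; the file name and model name are placeholders, and the same pattern works with other providers:

```python
# A minimal sketch: prime a model with a year-end report before asking for help.
# Assumptions: `pip install openai`, an OPENAI_API_KEY in your environment, and
# a plain-text export of the report. File name and model name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = Path("year_end_report_2023.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model will do
    messages=[
        # First, hand the model your organisation's own data as context...
        {
            "role": "system",
            "content": (
                "You are an assistant for a cultural organisation. Use the "
                "following year-end report as context:\n\n" + report
            ),
        },
        # ...then ask for something it would otherwise answer generically.
        {
            "role": "user",
            "content": (
                "Suggest five blog post titles that reflect our current "
                "priorities and programmes."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```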
The types of data you use will be as unique as your organisation
If you want to go further, look around and start conversations about what else could be used. Some organisations are creating ‘data sandboxes’: large collections of documents and spreadsheets (fully signed off for this purpose) which are accessible to employees who’d like to experiment with uploading them to an AI, both to prime it and to learn about its capabilities. Being proactive about this also reframes the issue; instead of only telling employees what they shouldn’t do with AI, you can offer them resources so they can start experimenting right away.
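To make that concrete, here’s one possible shape for a sandbox: a folder of signed-off, non-sensitive plain-text documents gathered into a single labelled context block which staff can paste into a chat tool or send via an API. The folder name, file format and size cap are all illustrative assumptions:

```python
# A sketch of a simple 'data sandbox': a folder of documents that have been
# signed off for AI experimentation, gathered into one labelled context block.
# The folder name, file format and size cap are illustrative assumptions.
from pathlib import Path

SANDBOX = Path("ai_sandbox")  # only approved, non-sensitive files live here


def build_context(max_chars: int = 50_000) -> str:
    """Concatenate approved documents, labelled by filename, up to a size cap."""
    parts, total = [], 0
    for doc in sorted(SANDBOX.glob("*.txt")):
        text = doc.read_text(encoding="utf-8")
        if total + len(text) > max_chars:
            break  # stay within a rough context budget
        parts.append(f"--- {doc.name} ---\n{text}")
        total += len(text)
    return "\n\n".join(parts)


context = build_context()
# `context` can now be pasted into a chat window or sent as the system
# message in an API call, as in the earlier sketch.
```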
Further down the line, perhaps as your organisation crystallises its policies and gains access to a fully secure AI (such as one running offline on your own server), you will likely find ways to innovate in this space that are particular to your working style. The types of data which prove effective will differ for each organisation, because every organisation produces different types of data, and some will turn out to be much more useful than others.
For instance, an organisation with long-serving employees might benefit from AI-transcribed interviews to capture institutional knowledge. The key is to identify your unique, valuable data sources – and these will always differ from organisation to organisation.
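As one illustration of that ‘offline on your own server’ idea: locally hosted model runners such as Ollama expose a small HTTP API on your own machine, so documents never leave your network. A sketch, assuming Ollama is installed with a model already pulled (the model name and file name are again placeholders):

```python
# A sketch of querying a locally hosted model so data never leaves your network.
# Assumes Ollama (https://ollama.com) is running locally and a model has been
# pulled with `ollama pull llama3`; model name and file name are placeholders.
from pathlib import Path

import requests

report = Path("year_end_report_2023.txt").read_text(encoding="utf-8")

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",
        "prompt": (
            "Read this year-end report, then summarise our three biggest "
            "strategic priorities:\n\n" + report
        ),
        "stream": False,  # return one JSON object rather than a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```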
Be specific to your needs and goals
By using your unique data, you can transform an AI from a generic tool into an increasingly capable resource specific to your needs and goals.
Being safe and responsible about how we use data with an AI isn’t the same as using no data at all. Consider what your organisation is comfortable using, and then experiment to see what sort of results you get. If nothing else, this process can be quite a lot of fun, and will certainly get you into an inventive headspace which will only continue to serve you.
As AI technologies continue to evolve, staying informed about their capabilities, limitations and potential impacts is also crucial. This ongoing learning is a key requirement for our sector to maintain tangible digital literacy, and appreciating the full capabilities of these systems means actually putting them to use.
For now, I look forward to seeing how AI continues to find new applications in the cultural sector over the coming months and years. It’s going to be quite a ride, but based on what I’ve seen, we have every reason to be optimistic about our ability to adapt and stay innovative.
Jocelyn Burnham (she/her) is a speaker and trainer on artificial intelligence and its implications for the culture and heritage sector. Her website is www.aiforculture.com and you can follow her on LinkedIn to stay updated on news, experiments and case studies related to AI within the sector.
That’s it from me this week.
Got a story you’d like to share with the Cultural Content community? Or a project you’d like to talk through —> georgina@onefurther.com