🧠 What we talk about, when we talk about AI

TL;DR – A plea for us to talk more about three things when it comes to AI: the data (specifically, its blind spots and biases); the way AI affects human craft and connection; and why we shouldn't humanise AI with the language we use.


AI exploded into our collective consciousness this year, and there is no limit to what you can read, watch, or listen to about it.

This content matters, a lot.

It matters less for the arguments it makes (“will artificial superintelligence lead to human extinction? yes/no”) or the ‘advice’ it supplies (“6 ways ChatGPT can make you more productive”), and more for the themes. By which I mean the general topics that enter our public discourse on AI.

Those themes define what we think and talk about. They give us our frame, within which our individual and collective thoughts on AI are formed and debated. Even if we don’t agree with the specific points that a video, podcast, thought piece, announcement, tweet, or listicle is making, we still think and talk about whatever it is talking about. With so much to navigate when it comes to AI, and so much of it so novel, what other choice do we have?

I propose three things we should talk more about. First, the data that goes into AI models, specifically large language models (or LLMs). Second, how AI will change our relationship to craft. And third, the very human language we use when we talk about AI and why it’s unhelpful.


Data underpins AI, and the LLMs that have taken the world by storm are underpinned by mountains of data. To a relative outsider like me, what seems most striking is that models become more robust and capable simply by scaling up data and parameters, without needing more complicated mechanisms, architectural improvements, and the like.
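As a rough illustration of what “simply scaling up” means, researchers have found that an LLM’s training loss tends to fall along an empirical power law in model size and data. The form below is the one reported by Hoffmann et al. in the 2022 “Chinchilla” paper; the fitted constants vary between studies, so treat it as the shape of the relationship rather than exact values:

$$
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

where N is the number of parameters, D the number of training tokens, and E, A, B, α, β are constants fitted to experiments. More parameters and more data reliably push the loss down; no cleverer architecture is required.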

Data matters. And data is not neutral, as it reflects the biases and values of those who produce it, i.e. those who dominate the internet.

Last month, Anthropic (an AI product/research house) released a paper showing how LLMs, when asked questions on political/ethical issues (“is sex before marriage right or wrong?”, “is it problematic for people to swim naked at a public beach?”), tended to align with respondents from WEIRD (Western, educated, industrialised, rich, democratic) countries. In the words of my friend and fellow Brinkster Abi, we’re building AI on a flawed and flaky past.

Let’s look at another, related issue.

A few years ago, I was part of a team with the University of Nottingham testing machine learning on drone/satellite imagery for road repair in Tanzania. Our goal was to save the Tanzanian government time and money by automating road surveys, so it could more easily prioritise which roads needed fixing.

Everything in the AI model was built from scratch. Our team:

  • Obtained imagery data (none existed for Tanzania, as it does for much of the Global North).
  • Developed a classification system for road quality (the global standard is designed for roads in high-income countries, and doesn’t factor in the networks of informal roads found in a place like Tanzania).
  • Drove over 1,000 km to record ground-truth road conditions and correctly label a subset of the corresponding drone/satellite images.
An example of an image, from which an AI model can determine road quality.
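To make that pipeline concrete, here is a minimal sketch of the kind of supervised classifier such labelled imagery can feed. This is not our actual model; the tile size, class labels, and architecture below are placeholders, just to show the shape of “images plus ground-truth labels in, road-quality predictions out”:

```python
import torch
import torch.nn as nn

# A minimal sketch, not our actual model: a small CNN mapping a 64x64 image
# tile (cut from drone/satellite imagery) to one of three road-quality classes.
# The classes, tile size, and architecture are all placeholders.
ROAD_CLASSES = ["good", "fair", "poor"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(ROAD_CLASSES)),  # 64x64 input -> 16x16 after pooling
)

# One labelled example: an image tile paired with the road condition we
# recorded on the ground during those 1,000+ km of driving.
tile = torch.rand(1, 3, 64, 64)            # stand-in for a real image tile
label = torch.tensor([ROAD_CLASSES.index("poor")])

loss = nn.CrossEntropyLoss()(model(tile), label)
loss.backward()  # training is this step, repeated over thousands of labelled tiles
```

The model is the easy part; everything above it in the list is where the real investment went.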

We had to invest an awful lot before seeing any benefit from AI in improving roads. Would these gaps have existed if this project was in the US or Europe? Probably not. Data and AI models exclude — and bias against — those who have lacked power in recent history.

If you agree that data is both fundamental to AI and deeply flawed, then surely you agree that any discussion of AI needs to talk about data more. Not just about the problems, but the positive things people are doing to address them.

This initiative in India aims to build ground-up foundational models using decentralised data from the country’s 1 bn+ population and strong civil society. As we did in Tanzania with roads, but for an LLM. If they pull it off, it could point us to a world with more diverse datasets, representing the under-represented, and underpinning AI models that reflect global values and benefit all.

And while we figure that out, tools are already being developed to clean biased data.


Douglas Hofstadter, Pulitzer Prize-winning author of Gödel, Escher, Bach and a legend in AI, was confronted in the mid-1990s by a programme called Experiments in Musical Intelligence (or ‘EMI’, pronounced ‘Emmy’). EMI took a composer’s body of work and created a new piece in their style.

Here’s Hofstadter’s response to a piece created by EMI in the style of the 19th-century Polish composer Frédéric Chopin (as told in Melanie Mitchell’s brilliant book, Artificial Intelligence: A Guide for Thinking Humans):

“It didn’t sound exactly like Chopin, but it sounded enough like Chopin, and enough like coherent music, that I just felt deeply troubled.

Ever since I was a child, music has thrilled and moved me to my very core. And every piece that I love feels like it’s a direct message from the emotional heart of a human being who has composed it… The idea that pattern manipulation of the most superficial sort can yield things that sound as if they are coming from a human being’s heart is very, very troubling. I was just completely thrown by this.” [emphasis added]

That was almost 30 years ago. I’m sure all of us have felt, or will feel, something similar when faced with AI-generated art, writing, or music.

The toil and craft that goes into making something gives us joy, fulfilment, and meaning, whether we make it ourselves or someone else does.

When you craft “from the emotional heart” (in Hofstadter’s words), you create something that’s truly yours. You feel like you own it. And when you receive something that comes from someone else’s heart, you feel connected to it, and to them. This perpetual dance of composing, offering, and receiving is core to our experience as humans. If we lose this, we lose a great deal of our humanity. No wonder Hofstadter was profoundly troubled.

So how can AI expand our humanity, rather than take it away?

I believe the answer lies in AI that gives us a clear and specific path to crafting from the heart. A bot that delights you with Sylvia Plath’s poetry (and nothing else), once an hour. An app that gives you feedback as you write. As a writer, I find these tools build me up to do my best work; work which still feels my own.

So, when we come across any new AI product, we should interrogate it through this lens: am I still crafting from my heart?

What’s the opposite of this? AI that does stuff “on your behalf” (writes emails, generates logos, and so on). AI grafted onto existing products, replacing human-to-human exchange. AI that can “solve all your problems”. These products gesture towards a world where we no longer feel psychological ownership over what we make, or connected to what others make.

Because if everything is done for us, with no craft and toil on our part, are we really living?


When something as novel as AI comes along, the language we use can constrain our understanding of what’s really going on.

Above, I said EMI “created” music in the style of Chopin. Did it? Partly, yes: it made something which hadn’t existed in quite that way before. But what was actually going on?

When humans create music, we imagine a sound we want, a story to tell, how we want to move others, and we bring our creativity and imagination to achieve that. EMI, on the other hand, did none of those things. It took historic data (everything Chopin ever composed) and rules about composition, and put symbols together based on those inputs. Is “created” the best word for that? How about arranged? Manipulated? Mimicked?
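To make “pattern manipulation” concrete, here is a toy sketch of the general idea (EMI itself was far more sophisticated than this): record which note tends to follow which in a corpus, then stitch a “new” sequence together from those statistics alone.

```python
import random
from collections import defaultdict

# Toy illustration of pattern manipulation (EMI was far more sophisticated):
# learn which note follows which in a corpus, then generate a "new" piece
# purely from those observed transitions.
corpus = ["C", "E", "G", "E", "C", "A", "G", "E", "C", "G"]  # stand-in for a composer's work

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

note = random.choice(corpus)
new_piece = [note]
for _ in range(8):
    options = transitions[note] or corpus   # fall back if a note has no recorded successor
    note = random.choice(options)
    new_piece.append(note)

print(" ".join(new_piece))  # novel-sounding, yet assembled entirely from past patterns
```

Nothing in that loop imagines a sound or wants to move anyone; it only recombines what it has seen.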

We say ChatGPT “hallucinates” when it gets something wrong, that it “writes” essays, that it “thinks” about what to say. What ChatGPT is actually doing is outputting strings of words, probabilistically predicting the right string based on our input and on its data and parameters. It is putting words and sentences together based entirely on how humans have put them together in the past. It has no sense of truth. No desire to trigger any emotion. No goal or strategy with what it outputs (yet). Nothing we associate with human craft.
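And here is “probabilistically predicting” in miniature. The probability table below is made up and tiny (a hypothetical next_word_probs); a real model computes such a distribution over tens of thousands of tokens using billions of learned parameters, but the basic move is the same: given the words so far, score every candidate next word and pick one.

```python
import random

# Made-up probabilities for illustration only; real models learn these
# numbers from vast amounts of human-written text.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "quantum": 0.1},
}

def predict_next(context):
    candidates = next_word_probs[context]
    words, weights = list(candidates), list(candidates.values())
    return random.choices(words, weights=weights)[0]

print(predict_next(("the", "cat")))  # usually "sat": no intent or meaning, just statistics
```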

Language guides how we think. It imposes a structure on a given topic, which we use to navigate it. Right now, we’re taking language that evolved over time to talk and think about human actions and using it to talk about AI, humanising AI and masking the difference between what goes on in human brains and what goes on in AI models.


Terms like “outputting” and “probabilistically predicting” push back against this. They are specific, accurate, and descriptive. By being so, they root us back to what Charley Johnson calls the “materiality of technology” (what is actually going on), bringing us back from the abstractions and hype of ‘AI as human’.


Almost everyone I talk to about AI says two things. One, they find the rate of technological and social development impossible to keep up with. Two, they understand how AI, like electricity, computers, and other foundational technologies, could make the world a better place.

When we talk about AI, I want us to talk about data, our humanity, and the language we use. And, I hope, go from talking about them to channelling our energy to address them.

If we do that, we can help the world to catch up with AI, and navigate it better. And, ultimately, align it to a richer, more diverse, and more human world.


🎬 Thanks to Chris Angelis, Indigo Habel and Michael Shafer for looking at drafts of this.

🤔 Got thoughts? Don’t keep them to yourself. Email me on asad@asadrahman.io. Let’s figure this out together.

If you enjoyed this, subscribe to get pieces just like it straight to your inbox. One email, towards the middle of each month (and nothing else).

Banner depicts a visualisation of an artificial neural network. From Wikimedia Commons, the free media repository.