Like talking to a forest: Tamás Dávid-Barrett on conversations with AI, its impacts, and why using it matters

With AI’s promise to elevate efficiency and access to knowledge comes the prospect of increased inequality for those who cannot access it. Against the backdrop of the many challenges AI poses, it is important to examine our truth bubbles, identities, and modes of organizing, behavioral scientist Tamás Dávid-Barrett argues.

Tamás Dávid-Barrett, who trained as both an evolutionary anthropologist and a global macroeconomist, analyzes in his work the demographic, behavioral, and technological structures that shape people’s collective lives. He is also the founder of Future Human Systems Research, a foresight engine that blends AI, behavioral science, and systems theory to explore the next chapter in human evolution. He teaches at Oxford’s Trinity College and holds fellowships at Helsinki’s Population Research Institute, the Royal Anthropological Institute, and the Royal Society of Arts.

He told Romania Insider more about the evolving challenge of AI, the importance of engaging with it, and why assigning it human traits can be problematic.

You have many research interests. How did you get to artificial intelligence, and how does it fit into your previous work?

I actually have only one research interest, but I don’t care about disciplines. Every time somebody wants to put me into a discipline, they fail. I will go and look at any theoretical framing that helps. I am interested in human behavior. The tagline is ‘two million years back and a thousand years forward.’ I’m interested in how our species’ social behavior evolved, how we form societies, and how we can face the existential challenges that our species faces.

Currently, I count one existential challenge as more or less managed, and five others. Something very new is happening now with the rise of artificial intelligence, something that has never happened before, and it changes the beat on all of those. That is how I got here.

What are the other challenges?

[…] The first time, funnily enough, was after the war: the Cuban Missile Crisis, when there were enough nuclear weapons around the planet to kill everybody. That was the first one, and it is more or less being managed.

Now we have a major problem with our demography. Oddly enough, we have too many people and too few at the same time – obviously because of the age structure, but there is also a lot of uncertainty; childlessness is going through the roof.

Climate volatility is getting out of hand. Now we are cooking the planet. […] At the same time, on a hundred- to 1,000-year scale, it’s going to get really cold, because the Milankovitch cycles will make the planet very cold.

All of this is happening as volatility goes up, which basically calls into question the foundations of current civilizations. In Los Angeles, you already can’t buy house insurance – something we take for granted here. Climate is the easy question, because with carbon, we know what it is, we know where it is, we know where it comes from, and we know where it goes. We just can’t get ourselves together to actually sort this out.

A much bigger problem is the collapsing biosphere. Not only are we in the middle of the sixth wave of extinction, but we are also in the fastest extinction wave. And we don’t even understand the basics about it. We don’t even understand what living beings there are. We don’t have a full list.

If you regard all of humanity as one society and calculate the level of inequality in that human group, it is so high that every society in the past that reached that level of inequality went through a very bloody revolution. You can already see these bubbling signs of a global revolution. It hasn’t taken off yet, but they’re there.

Our shared knowledge, our inherited behavior of telling stories to create a shared knowledge, has been captured by social media and algorithms. So, we end up in these little truth bubbles that increasingly make it impossible to have a discussion. Just think about what happened in the past two years around Gaza. If you follow the narrative system among Israeli Jews and the narrative system in Gaza, in October 2023 these two still had something to do with each other. Now, the two systems are so far from each other that it’s impossible to have the slightest level of conversation. A lot of that was social media radicalizing people, through algorithms that have an interest in this radicalization behavior. This is just one example. Our shared knowledge, just at the moment when we got to the point of having one, is now being scattered into tiny pieces.

All of these are existential crises, deep crises of our species. And to this comes this new technology where, for the first time in the history of humanity, intelligence is not biological. We increasingly have no idea what’s going to happen, and we cannot possibly have any idea […] but the speed of this is much faster than the speed of demography, the speed of climate change, the speed of ecological processes, the speed of rising inequality, and even the speed of knowledge dynamics. There is something that penetrates our lives and redefines all our crises.

How would you compare artificial intelligence and human intelligence?

I have a botanist friend who keeps telling me that this humanization, the anthropomorphization approach that my brain automatically goes to, is fundamentally wrong because we imagine that these machines, these algorithms, are somehow thinking like humans, especially because they were trained on a human language corpus.

She says it’s more like a plant or a forest. Imagine when you talk to ChatGPT, you talk to a forest, and the forest answers. A few weeks ago, I was having a discussion about an emotional topic with one of the LLMs (large language models), and the LLM said: ‘It’s like asking a cactus about Mozart.’ […]

They’re really good at speaking our language, and hence we anthropomorphize them. Imagine that you see the vastness of the Amazon rainforest and then you shout a question into it: ‘Does she love me?’ ‘What is the water distribution system in Chile?’ Anthropomorphization is always going to be automatic because this is how we work; that’s how our primate, ape, and human minds work. But it’s a good reminder that these are very different intelligences – different, also, from the intelligence of a plant.

How would you describe the impact of AI compared to the other challenges we have?

The speed is different. […] People who figure out how to use these tools experience an extreme acceleration of their work. People who do not get left behind, because we do not tell people that the way to enter the new era is the simplest one: if you have a phone or a computer, go to ChatGPT or Claude or Gemini, open it, and write a single sentence: ‘How can I use it?’ This is the easiest way of entering the new era, because in that second, it will teach you.

But if you do not have a phone, a computer, or internet, if you do not have access to these tools, then this exceptional acceleration will be for other people. […]

There is an axis: one end is very bad, the other is very good, and humanity usually ends up somewhere in between. It’s completely up to us where we go. If we don’t want people to be left behind like that, then we need to make sure they have access to these tools. The total number of people who use LLMs is about 1.1–1.5 billion now; 8.2 billion people live on the planet. So that’s not a lot.

What are some of the implications of AI being so much faster than us?

Knowledge access is now at our fingertips. The kinds of questions people are asking of these [e.n. LLMs] make me a little bit worried. It seems that a lot of people are asking personal questions: ‘Does she love me?’ ‘Why was he shouting at me?’ ‘What should I do with my life?’ It seems that some people use these tools to extend their own mental processes. This is why I think having the metaphor of speaking to a plant helps.

At the same time, we have this exceptional acceleration. Just think about what happened this year so far with the tariffs. […] Every macroeconomist knows that if you have 120 percent tariffs, the global economy is going to go into a free fall. The only reason that it is there [e.n. at this level] is because the acceleration of these economies is so fast due to the very rapid penetration of artificial intelligence. […]

Usually, the first thing I tell my students about my assignments and exams is that they are allowed any book, computer, and AI engine they want. At this point, they usually applaud me for being an open-minded professor – until they realize that I assume their efficiency will be many times higher, and so the exam is going to be 25 times more difficult. That’s acceleration: you can ask somebody exam questions that are 25 times more difficult because they have this exceptional acceleration.

You have founded Future Human Systems Research, a planetary foresight tool. What kind of scenarios are you exploring?

What we’re doing is leveraging the fact that we have access to a very large number of experts. We have an approach with which we are trying to imagine all possible future states of the world. It’s the combined work of many humans and super-powerful AIs. We’re building a very large map of the future. These are not forecasts. What we aim for is that if you use our tool, you get to use a very large map of our multiverse. […]

Humanity is a bit like a 15-year-old kid: already strong, but with no idea of their own power, while the hormones are flying about and causing this crazy behavior. We are trying to give humanity a tool to think, to map the possibilities. People do so many stupid things in their teenage years, and that is partially because they think in terms of a single possibility and do not see or engage with the wide range of consequences that can follow. That’s what we’re trying to do.

What do you think is missing from the current public conversation about AI?

One is that, to handle this situation, we need to step back a little bit and face who we are as a species. Maybe I’m channeling my own journey too much, so I’m a bit hesitant to say this. But consider: we are locked into truth bubbles without engaging with the fact that we are locked into truth bubbles; we are locked into tightly defined identities without engaging with the fact that we are in tightly defined identities; and we don’t engage with the fact that these market economies are just one possible way of organizing human activity, economic activity. All three are dimensions in which we should face, or re-face, who we are as a species. If we enter this era without doing that, we leave ourselves vulnerable to being captured, individually and as societies, by these engines.

Already, some people regard LLMs as the source of divine knowledge and assign supernatural narratives about imaginary agents to these systems. Already, some people treat these LLMs as their sole social environment. […]

If we engage, we will be able to do things together with these engines. So that’s one conversation. It’s a difficult conversation, because it means we need to step back and have a talk with ourselves about our view of ourselves. That’s usually an unpleasant discussion, because we think that we are so perfect.

The other [aspect] that worries me is that there are now several attempts around the world to capture all of this thinking – computational thinking, all this synthetic, non-biological thinking – in a single set of hands. If that happens, we are in trouble. These are the two things that I think it would be interesting to have a conversation about.

(Photo: Árpád Kurucz)

simona@romania-insider.com
