
Yuval Noah Harari Sees the Future of Humanity, AI, and Information

In this edition of The Big Interview: Renowned historian, philosopher, and futurist Yuval Noah Harari talks with WIRED Japan Editor-in-Chief Michiaki Matsushima about the nexus of artificial intelligence, information, and the human experience.

Released on 05/01/2025

Transcript

Do you think you will be able

to trust the superintelligent AIs that you're developing?

And then they answer Yes.

And this is almost insane

because the same people who cannot trust other people,

for some reason,

they think they could trust these alien AIs.

[upbeat music]

So welcome to the WIRED Big Interview.

Thank you. It's good to be here.

And that, I believe, is the main theme of your new book, Nexus.

Yes.

So we would like to know how we should live with AI, or, you know, superintelligence, in the society of the future.

The first question: in the late nineties, actually, there was an idea that if the internet spread globally, it would bring the world to peace, because information would tell the truth, everyone could get access to all information, and maybe mutual understanding would grow. And finally, human beings would become wiser.

However, you said that such a view of information is naive.

Yeah. Could you explain why?

Yes, because information is not truth.

Most information is not about representing

reality in a truthful way.

The main function of information is to connect a lot

of things together, to connect people together.

And you sometimes can connect people with the truth,

but it's often easier to do it with fiction

and fantasy and so forth.

Some of the most important texts in history,

you know, like the Bible,

Hmm.

They connect millions of people together,

but not necessarily by telling them the truth.

In a completely free market of information,

most information will be fiction or fantasy, or even lies

because the truth is costly.

Whereas fiction is cheap.

To write a truthful account of anything, of history,

of economics, of physics, you need to invest time and effort

and money to gather evidence to fact check.

It's costly.

Whereas fiction can be made as simple

as you would like it to be.

And finally, the truth is often painful or unpleasant.

Whereas fiction can be made as pleasant

and attractive as you would like it to be.

So in a completely free market of information,

truth will be flooded, overwhelmed

by enormous amounts of fictions and fantasies.

This is what we saw with the internet,

that it was a completely free market of information

and very quickly the expectations

that the internet will just spread facts and truth

and agreement between people turned

out to be completely naive.

Recently, Bill Gates, in an interview with The New Yorker, said that initially he thought digital technology would empower people. But eventually he realized that social networking is totally different from the previous digital technologies, and he said he realized it too late.

He also said that AI is a totally different technology from the previous ones. And if AI is totally different from the technologies we had previously, is there anything we could learn from history, if there is nothing equivalent to AI?

And the most important thing to understand is

that AI is an agent and not a tool.

I see.

Regarding previous information technologies, I mean, I hear many people say the AI revolution is like the print revolution, or like the invention of writing.

And this is a misunderstanding

because all these previous information technologies,

they were tools in our hands.

If you invent a printing press, you still need a human being

to write all the texts.

And you need a human being to decide what books to print.

AI is fundamentally different.

It is an agent, it can write books by itself.

It can decide by itself to disseminate these ideas

or those ideas, and it can also create entirely new

ideas by itself.

And this is something unprecedented in history

because we never had to deal with a superintelligent agent. There were of course other agents in the world, like animals, but we were more intelligent than the animals.

We were especially better than the animals at connecting.

Why do we control the planet?

Because we can create networks of thousands

and then millions

and then billions of people

who don't know each other personally,

But can nevertheless cooperate effectively.

10 chimpanzees can cooperate

because among chimpanzees,

cooperation is based on intimate knowledge of one another. But a thousand chimpanzees cannot cooperate because they don't know each other.

A thousand humans can cooperate.

Even a million humans

or a hundred million humans,

like Japan today has more than 100 million citizens.

Most of them don't know each other.

Nevertheless, you can cooperate with them.

How come humans manage to cooperate in such large numbers? Because they know how to invent and share stories.

Religion is one obvious example.

Money is probably the most successful story ever told.

Again, it's just a story.

I mean, you look at a piece of paper,

you look at a coin, it has no objective value.

It can nevertheless help people connect and cooperate

because we all believe the same stories about money.

And this is something

that gave us an advantage over chimpanzees

and horses and elephants.

None of them can invent stories like money.

But AI can. Which, again, is why the emphasis on intelligence may be misleading.

Okay.

The key point about AI is that it can invent new stories, maybe new kinds of money, and it can create networks of cooperation better than us.

So you mentioned a lot about religion. One important thing you wrote in the book is that a society's vision of religion will affect its acceptance of AI.

Yes.

In the Japanese or Asian way, the animist way, we naturally accept non-human beings living together with us in the same environment, a kind of multi-species view.

Yes.

Maybe that makes us more ready to accept what AI tells us. But could you also tell us the advantages of that view? What would you say?

Well, I think that the basic attitude towards the AI revolution should be one that avoids the extremes: neither being terrified that AI is coming and will destroy all of us, nor being overconfident.

I see.

That AI will improve medicine and improve education, that it'll create a good world.

We need a middle path of first of all,

simply understanding the magnitude

of the change we are facing.

That all the previous revolutions in history

pale in comparison with this revolution.

Because again, throughout history, every time we invented something, we still had human beings making all the decisions.

So for instance, in the financial system,

I just recently read an article in WIRED about an AI that created a religion, wrote a holy book for the new religion, and also created, or helped to spread, a new cryptocurrency. And this AI now has, in theory, $40 million.

Wow.

Now what happens?

If AIs start to have money,

start to have money of their own,

and the ability to make decisions about how

to use it if they start investing money in

the stock exchange.

So suddenly to understand

what is happening in the financial system, you need

to understand not just the ideas of human beings.

You also need to understand the ideas of AI.

And AI can create ideas which will be

unintelligible to us.

Hmm. The horses

could not understand the human ideas about money.

I see. So I

can sell you a horse for money.

The horse doesn't understand what is happening.

Hmm. Because

the horse doesn't understand money.

The same thing might happen now, but we will be like the horses. The horses and elephants cannot understand the human political system or the human financial system that controls their destiny. Similarly, the decisions about our lives may be made by a network of highly intelligent AIs that we simply can't understand.

Hmm.

A network of AIs that we can't understand. Sometimes we describe such things not only as a singularity but as a hyperobject, something we can't fully understand. That term is often used in environmental contexts: the Earth as a system is something we can't fully understand. So human beings are really struggling with how to deal with and adapt to climate change and other big systems, and maybe AI is just rising to the top of that list. How could human beings stay flexible, or even just deal with those hyperobjects, or the singularity? How could we do that, if we can't understand them fully?

Ideally, we should be able to trust the AIs to help us deal with these hyperobjects, with highly complex realities which are beyond our understanding.

But the big paradox of the AI revolution, I think, is the paradox of trust.

We are now in the midst of an accelerating AI race

with different companies

and different countries rushing as fast as possible

to develop more and more powerful AIs.

Now, when you talk with the people who lead the AI revolution, with the entrepreneurs, with the business people, with the heads of government, and you ask them, why are you moving so fast?

Hmm.

They almost all say that we know it's risky, we understand it's dangerous, we understand it would be wiser to move more slowly and to invest more in safety. But if the other company or the other country doesn't slow down, they will win the race.

Yeah. They will

develop superintelligent AI first

and they will then dominate the world.

They will conquer the world and we cannot trust them.

This is why we must move as fast as possible.

Now you ask them a second question, you ask them,

do you think you will be able

to trust the superintelligent AIs that you're developing?

And then they answer yes.

And this is almost insane

because the same people who cannot trust other people,

Yeah.

for some reason, they think they could trust these alien AIs.

Yes. You know,

we have thousands of years of experience

with human beings.

We have some good understanding of human psychology,

human politics.

We understand the human craving for power,

but we also understand how to check the pursuit of power.

And how to build trust between humans. With AIs, with superintelligent AIs, we have no experience at all.

I see.

So in this situation, the safest thing would be, first of all, to build more trust with other humans.

Humans.

So it's amazing

that today we have these networks

of trust in which hundreds of millions

of people cooperate on a regular basis.

And there is no such thing as a completely free market.

Some things can be created successfully

by competition in a free market.

We know that.

But there are certain services,

goods, essentials that cannot be maintained just

by competition in a free market.

Justice is one example.

Let's say it's a free market.

I sign a business contract with you,

and then I break the contract.

So we go to a judge, we go to a court, I bribe the judge.

Suddenly you don't like the free market.

You say, no, no, no, no, no.

Court should not be a free market.

It shouldn't be the case that the judge rules in favor of whoever gives the judge the most money.

In that situation, you don't like the free market so much.

Hmm.

There is always some kind of substratum of trust.

I see.

Which is essential for any competition.

You describe negative scenarios about democracy becoming populism or authoritarianism.

Yes.

But what would you say about the positive side of using AI to encourage more trust networks, more democracies? Is there any path where we could use these new technologies to enhance democracy?

Absolutely. I mean, we've seen for instance

that in social media there are algorithms

that deliberately spread fake news and misinformation

and conspiracy theories

and destroyed trust between people,

which resulted in a crisis of democracy.

But the algorithms don't have to spread fake news

and conspiracy theories.

They did it because they were designed in a certain way.

The goal that was given to the algorithms

of social media platforms like Facebook or YouTube

or TikTok, was to increase engagement, maximize engagement.

This was the goal of the algorithms.

And the algorithms discovered by trial

and error that the easiest way

to maximize engagement is by spreading hate

and anger and greed.

Because these are the things that make people very,

very engaged.

When you are angry about something, you want

to read more about it

and you tell it to other people, there is more engagement.

If you give the algorithms a different goal, for instance, increase trust or increase truthfulness, then they will not spread all this fake news.

They can be helpful for building a good society,

a good democratic society.

Another very important thing is

that democracy should be a conversation

between human beings.

I see. For that,

you need to know, you need to trust

that you are talking with another human being.

Increasingly on social media

or generally on the internet, you don't know if

what you are reading is something

that a human being has written

and is spreading or is it a bot?

This destroys trust between humans

and makes democracy much more difficult.

But we can have a regulation, a law that bans bots

and AIs from masquerading as human beings.

If you see some story on Twitter, you need

to know if this is being promoted

by a human being or by a bot.

And if people say, but what about freedom of speech?

Well, bots don't have freedom of speech.

I mean, we don't need to.

I'm very much against censoring

the expression of human beings.

But this doesn't protect the expression of bots.

I see.

Bots don't have freedom of speech.

In that context, I remember that one big company in Japan is trying to make an AI constellation, you know, connecting AIs together, and even connecting human beings and AIs.

Yeah.

And letting them discuss something important, like multi-stakeholder democracy.

Yes.

So the AIs will declare that they are AIs. And they have a really different intelligence, like an alien intelligence. Do you think that, in the near future, human beings having discussions with alien intelligence would make us wiser?

Absolutely.

Because yes,

AIs on the one hand can be very creative

and can come up with ideas that wouldn't occur to us.

So talking with an AI can make us wiser.

But ais can also flood us with enormous amounts

of junk and of misleading information,

and they can manipulate us.

And the thing about AI is that, you know, as members of society, we are stakeholders. For instance, the sewage system. We need the sewage system because we have bodies; we can become sick.

Yeah.

If the sewage system collapses, then diseases like dysentery and cholera spread. This is not a threat to AI. The AI doesn't care if the sewage system collapses. It cannot become sick, it cannot die; it doesn't care about it.

We need to remember it's not a human being.

It's not even an organic being.

Its interests, its worldview are alien to us.

When you talk with people, you know, like we are now talking to each other, the fact that we are physical beings is very, very clear. Ultimately, AIs also have a physical existence, because they don't exist in some kind of mental field. They exist in a network of computers and servers and so forth. So they also have a physical existence, but it's not organic.

So what are the most important things for you when you think about the future?

Hmm.

I think there are two key issues. One we've covered a lot, which is the issue of trust. If we can strengthen trust between humans, we will also be able to manage the AI revolution. The other thing is the fear, the threat.

I mean, throughout history, people lived their lives inside, you could say, a cultural cocoon made of poems and legends and mythologies, ideologies, money. All of them came from the human mind.

Now increasingly,

all these cultural products will come from a

non-human intelligence.

And we might find ourselves entrapped

inside such an alien world

and lose touch with reality

because AI can flood us with all these new illusions that don't even come from human intelligence, from human imagination. So it's very difficult for us to understand these illusions.

I see.

Thank you very much for the interview.

Thank you.

It's really inspiring and a great message for the Japanese readership, and for WIRED Japan's readership too.