
Season 3

Episode 13: “The interesting part is when it breaks down.”: The Slithery Dee, Part 2, Lionel & Jim

Wednesday, April 5, 2023

[Photo: Lionel Cassin and Jim Infantino with headphones on]

Lionel and Jim continue their discussion of AI, and move into the idea of human understanding through modeling, isomorphism, stories, and the importance of forgetting.

Transcript (assembled by an automaton)

Lionel:
an instant message. Yeah, here it is. So the New York Times article is: you can have the blue pill or the red pill, and we're out of blue pills.

Jim:
We're out of blue pills.

Lionel:
Right. And let me just find the perfect thing. They talk about how the problem is that AIs hack language, and language is the fundamental operating system. This is the paragraph I love: "Humans often don't have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. The cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by non-human intelligence? For thousands of years, we humans have lived inside the dreams of other humans. We have worshipped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of non-human intelligence." And I think that first sentence is just really a key axiom, which is that humans often don't have direct access to reality. In fact,

Jim:
Yeah, that's true. Can be.

Lionel:
like 99% of it. Like, I've never seen Moscow. I wasn't there at the Battle of Borodino.

Jim:
True. I wasn't there when the wall came down in Germany.

Lionel:
Right. You know, our concept of the universe is based upon stories, mainly. Our actual direct experiential evidence is trivial; it's a drop compared... I mean, it has so little effect on our world image. It has some effect, but it'd be interesting to come up with an atlas of that: to actually sit down and say, what do I believe? Pick 20 things I believe and 20 things that motivate me, and where do they come from? Do they come from something I've actually experienced, or from an article I read in the New York Times, or a show I watched, or something my brother told me 25 years ago?

Jim:
Right.

Lionel:
That'd be really interesting. And that's what I was trying to get at. You know, everybody says, okay, money is a consensual hallucination, et cetera, et cetera. Most of what is constructed in our minds is based upon hearsay. And as I said before, what AI is going to do, I think one possible future is it's just going to kill off all electronic media, or it's going to kill off electronic entertainment, or it'll kill us.

Jim:
How so?

Lionel:
Because you won't be able to believe it.

Jim:
Okay, so this sounds like it's going to be another uplifting and inspiring episode of Funny Not Funny.

Lionel:
Well, let's just talk like you and I.

Jim:
Alright, so the goal is to talk like you and I. Don't we do that?

Lionel:
No, I changed my tenor a little bit, but I really think it's that. Why would you believe it? I mean, I sent you, of course, again, I sent you the things about... I mean, Borges anticipated all this.

Jim:
Yeah, and I'm sorry, I haven't read... first of all, I haven't read that book and I haven't read the article. I've just been really busy.

Lionel:
That's fine. But it comes down to... he wrote short stories, and he wrote two short stories that are relevant. We've already talked about one, which is the Library of Babel,

Jim:
Which, yeah, I have to read.

Lionel:
which is the infinite... not the infinite, but the very large library, which contains every book ever written. You'll just never find it.
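[Editor's note: for scale, using the parameters Borges gives in the story (each book has 410 pages of 40 lines with 80 characters, drawn from a 25-symbol alphabet), the library holds

\[ 25^{410 \times 40 \times 80} = 25^{1{,}312{,}000} \approx 2 \times 10^{1{,}834{,}097} \]

distinct books: very large rather than infinite, exactly as Lionel says.]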

Jim:
Right.

Lionel:
And that's sort of one piece of the puzzle. And the other piece of the puzzle is a story called Tlön, Uqbar, Orbis Tertius, which, unfortunately, in the collection of his stories called Labyrinths, they put as the first story. And it's really confusing when you read it first.

Jim:
Mm-hmm.

Lionel:
As the first story, it's totally confusing. But if they'd put it at the end, it would have made complete sense, because what it's about, basically, spoiler alert,

Jim:
Yeah.

Lionel:
it's about a secret group of people in the 1700s who get together and decide they're going to create an imaginary world, and they're going to write an encyclopedia of it.

Jim:
Okay.

Lionel:
A world that's based upon fundamentally different philosophical principles than this world. It has something to do with the philosophy of Berkeley and the British empiricists, or something; I don't know. That's for the history and philosophy majors of the world to talk about. But basically what they do is they create this encyclopedia, and very covertly, and it takes place over 200 years, they leak it into society. And eventually it takes over society: people start to believe it's true, and people start discovering things that are in the encyclopedia.

Jim:
Hmm.

Lionel:
And one of the things about Tlön, this magic world, and I think this actually shows up in Anathem, by the way, is this belief that if enough people believe that something is true, it'll manifest itself.

Jim:
Ah.

Lionel:
There's this famous scene where they take these prisoners and say: go down to the river and dig around, and if you find this object, we'll let you all go free, or something like that. And lo and behold, they dig for a while and eventually they find it. The implication being that in this world of Tlön, if enough people think something is true, it manifests itself and becomes real.

Jim:
Mm-hmm

Lionel:
Which is, of course, a reference to the encyclopedia, and to the people who read the encyclopedia. They start believing it's true, and they start applying it to their world. And at the end of it, Borges says, Tlön has taken over our world. It's being taught in the schools now, and the terms, the hrönir, all these weird terms he introduced. And so put those two together: take the Library of Babel and Tlön, Uqbar, Orbis Tertius and shove them together, and you get exactly what those guys are saying in the New York Times, which is that an AI could weave the world that you want. And that's the most horrible thing, is that

Jim:
Yeah.

Lionel:
it knows you. It knows your browsing history, and it knows the videos you watch on YouTube, and it knows your date of birth, and it knows this and it knows that. It knows exactly what you want to see. And it would be like intellectual fentanyl.

Jim:
Yeah, yeah.

Lionel:
And would you even know that you're being manipulated? And if so, would you have the strength to resist it? Now, of course, the question is, why would an AI want to do that? Well, it wouldn't be the AI. It'd be entertainment people trying to increase, quote unquote, engagement,

Jim:
Yeah, your attention, yeah.

Lionel:
which is becoming one of the great euphemisms of our age. Engagement. No, it's not engagement. You're sucking up my time is what you're doing, and my eyeballs.

Jim:
Right.

Lionel:
But the whole point is that you could create a completely fictional world. Just like that New York Times article says, we live in a cocoon of things based upon what other people have said. And as he says, the cocoon has been woven by human beings for many years, and sometimes that cocoon has been very dangerous.

Jim:
Right.

Lionel:
I mean, the human-built one is not terribly reassuring either. But when an AI gets ahold of it,

Jim:
Right.

Lionel:
they'll be able to give you exactly what you want. Exactly what you want. Or, as they say, there's another great paragraph in the article, I'm almost done, where everybody's talking about what's going to happen when these AIs get agency. And the authors say, who cares? They don't need agency. They don't need to pull the trigger on a gun. They just need to tell you a story that makes you pull the trigger.

Jim:
Yeah.

Lionel:
And I was like, exactly, exactly. You just need to be told the story that you're defending liberty, or you're defending the homeland, or you have to go kill the fascists. So anyway, happy stuff. How you doing? Ha ha ha.

Jim:
Yeah, very uplifting. That's great. I was doing fine. I was doing fine.

Lionel:
Well, the other news is that Elon Musk and a couple of hundred other technology leaders came out today with an open letter saying we've got to pause AI. They think it's out of control.

Jim:
Hmm.

Lionel:
And we don't understand what it's doing, and it's being led by the wrong people, and it has the ability to really blow everything that we hold dear to smithereens.

Jim:
And what do you think will happen?

Lionel:
Nothing. It's too powerful. It's just too easy. Another article was about copywriters. Some of the copywriters said, yeah, I'm out of a job, because ChatGPT can write my stuff so much better and faster. But one of the top copywriters, a person who already had a lot of clients, said, this is great. I can use ChatGPT. I charge the same amount of money to my clients, but now I'm doing three times the work. He says it basically tripled his output.

Jim:
For how long, though? Why would you pay? That's the ultimate thing. I mean, that's great this week, you know, but why would you pay down the road? Yeah, it's a really good question.

Lionel:
Mm-hmm.

Jim:
I actually... so, on a tangent here: in the production of the Iceland runoff variations that I've been working on with Kurt Yunala, I added up all the time of the final mixes that we did, and we were like three minutes under 30 minutes. And the Icelandic government will reimburse me 25 percent, but it has to be more than 30 minutes. So now we need to figure out what to do: create a new song, or do a remix. So what I did was I took one of the tracks and ran it through an AI remixer online, and it was awful. Completely unacceptable. No good, crap, just random. But, I don't know, obviously there will be better versions of it. But a lot of the AI artwork that I see out there is also junk. It's just new. It's interesting because it's new, not because it's great. I think with copy, it's a little trickier. But...

Lionel:
Well, it depends on whether it's functional copy. I mean, if you're writing press releases,

Jim:
Mm-hmm.

Lionel:
you know, that's a very, very formulaic thing. I actually did it, poorly, for a couple of months. And if you're doing press releases about new semiconductors being released by blah blah blah, it follows an incredibly defined structure. There's the anodyne quote from the president of the company. There are three sentences about the feature set and the sales benefits of the new component. La la la. That stuff? Forget it, goodbye. That's AI land all the way.

Jim:
Right.

Lionel:
So it really depends upon the nature of the thing. But it leads into another fascinating... I'm sorry, you were talking about...

Jim:
Oh no, I'm done. That was my whole thought. It was my entire thought.

Lionel:
Well, it leads to an interesting thing I've been thinking about, which is that artwork... you say that the artwork is new, but is it good? And that to me is a fascinating question. When I listen to Beethoven

Jim:
Mm-hmm.

Lionel:
and I like it, how much of it is the music of Beethoven? And how much of it is the story of Beethoven? And how much is it the other times I've listened to this piece while good things were happening? Or how much is it that this particular piece is used in a movie I like? I think sometimes what we love about artwork is much more the story behind the artwork and the person behind the artwork. I mean, Marcel Duchamp famously took a urinal and said it's art. Because he

Jim:
Right.

Lionel:
hung it on the wall in a museum, it became art.

Jim:
Right.

Lionel:
It's funny, it's cool, I like thinking about it. I find Marcel Duchamp really interesting.

Jim:
Oh, there's a story there, and there's a commentary, a point of view.

Lionel:
There's a person, there's a human being, right? And people evolve. You know, an artwork generated by a computer, that's fine for a lot of stuff. Like if you just need a book cover, okay. You're just writing another science fiction novel, and 10 million get written every year, or something like that. Fine, okay, I just need a book cover; it's got to be kind of cool. But if you're talking about quote unquote art, whatever that means, I think we certainly are interested in artists. People are fascinated by Andy Warhol. People are fascinated by Caravaggio. People are fascinated by, you name it. And a lot of our enjoyment, a lot of what we spend time looking at or reading about, are things where we have a personal connection to the people who made the thing. Like learning everything about a band, following a band, seeing what they come up with, what their next album sounds like versus what they did last time, how they change over time. Will that happen with an AI? Will anybody want to listen to it? So I don't know.

Jim:
Yeah.

Lionel:
I think I'm always fascinated by art, and by how much of our appreciation of art and artistic endeavors is about the art itself, versus this sort of epiphenomenon we create in our minds of the artistic endeavor of this person or this group of people. I mean, so.

Jim:
Well, yeah. I mean, some things... you read the latest book by so-and-so to find out what their imagination has done. You go to the restaurant, and if there's a new dish you want to try it, or there's a special, because you like the chef and you like the way things are made. And could a machine churn out food that would be interesting in the same way?

Lionel:
We're gonna find out.

Jim:
I don't know. We'll find out.

Lionel:
We're gonna find out real soon.

Jim:
It can definitely churn out fast food. So if you want a burger and fries, they know how to do that.

Lionel:
Right. And if you want a cheap book cover, and if you want some cheap background music for a commercial.

Jim:
Right, so this is all sort of the cheap stuff. And we've seen this...

Lionel:
Not cheap; I'm being mean. Not cheap, but utilitarian.

Jim:
No, but we've... yeah, we've experienced this sort of thing before, where something becomes easy or automated and a lot of crap is generated.

Lionel:
Right.

Jim:
Or like in, what was it, the 70s or the 80s, when everyone in every advertising agency was on cocaine. And you could tell, right? I mean, look at the ads. You know, Coke, it's totally... wait, is it? Oh, I used to know this. It was...

Lionel:
Oh! It's new, it's now, it's totally wow.

Jim:
It's newest now. It's totally wow. It's leading the way, it's showing you how.

Lionel:
It's showing you how. It's the way that you feel when you know it's real.

Jim:
It's the way that you feel when you know it's real. Coke is it. And it really was, yeah. Yeah. I can't believe you remember it. "It's new, it's now." Because we remember it.

Lionel:
You taught it to me. You taught it to me.

Jim:
So there's something... actually, oh, I did! It was... when I heard that, I just thought, what the hell is going on? What is happening?

Lionel:
This is poetry. It's so empty, and it's so appealing, and it's so catchy. It's awesome. And it's so self-referential. It's painful.

Jim:
It's so empty.

Jim:
Oh...

Lionel:
Could AI come up with that? Who knows? Who knows? Who knows?

Jim:
Could have, could have. Yeah. Do AIs have a time machine? That's...

Lionel:
No. Anyway.

Jim:
Hmm. What I love is... there was, I forget which cartoon it was. It wasn't XKCD, but it was something minimal like that. And somebody was at a rally shouting, "What do we want?" And the crowd goes, "A time machine!" And the first person goes, "When do we want it?" And the crowd goes, "It doesn't matter!"

Lionel:
That is good.

Jim:
Thank you.

Lionel:
I like that.

Jim:
Thank you. Thank you.

Lionel:
Yeah.

Jim:
Um

Lionel:
So let's talk about something else, completely unrelated to AI.

Jim:
We have to. We have to get away.

Lionel:
We have to. We're becoming so tedious. We are tedious, but we're becoming even more tedious.

Jim:
I don't mind us being tedious. I just think there's got to be something else to talk about. What I really liked, what you touched on, was this idea of language being the operating system of the mind, and the whole idea of modeling. There is somebody on Twitter that I follow, he's always asking questions. George, something or other. Every day he's got like 24 questions, and they're all sort of science, religion, you know: how do we know? Does time exist outside of our minds? That sort of stuff. But one of the back-and-forths I had with him was that we got to this notion of modeling, which has really been fascinating me for a while, and which I discovered through meditation: the way that we model a situation. It's not unlike the way rats model their favorite places, and wherever their burrow is. Rats are very good at this. There's obviously the rats in the maze; they can memorize very complicated maze structures, and it's sort of a sequence, right? It's like, well, I go down this way and I turn left, and then I turn right, and I go down that way, and then, you know, the cheese. But the human mind is very similar. When we think, I'm going to go home, the mind has a model for that. The mind has a model of, well, I'm going to go down this street and turn right. It's very like the rat.

Lionel:
Yeah.

Jim:
But we do that for everything, you know? And obviously, yes, we haven't been to Moscow, but we have a mental map. And it's mostly just the Kremlin, and, whatever, Red Square outside it, whatever we've seen in the photos. That's Moscow to us, right? Or maybe we've seen some movies where there are some dingy nightclubs in a basement or something, and that's Moscow. Or an apartment building that we've seen on a movie set. So we've created a mental model of a place we've never been.

Lionel:
Right.

Jim:
We've seen the wall come down in Berlin, and we have these models built up of the pictures that we've seen. The mind kind of puts it together: okay, well, Berlin was divided, so the wall would have gone somewhere like this, and people might go down the street and see the wall, and the wall starts coming down. And we do it for all of our experiences, including other people. So I have a Lionel model.

Lionel:
Uh-huh.

Jim:
I have a model of you. And I'm like, well, I'm going to talk to you in my head. I'm going to rehash that part where you didn't like my book, and I'm going to argue with you about it.

Lionel:
Uh-huh.

Jim:
And I have a model in which I know what you're going to say, right? Or I think I know what you're going to say. Or my model behaves in a certain way. And then when I'm talking to you, am I talking to you? Or am I talking to the model?

Lionel:
Uh-huh.

Jim:
And if what you say doesn't line up with the model that I've created with you, then what do I do? Not created with you, but created of you.

Lionel:
You'll pause, and you'll have to stop and think.

Jim:
Pause, maybe. And then...

Lionel:
And you'll have to adjust the model, and you'll have to think about what you're talking about. But I don't think that's the point you're driving at. I mean, you said this guy has 24 questions each morning and he's talking about models.

Jim:
Well, yeah. I've been trying to get him to kind of... He talks about, does this exist, right? Or does it exist in the present? A star sends out light and we see it ten million years later. Do we know that the star exists? A question like that.

Lionel:
No, you don't. No, of course you don't.

Jim:
We don't. And that's what I'm saying. We have a model of that star. We have an idea of where it would be if it existed, because of the light that we received. But that's also true of light that we receive from the wall, that bounces off the lamp.

Lionel:
Right.

Jim:
That is all delayed. We're not seeing any of that happening in the present. We're seeing it very, very slightly behind.

Lionel:
Uh-huh.

Jim:
So, I was saying, you know, we're creating a model. And he's like, yes, modeling. So apparently this idea is sort of out there. I thought I had kind of made up this idea of modeling, and I wrote a blog post about it on my website a long time ago. But in terms of the way we deal with what we think is reality, the problem is people are not quite ready to say what Don Smith said on our podcast, which is: if you want to know about reality, go take a philosophy class. I'm going to tell you about what works.

Lionel:
Right.

Jim:
You know, because physics is not about reality. And this guy on Twitter is really acting like, how can you believe anything that religion says, because we have science, and science tells us what reality is?

Lionel:
Maybe it doesn't.

Jim:
But it doesn't. Science is a very effective model that works.

Lionel:
Yeah. Its core thing is the predictive capabilities; the precise predictions are hugely important. But as we discussed before, and this sort of loops back to the conversation we had about isomorphisms: why is it so important that you have to change your view of the sun circling the earth and replace it with the earth circling the sun? Because that's not how people calculate where the planets are going to be in the sky on any given day. They have equations.

Jim:
Mm-hmm

Lionel:
You don't need any mental model, because... well, the solar system is one of the classic models of the mind. We have this model in our mind: there's a thing in the center, and there are other things that go around it. And for a couple of thousand years, the thing in the center was the earth. Then

Jim:
Right.

Lionel:
some guy came along, and all of a sudden the thing in the center is the sun. But honestly, mariners don't care, and government officials don't care, and people making calendars don't care what goes around what. All they care about is: give me a set of equations, tell me the sun's going to rise at this location at this time. And if it does exactly that, hooray. I don't care if you tell me it's three turtles circling a dragon, as long as your equations work. And that's why, and I think Don was talking about that, and a lot of people talk about that, when Copernicus came out with De revolutionibus... I hope that's the one, I always get them confused. There's De rerum natura, I think that's Kepler's, and then there's De revolutionibus, which is... the church didn't care. The church didn't care. The church was like, yeah, whatever, you astronomers are constantly saying wacky crap all the time. You're saying all kinds of things are going on. Who cares? All we want to know is, when is Easter going to happen? Okay? We just want to

Jim:
Yeah.

Lionel:
turn a crank and have you spit out a number, and we're good. So the question is, why do we have these models? And the answer is that we don't think in equations. Human beings don't think in equations.

Jim:
No.

Lionel:
What happens is you map certain things onto another system, and then you let that system run ahead. So you have system A and system B, okay? You have electricity, how it actually behaves, and then you have water; this is a classic one. And you map electricity onto water, onto your understanding of water, because like water, electricity flows from place to place, and you can accumulate it, like in a battery, and it can go very, very fast or it can go very, very slow, and it goes in a circuit, blah blah blah. And the great thing about that is that it really is a very, very good analogy. And if you use that analogy, you can move very quickly in electrical theory. You can say, oh, okay: in water we have this thing called pressure; we're going to call it voltage. And we have flow, and we're going to call that amperage. But what's even more exciting is when it breaks down, after you've been riding that hobby horse of the isomorphism. An isomorphism means two systems have some kind of one-to-one correspondence with each other.
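[Editor's note: the hydraulic analogy Lionel is describing can be written as a pair of matching laws; this is the standard textbook correspondence, not something worked out in the episode:

\[ V = I R \quad\longleftrightarrow\quad \Delta P = Q\,R_{\text{hyd}} \]

voltage V maps to pressure drop ΔP, current I to volumetric flow Q, and electrical resistance to hydraulic resistance. Each law lets you run a calculation in one system and carry the answer back to the other, which is the round trip described next.]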

Jim:
Mm-hmm.

Lionel:
So you take one system, and you map it onto a system that is easier, or that you perhaps already know more about, and then you run with that system. Then you map it back onto the original system, and you go, oh yeah, it works. It works, really cool: do this and do that. And then what happens at some point, hopefully... the interesting part is when it breaks down. It's like, okay, we mapped it, and it's supposed to do this, and now it's not doing that anymore. So our whole idea of how this thing was working... you know, that's what happened to the Ptolemaic system. The Ptolemaic system said that the earth was at the center, and the sun and the other planets moved around it in perfect circles.

Jim:
Right. Yeah.

Lionel:
It worked fine for a long time. And then finally people got to the point where they needed more precise measurements of things, and they said, hey, these numbers are not coming out right.

Jim:
Yeah.

Lionel:
And they said, okay, well, we'll add epicycles. And then they had to add epicycles to epicycles.

Jim:
Right, yeah.
they back up. Yeah.

Lionel:
And then eventually the isomorphism becomes so strained that you have to step back. Because what they are mapping is the solar system, the behavior of the planets in the sky, onto this model on paper where everything has to move in perfect circles. That's number one, that's the axiom of your isomorphic system, and the earth is at the center. And using that glue and that pair of scissors, you've got to come up with something that matches what actually happens in the sky. And for a thousand years, it did a pretty damn good job. And then, all of a sudden, because of the advances in civilization, the numbers weren't good enough anymore. But we've got to stick with this model, because it has served us so well. So we're just going to add more little perfect circles, and we're going to keep the earth at the center. And eventually somebody had to map it onto a different isomorphism. So yeah, models, isomorphisms, and we do it all the time. That's what the guy who wrote Gödel, Escher, Bach talked about all the time. That's how human beings work: by metaphors. They map one system onto another system. And by jumping between those mappings, you can get inspiration. You map system A onto system B, and you say, in system B, if I do this and this, this happens. Does that happen in system A? You go back to system A and you do this and this, and yeah, it happens. Okay, now I can map that back over to B. Now, okay, in B, if I do this and this, I get C. And you go back over to system A: do I get C? No. Uh-oh.
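[Editor's note: a minimal sketch of the "map A onto B, run ahead, map back" loop, using the textbook isomorphism between multiplication of positive numbers and addition of their logarithms; the example is the editor's, not one from the episode.]

```python
import math

# System A: positive reals under multiplication.
# System B: reals under addition.
# log/exp is the mapping between them.

def to_b(a: float) -> float:
    """Map system A -> system B."""
    return math.log(a)

def to_a(b: float) -> float:
    """Map system B -> system A."""
    return math.exp(b)

# Run the problem in system B (add), then map the answer back to A.
x, y = 8.0, 32.0
product_via_b = to_a(to_b(x) + to_b(y))
print(product_via_b)  # ~256.0, matching x * y

# The interesting part is when it breaks down: system A also contains
# zero and negative numbers, where the mapping is simply not defined.
try:
    to_b(-3.0)
except ValueError:
    print("Uh-oh: the isomorphism has hit its limits.")
```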

Jim:
Right.

Lionel:
Okay, now there's a flaw in the isomorphism. What do we do? And we do this constantly; we're creatures of metaphor. How do I deal with this situation? Well, it's like this other situation I dealt with.

Jim:
So you're saying, like electricity flowing like water, you know, and then you're increasing the size of the pipe and it's letting more electricity through.

Lionel:
Right. Or, how do I deal with conflict at work? Well, if I don't know how to deal with conflict at work, I know how to deal with conflict at home.

Jim:
Hmm.

Lionel:
I mean, my favorite story, the one I love to tell, is about a Marine officer at the end of the Vietnam War. He was given a task: a ship of refugees from South Vietnam had been taken over by pirates, and he was instructed by the US government to go and take the ship back from the pirates. And he said,

Jim:
Hmm.

Lionel:
well, I must have missed that class at officer training school, because I don't remember the class about taking a ship full of innocent people back from a group of heavily armed pirates.

Jim:
Right.

Lionel:
He says, but I do know how to take a building. They taught me how to take a building. And the important thing about it is you start from the top and you go down. That's rule number one, because hand grenades fall down. You start from the top and you work your way down, and that's how you take a building. So he says, I'm going to use that to take the ship. And he did. So we as human beings, we do these things where we make analogies.

Jim:
Right.

Lionel:
And to make analogies, we have to map one system onto another. And to a certain degree, that's language: I have this concept in my mind, and I have to map a sound onto it. And that, of course, leads to the whole Chomsky thing. Anyway, I'll stop there, because I'll just keep going unless I hit a barricade.

Jim:
Wait, I like the tie-in with language. And of course, that also goes directly to Wittgenstein, and Wittgenstein's sort of... well. I don't want to let go of this modeling thing, but I do think it has a tie-in with language, because I think that our models are not ultimately image-based and picture-based. I think that if you break them down, you find some kind of story. There's some kind of story about my friend Lionel; there's some kind of story about Tiananmen Square. That story becomes the existence of the thing in our mind. And unless there's a contradictory story that hits up against it, we will just keep going with the same story. So, I have a conversation with my friend. My friend starts behaving in ways that my story about my friend isn't consistent with. And so I have to figure out, well, what's happened? Maybe my friend has lost his mind. Maybe my friend has had some kind of break, or maybe they've changed in some fundamental way, or maybe just my story's

Lionel:
Maybe I've changed.

Jim:
inaccurate. Maybe I've changed.

Lionel:
Maybe. Maybe I'm having a problem.

Jim:
And then the deep thing is that we also have a model of ourselves.

Lionel:
Uh-huh.

Jim:
And we're constantly referring back to that model. And in meditation, that is sort of the ultimate problem: we have a story that we tell about ourselves, a series of stories we tell about ourselves, a model of who we are. And that model creates misery for us and for others. That fundamental holding on to this rock-hard reality of who we are and what we do is the source of our suffering. I've really... I've walked us right into a corner. There's nowhere to go from there.

Lionel:
Right now, no.

Jim:
I don't know.

Lionel:
Well, you're going in an emotional, psychological direction with this whole thing, which is:

Jim:
Yeah.

Lionel:
we have ways of thinking about the world. We have these models and you're right. We have ways of thinking about ourselves and things have to make sense. And so,

Jim:
Right. They have to be consistent with the stories we tell.

Lionel:
Yes, and there are...

Jim:
I think that's what we mean by making sense.

Lionel:
Right. There has to be a story. And the interesting thing, the thing I've thought about lots of times is that it's very deeply related back to Claude Shannon information theory,

Jim:
Mm-hmm.

Lionel:
which is, to a certain degree... like, much of what you and I are is encoded in a strand of DNA. It's an incredibly compact data storage format.

Jim:
Right.

Lionel:
And to me, science is sort of like that. If I needed to recreate this planet, how could I boil it down to the simplest essentials? If I needed to recreate this universe, how would I do it? And the answer probably boils down to, you know, 50 basic physical constants, a fistful of particles, initial conditions, and boom. And then stand back, grab some popcorn, grab a chair. But it's the same thing with stories. Why do we have stories? Well, this is the other thing I've probably talked about 16 times before, which is that I believe we very often do not remember things; we reconstruct things.

Jim:
Yes.

Lionel:
We need good, solid rules on how to reconstruct memories. We remember salient points, and then we use these basic rules and stories we have about the universe: things don't happen randomly; people just don't do this; or if somebody does this, it's because they did that. There are certain common cause-and-effect chains that we memorize. And so you can start with some initial conditions. You know, somebody was crying at the end of a birthday party.

Jim:
Mm-hmm.

Lionel:
Okay, so what happened? Well, you can sort of recreate that and say, it's an emotional incident, and, well, how old were they? They're children. Maybe they didn't get the toy that they wanted. We can very quickly spawn scenarios, stories,

Jim:
Right.

Lionel:
about what happened. And then we compare those stories to stories we know about those people or those situations. We say, oh, it probably wouldn't be this, because he didn't often cry that much. So it must have been this thing. Or, oh, it was because it happened near the time when his father died. Okay.

Jim:
Yeah.

Lionel:
And we recreate things based upon these elements. And that's why, you know, sometimes we find out that the stories we have about things are completely wrong.

Jim:
Yeah, exactly. No, it's like you have a conversation with somebody and it's short. And then you create the rest of the conversation in

Lionel:
Mm-hmm.

Jim:
your mind, in a model of that person, and all the reasons why they said what they said and all the things that are happening to them and what's going on in their life. And then you reconnect with the person and the person's like, nah, I just, my favorite show was on, or, you know, I had

Lionel:
Right,

Jim:
to go to the bathroom. You know,

Lionel:
Or maybe they're wrong. But I mean, there are certain things where, you know, I've talked to people about certain things that I remember, and they said, no, that's absolutely not what happened. It was, you know, you hit me.

Jim:
Mm-hmm

Lionel:
And then they find like five other people who say, yeah, I was there too, and you hit him. And that's a very common thing. So memory is fallible. But I think we reconstruct memory because we tell these stories, and these stories are based upon logical building blocks, because we have to make sense of a very big, very complex world. We lead complex lives, we do a lot of different things, we interact with a lot of different people, and you need to know how to deal with stuff. And it's like, well, nobody taught me how to take back a ship from a bunch of heavily armed pirates, but they taught me how to take a building. And to a certain degree, that's good enough.

Jim:
It works.

Lionel:
It's good enough. Nobody can sit you down and teach you 17,000... every single possible scenario, because that would be the Library of Babel; there's an infinite number of scenarios. I'm going to teach you about 12, and then you're going to have to improvise. You're going to have to make it up, and you're going to have to do your best to map one situation onto another. You have to be smart enough to map one situation onto another. I don't know how to do this, but I know how to do this thing over here; it's the best I've got, it's the closest to this situation, so I can map them onto each other. They're both structures, they're both armed interventions, blah blah blah. And we do this all the time. We're constantly doing it.

Jim:
And we do it quickly, too.

Lionel:
How am I gonna deal with this conversation? Well, I've had other conversations like this, and it usually starts off this way and goes that way, and then if I say that... Like you said, how do I get home? Well, I've done it a million times. How would I get to Baltimore?

Jim:
Right. But I'm going on my bike. I'm going on my bike, so I know that when I bike out, there might be a car coming. So I've got a model of the street with cars, I've got a model of the street with no cars, and then I see which one it is. I mean, we do it so quickly.

Lionel:
Right. So if somebody handed you a moped, you'd say, well, I've never pulled out into traffic with a moped, but I've done it with a bike.

Jim:
How hard could it be?

Lionel:
Well, no, it's just... it's how we're constantly improvising in the world. And the only way you can... go ahead.

Jim:
Well, right. So that means that the model is active, not just in memory, although we're drawing on memory. I've got a model of what's going on outside my window right now. We're drawing on memory to create the model. We create the model of the event as it happens. And

Lionel:
Yep.

Jim:
then that story that we're telling about the event as it's happening is what we have in memory.

Lionel:
Right.

Jim:
But then we have to retell it to ourselves.

Lionel:
Yep.

Jim:
But the thing that we remember is the story that we've been retelling to ourselves like a long game of telephone,

Lionel:
Yes, absolutely.

Jim:
Where we retell it to ourselves, and we drop a couple of things and add a couple of things, and drop a couple and add a couple. And then we end up with this memory that may not fit with anybody else's memory of the event,

Lionel:
Right.

Jim:
which we didn't even experience directly when it was happening, because we were creating a model of it at the time. So it's very, I mean, you think about this and you start to feel like, oh, you know, we're just minds in a vat. You know, there's nothing out there.

Lionel:
Well, it doesn't matter whether there are any things out there; you'll never find out. I just get kind of tired of all these discussions about, well, are we brains in a jar? It's like, you're never going to find out.

Jim:
Right. Yeah, man.

Lionel:
You're never going to figure it out. So just give it up and enjoy your life. I mean, it's really the most arid and unfulfilling discussion in the history of mankind.

Jim:
Well, there goes the rest of our hour. That was all I wanted to talk about. I'm sorry, I'm sorry.

Lionel:
Yeah, but you're never going to decide it. That becomes a religious argument. That becomes a religious discussion at that point.

Jim:
Right. Well, clearly Descartes couldn't decide it, right? He had to go to God. That was the only place he could go. It was like, well, a benevolent being wouldn't blah blah blah. Thank you very much. But I don't mean it in terms of, are we really minds in a vat. I mean that there is sort of a step missing when we think that we are studying reality when we're studying science. We are creating a very effective model of reality that works really well. But everything that we experience has to go through our senses, and because it has to go through our senses, it has to get through the brain and be processed in some way, turned into a model. And that part of the scientific method is never really included. It's always assumed that we all process information roughly the same. We try to get around it by using the undeniable: the photon actually hit the sensor, and we get a yes or a no. We can all agree that it was yes, or we can agree that we don't know whether it was yes or no, because of Heisenberg's uncertainty principle.

Lionel:
Yeah, but I don't think there's a lot of problem with that. I mean, I don't hear many accounts where somebody performed a scientific experiment and two people who were there said it meant this, and two other people said it meant absolutely nothing. I mean, there's a lot of concordance.

Jim:
Yes, there's a lot of concordance.

Lionel:
There's an enormous amount of concordance. And again, whether something's out there or not, that's fundamentally a religious argument, because there's no way you're going to be able to prove it. The whole difference between religion and science is that science is refutable by evidence.

Jim:
It is falsifiable.

Lionel:
It's falsifiable. That's... what was his name? Popper. Karl Popper.

Jim:
Hmm.

Lionel:
He said the essence of science is that it is falsifiable: at any time, anything could fall, presented with the correct evidence.

Jim:
Yeah.

Lionel:
And that's not how religion works.

Jim:
No, religion is not falsifiable.

Lionel:
It's not falsifiable.

Jim:
Although...

Lionel:
The whole point of religion is that it is unfalsifiable, because it's meant to reflect deeper, more fundamental truths about the moral and the existential sphere of life. And therefore, evidence is irrelevant. So they're just not related to each other in any way whatsoever. I see religion as more sort of a hopeful thing.

Jim:
Yeah...

Lionel:
It's what we want to be.

Jim:
And a fearful thing too. I mean, both.

Lionel:
It can be, but I think, to a certain degree, it's meant to be an aspirational thing. And of course, there's one answer, and I always say this when I talk about religion, because nobody says it and that always makes me very angry. One answer is that it's the word of God, period. Okay,

Jim:
Mm-hmm.

Lionel:
That's one answer: it's the word of God, and therefore we don't need to talk any further about whether it has any value or not; it does, in and of itself. Okay, fine. The next level of analysis is that religion is not about predicting when the sun is going to rise on the 14th of December. It's not about predicting how much coal you need to burn to raise a quart of water one degree Celsius. That's not what religion is concerned with. Religion is concerned with: what are you going to do with your life? What's your relationship to a moral world? What's your responsibility and obligation to other people and to the universe in general? So they're two completely different planets. The whole point of science is that it is falsifiable. It is obsessively concerned with objective data,

Jim:
Yeah.

Lionel:
the data of the senses,

Jim:
Yep.

Lionel:
taste and touch and smell. Is there anything in our minds other than sensations? Yeah, there is. Kant talked about that, the, what do you call them, the blah blah blah: space and time. In the human mind there are concepts of space and time, and he's kind of right. In the mind, we do order perceptions and assemble them. So there's something there that just isn't from the photons that come through our eyeballs.

Jim:
All right.

Lionel:
It's a slightly arid conversation. What I find much more interesting is this whole idea of isomorphisms and also compression and information because we had that whole conversation about Claude Shannon information theory.

Jim:
Right.

Lionel:
And it's like DNA: how can I reliably recreate this? How can I compress this reality down into the most compact form? Because I think that's what stories are about.
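[Editor's note: a small sketch of the Shannon idea being invoked here — how few bits per symbol a sequence needs in principle. The toy DNA string and the code are the editor's illustration, not anything from the episode.]

```python
from collections import Counter
import math

def entropy_bits_per_symbol(seq: str) -> float:
    """Shannon entropy of the symbol distribution, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

dna = "ACGTACGTGGCCAATT"          # toy sequence over a 4-letter alphabet
h = entropy_bits_per_symbol(dna)
print(f"{h:.2f} bits/symbol")      # at most 2.0 bits for 4 symbols
print(f"{h * len(dna):.0f} bits to store {len(dna)} bases, in principle")
```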

Jim:
Yeah.

Lionel:
I think the whole point is we recreate things. We don't remember anything... well, that's a wrong statement. I don't think we remember as much as we think we do. I think we remember key data points, and then we reconstruct it. We reconstitute it.

Jim:
Yeah.

Lionel:
And we use analogies, and we say, oh, these key data points are like this key situation over here. So in that situation, the person cried at the end of the birthday party; so they probably cried at the end of this birthday party too, or maybe it was just an emotional situation at the birthday party. We map things back and forth. But we do it because, I think, knowledge is not about remembering; it's about forgetting.

Jim:
Hmm.

Lionel:
I think that if you remember everything, it's chaos. This is another Borges story: Funes the Memorious, who remembers everything he sees. And as such, his life is a nightmare, because when you say the word "table," he doesn't have any concept of table. He just has millions and millions of memories of all different kinds of tables, and so he has no concept of what a table is. The only way he can calm himself down is to curl up in his bed and point his head in a direction where he knows he's staring into an area, underneath the river, which he has never experienced. And then his mind is quiet, because he can

Jim:
Hmm

Lionel:
imagine something that he's never seen. But as soon as he opens his eyes and walks out into the world, it's a riot, because he not only sees the table, he sees the table at every single instant of time he has ever seen it.

Jim:
right

Lionel:
And that's, of course, the great thing about a Borges story: that's what's happening to us. At some point we have to throw a lot of it out. We filter it out. And the question is... go ahead, go ahead.

Jim:
Yes, yes. We filter so much out. Yeah. The eye is not like a video camera. The eye does not see the whole picture; it sees a tiny piece of the picture. I see the rim of your glasses, and now I'm looking at your headphones, but the mind puts together the model of Lionel with the glasses and the headphones on. You don't actually see

Lionel:
Yes.

Jim:
the entire thing at once. You can take it in, but it's not,

Lionel:
No.

Jim:
you know, you're usually looking at something very specific when you're looking. And so this idea that we have some kind of photograph is not really true. We have hints, gestures of a photograph, from which we assemble the whole picture. And we assemble it according to the model.

Lionel:
Your bandwidth is cutting in and out, Jim. I'm sorry.

Jim:
Yeah.

Lionel:
Hang on, it looks like it's resolving itself now. Give it a few seconds.

Jim:
You can guess what I was saying. You can probably put together, piece it together.

Lionel:
No, no, but I did hear it, and it's exactly... well, the whole point is we don't know what's going on. Because, treading well-trod ground once again: between the moment a photon hits our retina and the moment we have an image in our mind, all kinds of stuff goes down. Are you still there?

Jim:
Sort of, yeah, I mean I've got nothing. I'm trying to do an internet speed test, and I've got nothing. Oh, now it's fast again.

Lionel:
Okay, you there?

Jim:
Yeah, yeah, I am. So all

Lionel:
Okay.

Jim:
that stuff that I said, you can kind of just guess what I would have said, and that'd be...

Lionel:
No, no, but I did hear it, because it chopped it up and I heard it.

Jim:
Yeah.

Lionel:
It chopped it up and I heard it, and it's true. And I think what happens is... so the question is, who's throwing out the frames? I mean, every

Jim:
Right

Lionel:
time you read a book... like, I'm reading the book that my brother hates, which is The Birth of the Modern by Paul Johnson, which is

Jim:
Mm-hmm.

Lionel:
a massively great book. It's a lot of fun. It's also a thousand pages, at least, I think. But as deep as it is, he's throwing out an infinite number of details. I mean,

Jim:
Right.

Lionel:
this is sort of the paradox of reality: you have to throw out so many details as irrelevant. Oh, I was working at my computer, and there's a problem with my computer, and there's a truck on the loading dock that's beeping, backing up. Why'd you mention that? Oh, because that's what was happening. I'm telling you everything that's going on. And

Jim:
Yeah,

Lionel:
we don't do that.

Jim:
Yeah, but that's not part of the story. We don't need that.

Lionel:
Yeah. And so there's this concept of what's relevant and what's not relevant.

Jim:
All right.

Lionel:
And that might be a very complex concept, what's relevant and what's not relevant. But part of... and when we sleep, people say that one of the things that happens during deep sleep is that the memories are pruned, the details are washed away, and things are boiled down and simplified, which makes total sense to me.

Jim:
Right.

Lionel:
It's parsimony, which is you want to do things in the simplest and most lightweight manner you possibly can.

Jim:
Well, so here's my idea of what the compression is. And it goes back to the old philosophical question of what is a chair, which I never really got when I was studying philosophy, but now I think I have a better understanding of it. Because when you think of a chair, there's nothing about chairness, as Plato would say, the form of chairness, that you could find in all the things we call chairs. You look at a Herman Miller, and you look at something in your living room that's like Swedish-designed, and you look at a dinner table chair, and you look at a beanbag, and a papasan chair, and those are all chairs. None of them have anything to do with each other.

Lionel:
Right.

Jim:
Well, they have, except for their use: you sit in them, right? But now you've already compressed it down to the use. And even more, we just call it a chair. It's something that is called a chair. So you saw something, maybe at Design Within Reach, right, and you had no idea it was a chair. You're like, what is that thing? It's a chair. Okay, great. That now goes into the set of all things that can be called chair. And we just say chair. And then that goes into the story, that goes into the model. And we've compressed all of this information about an object, a collection of things that used to be other things, that will eventually become other things when they're thrown in the garbage and break down. This state of matter is called a chair because we call it that, and that's it. And that's it for everything, you know.

Lionel:
And the concept of chair is very deep. Like, as you're talking about this, I was thinking about like, what other things can you say? Typically a chair is manufactured. Like, would you call a log stump a chair? Not really, your concept

Jim:
Yeah.

Lionel:
of the word chair is that it is a thing that's been fabricated by human hands.

Jim:
Mm-hmm.

Lionel:
Hofstadter, the guy who wrote Gödel, Escher, Bach and who wrote Metamagical

Jim:
Yeah.

Lionel:
Themas, he embarked upon a couple of inquiries that I thought were actually fascinating, things I had always wondered about myself. And one of the fascinating ones was typefaces. He said,

Jim:
Yeah.

Lionel:
it's amazing how far you can distort ABCDEFG and human beings can immediately pick it up.

Jim:
That's right.

Lionel:
They can immediately pick it up and say, oh, that's the alphabet. That's a typeface. And,

Jim:
Yep.

Lionel:
you know, it's just amazing visual processing power, to be able to pick out patterns, to, as you say in your song, interpret the static. It's amazing

Jim:
Right.

Lionel:
how much we can see in stuff like that. And he does that with a lot of other phenomena, pattern recognition and stuff. And so there are some very, very deep, you know...

Jim:
and set theory, I mean he was big on set theory and the flaws and the problems with set theory.

Lionel:
Right. So the key thing is that we have these very fast rules of thumb, which are, like you said, stories

Jim:
Yeah.

Lionel:
or analogies to other things. I know how to handle this situation. I don't know what to do when being chased by an elephant, but I've been chased by a cheetah. I

Jim:
Yeah.

Lionel:
know. But the important part is that you don't pull them apart and say, I've been chased by a cheetah, but I'm being chased by an elephant, so I don't know what to do, I'm just going to stand here. You don't. So how do you know? How do you determine what's relevant and what's irrelevant? Because part of the decision you're making at that point is that the fact that it's an elephant versus a cheetah is irrelevant. I've got to run.

Jim:
So... this is gonna be a multi-part conversation, Lionel. I think this is great. Last time we talked about AI, it morphed into cognitive theory and philosophy. Let's continue it. If we don't have another guest, and we had, once again, our guest cancel, let's continue it next time. Because I actually have to get home, and this is relevant, to read to my kids. I'm reading "The Short Happy Life of Francis Macomber" by Ernest Hemingway.

Lionel:
Oh.

Jim:
And the key action that happens in that story is that he runs away from the lion.

Lionel:
Cool.

Jim:
That's the salient point.

Lionel:
So there's a synchronicity here. Yeah, let's pick it up next month, or next week. Okay.

Jim:
Yeah. Do you mind? Is that all right?

Lionel:
That's fine. This gives me a chance to... I really want to reread some of the Hofstadter stuff. He's really good to read. He was thinking about all this AI stuff back in the late 1960s,

Jim:
Yeah.

Lionel:
when they were using Lisp. And he has continued to write. He's old school, really old school. But anyway, yeah, let's wrap it up.

Jim:
I'll be joining you from Asheville, North Carolina next week. I'll be down there visiting a friend.

Lionel:
Cool.

Jim:
But I'll bring my gear and we can do the talk.

Lionel:
Okay, see you then.

Jim:
All right, Lionel.