Our Thoughts Tonight

ChatGPT: Artificial Intelligence & Society

February 24, 2023 Our Thoughts Tonight Season 5 Episode 1

In this episode, we delve into the rising influence of ChatGPT and artificial intelligence, and explore the implications they may have for society. We also engage in a discussion about the fallacies of Patrick Bet-David, the prevalence of academic elitism, and the trendy yet misleading topic of whether AI possesses consciousness. Join us as we navigate these complex and thought-provoking issues.

ON OUR THOUGHTS TONIGHT! 
 
Follow us on Twitter: 
https://twitter.com/Crucem_sanctam
https://twitter.com/HumanRedacted

Speaker 1:

Welcome to Our Thoughts Tonight. Join us as we chat in a mellow mood about music, philosophy, psychology, and anything else that comes to mind. Enjoy!

Speaker 2:

Our thoughts tonight.

Speaker 3:

Our thoughts tonight. Welcome everybody.

Speaker 2:

[inaudible] here. Hello. And I'm your host, Alex Trebek. Alice

Speaker 3:

Trebek. We've got a lot of different stuff. So tonight we're talking about AI, we're talking about, uh, academic elitism. We're talking about Ukraine,

Speaker 2:

Maybe Ukraine.

Speaker 3:

Yeah. Right, right. Um, uh, I know I'm gonna hit on the STEM, you know, fields a little bit. So we're gonna be catching up a little bit, uh, hitting some, uh, recent events and all that. So yeah, before all that, uh, I think we should go into Crucem Sanctam.

Speaker 2:

As always. We are back on our thoughts tonight.

Speaker 3:

Our thoughts tonight. So, yeah, we, uh,

Speaker 2:

Patrick Bet-David.

Speaker 3:

Oh, yeah, yeah, we were talking about that. Yeah. Uh, do you wanna talk about that a little

Speaker 2:

Bit? No, sure, why not? He's just, he's kind of a joke.

Speaker 3:

Yeah. I, again, I don't know too much about him. I've seen clips of him talk and stuff, but he seems very, uh, he reminds me of, um, what's that guy's name? Jesse Lee Peterson, was it? Who's that? Uh, Jesse, who's the, uh, the guy who, uh, he, he calls everybody a beta or whatever. He's like an old black guy. He's very, very

Speaker 2:

Conservative. Oh,

Speaker 3:

Yeah. Yeah. I think it's, I think it's Jesse. Oh, is it Jesse?

Speaker 2:

No, no. Um,

Speaker 3:

He's like, yeah, he calls everybody. Yeah, I think he interviewed, uh, I think he interviewed Kyle. What was it, Kyle? No, he interviewed Destiny at one point, like years ago. And he's like, dust du Destiny. That's, that's a girl's name, huh?

Speaker 2:

Or something? Uh, Thomas Sowell. Is that the guy you're talking about?

Speaker 3:

No, I, no, no. He's, uh, this guy is like senile, almost <laugh>. Like, he, he's obsessed with like, men acting, uh, like women or whatever. Like, he's like, it's a big problem or something for him. But he, you know, the Patrick dude or whatever reminds me a lot of him, cuz it feels like it's not intellectual in any way. It's not like it's gonna be intellectual at all. Sure. But there's no veneer of it either.

Speaker 2:

Does Patrick Bet-David claim to be an intellectual?

Speaker 3:

I don't. I mean, I don't know too much about him, but from the clips I've seen, I don't get that impression at all. Yeah. It's, it's, uh, Jesse Lee Peterson, this dude. "Beta male." Oh,

Speaker 2:

Yeah. Yeah. Okay. Yeah. That guy's like completely irreverent. I think it's a joke. Maybe.

Speaker 3:

I honestly

Speaker 2:

Maybe think he's a joke because it's like he's kind of Yeah. Insane. Yeah. He makes no sense.

Speaker 3:

So, yeah, I guess like the main topic tonight is, uh,

Speaker 2:

ChatGPT. Yeah. Artificial intelligence. And yeah, I think, for the first time that we can see, in an extremely disruptive way, like, ChatGPT... I never thought I would see Google shaking in their boots. Yeah. My entire life.

Speaker 3:

Yeah. I mean, I haven't used it. I think, I think I saw my brother-in-law used it. I'm not sure if that was the same program, but I, I mean, I don't know. Isn't like, I mean, Google has so much like, historical, almost like just, you know, it's the incumbent, you know what I mean? Yeah, yeah. So like, I don't know if you No,

Speaker 2:

They're still very powerful, but yeah. They've got to figure something out now. Yeah. Because everyone, I mean, I've been saying this for years, like, search is awful right now. I don't know if you're like, if you notice that too, but you can't search for anything. Yeah. Uh, without getting a thousand ads and a bunch of irrelevant Yeah. Uh, topics.

Speaker 3:

Yeah, that's true. Um, but ChatGPT, and I don't know too much about it, it's just like, it's more of a questionnaire, not questionnaire, but like, like, you can't, like look up, uh, I think we were talking about this before, you can't look up articles, you can't like research

Speaker 2:

Essentially. Not yet. And it comes with its own set of problems, obviously. Yeah. Because of the biases, but we'll get into that. Yeah. But it is more functional, uh, in the Bing version of it, because Microsoft invested 10 billion into it, and now they've, uh, joined it with Bing. Yeah. And they're letting, uh, journalists trial run it essentially. And it, like, sources articles, and uh, I mean, it's pretty amazing. It looks pretty great. Yeah. It's gonna also probably be contentious as well. Yeah. Because, you know, people are gonna start trying to monopolize it and so on. Yeah. As always. But that's not the AI's fault necessarily. It's the people. It's capitalist ventures and Yeah. And everything else. And just capitalism in general.

Speaker 3:

I remember I, uh, on my vacation recently, we did use some sort of like, AI question chat thing. It might have been ChatGPT actually, uh, I remember

Speaker 2:

Asking, did it look like this?

Speaker 3:

It was something, yeah. Yeah. That's it then. Yeah. Yeah. I remember asking, like, uh, it was like critiques of philosophers or whatever. Mm-hmm. <affirmative>. I was trying to see how accurate it could be or how good the critiques

Speaker 2:

Could be. Let's try it. Gimme, gimme, uh, the most complex philosophical question you can think of

Speaker 3:

A question I was thinking philosopher. Oh, okay.

Speaker 2:

Question. Whatever. Yeah. Some gimme a complicated or more in-depth, uh, philosophical question. And let's ask ChatGPT.

Speaker 3:

Okay. Oh. Okay. I guess I can do something from my thesis regarding it. Uh, okay. It's not too complicated. Can you keep the principle of the identity of indiscernibles while getting rid of substitutivity?

Speaker 2:

Getting rid of subs

Speaker 3:

Substitutivity.

Speaker 2:

All right. Let's, let's ask. Okay. So, can you keep the principle of identity of indiscernibles while getting rid of substitutivity? Let's see what ChatGPT has to say. Yes, it is possible to maintain the principle of identity of indiscernibles while rejecting the principle of substitutivity. The principle of identity of indiscernibles (PII) states that if two objects have all the same properties, then they are identical. In other words, if two things are indiscernible in all respects, then they must be the same thing. The principle of substitutivity, on the other hand, states that if two expressions refer to the same thing, sorry, then they can be substituted for one another without changing the truth value of the sentence in which they appear. This principle is sometimes called the law of identity.
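For readers following along, the two principles being contrasted here can be written out formally. A rough sketch in standard logical notation (our gloss, not ChatGPT's output):

```latex
% Identity of Indiscernibles (PII): if x and y share every property F,
% they are one and the same object.
\forall x\,\forall y\,\bigl(\forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y\bigr)

% Substitutivity: if a and b co-refer, either can replace the other in a
% sentence \varphi without changing its truth value (salva veritate).
a = b \rightarrow \bigl(\varphi(a) \leftrightarrow \varphi(b)\bigr)
```

The usual reason the two can come apart is that substitutivity fails in intensional contexts (belief reports, modal claims), which does not by itself threaten PII.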

Speaker 3:

Wow. That's crazy. That

Speaker 2:

Actually there's more too, but I'm not gonna go on.

Speaker 3:

Yeah. That's crazy. Yeah. I mean, I don't think it's a, uh, I mean, it's a contentious question. I think that's crazy. Interesting. Yeah. That's a way that's a dude. Yeah.

Speaker 2:

It's cool. It obviously has its downsides, but Yeah. I don't think I've ever seen anything like this.

Speaker 3:

Yeah. That is pretty crazy. Can you, can you like, respond to the same thing? Yeah. Yeah. Be like, okay, uh, how should I word this? Because, okay, the reality of it is, it's true, but it's very, it needs to be specific as to what kind of property ascription it's talking about. So be like, I don't know how to question that. Like, let's say, uh, I don't wanna say encoding. I want to make it say encoding.

Speaker 2:

You missed a part. Yeah. It says, I apologize for the mistake. Let me try to improve, provide a more complete answer.<laugh> to your question. If they can eventually incorporate all of the possible like, academic literature in these Yeah, that would be insane. Yeah. It'd be such a great tool.

Speaker 3:

That'd be cool. I remember, so I remember when I, when we did this in Washington or whatever, I, I looked up critiques on Husserl, and it was like very generic, like, kind of mundane critiques. And I was like, well, I don't even know if this is really even accurate, cuz it was like, like one of the things was, um, uh, one critique against Husserl is that he can be very, uh, like subjective, or too in the subjective sphere of things. Mm-hmm. <affirmative>. And like, like kind of, but like, not really. Like that's not what Husserl would say, you know? Um, so yeah, it's, yeah. It's not complete. I guess this just came out, right? Yeah.

Speaker 2:

So like, yeah. I mean, they've been working on it for a few years, but this is the first version. Yeah. They made it public maybe in November, maybe a little bit earlier. Oh, okay. But yeah, this is essentially the research version. They give you a disclaimer Yeah. Saying that, you know, your feedback is helping us improve. Right. Right. It's not like live yet. They do allow you to upgrade to Plus right now, which doesn't have any other different features, but, uh, it gives you like, faster response speed, because sometimes it's so packed that it won't respond. And if you're like using it for something, you're like, oh. You know?

Speaker 3:

Yeah. Yeah. Gotta wait. That's coming. That's, that's cool. But it's just, man, it's weird though. Yeah. I don't know. It's like, I mean, internet was already weird, but now it's getting kind of, it's insane. I don't know.

Speaker 2:

Yeah. Yeah. It really is. I mean, it's really helpful. People are talking about the biases Yeah. And stuff. Like, what do you think about that?

Speaker 3:

The bias? Oh, oh, implicit bias and stuff like Yeah. Yeah. I mean that, yeah. There has to be some sort of like, quote unquote vetting process, I guess. I'm not even sure how do they actually even determine this stuff. Like, who, like, I mean, it depends on who's actually in charge of this. Right. And

Speaker 2:

Then Yeah, it's, uh, and

Speaker 3:

Also the sources that

Speaker 2:

They're getting, uh, which I think Elon Musk had a part in starting at one point, but it was meant to be open. Yeah. Uh, open source or whatever it's called.

Speaker 3:

It also depends on like what sources are available online anyway. Yeah. And what's like, you know, at the top or whatever, I'm sure. Cause even if there's like no implicit bias or whatever, not everything's, you know, online, you know? Uh, and that's probably gonna affect it somehow. And I actually wonder in terms of like academic, like sources, like if it has access to, if it will have access to all academic sources, you know, or if they're gonna be like, behind paywalls or

Speaker 2:

In general, I mean, they are behind paywalls now, so it's probably not going to. But what you can do is, uh, and in fact I do this, I read so much research literature right now for classes that I copy the entire article after I've read it, and I ask it to summarize it Yeah. For me. To see if I understood the points. Yeah. Yeah. That's cool. Or I'll take like, the results, because, you know, the results are arguably the most important part of the article, but also the easiest thing to confuse and get wrong. Yeah. But yeah. So I feed that into there and I check to see if I understood it, if I missed anything. And sometimes it misses things too. Yeah. Yeah. This version, at least, it gets things wrong. So you kind of have to be aware Right. And challenge it. But if you challenge it, it corrects itself, which is crazy.
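The paste-an-article, ask-for-a-summary, then-challenge-it workflow described here can be sketched against the OpenAI Python client as it existed at the time. The prompt wording, model choice, and helper names are illustrative assumptions, not anything from the episode:

```python
import os

def build_summary_prompt(article_text: str) -> str:
    """Wrap a pasted article (or just its results section) in a
    summarization request, so the reply can be checked against
    one's own reading."""
    return (
        "Summarize the key findings of the following research article "
        "in a few bullet points:\n\n" + article_text
    )

def summarize(article_text: str) -> str:
    """Send the prompt to the chat endpoint (requires OPENAI_API_KEY).
    As noted above, the model can get details wrong, so the reply still
    needs to be checked against the article and challenged where it slips."""
    import openai  # 2023-era client; pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": build_summary_prompt(article_text)}],
    )
    return response["choices"][0]["message"]["content"]
```

The point of splitting out `build_summary_prompt` is that the same pasted text can be re-sent with a follow-up ("you missed a part") in a second message, which is the challenge step described in the conversation.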

Speaker 3:

God, it's weird.

Speaker 2:

Which, which kind of gets into the conversation a little bit of like,

Speaker 3:

Yeah,

Speaker 2:

I guess this is a conversation we've had like a thousand times, but at what point does something have its own subjective experience?

Speaker 3:

Oh, yeah. Yeah. Well see, I feel like my answer has changed since last time. Well, I think it's still the same, but I think like, I don't know, I've sort of been viewing everything much more phenomenologically. And like, I know in the past, like, well, I don't know, like, I think there's two different, like, approaches you can take, at least philosophically. There's like the metaphysical route you can take with that, and then like, the phenomenological route you can take with that. Metaphysically, I feel like it's not too hard to be like, oh, you know, it's almost there. You know, AI is almost there to have some sort of experiences. Phenomenologically, I don't think it's anywhere near, or if it's ever gonna get there. Yeah. I don't know. I mean, are you talking about like ChatGPT having like, its own, like, you

Speaker 2:

Know, I know people like to talk about that, but I, I'm starting to think that it's just, I don't know, it's confusing, because one aspect of me sees like the computational theory of mind. Yeah. That's really popular. And it makes sense to a certain degree Yeah. That we are just computing things. So what's the difference between us and this thing? Yeah. Besides the fact that maybe it doesn't have emotions, allegedly, that's what it's potentially programmed to think or say. Yeah.

Speaker 3:

See, I, I think so. I think that's where the phenomenological parts come into play. Like in phenomenology, there's like, uh, it kinda depends on which phenomenologist you're asking or talking to, but there usually is two different ways to approach, like, the world. Um, like in Husserl, there's like, you know, you can kind of approach it in a scientific way. Yeah. Right. But most of us live in it in like, something called a lifeworld. In Heidegger, you have like present-at-hand and, um, uh, right, sorry, present-at-hand and ready-to-hand. With Buber, you have an I-It, uh, sorry, an I-Thou and I-It relation. The point is there's different, like, ways to relate to the world. So like, if a robot or whatever an AI is analyzing something, like, like for example, if we're like talking about the human, you know, the human organism as like a mechanism, I'm like, oh, it takes in objects or it understands things, uh, and it analyzes it or whatever. And it's very, you know, quote unquote systematic, the brain and the body and everything. That perspective is only one perspective. There is a sense of, like, the human actually, like, living in a different sense, in a natural attitude. It's, it's hard, it's not hard to put into words. Like, for example, right, uh, the phrase like, I know a person, like, I know Joe. Right? What does that actually mean? It's gonna connect, but like, I don't know Joe in the same sense that I know a math theory or, or whatever. I know him on a personal level. Right. And that kind of relation, this personal relation, this non-objectifying relation, I don't think it's possible for any sort of AI. I think an AI possibly might be able to have an It relation of like, talking about data or, or understanding data, or maybe even like, you know, being cognitive of, of it somehow. Like that might be possible.
But I don't think it's ever gonna be like truly experiential in the sense that a human is like, it's, no, I don't, I can't. Why

Speaker 2:

Wouldn't it be though? Because if it has a, a capacity for memory, which this one Yeah. Like if you had a, an hour long conversation Yeah. And you referred back after an hour to the beginning of the conversation, it remembers it. But if you have a, if you have a, a database memory that can hold, let's say multiple lifetimes worth of information, who's to say that it couldn't recall things or people

Speaker 3:

No, no. Memory is fine. I mean, for example, I can still, like, I can analyze Joe too. But I guess what I mean by the Thou distinction or, or, um, uh, I don't know how, it doesn't really fit here too much, but like, or a Husserlian lifeworld, like, the AI is never gonna, like, take in the world like that. It can analyze things, it can store memory, it can focus in on information. But I, I feel like it's very hard to really let the AI just be itself without that going on. So that's kinda what I mean by the whole lifeworld thing, is like, we constantly live our lives, like, not actually focusing on anything specifically. Or like, we're, we're cognizing it to a certain extent, but it's not like we're, like, object-oriented, like, analyzing people all the time. Whereas I think an AI will constantly always be that. Like, it's, I, it's hard to, like, imagine, like, a computer just, like, you know, just chilling essentially. Yeah.

Speaker 2:

Yeah. Just doing nothing.

Speaker 3:

Yeah. And it's doing nothing. Or like, you know, or, or another way to put it is like, you know, if you're talking to someone, there's two different ways you can talk to someone, right? You can like, talk to someone and you wanna, like, get something out of them. Right. You're, you're object-oriented. You're focusing on the decisions you make, what you say, you're focusing on your next words, you know, you have goal orientation, I want this from them. Um, or you talk to a person like very naturally, like, just letting it flow, like, you're not even conscious of it, essentially. Whereas an AI, I think by definition, is always calculating very, quote unquote, intently, if you can even use that word. Yeah. I think in that sense, I don't think it's ever gonna get to an experiential state. Mm-hmm. <affirmative>,

Speaker 2:

It wouldn't need to, you

Speaker 3:

Know, it wouldn't need to. But I don't,

Speaker 2:

Because it doesn't have the, at least I think we do because there's a, like a physiological benefit Yeah. To our brain that, uh, it benefits from rest or just Yeah. Being lazy and like that I think kind of has, um, kind of is tied with our, like, creativity, you know? Yeah. You know, you just kind of wa let your mind wander. I don't think Yeah. Because computers or artificial intelligence don't necessarily have to do that.

Speaker 3:

Yeah.

Speaker 2:

But then again, screensavers, it's essentially just a computer chilling.

Speaker 3:

Yeah. Well, I, I, well then, well then I think it gets fuzzy in terms of interpretation of what you mean by, like, chilling. Because I, I guess <laugh> because like, I don't know, I think the whole Mm. It's just, it's just different. Cuz in the sense of like an actual experience, like, I guess I won't go phenomenological, but another metaphysical route to take is like, is the computer ever gonna have, like, qualia or, like, um, qualitative experience? Like, sure, it seems like it, you know, you can use your inductive logic and be like, okay, it seems like it's responding like a human would or whatever. But like, I'd be very suspicious, um, at, at least in terms of technology right now, if there's an actual experiential state. Um, there's nothing indicating that it would. It just seems like it on the surface.

Speaker 2:

Yeah. I agree

Speaker 3:

With you. Yeah. I think eventually you could get to that point, but I don't think we're there

Speaker 2:

At all. I think there, yeah, there are definitely questions to raise, but I think really the big thing is that people anthropomorphize artificial intelligence.

Speaker 3:

Yeah. Yeah. Yeah. Where it doesn't need to be essentially mm-hmm.<affirmative>. Um, but it seems like it, and that, that'd be it. You know, that'd be enough for some people.

Speaker 2:

Should we take a break?

Speaker 3:

Yeah, yeah. I'm sure I'm done.

Speaker 2:

All right. We'll be right back on our thoughts tonight. Our

Speaker 3:

Thoughts tonight.

Speaker 2:

We are back on our thoughts tonight. Our thoughts tonight. And we're talking about ChatGPT and the emergence of AI in all of our lives over the last few, or last generation, or recently, I guess. Yeah. Last 10 years or so. Yeah. But I was gonna ask, what do you think are the political implications for this so far?

Speaker 3:

Uh, I'm guessing you're talking about, uh, bias or, uh, biases, right?

Speaker 2:

Biases, yeah. And, um, obviously there's like techno-colonialism, which people don't really... I mean, it is a contemporary topic to talk about, yeah, as it is, but not nearly as frequently as it should be, in my opinion.

Speaker 3:

Yeah. I mean, obviously it depends on who, you know... Uh, we were talking about this I think off air, but like, obviously who's in charge of, uh, ChatGPT, where the sources are coming from, where it's, uh, you know, sorry,

Speaker 2:

It's not picking up

Speaker 3:

Your voice. With what sort, what sources are actually even, uh, available, you know, for it to, uh, get. Um, obviously there's paywalls, or even if some things aren't even online yet. Like I can imagine, uh, I mean there's, you know, there's, there's published journals out there for, uh, different types of things that are not as, uh, frequent or not as, like, upper tier as others. So I wonder if... and usually, like for example, I, um, a paper I was working on yesterday, or not yesterday, last semester, uh, was actually from, like, an African philosophy journal. I don't think that would be anywhere on the radar, for example, for ChatGPT. So, yeah. I mean, and that's, you know, that can have connotations of racism or connotations of, uh, different cultural biases for sure. Do

Speaker 2:

You think it'll favor like, liberal American perspectives? Like, like the corporate media essentially?

Speaker 3:

I don't, uh, I, I don't know. I mean, I, the inclinations, yeah. But I guess we'll have to wait to see or

Speaker 2:

Hegemonic, uh, hegemonic mindset.

Speaker 3:

Yeah. I mean, most, oh, it might, it might, like, have some... again, I'm not, I'm so confused as to how the critiques work. Well, it might be a little critical, but I don't think anywhere near the same extent as, like, you know, um, it's not gonna pull a, it's not gonna pull a Noam Chomsky and be like, oh, uh, all presidents since World War II, uh, are guilty by Nuremberg war trial standards. But like,

Speaker 2:

Yeah, by, by default it won't. But if you, right now, if you ask it to answer this from an anarchist perspective, or answer this from, uh, the perspective of a communist, or the perspective of Noam Chomsky, it does. But not by default.

Speaker 3:

Yeah. So I guess what is the default then? That's the question.

Speaker 2:

Yeah. That's really what I'm asking, I guess. And it seems to be just like run of the mill stuff.

Speaker 3:

Yeah. Well, again, like, I think they kind of, I mean, if it's run of the mill stuff, then that's more of a question of, like, what's run of the mill. And that doesn't even just go for ChatGPT, that can go into academia too, or whatever medium. Like, what's the run of the mill? So, I mean, it's gonna be reflective of, you know, I don't wanna say reflective of the culture around it, but reflective of where the sources are coming from. And if the sources, and all the people developing it, are from a certain worldview, it's gonna reflect that somehow, probably.

Speaker 2:

STEM majors. Yeah. Yeah. We were talking about that earlier. Yeah. And, uh, I mean, artificial intelligence is a, what is it called when there's, um, we should ask ChatGPT this question <laugh>, um, interdisciplinary study. It's an interdisciplinary study. So you've got different fields.

Speaker 3:

Yeah.

Speaker 2:

Um, but probably the majority of them are STEM or, you know,

Speaker 3:

Liberal

Speaker 2:

Arts. Yeah. Like, uh, psychology,

Speaker 3:

Uh, I, uh,

Speaker 2:

Philosophy

Speaker 3:

People who do adult

Speaker 2:

In

Speaker 3:

In general, people who develop ChatGPT.

Speaker 2:

I don't know about the developers of ChatGPT, but I, I know that there's more than likely cognitive scientists working on it.

Speaker 3:

Oh, I'm sure. Yeah. I don't, uh, there's probably a few. Like, I'm trying to think, like, philosophers. I mean, like, Daniel Dennett comes to mind, man. Not that he would work on anything like this, but possibly. I mean, I think, yeah, obviously STEM is part of it. I think some sort of business major is gonna be a part of it. I mean, a large part of, like, business degrees have, like, AI components now that they're studying. And that's, I mean, that kind of shows a little bit of, like, where the biases may lead to if you have people who are obsessed with, like,

Speaker 2:

Economics Yeah.

Speaker 3:

Be part of it. Cuz that's, I mean, that is a whole industry. I mean, that's been on the rise. Um, I mean, even at UAA, they, they really recently developed a whole AI lab. Really. So, yeah. So that's, I think STEM and business areas are, are top, uh, and cognitive science as well. But yeah, I don't know. So it's gonna reflect those people. <laugh>. What

Speaker 2:

Are your views on, on academic elitism? Like, yeah, we were talking about this earlier. Be before we started the podcast.

Speaker 3:

I, I, I only really hate, well, I don't know, hate, but like, I see more of it, uh, as a problem through STEM. And maybe that's like my own bias, but like, that's like the number one. Um, when it comes to, like, the humanities or liberal arts, like, there are, like, you know, elitists in general. But I don't know, it's kind of weird. Like, a really elitist, I don't know, uh, a really elitist ethics guy, you know, at Harvard or whatever is bad, you know? Sure. Mm-hmm. <affirmative>. He's probably got a really weird,

Speaker 2:

It's funny. I had an experience

Speaker 3:

Yeah. With,

Speaker 2:

Yeah. With one of my professors.

Speaker 3:

Yeah.

Speaker 2:

So he called me un-American.

Speaker 3:

Right.<laugh>. And I think

Speaker 2:

He's a Harvard professor.

Speaker 3:

Yeah, I, I know who you're talking about. Oh, I forgot.

Speaker 2:

I forgot, he wasn't a professor, but he's a Harvard grad.

Speaker 3:

Oh, okay.

Speaker 2:

I'm not gonna say his name. Yeah.

Speaker 3:

I, no, no, no author.

Speaker 2:

I'm not gonna dox the guy.<laugh>.

Speaker 3:

Yeah. That's pretty bad. But I feel like the academic elitism in terms of STEM, I think, is more understated and more dangerous, I guess. I'm not sure about dangerous, but I think it's more, yeah, I think, I think more dangerous. Like, everyone talks about, like, when we think of academic elitists, we usually think of the liberal arts guy. At least in my mind, that's what comes up. And I don't think that's really the problem. I think the problem is actually, like, the elitist, or, okay, my bias is showing, but like, the whole inclination towards STEM and mm-hmm. <affirmative> what that implies as to society as a whole. Like, no one talks about, like, the elitist scientists at, at whatever. And it's not a critique of science, it's a critique of, like, the person who goes into it, the person who, like, teaches it or whatever. And I think that's much more dangerous, because, like, one, that's less funding for liberal arts, that's opening wide the doors. Yeah. I think there's an economic disparity too.

Speaker 2:

I think, uh, liberal arts, people, like, humanities, I think humanities are considered liberal arts. Right. And, like, even Yeah. Yeah. Like criminal justice, psychology, like those types of non-engineering, non-business. Yeah. Uh, even business is considered liberal arts, I think. But let's, let's say at

Speaker 3:

Least

Speaker 2:

It's not an engineering or business or chem, uh, chemistry, science or physics degree. Yeah. Hard science or business. Yeah. So if you're not trying to make money, essentially Yeah. Nowadays there is, uh, a criticism Yeah. Of your degree. Which is ironic considering the fact that that's why higher education was invented in the first place. Yeah. Yeah. Just to become a better citizen,

Speaker 3:

Right? Yeah. So, I mean, I think the Yeah, the real well, yeah, that's the, yeah, exactly. And I think, like, not

Speaker 2:

That those people can't be Yeah. Um, academic elitist themselves, but

Speaker 3:

Yeah, no, yeah, absolutely. But it's just like, it's also the, you know, the, I don't know of any statistics or anything, I'm sure there's some study done, but like, I'm sure there's a lot more, you know, right-wingers, you could say, in those fields than not. I mean, like, for example, they talk about, like, oh, you know, university has a left-wing bias. You know, university has a left-wing bias. Uh, one, that's gonna be the case most likely because it's a university, like, by its own nature it's gonna be left-wing. But like, I think if you look specifically at certain majors, like, that might actually not be the case in business. Or, um, even some hard science or

Speaker 2:

Whatever. Oh, yeah. And I've had experience with that, like, you know, being in criminal justice classes. Yeah.

Speaker 3:

It's like

Speaker 2:

Deer in the headlights or whatever, you know, it's definitely, you get like, you get stares,

Speaker 3:

<laugh>. Yeah. I mean, it's good that like ethics courses, for example, are required of like business majors or whatever. Like, but that kind of goes to show you, you know what I mean? Like, like you're taking a business course or a medical course. I mean, I guess with medical is different, but like, especially with business, like you're taking business, you're required to have an ethics class. Like what the implication of that is saying you, you be careful. Like, like let's teach you some ethics before you go, go out there. And like that kind of, I don't know, it kind of says everything to me. Do

Speaker 2:

You think, do you think people miss it? Like do you think they miss the point

Speaker 3:

Ethics course? Probably. I mean, it's, like, most likely. I mean, I don't know, I don't have any statistics or anything like that. But like, uh, if you're taking, like, I don't know, three, four normal business classes and there's one business ethics class, like, you're probably not gonna focus on it too much. Oh, it's my, it's my required class, let me get it over with and then I'm done. Um, I mean, some people don't fall into that trap. Like, I had a friend, uh, he was in marketing and he was about to finish his degree, and then he just felt so wrong about it. He's like, I love the human interaction part of it or whatever, but like, it just feels so wrong. So he switched to philosophy. And I, yeah, I mean, I think that kind of says everything.

Speaker 2:

Yeah. I agree. I mean, for a society that values money so much, you would think that there would be a focus on ethics because people care about it so much. But, you know, everyone else is held to ethical standards, like doctors and psychologists and Yeah. Even, even if you're just trying to do a basic study, you're held to like ethical standards. Right. But when it comes to money, it's like a free for

Speaker 3:

All. Yeah. But also, I mean, I think the thing, the thing is clear with money, but then like, when it goes to, like, the hard sciences and engineering, I think, like, the trend continues in a different aspect. Like, cuz it's at the detriment, I think, of other fields or whatever. Like, like, what's the stereotypically ridiculed major? It's gender studies. Like, of anything, that's the stereotype,

Speaker 2:

People crap on it.

Speaker 3:

Yeah. Yeah. They crap on gender studies, which

Speaker 2:

Is ridiculous.

Speaker 3:

It, yeah. Like, again, I've, I never, never took the class, but I essentially took the class, long story short, but like super informative, crazy informative. Like it's, it's really in detail. There's nothing funny about it, you know? Mm-hmm.<affirmative>. Um, but I guess,

Speaker 2:

Well, what is funny about it to, to people? Like, what do they think it is? Largely conservatives. Yeah. Or reactionaries.

Speaker 3:

Yeah. They probably think gender studies is like, you know, oh, men suck, here's this and this and that. That's not what gender studies is at all. Gender studies is not even just about gender. I mean, it goes into, um, it goes into race a lot too. It's more of just, it's more like social, like, social issues in general. And it's not just about women. There are a lot of social issues about men too.

Speaker 2:

Yeah. But do you think that's why like reactionary conservatives?

Speaker 3:

Yeah. I mean the, yeah, I think bias,

Speaker 2:

That sort

Speaker 3:

Of thing. I mean, part of it's probably, yeah, cuz it's, it's centered around women. At least that's like the, the initial part about it. But another part is, yeah, I think part of the whole mindset is that, you know, it's not gonna get you a job, it's talking about very loose things. You know, in terms of methodology, a humanities degree is gonna be very different than a hard science degree just by the nature of what it is. Like, yeah, it's not a step-by-step math process, not a step-by-step research, uh, process. It's a critical thinking, logic kind of mentality, that mentality that you're in. So it's fuzzy and it's loose and the answers aren't so clear in gender studies or whatever, and that's, like, the inclination that comes up. Like, you talk about something in philosophy to a STEM major, and it's like, it just goes over their heads. Cuz it's like, the answers aren't easy, there's nothing about it that's a step-by-step process. Mm-hmm.<affirmative>. Um, so I think that's why, you know, and there's obviously, like, you know, YouTube clips of, like, you know, someone like Bill Nye. Oh man, I'm kind of jumping around. But like, there's that famous talk where it was like, Bill Nye, Richard Dawkins, Neil deGrasse Tyson, Lawrence Krauss, a few other guys. And some, you know, I like some, I don't like others, but like, um,

Speaker 2:

They all have their good moments. They

Speaker 3:

All have their good moments, but they're very much like, that's what I'm talking about. You know, I think Richard Dawkins is pretty respectful of philosophy, but someone like Tyson and Bill Nye, like, the quintessential stereotype of like

Speaker 2:

Science man. Yeah.

Speaker 3:

<laugh> and like, they were talking about, like, consciousness or whatever, and it was like the most, like, God, they don't know anything. It was like, uh, Tyson was like, what if consciousness is an illusion? Which is, which is an actual position you can take, but it's like, it's not an accepted solution, it's not an accepted answer, it's one of the outliers. And then, like, Bill Nye, uh, jokes around, he's like, what if we're, what if, you know, he pretends like he's smoking a joint, uh, and he's like, what if we're just thinking that we're thinking? And I'm like, that's an actual legitimate philosophical problem. But like, you know, the audience laughs and all these guys are laughing. And it's like, that's actually like,

Speaker 2:

Just thinking that we're thinking that is pretty interesting.

Speaker 3:

Um, yeah. But like, that's like, that's like, uh,

Speaker 2:

Descartes, what do you call it? Um,

Speaker 3:

I think, therefore I am. Yeah, that's reflexivity. That's the whole point of reflexivity. The cogito. Yeah. And let alone the whole field of epistemology. So it's just like, I don't know, it, it ticks me off cuz it takes, and then someone like Tyson says things like, oh, philosophy pretends, like, it asks really deep questions. And I'm like, it kind of does. Like, that's kinda the whole point, is that it asks deep questions. Like, yeah, we're not talking about, you know, uh, the nature of a black hole. Okay. Like, it's, I don't know, the perspectives are different. And yeah, and it's not like the humanities disrespect science or, or the hard sciences. It has no attitude like that towards that. But it's not reversed, you know, for the hard sciences towards the humanities. So. Yeah. Yeah. It irks me a lot.<laugh>.

Speaker 2:

I agree. I agree. I mean, it's, yeah. I completely agree. Yeah. We're gonna take a break and we'll be right back with some more to talk about

Speaker 3:

Our thoughts tonight.

Speaker 2:

We are back on our thoughts tonight. Our thoughts tonight. Salve Regina.

Speaker 3:

Was that different, uh, different group singing

Speaker 2:

That? I don't know. It was just a different piece. Okay. It was just like one of those, like, playlists, like, when you're an 18th century philosopher chilling in the dark under candlelight or whatever,

Speaker 3:

I love the ones where it's like, uh, when you're a villain or something, like, when you're a villain in the Romantic period,

Speaker 2:

Those are so great. Yeah.

Speaker 3:

<laugh>,

Speaker 2:

Uh, that, that itself, like those videos, I mean, you could write an entire like, phenomenological, like

Speaker 3:

Oh yeah.

Speaker 2:

Like essay on that, I guess. Yeah. Because I mean, it, it, maybe it's phenomenological, but yeah, I know, it's weird how it's just images and, like, songs, but they create something else. We've talked about this

Speaker 3:

Before. Yeah. I, I, I mentioned this actually in one of my classes too, when we were talking about phenomenology. It all connects. I will say there is, um, there is a dungeon synth album where, uh, the whole theme is that you're kind of, like, an old Arab or Jewish philosopher from, like, the Middle Ages<laugh>, it's called I, uh, that kind of inspired Bay Bay hark, actually. Oh, really? Yeah. But it's just like, uh, you know, that 25 minute song from Burm Ang or whatever mm-hmm.<affirmative>, uh, it basically sounds like that, it's like<laugh>, and the, and the, the album cover is just this really old, really old Middle Eastern dude just going through scrolls, and it's like, oh shoot, that's so cool.<laugh>.

Speaker 2:

Um, you found it on YouTube?

Speaker 3:

Yeah. Yeah. That's

Speaker 2:

Awesome. You should send it to me.

Speaker 3:

Yeah, I will. But you wanna talk about,

Speaker 2:

Uh, oh yeah. The, um, so the question of, you see it a lot in the news, and I almost find it to be, like, a gimmicky news story these days, and, uh, it really fits with the ChatGPT thing. Like, just the question of, is AI conscious, or can it become conscious, or has it even become conscious now. I'm not completely discounting it, so take this kind of with a grain of salt, but it's almost an unnecessary question, because it rests on a huge assumption: that we can create anything as intricate and powerful as consciousness, like what came about through evolution, let's say. Yeah. Whether, whether or not consciousness is a product of evolution, let's assume it is. Yeah.

Speaker 3:

So what, so the essence,

Speaker 2:

So the question is like, do you think that question should be criticized? Like, do you think it's become a little bit gimmicky to ask if AI is conscious?

Speaker 3:

Oh, it'll be gimmicky. Um, I, I mean, I think, I think it depends on who's asking, but like, yeah. I mean, I'm,

Speaker 2:

That's another thing though. Some journalist in the New York Times is trying to get clicks, versus a scientist who's doing an actual, like, low-key study on it. Yeah.

Speaker 3:

Yeah. I mean, that's, that's, that's exactly what it is. Like, as a philosopher, I, I guess I am really, like, I'm really sick of the whole concept of AI, there's something about AI these days. I don't know, like, I mean, it was fun right now doing ChatGPT, but like, there's something about it that kind of makes me really, like, queasy. I don't know if you've seen those AI generated images at all. Like, they're really, oh,

Speaker 2:

They're kind of weird. Yeah, they're

Speaker 3:

Kind, they're very cool.

Speaker 2:

Kind of gross me out.

Speaker 3:

Yeah. But there are some that, like, I don't know if you've seen those ones, but like, um, I don't know, insert some pop culture thing, a video game or a movie or something. And, uh, like, the ones I was seeing were like, oh, insert video game in the style of, uh, an eighties aesthetic, uh, dark fantasy film. Super cool images, super intricate, I can show you later. And it's awesome. But, I don't know, I kept seeing them, and after a while it just made me really queasy. Um,

Speaker 2:

They make you nauseous.

Speaker 3:

Yeah. I'm, it's just weird. There's something awful

Speaker 2:

About it. Yeah, I agree. I agree. It's almost like, um,

Speaker 3:

It's not real

Speaker 2:

<laugh>. There's, um, like, the faces, if you look at the faces, they kind of, like, have weird structures to them. Yeah. And like, with the earlier one, I think it was DALL-E, there were, like, weird swirls and stuff on the faces that would just, like, kind of look really gnarly and make me

Speaker 3:

Uncomfortable. It's an acid trip.

Speaker 2:

It's totally like an acid trip. Yeah.

Speaker 3:

Oh my God. Oh dude. It's not really AI, but like, I guess it goes more into technology stuff. Um, there's that show, uh, on Netflix, uh, what's it called? Love, Death & Robots or something like that. I, I, I didn't watch it until,

Speaker 2:

Is there like a documentary on like Sex Dolls?

Speaker 3:

No, no, no. It's, every episode's apparently different. It's, it's computer animated stuff, but it looks really real, you know? Mm-hmm.<affirmative>. Um, and I was tripping on acid, uh, and<laugh>. It was the weirdest crap on acid Really? It was like,

Speaker 2:

What the hell are you doing to yourself

Speaker 3:

Watching that show? No, no. Well, a friend showed me. I, I wasn't, I didn't, I didn't know what I was expecting. She was just like, I'm not gonna say anything, just watch it. And I didn't know how to fathom it. But even when sober it's super weird. It's like, is it real or is it not? Um,

Speaker 2:

Yeah. So what is it though? They're like robots?

Speaker 3:

No, it's just, every episode's, like, the one I watched was about, like, a knight and, like, some, like, um, sea lady, and another one a pirate. Oh, okay. It's just the name of the show, but, I gotcha. Yeah. But it's, yeah, it's super weird. But, to go back to the whole question, like, AI though, of, like, can they be conscious or whatever. It's like, yeah, I feel like it is kind of an assumption, and it's kinda, like, jumping the gun a little bit. Cuz even with, um, philosophy of mind, even for those who do accept the thesis that, like, we can make consciousness, like, we're nowhere near it. Like, that's pretty much a widely accepted answer: even if it's possible, we're nowhere near it. So, um, it kinda shows a little bit of, like, human hubris a little bit. So yeah, I mean, it's also, again, it further shows, like, the lack of, like, attention to, like, the philosophical debate about these things. Mm-hmm.<affirmative>. Like, philosophy of mind is one of the most, it's pretty much the largest area in metaphysics, which itself is the largest area of philosophy by far. And, I, maybe, maybe tied with epistemology. But it has such a rich academic background that, like, you can't just, like, post a journal article about it, like, oh, maybe it's answering my questions. It's like, there is so much depth into it. There's questions of emergence, of selfhood, of consciousness, of

Speaker 2:

Qualia. They stretch back to the fundamental

Speaker 3:

Questions. Absolutely. Yeah. It's, yeah, it's, it's an ancient question. So it's like, um, it goes back to, like, nous with Aristotle. Like, it's, it's, it is a really complicated area. So just, like, the hypothetical nature of, like, oh yeah, can robots be, can robots be real? Or AI this, AI that. It's like, dude, do you even know what you're talking about? And that's fine, people can, people can, like, think about that kind of stuff, but, like, I guess that is clickbaity for sure.

Speaker 2:

For sure. It's like, like a feint. Yeah. There was one situation that I saw, before we, like, do final thoughts. There was one situation I saw where, um, a few months ago, I think I mentioned this earlier, with the AI scientist who was fired from Google for saying that he believes, or either he quit or he was fired, I, I can't remember. But, uh, he thought, he said at least that he thought that Google's AI had become conscious, but he used that as, as kind of, um, clickbait to, yeah, have all these journalists from, like, Bloomberg or, yeah, or the New York Times or whatever it was, uh, interview him, where he started talking about the consciousness, yeah, and how it might have become conscious. But then he started talking about techno-colonialism, yeah, and saying these things, whether or not they are conscious, have implications on society.

Speaker 3:

Oh, so he baited them, essentially.

Speaker 2:

Exactly. That's awesome. Yeah. So he was using, he's a very smart guy. Like, I urge you, whoever's listening, to go watch this interview with this AI scientist on Bloomberg business, um, on YouTube, just type in AI Google. But the thing about that is that he's incredibly right. Because when you have Henry Kissinger writing books on AI and how it's, uh, you know, a modern domain of, like, colonialism mm-hmm.<affirmative>, you gotta be careful and start paying attention to the implications of this. And I think, like, you're absolutely right for saying people don't think about the entire rich background of this. In order to understand how powerful something like this can be, whether it becomes conscious or not is unimportant. You have to take these things into account.

Speaker 3:

Yeah. It's, yeah. It's not, yeah, it's not looking good and people being obsessed with it is also just not healthy. Definitely.

Speaker 2:

Yeah. It is a cool tool. It is. Yeah. I'm not trying to be a Luddite or anything, but

Speaker 3:

Yeah. I mean, I think to a certain extent, I don't know, like, I mean, yeah. Yeah. I don't know. I feel like to a certain extent, like, I don't want to plug Ted Kaczynski again, after all, all these years, I'm not mentioning him, but like, I think there is something to say about, you know, constantly using technology as well. So

Speaker 2:

Technological singularity.

Speaker 3:

Yeah. Yeah.<laugh>, shout out to the third episode. Uh,<laugh>. Oh. Lost episodes now. But

Speaker 2:

Anyway, they're, they're on there. It is unpublished

Speaker 3:

<laugh>. Oh, okay. Yeah. Yeah. You'll never know.<laugh>.

Speaker 2:

The first two are, are lost. I

Speaker 3:

Think the first one's lost. I, I think the second one's there, the first one. It

Speaker 2:

Might be on blog talk radio. Yeah. I'll have to go look. Yeah. But,

Speaker 3:

Um, there

Speaker 2:

Somewhere. Yeah. Should we do final thoughts?

Speaker 3:

Yeah. Um, final thoughts? Uh, man, I'm so, like, I love philosophy of mind, but like, I hate the machine side of things, the AI side of things. It's just like, I don't know, like, go take a walk, I don't know, like, you know, or whatever. I'm not even a person that's out in nature all the time. It's just, it's just weird. I don't know. I don't like it.

Speaker 2:

<laugh> definitely. It's strange. Yeah. Um, but it's helpful in a lot of ways. Yeah. I, I use it periodically,

Speaker 3:

But yeah. No, it's definitely cool. It's difficult. Uh, I guess I'm more mad at the people who are, uh, who are obsessed with it or who are developing it or who are like, think so highly of it or something like that. Mm-hmm.<affirmative>.

Speaker 2:

Yeah. There is definitely, it's just like anything else. It's being, it's being turned into a thing. Yeah.

Speaker 3:

Yeah. Yeah. It's a new trend.<laugh>.

Speaker 2:

Mm-hmm.<affirmative>. Anyway. Yeah. Um, my final thought is, uh, follow us on Twitter at Crucem_sanctam or at the link below. And, uh, yeah, stay tuned for the next episode and, uh, rate us five stars on Spotify, whatever. Thanks for listening. Our thoughts tonight. Our

Speaker 3:

Thoughts tonight.

Speaker 4:

Oh, to write us a song that has no meaning at all, that doesn't touch a personal nerve, or heed the social call, that has no rite, that doesn't need a reason to waste some time with soldier. Like that old time rock and roll, with the chords and a touch of soul. Gimme old time rock and roll. Nothing. Whoa, whoa,

Speaker 5:

Whoa.

Speaker 4:

Oh, to sing a song that doesn't let it all hang out, that doesn't wave in a flag, or have a slogan to shout, that knows no east or west, that has no left, that doesn't throw any stones, or look for any fight. Just like that old time rock and roll, with the chords and a touch of soul. Old time rock and roll. Whoa, whoa, whoa. Just to write a song that won't mend a broken heart, that won't woo you like a lover, or promise a brand new start, that doesn't preach you rain and fire, or heed any higher call, that doesn't need a reason to mean nothing at all. Just like that old time rock and roll, with the chords and a touch of soul. Gimme old time rock and roll. Nothing. Whoa. Whoa. Just like that old time rock and roll, with the chords and a touch of soul. Gimme, gimme old time rock and roll. I.