In this conversation with Lin Atnip, tutor at St. John’s College and Basic Program instructor at the Graham School, we explored one of today’s most pressing questions: what is intelligence in the age of AI? We examined the fundamental differences between human and machine thinking, focusing on concepts like embodiment, desire, and tacit knowledge that may elude even the most advanced algorithms.
Atnip shared her concerns about outsourcing cognition, the potential atrophy of critical skills, and AI’s power to shape and destabilize shared knowledge. She also detailed the philosophical roadmap for her upcoming course, which uses thinkers from Plato to Polanyi to examine whether intelligence can be separated from consciousness. This dialogue reaffirms the enduring power of timeless texts to help us navigate a radically new technological frontier.
Lecturer bio:
Lin Atnip (she/her) is a Tutor at St. John’s College in Santa Fe (on leave 2025-2026) and an Instructor in the Basic Program. She completed her PhD in 2019 in the Committee on Social Thought at the University of Chicago and a postdoctorate in the Center for Humanities and Social Change at the University of California-Santa Barbara, focusing on the question of how we are educated to the conditions of modernity (especially modern crisis) through reading and reflecting on literature. Her first book, From Tragedy to Apocalypse in American Literature: Reading to Make Sense of Our Endings, was recently published by Lexington Books. She also writes fiction and poetry. Ms. Atnip joined the Basic Program in 2015.
Transcript:
Seth Green: Welcome. So, my name is Seth Green, and on a morning like this one, I’m especially grateful to say I’m the dean here at the University of Chicago Graham School. Welcome to our snowy campus, which still has a little bit of white on it from all of the snow that has come so far this holiday season. And welcome to the Graham School, where we are 135 years young now in our mission of trailblazing new paths for lifelong learning.
For those of you who want to continue learning with us in the new year, I’ll just mention that we have a number of exciting events. We have a really wonderful conversation coming up with Lucy Kellaway, who in midlife went from being a columnist for the Financial Times to re-educating herself and becoming a teacher. She’ll be talking about midlife transitions based on a new book that she’s written, so she hasn’t entirely left her journalistic roots. We will also have a conversation with an author on a topic very relevant to today’s conversation. It’s about a new book, Reader Bot: What Happens When AI Reads and Why It Matters, and it will look at some of the existential questions that I’m sure we’ll be talking about in our conversation.
And then we will have an info session for what I am so excited about, our first-ever Graham Summer School, where this year we will be celebrating the 250th anniversary of the country with a week-long exploration of the texts from the classics to today that really shape American civic virtue. And it’s an opportunity to be in community with your fellow lifelong learners on our gorgeous Hyde Park campus for a truly residential and meaningful experience over the summer together.
And with that, I will just mention that registration is now open for that summer school, and in the chat, you will see that momentarily. And with that, I will take down my screen, because we are now at the moment you’ve all come for, which is to have our conversation with Lin, and let me add Lin to the screen here.
Lin Atnip is a tutor at St. John’s College in Santa Fe, though she is on leave this year, and an instructor in the Basic Program, which she joined in 2015. She completed her PhD here at the University of Chicago in 2019 in the Committee on Social Thought. She also had a postdoctorate in the Center for Humanities and Social Change at the University of California, Santa Barbara, focusing on the question of how we are educated to the conditions of modernity, especially modern crisis, through reading and reflecting on literature. That influenced her first book, which we had a wonderful talk about around this time last year: From Tragedy to Apocalypse in American Literature: Reading to Make Sense of Our Endings, published by Lexington Books.
Welcome, welcome, welcome, Lin. We are thrilled to have you with us.
Lin Atnip: Great to be here, Seth. Thank you.
Seth Green: Well, so just to jump in, I’m curious if we can talk about, you know, why you’ve decided to teach this course. And maybe it’s obvious from the fact that, you know, half of the words out of our mouths these days are AI, and, you know, what is intelligence in a world where it can be developed artificially, but talk about why this course, why now, and then we’ll jump into the content itself.
Lin Atnip: Sure, I mean, I think, to some degree, it is obvious: all of a sudden, in the past couple of years, AI has become part of our lives and seems to be rapidly upending a lot of the way that things work in society. There seems to be a headlong rush to integrate it into all sorts of aspects of our life, and I’ve found it both fascinating and terrifying. I’ve enjoyed my own conversations with ChatGPT, I’ve seen it improve rapidly over the past couple of years, and I have been disturbed at how I feel like I relate to it at times, not to get too personal, but at the very successful imitation, at least, of human conversation.
At the same time, I hear my colleagues, especially those who teach undergrads, fighting against students just offloading all of their work and writing to it, and I am deeply disturbed by the idea that people might cease to learn how to write, even beyond the degree to which we already have that as a problem.
And maybe most of all, as a particular impetus to teach this course, I have had lots of conversations, debates, and arguments with people over whether, and to what degree, the new artificial intelligence is like human intelligence. I have found myself unable to articulate some of my own intuitions, mostly about the differences, in a way that would be satisfying to me or persuasive to my interlocutors. So this course is as much for my own education and improvement in thinking through these issues as anything else.
Seth Green: Well, so let’s dig into that. I mean, you know, how might machines think, and how is that different from human thinking, as you’ve begun to kind of conceptualize those issues for the course?
Lin Atnip: Yeah, and of course this is one of the large topics of the course, so I only have my preliminary understanding of it, which I look forward to working out with those in the class. I also just want to back up to something I meant to say: these Classic Texts, Contemporary World courses are developed with a committee of Basic Program instructors, so I wanted to thank my committee, Noah Chavitz, Maura O’Shea, and Michael Thomas. Everything that I say has been informed by our extremely productive and illuminating first meeting about these issues.
Seth Green: While we’re on thank-yous, I just want to also thank Sheila Penrose, a Basic Program student who has made this initiative possible by supporting that committee and its work. So, yes, please continue, but I just wanted to make sure one more thank-you was added.
Lin Atnip: Yes, and thank you also to Sheila Penrose. So there are a couple of dimensions of machine learning, or machine thinking, that I hope to explore in the class. I want to note ahead of time, so as not to mislead anyone who might be taking the class now or in the spring, that it does more foundational conceptual and philosophical reflection, so we won’t actually be reading anything from the last three years, and therefore will not be looking directly at large language models, which is the kind of thing that ChatGPT is. However, we will be looking at precursors, other models, and the alternatives.
One important aspect is the degree to which intelligence has to do with our relation to, and inhabiting of, a world. Artificial intelligence, as I understand it, has tended to be based on a set of principles or rules, or more recently, on a kind of recapitulation of pattern recognition in language. And of course, there are robots, which are the instantiation of artificial intelligence in a being that moves about in the world, but the kind of AI that’s rapidly spreading through society right now comes not primarily through robotics, but through these textual interfaces that we have. So one thing I want to think about is the embodiment of human intelligence, and how that is, or might be, a necessary basis for what we think of as intelligence.
Another is the aspect of learning and development: how we come to know what we know about the world, and the degree to which that can be formalized in things that can be programmed into a computer, or reflected in patterns, as opposed to held in some more tacit way that may or may not be possible to recapitulate in a machine. So those are some of the things that I’ve been thinking about with respect to machine learning.
Seth Green: And so that’s the distinction, right? How do we think about how we differentiate, potentially? And then there are the implications, right? In terms of outsourcing what has typically been a human task: iterating on your writing, thinking deeply about a topic, or analyzing, where you gather the information and try to put it together, and now we’ve outsourced a lot of those tasks to AI. I know, as an N of 1, that I now try to put together a lot of things stream of consciousness, if it’s an email or something like that, and I’ll put it all in, right? Then I’ll ask for help, and then I’ll take the next iteration, right? And that’s very different. I wonder how that affects me long-term, because I think one of the places where I’ve gotten smarter over my life is in taking the time to really look at those subtle distinctions and play around with them, and there’s, like, a cognitive transformation happening in my mind as I’m doing that. I can tell you, over my life, I feel like I’ve gotten better and better at that, but a lot of that is because of the practice and iteration. So I’m curious how you think about the implications of outsourcing thinking to AI, from the small, like in my case, to the bigger questions in terms of human knowledge as a whole.
Lin Atnip: Yeah, and I kind of want to say, we don’t know how small it is, Seth. That’s part of the terror of it. But before I answer that question, something you said made me want to add at least one more thing to the previous question, which has to do with the ways in which desire and intentionality are key to human intelligence.
Seth Green: Yeah.
Lin Atnip: Yeah, these yearnings or wants, or agency, might be important to actual intelligence. And again, to what extent might machines have that eventually, if not now? As far as implications go, yes, I am worried, in part, and all of these are related to each other. If our intelligence comes through certain kinds of effort, engagement, and exercise in the world, and we are no longer having to perform much of that exercise… probably many of you have heard the analogy of a muscle, the brain and various kinds of thinking being like a muscle, which either develops or atrophies to the extent that we no longer use it. So what are we relinquishing if we give that up? What are we relinquishing to the extent that we consume… I mean, I write fiction now, so I’m concerned about AI churning out fiction that seems to be better and better at following recognizable narrative forms, in any case, and doing what has the appearance of innovation, even. Why should we care, should we care, about a human being having written the story, as opposed to a bot?
That’s on the one hand. On the other, and I don’t want to totally prejudice the issue, there’s been a lot of reflection, going back all the way to Plato, about the human mind as, as was more recently articulated, a kind of extended mind: we are tool-using creatures, even in our thinking, through things like writing. We’ve put ourselves out into the external world, and I no longer have to remember everything that I or anyone else has ever thought or learned; I can go and look it up in a book. That’s a loss of a kind, probably, compared to a situation in which Homer had to recite all of his poem from memory. But it’s a question as to what extent artificial intelligence is different from, or continuous with, previous tools that we’ve used, all the way up to Google. And, you know, it’s possible that even with writing, as Plato argues, you do lose something important that we ought to be concerned about, or at least aware of.
Seth Green: I remember when my professor told me, you know, don’t use Google when you’re doing research for this paper; you need to go to the card catalog system, because it is just too easy to get all your information that way. It’s hard in that moment to figure out how a technology is going to transform things. Will it change research and how it’s done, or does it mean you’re no longer doing research, right, if it’s that easy at your fingertips? So I think we’re always asking those questions. And to that point, your course is all about exploring how both ancient and contemporary philosophers have viewed machines, obviously of different types than AI, but with the same idea that a new technology is upending the way we have conceived of knowledge, or our daily operations, until now. So how have people thought over time, I’m thinking about Plato all the way up to more contemporary thinkers, about what it means to know, what it means to think? How do you investigate that? Who are some of the philosophers that you’re looking toward in this course?
Lin Atnip: Yeah, thank you. So the course is split, maybe in sort of three ways, and I’ll talk about two of them. Sorry, I’ll talk about the first division: there’s a kind of “philosophy” seminar, and then a tutorial that focuses more on “fiction,” and I’m using scare quotes in part because there’s a lot of reading on both sides, so neither of them will exactly be the slow, close reading that we usually try to do in a Basic Program tutorial.
But on the philosophy side, we’re going to read some works that are focused really just on the question of human intelligence and human learning and thinking. We’re going to start with Plato and the Phaedrus, both for its discussion of writing as this sort of extended mind, which may be problematic, and also for its reflection on memory and something like the soul’s recollection as important for the way human beings are drawn towards reality, which might be relevant to intelligence. Maybe a very high kind of intelligence, as Plato’s usually concerned with.
And then we’re going to pretty much jump from there into the 20th century. It occurred to me, in doing some of the reading on machine learning, which I’ll speak about momentarily, that grammar is a really important question there. That is, there’s all this argumentation about whether machines can know, or learn, or mean, and a lot of that may come down to what we mean by “know” and “mean.” So I couldn’t help but put some Wittgenstein on the syllabus, and his reflection on just the grammar of knowing and learning, and what that reflects about the processes of what we call learning or reading.
We’re going to read some Michael Polanyi. He was a philosopher of science who was originally a chemist, and he was unsatisfied with the way that positivist philosophers had been describing science as this formalizable process of just looking at the evidence in the world and hypothesizing and testing. He felt that left out something essential, which he called tacit knowledge or personal knowledge: an embodied knowledge that you come to have in the process of inquiry and activity in the world. It can never be wholly formalized, but it is essential both to our knowing and to our inquiry, our tacit sense of what might be a solution to a problem, which we follow out in looking into it.
And then there’s, I guess, a social psychologist, Lev Vygotsky, who writes about childhood development and the role of play in learning. So those will be the main human-intelligence-focused thinkers.
And then we will read some classics on machine learning, including the paper by Turing. Many of you are probably familiar with, or have heard of, the Turing test, which says artificial intelligence is good enough if it can’t be distinguished from human intelligence in a conversation. A paper by John Searle, looking at whether something like knowing a language, speaking Chinese, is identical to just inserting something into a program and spitting out the right answer; it’s called the Chinese Room thought experiment, and some of you might already be familiar with that as well. And a kind of tome that I’m both looking forward to and a little daunted by, by Hubert Dreyfus, originally called What Computers Can’t Do and then revised into What Computers Still Can’t Do. That was back in 1992, so again, computers may now be doing some of these things, but it’s a really thorough investigation into the whole rules-based idea of artificial intelligence. Can human intelligence be represented by a set of rules, or by neural networks? So those are the big ones.
Seth Green: Well, so you have part of it on human learning and thinking, and then part on machine learning and thinking. Maybe you could just walk us through a philosophical account of human learning. I know you have a number of them, you just mentioned Plato, Wittgenstein, and others, but is there one that you want to draw out a little bit, just to give us a deeper flavor of what one week in the course might look like?
Lin Atnip: Sure, let me think about which one I might want to do. I find the Searle paper, which I’m relatively new to, a kind of essential argument about human versus machine learning. His thought experiment is: if I were in a room, and I had this whole set of rules for manipulating Chinese symbols, a language I don’t know or speak, and someone input a bunch of Chinese symbols, and I output all the correct Chinese symbols in response, would that constitute knowing Chinese? And he says no, and the question is why. I think there are a number of interesting aspects of that. One is just our intuition, and you may not have that intuition; I kind of expect that a lot of arguments can arise in class just from people’s differing intuitions about these things. But if one feels like somehow just being able to shift from one set of symbols to another according to some set of rules, no matter how massive, is not the same as knowing Chinese, to what extent is that what we might call subjective quality, that subjective experience, and why is that important? Why and how are we sure that machines don’t have it?
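To make the shape of the Chinese Room concrete, here is a minimal sketch in Python (invented purely for illustration; it is not drawn from Searle’s paper or the course readings): a program that returns competent-looking responses by pure rule lookup, with nothing anywhere in it that represents meaning.

```python
# A toy "Chinese Room": correct-looking responses produced by pure
# symbol lookup. The rulebook below is a hypothetical stand-in for
# Searle's rules; it maps input patterns to output patterns and
# encodes no meaning anywhere.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫小明",  # "What's your name?" -> "I'm Xiaoming"
}

def room(symbols: str) -> str:
    """Follow the rules exactly; 'understand' nothing."""
    return RULEBOOK.get(symbols, "对不起，我不明白")  # "Sorry, I don't understand"

print(room("你好吗"))  # prints 我很好; from the outside, the room looks competent
```

Searle’s question is whether anything in this picture, however large the rulebook grows, amounts to knowing Chinese.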
Seth Green: Huh?
Lin Atnip: And I kind of want to bring in the Wittgenstein here, even though those would actually be two different weeks, because it complements these questions with the issue, again, of how the meaning of words is determined by our use of them in activity. When we talk about knowing or meaning or reading, when we talk about a student learning to read, what are we actually referring to? When do we use that phrase? And what does it reflect about what we think is going on in a person when they know something, or mean something, or understand it?
Seth Green: Well, so we have a lot of questions for you, Lin, in the chat, so I’m going to go there, and then we’ll come back to some of the final questions around course structure. But let’s start.
Lin Atnip: Sorry, I just want to say one thing about the fiction.
Seth Green: Go for it.
Lin Atnip: Because we didn’t talk about it, but we’re also reading a number of classics, and one recent, maybe, classic by Ishiguro, mostly ones that have to do with our relationships to artificial intelligence, or to artificial intelligence that seems to have a human-like subjectivity. That will allow us to reflect further on what we imagine and fantasize when we think about a truly human-like artificial intelligence, and also on the kinds of relationships that we might form with non-human intelligence, which is another thing I’m thinking a lot about, and possibly concerned about, in our current situation. But that’s all.
Seth Green: Perfect. Perfect context, and so I’ll get us started with what looks like 10-plus questions for you in the chat already. So: Sir Walter Scott joked about mechanical writing in the early industrial period, and Orwell’s 1984 has all fiction mechanically produced. If mechanics enhances muscle power (for example, driving replacing walking), does automation of a mental activity like writing indicate it may be a weakness rather than a strength? I know it’s a complex question, but…
Lin Atnip: Oh, good question, I would say. First of all, I like that example, because I think it brings out both of the aspects, the enhancement and the detractment? That’s not a word, but the possible detraction. So, yeah, it suggests to me that it decreases, sort of, whatever is intrinsic, but enhances power in some way. So in some sense, I want to say that that question, like many of these questions about artificial intelligence and the outsourcing of capacities, comes down perhaps to a kind of value judgment. And just thinking of this now, maybe it’s not just a question of what we mean by intelligence, but what we want to mean, or what we think we should mean. If I’m able to do something with AI, or with any of these tools, that I couldn’t do without them, how do I value that? And to what degree do I value the capacities that reside within my body, if that’s the right way to put it, and I’m not sure it is? So, thank you.
Seth Green: Jerry asks, would you mind expanding on your comments on intention and desire? Do you think the heart can ever reside in artificial intelligence, or must it belong to the humans at the interface?
Lin Atnip: Yeah, so I guess I’ll betray some of my own formed opinions thus far on this. It seems to me that the forms of artificial intelligence that we have so far, including ChatGPT, have sort of one layer. ChatGPT is easiest to talk about; I suppose there are two layers, in the sense that it has all of the language that it’s consumed and the patterns that it’s working from, but basically: you give it text, it recognizes the patterns in that text, it spits back text. It seems to me that humans have another layer, which is our thinking, our feeling, our embodied emotions, our actual physiology, and that other layer might actually be multi-layered in and of itself. So it doesn’t seem to me, or I don’t see how, existing one-layer, or two-collapsed-layer, AI can at this point have what we mean by heart or intentionality. I can see how it could get increasingly good at replicating the manifestations of those things: the behavior, or the kinds of things a human being might say based on its intelligence, or on its intentionality or desire. But I don’t yet see where the intentionality or desire or heart could come from, or reside, in existing models of AI. And I’m, you know, ready to be convinced, but it would take some convincing.
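The “one layer, text in, text out” picture can be seen in miniature in a toy sketch like the following (a bigram model, vastly simpler than a real large language model, and invented purely for illustration): it consumes text, tabulates which word follows which, and generates by replaying those patterns, with no further layer of feeling, goal, or desire anywhere in it.

```python
import random
from collections import defaultdict

# A toy bigram model: the "one layer" picture in miniature. It consumes
# text, records which word follows which, and generates by sampling
# those recorded patterns. There is no second layer of intention or
# desire anywhere to point to.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no recorded continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the rug"
```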
Seth Green: Well, so, this is a comment from Fred Boltler, but I’m going to read it, because I think it’s so relevant to how he thinks about this distinction, and it might add to our conversation. Human nature refers to the shared, evolved traits of human beings: biological, emotional, cognitive, social, and moral. Computer nature, including AI, consists of designed, engineered systems that process symbols and data according to programmed rules and learned patterns. This goes back to the Searle example that you mentioned. Humans are organisms with consciousness, emotions, and socially embedded agency. Computers are artifacts whose behavior depends on architecture, code, data, and external goals. And so this distinction, between the artifact, with patterns that reflect what has existed, and evolution, with the possibility of generativity, is one that I think he’s at least putting out as a way we might try to separate human thinking from machine learning.
Lin Atnip: Yeah, thank you. That all seems very well put to me, and it also puts me in mind of a question or factor of emergence, as we might call it, in terms of something happening that the creator didn’t plan for or didn’t intentionally put in there. One thing that is provocative in new AI, in large language models, is that there does seem to be a kind of production of things that weren’t directly programmed into them. It’s why when you ask Siri… or, I don’t know, Siri’s probably now incorporating, or starting to incorporate, AI, but when I asked my phone five years ago to play something from my Apple Music library, it had a certain number of things it could understand, which had been programmed into it, and it would spit forth the proper song, or more often it would misunderstand me and start playing something that had nothing to do with what I’d asked for. But in any case, there’s now this kind of emergent quality where ChatGPT will sometimes say things that look an awful lot like insight. But to what extent does that have the same significance as the emergence of human creativity? I’m riffing a little bit off of your point, but, yeah, thank you, Fred.
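The contrast between that old fixed-command assistant and today’s emergent behavior can be sketched in a few lines (the commands and phrasings here are hypothetical, invented for illustration): a pre-LLM assistant recognizes only utterances that were explicitly programmed in, so anything phrased differently misses.

```python
# The pre-LLM assistant described above: a fixed, programmed-in table of
# recognizable commands. Nothing outside the table can be handled, and
# near-misses fail, often comically. (Entries are invented examples.)
COMMANDS = {
    "play my workout playlist": "Playing: Workout",
    "play some jazz": "Playing: Jazz Essentials",
}

def old_assistant(utterance: str) -> str:
    # Exact match only: no generalization beyond what was programmed in.
    return COMMANDS.get(utterance.strip().lower(), "Playing: polka hits")

print(old_assistant("Play some jazz"))    # -> Playing: Jazz Essentials
print(old_assistant("put on some jazz"))  # -> Playing: polka hits (the familiar miss)
```

A large language model, by contrast, produces plausible continuations for inputs nobody enumerated, which is the “emergent” quality at issue here.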
Seth Green: Rachel asks, will the course touch on the transhumanism versus bioconservatism debate?
Lin Atnip: I don’t think there’s a text that explicitly addresses that. But we are reading Donna Haraway’s A Cyborg Manifesto, which is a kind of classic of, really, feminist theory, but quite relevant, because it’s about this whole question of humans as hybrids, and it brings in the ways in which we’re continuous with machines. I actually think that some of the fiction might allow us to talk about… I think a lot of things will allow us to talk about those kinds of issues, and again, to what degree embodiment is essential to human being or human intelligence, as opposed to, well, maybe it would be better if we could just escape all of its inconveniences.
Seth Green: Anson writes, it’s said that perception is reality. How concerned should we be about AI spreading myths or disinformation when it comes to scientific facts about climate change or vaccine efficacy? And maybe I’ll broaden that question to the conceptual level, Lin, which is that one of the realities is that knowledge is constructed, right? It’s something that we come to, and at a certain point you reach a level of consensus around something, and then you say, okay, that is how the world works, right? And there’s scientific experimentation and other things that you use to try to get toward truth. But certainly, ubiquity is typically one feature of coming to that in a society. Obviously, these models can spread information, they can create their own… and so in what ways do you see the way AI shapes perception then shaping reality, shaping knowledge, shaping how we construct knowledge in the future? That’s a whole other dimension of this interaction between machine and human thinking and knowledge generation.
Lin Atnip: Yeah, my direct and short answer to the question of how concerned we should be is: very, as far as I can tell empirically. I’m thinking again about Polanyi and the idea of tacit knowledge: so much of what we know and what we believe can’t be directly tested and formalized; it rests on this whole broad, deep basis that we’ve acquired through socialization and our beliefs in certain kinds of institutional authorities, the authority of science. And I think we’ve already begun to see, maybe especially in the past 10 years or so, how fragile that can be, and how it can be disrupted by the fragmentation of our information ecology. It seems to me that AI only accelerates and deepens and worsens that process, in addition to, again, becoming better and better at reproducing and imitating…
Seth Green: Reality.
Lin Atnip: Video creation, for instance. It just seems… I don’t know, my heart rate goes up when I think about it, so…
Seth Green: Well, so, coming back to the course, Lin, talk about how you are going to walk your students through these really big questions. I know from our conversation that it’s set up with a philosophy seminar and a literary tutorial. Just talk about that structure a bit, which obviously connects to the Basic Program, but this is open. And then, you know, maybe talk a little bit about how you’re going to handle student discussion outside of class for, kind of, contemporary and hyper-contemporary issues.
Lin Atnip: Sure, yeah. This will be as Basic Program-y a class as I can make it, which is to say that I primarily will try to keep us focused in class on reading and understanding the texts. The reason for that is that, not being a technical expert myself, not assuming that anyone else is, and not wanting to grope toward empirical knowledge about what’s going on in the world, I think that will be the way in which we can most productively get clear about the very basic philosophical and existential issues that are at stake here. I proposed this class and have been developing it with some trepidation, because, and maybe this is something that these CTCW classes have to grapple with in a special way, when the point is, in some sense, to be able to think about these very contemporary issues, it’s harder to know how to bracket them without bracketing the things that everyone wants to talk about and cares about. So I’m going to try to keep us in class, as much as possible, from going off on just, you know, therapeutic complaining about AI, as much as that might be something that I myself would like to do, and from kind of technical discussions of what large language models are really capable of, which again will be a little bit beyond our collective scope. But I recognize that at least some of the students may really want to be able to talk about those things, so I was thinking of trying to organize a sort of student discussion outside of class for any students interested, and, you know, maybe we’ll solicit and offer alternative, more, as Seth said, hyper-contemporary readings that would allow people to extract or extrapolate from the reading we’re doing in the course to thinking directly about the concerns of the moment.
Seth Green: Well, Lin, I have to say how excited we are about this course. I think that it does what we often talk about at the Graham School. It takes the timeless wisdom of the ancients, and it brings it into the big questions that we’re wrestling with. And something I believe is that, you know, almost no question in our society is entirely new, in the sense that, at least at a philosophical level, we have typically wrestled with the big questions, and the reason, obviously, we have great books is because they have uncommonly insightful answers and questions themselves about how to wrestle with these big issues. And so, for me, I’m just thrilled that we are directly trying to put this timeless wisdom into conversation with one of what has to be the most disruptive forces in society today, and… you know, I think the best thing we can ever do is learn from the wisdom of the past, and from the experience of the past, and… try to figure out where are there parallels. Obviously, where are there not parallels, and what can’t you draw, and how is this entirely new? But I think that exploration is just a perfect fit for what we do at the University of Chicago and the Graham School, and we’re so grateful for you stepping forward with the committee to develop this and attempt to put this in conversation in a time when we are all trying to navigate the very uncertain ways in which this is both uplifting and probably challenging our humanity. So, I’ll give you any last word here before we close.
Lin Atnip: Yeah, thanks very much. I just saw one final question.
Seth Green: Oh, yeah.
Lin Atnip: I’ll incorporate it into a last word. The question was about the difference between intelligence and consciousness. I may not have formulated it to myself this way, so thank you, Sandra, for the question, but as I’ve been thinking about this, I’ve realized more and more how difficult it seems to be to totally separate intelligence from consciousness. As an undergraduate, I started out, or was at one moment, a biology major interested in neuroscience, and I had this fantasy that I was going to work for a lab and discover what consciousness was by looking at neurons in the brain. And I discovered, first of all, that that research is so…
Seth Green: You know.
Lin Atnip: They can look at very, very specific little things, but getting from what you can actually see in the brain to saying what consciousness is, is still a very hard problem, as I think the philosophers put it. So this is another way of going at that very hard problem: even if we can create something we call artificial intelligence, is that, or to what extent is that, the same thing as consciousness? And so I hope we will get some illumination on both what machines can do and what intelligence and consciousness are.
Seth Green: Well, Lin, thank you so much for sharing your ideas, and most of all, thank you for teaching this class. For those of you out there who still want to register, the winter section is full, which we didn’t anticipate in advance of this discussion, but there is now a spring opportunity that will likely be full very soon. So, we encourage you to register. Most of all, though, I want to close with Happy Holidays! We have this as our last event of the year. What a perfect way to end 2025. It has been a marvelous year, thanks to what is an absolutely extraordinary community, first and foremost of instructors and team members, and alongside them, the best lifelong learners in the world. So, we hope all of you have wonderful and joyous holidays, and I’m sure we will see you on your learning adventures in 2026. Thanks, Lin, thanks all.
Lin Atnip: Thank you, everyone.