Apr 12, 2024

Columbia Professor Dennis Yi Tenen on "Literature in the Age of AI"

“What we call AI is actually just kind of a collective process of thinking together.”

Dennis Yi Tenen

Tell us about yourself. What are you working on right now?

I'm a professor at Columbia, and I study language and technology. I just wrote a book called Literary Theory for Robots. I teach a class called “Contemporary Civilization,” which [goes] basically from Homer and Plato to [W.E.B.] Du Bois, Arendt, and other texts in contemporary political thought. I'm also preparing a class for next semester on conspiracy theory. Some of my research right now has to do with the structure of conspiracy thought in the United States.

When was your first encounter with AI?

The argument I make in the book is that we think of AI as something recent. On many people's radars, AI came with ChatGPT. But if you think about it, when you write with Microsoft Word or Google Docs and use spelling or grammar correction tools, those are also a kind of AI, driven by complicated algorithms. And if you go even deeper, that grammar checking and spelling correction relies on style books and grammar guides and dictionaries, which are also a type of AI in their pre-digital versions. It's literally somebody teaching us to spell more correctly through the technology of a book, of a dictionary, of a style guide. These become part of a long history of our human encounter [with AI].

How do you continue to use AI at work?

In this more expansive definition, AI is inescapable. I'd like people to think beyond just how they use ChatGPT or some other contemporary tool. Whenever we write, whenever we think, we're constantly reaching for tools and constantly reaching for the help of others. Writing and thinking are, I would argue, social processes, and what we call AI is actually just kind of a collective process of thinking together.

Do you have any tips you could share?

I would encourage people to be curious and to go deeper than surface-level use, instead of just assuming a tool does what you want it to do. When your email completes your sentence, you can use it passively, and it's useful. That's fine. But the next step would be to start poking around and thinking, “Okay, where does this come from? How does it know how to complete my sentences?”

For example, the way something completes your sentences is that the algorithm learns from your email, from your own words. Well, that's great, but it also has huge privacy implications. It means that the things you're writing to your coworkers and to your family are also going somewhere into the cloud and being repurposed for another use. What does it mean for my private thoughts and my private family affairs to be constantly fed back into this learning system? These are the kinds of things that require us, as users of this technology, to go beyond, “Oh, this is cool. I saved a second of my time writing a sentence.” We have to remember that there is a price that comes with such utility. But to even understand that price, to understand that bargain, we need to be curious and go deeper to educate ourselves. That's why I think we need to teach classes that unpack what this technology is.

What was your path from software engineer to professor?

My undergraduate degree was in comparative literature and political science, but I always worked as a software engineer and as a web designer. This was back in the day, in the ’90s and early 2000s. When I graduated, I joined Microsoft because I had very specific experience designing websites and experiences for non-desktop devices. After some time at Microsoft, I realized that ultimately the questions this technology poses don't come from a technological problem but from a problem of culture. Once you get more advanced in your career as an engineer, you start thinking to yourself, “Why am I building this? What should I be building? How should we structure experience?” So that brought me back to philosophy and questions of culture in the humanities. It brought me back to graduate school.

Can you talk about your class “Literature in the Age of AI”?

One theme in the class is that when people hear the words literature and AI, they think of computers writing poetry or fiction or essays. Some of that is happening, and that's okay. But I would also like to think about literature with a lowercase “l”: the various stories we tell each other. For example, when you go to your doctor and tell your doctor a story of being sick, you have a particular way of talking to your doctor. As your doctor is listening to you, they're using software to translate your words, which are a free-form narrative story, into a template. And the template is often automated, following specific patterns. The way the hospital sees you, then, is a very different kind of story than the one you told: your story might have details about your family and your experience and all sorts of other things, while the hospital record sounds very different. That translation between patient story and hospital record is being navigated and translated by AI, by technology, right? And that's literature. The stories we tell are being retold and translated between institutions and between people through technology.

Do you have a hot take on generative AI?

My hot take is that very often today “AI” is just a synonym for technology. Maybe two or three years ago we would just say “technology,” and today we say “AI.” A lot of jobs that were just software engineering jobs, or that had to do with algorithms, are now relabeled as AI. The word “technology” actually clarifies things, because it doesn't seem as autonomous. For example, if you ask, “What is technology doing to our brains?” [the answer might be] “Many different things. I don't know which technology you're talking about.” But if you ask the same question about AI, it seems to be this one thing or process. And it's one thing that's either going to save us from everything, like a savior figure, or it's going to kill us all. But it's just technology, and technology is great for us in some respects and bad for us in others. I love having a phone. It's entertaining, and it allows me to talk to you. But [if you use it too much] it becomes unhealthy for your mental well-being. That's my hot take: AI is personified too much. We make it into this boogeyman or savior, and we need to understand that it's just technology in different guises, some of them good for us, some of them bad, [depending on] how we use them. AI isn't one tool, and it doesn't want anything. It's us talking and working with each other. We just need to think about our goals: what we want to do with it and how.


