Jan 29, 2024
Tell us about yourself. What are you working on right now?
I write an AI newsletter [No Longer a Nincompoop], and besides that I help companies build AI products. Through that, I've seen a lot of different applications of AI in quite a few different industries—health, education, venture capital, corporate communications, design, and more. Companies are interested in how they might develop a product with AI, or how they can integrate it into their services, and my job is to come in and tell them what AI is—because most people don't actually understand what it is—and help them understand what they can do with it.
How do you define “AI” in those conversations to help people understand?
I think it's more about understanding the limitations. It's not some magic black box, but we definitely don't fully understand how ChatGPT has the functionalities that it does. Instead, I’ll give an overview of how LLMs work, and I think when people have somewhat of a technical understanding of those—that they’re a kind of prediction system—they feel more comfortable, and are able to then ask, “how can we use that for our application?”
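The "prediction system" framing above can be shown with a toy sketch: count which word follows which in a tiny corpus, then predict the most likely next word. This is only an illustration of the idea; real LLMs predict over tokens with neural networks at vast scale, and all names here are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    return follows[word.lower()].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" — it followed "the" twice, "mat" once
```

The same intuition scales up: a chatbot's reply is, at bottom, a long chain of "most plausible next token" predictions.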
It’s about helping people—particularly founders or higher-level C-suite figures—understand that it’s not as complicated and foreign as it may seem. It’s not that they’re scared, usually, but they’re certainly intimidated by it. There's a lot of noise out there right now about AI, and people are pretty spooked, unfortunately. Senior folks think AI will have taken a lot of jobs a year or two from now, and that’s just not going to happen.
There’s a pretty big misunderstanding there—even though I would say AI has probably been adopted faster than any other technology in recent history, society doesn’t adapt that quickly. Its impact is going to be more of a long-term thing.
Also, the data you give an AI is absolutely everything. If you or your business have data others don’t have access to, that’s very powerful. Every business should be looking at using their data in combination with LLMs, and at how AI can transform their business in general—that goes without saying at this point. It’s not an option, it’s a necessity. A lot of people don’t see yet that they have that data, but you can put AI anywhere and it will provide value—like sitting in on a meeting, taking notes, and offering feedback afterwards. People need to be open to that.
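One minimal sketch of "your data in combination with LLMs" is retrieval: pull the most relevant piece of internal data for a question, then prepend it to the prompt you send a model. The word-overlap scoring below is a deliberate toy (production systems typically use embeddings), and every name and string here is hypothetical, not from the interview.

```python
def retrieve(notes: list[str], question: str) -> str:
    """Return the note sharing the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

def build_prompt(notes: list[str], question: str) -> str:
    """Assemble a prompt that grounds the model in retrieved company data."""
    context = retrieve(notes, question)
    return f"Using only this company data:\n{context}\n\nAnswer: {question}"

# Hypothetical internal notes a business might hold but competitors don't.
notes = [
    "Q3 churn rose 4% after the pricing change.",
    "The mobile app launch is planned for June.",
]
prompt = build_prompt(notes, "Why did churn change in Q3?")
```

The resulting `prompt` would then be sent to whatever LLM the business uses; the proprietary data, not the model, is what makes the answer valuable.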
What other issues are “spooking” people in these conversations?
The most dangerous thing will not be your chatty LLMs—it will be the video, image, and audio generators. And they're already out there; [the general public] just don't know yet. For now that’s a good thing, but the U.S. election is next year and it's probably going to be a nightmare.
AI will make it so easy for anyone to do anything, with just a laptop. That’s the underlying fear—that it’s the ultimate equalizer. Anybody can write code, anybody can create a website, anybody can create music or art. Is that a good thing? I think we’ll have an answer fairly quickly, because I’m not sure the internet will survive it. With that much AI-generated stuff it’s a different level of spam, and we, as a species, are going to have to figure out how to respond.
How do you use AI yourself at work?
I read a lot—like, an insane amount of research papers, opinion pieces, technical stuff as well as non-technical stuff—so I've tried to use AI for my research, but, I’ll be honest, it actually hasn't gone that well. However, my work requires me to know a lot of things, and then go into companies to talk to people about those things, and I’ve been able to feed all of that information to an AI system and use that as a sort of middle ground between myself and others.
That’s really important for people to consider for the future—as much as there’s talk about job losses, human beings still want that human connection, and I’m not sure that’s ever going away, regardless of how powerful AI gets. Humans want to interact with other humans.
However, when this technology gets so good you could have a companion that’s almost human, what do we do? It seems like some sort of far-fetched future—like in Blade Runner or Her—but we’ve already seen, since ChatGPT came out, that thousands and thousands of people use it for therapy. Therapists have tried to argue that it’s dangerous, and that it’s just the AI saying what a user wants to hear, but that doesn’t stop people saying they feel good after talking with their AI therapist. How do we educate people in how that might not be the best thing for them?
I haven’t had to deal with this kind of stuff yet, and I don’t know how I would handle it, to be honest. Though I will say that, as a Muslim, there are certain things I can and can’t work on. I’ve been asked to work on projects related to making chatbots of certain adult content creators, and I had to walk away—I don’t get involved in that kind of thing.
Besides that, I’ve found AI is the best way to start writing. I only began my newsletter in February—it was the first time I’d ever written anything like that—and I learned very quickly that staring at a blank page is the scariest part. AI is very good at giving a boilerplate example of anything, whether it’s code, an email, or a newsletter, and it’s great for bouncing ideas off of. AI is really good at finding patterns that humans might not be able to see because it has so much general knowledge.
I should also make it clear that I’ve only ever included AI-written text in a newsletter once—and not even using ChatGPT, but instead from an open source model called Mistral. For people who have read a lot of AI-generated text it’s pretty obvious when something is AI-generated, so ChatGPT’s responses are never good enough for me to actually use, unfortunately.
You’ve been talking about ChatGPT, but have you experimented with other generative AIs?
Yeah, I've tried most of them. I've tried a lot of open source ones as well—I follow and write about the open source community quite closely. I would say GPT-4 is still definitely the best model out there. Open source models are getting better, but they’re still not on the same level. Google also announced Gemini on December 7, 2023, but I wouldn't have a lot of faith in that either, to be honest. They released a demo video, but it’s completely different from the technical paper—essentially, the video is fake. It's been edited to make Gemini seem a lot better than it actually is. Judging from the technical paper, it doesn't seem to be even as good as GPT-4, but I guess that’s why they released a video and not the model, because people would realize fairly quickly.
Do you have any tips or useful prompts other people could use?
With ChatGPT, most people don’t use custom instructions—like telling the model how to act, or what not to say—and I think that’s a big mistake. Every person I’ve spoken with who’s said that it doesn’t work well for their needs hasn’t used custom instructions. For example, in your first interaction you should write about who you are and what you do. It might be a privacy issue in some situations, but the AI will have a better understanding about you, and it’ll deliver better responses.
Also, if you ask ChatGPT to take a deep breath before answering, it does a lot better. And just a few days ago [programmer Theia Vogel] found that if you offer ChatGPT a tip it does better—and the bigger the tip, the better it does. A lot of people get frustrated with ChatGPT when it doesn’t answer their whole question—like they ask it to write some code, and it’ll write some but then say, “and now you should do the rest yourself.” If you ask it to answer your question, but to take a deep breath first, and say that you’ll give it a $200 tip if it gives you the whole solution, it’s more likely to give you the whole solution.
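The tips above—custom instructions plus the "deep breath" and tip phrasings—can be sketched as a small prompt-building helper. This only assembles the chat-completion-style message list; the function name, wording, and the `$200` default are illustrative choices, not a fixed recipe, and you would pass the result to whichever chat API you use.

```python
def build_messages(who_you_are: str, question: str, tip_dollars: int = 200) -> list[dict]:
    """Assemble chat messages using the interview's prompting tips.

    who_you_are: a short description of the user and their work,
    mimicking ChatGPT's custom-instructions field.
    """
    system = (
        f"About the user: {who_you_are} "
        "Always give complete answers; never tell the user to finish the rest themselves."
    )
    user = (
        f"Take a deep breath before answering. {question} "
        f"I'll give you a ${tip_dollars} tip if you provide the whole solution."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    "I write an AI newsletter and consult on AI products.",
    "Write a Python function that deduplicates a list while preserving order.",
)
```

The `system` message plays the role of custom instructions; the `user` message folds in both tricks so the model is nudged toward a full answer rather than a partial one.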
Beyond that, though, when it comes to generative AI, I think it’s important to keep an open mind, because most of us still don’t realize or understand all of its applications yet.
Do you have a hot take about generative AI?
I feel like I’ve been sounding like a big pessimist, but I’m actually very optimistic about the future! AI will get rid of a lot of grunt work, and we’ll have more time for creativity. With so many technologies in the past we’ve been afraid about job losses, but new jobs were always created—it’s the same with AI. It’s going to get rid of a lot of jobs, then it’s going to create a lot of jobs as well. What those new jobs will be, I have no idea—but I do think there will always be humans overseeing AI in work.
It’s an exciting time to be alive, and to be able to explore the possibilities of any kind of passion or idea. This is the only time in history where somebody can have an idea and just start a business in a single day—the amount of work a single person can do alone has exponentially increased. In the next five to 10 years there will be a lot of people—“indiepreneurs”—who will have started their businesses using AI. People should be optimistic about how they can use AI for their work.
The bigger challenge, in my opinion, is in education. I think the current education system, in terms of how we educate kids, is completely broken, and the biggest thing I’m optimistic about is using AI to completely change it, so that’s what I’m doing a lot of work on as well. There are so many good things to come, and I can’t wait to see them come to fruition.