Oct 13, 2023

The Linux Foundation's Lou Marvin Caraig on using AI to develop new products

"Ultimately it's on you. ChatGPT requires interaction, discussion, and discovery to get it right."

"Ultimately it's on you. ChatGPT requires interaction, discussion, and discovery to get it right."

Tell us about yourself. What are you working on right now?

I’m Lou Marvin Caraig, one of the directors of engineering at the Linux Foundation. I’m currently working on a product that will provide insights to executive directors and maintainers of different open-source projects so they can take data-driven action. I joined the Linux Foundation two months ago. Previously, I was the head of engineering at Athenian, where we were trying to provide similar insights for engineering leaders to help them create high-performing teams. 

How do you use AI for work? 

One way is from a process perspective. I use ChatGPT as if it’s a real person, asking: “Hey, I'm working for this company, and I have these issues. What are some ideas to solve them?”

For example, when I joined the Linux Foundation, all the engineers were working in silos, concentrating on their own tasks rather than on the product as the final outcome. So, I asked ChatGPT, “Do you have any suggestions? What are some initiatives that could be implemented within the team to improve the situation and achieve this specific goal?” Of course, [getting maximum value from the conversation] required some iteration.
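
(For readers who want to try the same role-and-context approach outside the ChatGPT interface, here is a minimal sketch using the OpenAI Python client. The prompts, model name, and team details are illustrative placeholders paraphrased from the situation Caraig describes, not his actual prompts.)

```python
# Minimal sketch: the "talk to it like a colleague" pattern, scripted against
# the OpenAI chat API. Assumes the `openai` package (>=1.0) and an
# OPENAI_API_KEY environment variable; prompts and model are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are an experienced engineering manager."},
    {"role": "user", "content": (
        "I just joined a company where engineers work in silos, focusing on "
        "their own tasks rather than the product. What initiatives could "
        "improve this?"
    )},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# Iterate: keep the answer in the history and ask a narrower follow-up.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Which of these could a five-person team start this sprint?"})
print(client.chat.completions.create(model="gpt-4", messages=messages).choices[0].message.content)
```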

When it comes to [the subject of] building high-performing engineering teams, I knew that ChatGPT is very well-trained here: its knowledge base is rich and usually all its suggestions make sense. This is something that I realized back at Athenian, as we were building a proof of concept for a product to help engineering leaders build high-performing teams.

Another way I use it is as a tech expert. There are certain things that are just not easy to Google. For example, if you want to describe a specific architecture that you’d like to implement or modify, you can’t just throw it into Google’s search engine and [expect it] to understand what fits your use case. It’s easier to provide context and iterate with ChatGPT. The biggest advantage is that you can follow up with very specific technical questions—on probabilistic data structures like HyperLogLog, database sharding and/or partitioning architectures, and so on.
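
(As an illustration of that follow-up pattern, here is a small, hypothetical helper that keeps the architecture description in the conversation history so each technical question builds on the context already given. The class name, model, and architecture notes are invented for the example.)

```python
# Sketch of a "tech expert" conversation that carries context across
# follow-up questions. Assumes the `openai` package (>=1.0); everything
# else here (names, model, architecture notes) is a placeholder.
from openai import OpenAI

class TechExpertChat:
    def __init__(self, architecture_notes: str, model: str = "gpt-4"):
        self.client = OpenAI()
        self.model = model
        # Seed the conversation once with the architecture context.
        self.history = [
            {"role": "system", "content": "You are a senior systems architect."},
            {"role": "user", "content": f"Here is the architecture I'm working with:\n{architecture_notes}"},
        ]

    def ask(self, question: str) -> str:
        self.history.append({"role": "user", "content": question})
        reply = self.client.chat.completions.create(model=self.model, messages=self.history)
        answer = reply.choices[0].message.content
        self.history.append({"role": "assistant", "content": answer})
        return answer

chat = TechExpertChat("Postgres 14, one primary, two read replicas, ~2 TB of event data.")
print(chat.ask("Would HyperLogLog help us estimate unique visitors cheaply?"))
print(chat.ask("And should we shard or partition the events table first?"))
```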

How else would you describe the value GPT has brought you? 

When I was in-between jobs, I decided to make a job board for [postings from] technical start-ups. Of course, one of the best ways to promote a new product is to start writing content. I wanted to start writing myself, but English is not my primary language and [I wasn’t sure] what I should write about. [I thought] ChatGPT would nail it.

I started by providing ChatGPT with context: “You're a freelance writer. This is the product and requirements for the blog post that you need to write. These are the goals, keywords, et cetera.” I wanted suggestions for blog titles first because I didn’t know what to write. ChatGPT started offering some very interesting ones. I liked “What to Expect in Remote Culture.”

I responded, “This looks good, but it’s too short. Please make it longer.” The whole conversation—a back-and-forth of multiple interactions—was a process of iteration and revision. [When I was happy with the blog post text it had produced,] I asked ChatGPT to write the description for the HTML [meta] tag, so it felt like an end-to-end use case.
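
(The same end-to-end workflow can be replayed programmatically. The sketch below paraphrases the sequence of prompts described above; the product description, keywords, and model name are placeholders, not the actual conversation.)

```python
# Sketch of the blog-writing workflow as a single scripted conversation:
# role + requirements, title ideas, draft, revision, then the meta tag.
# Assumes the `openai` package (>=1.0); prompts are paraphrased placeholders.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": (
        "You are a freelance writer. The product is a job board for technical "
        "start-ups. Goals: attract candidates via SEO. Keywords: remote work, "
        "start-up engineering culture."
    )},
]

def turn(prompt: str) -> str:
    """Send one user message, keep the reply in the running history."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

turn("Suggest five blog post titles.")
turn("Write the post for 'What to Expect in Remote Culture'.")
turn("This looks good, but it's too short. Please make it longer.")
print(turn('Now write the <meta name="description"> tag for this post.'))
```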

ChatGPT has also been helpful for product discovery. When I left Athenian, I was still in touch with some of the leadership, marketing, customer success, and product teams. We were discussing the struggles [of] engineering and product leaders; one was [managing] Jira. One idea we had to address this was to offer product management (PM) as a service. I tried chatting with ChatGPT to understand if the idea we had was feasible. I started prompting it by saying, “You’re a product manager,” before playing with it to understand the possible outcomes. At one point, I asked it to add a new section to the settings page for admins to set up user permissions.

I was curious to see if GPT technology could enable building our idea for PM as a service. My conversation with ChatGPT was to assess whether a hypothetical customer on a hypothetical team could leverage ChatGPT to help them do better or faster product management work. I wanted to see what would happen when I asked it questions without providing much context; after I did provide context (would ChatGPT be able to split the product development process into discrete tasks?); and after I provided even more context, for example around technology (would it be able to provide more detailed advice?).

When I started that conversation about a hypothetical product, ChatGPT did ask for more context—the same thing I’d expect from a senior engineer on my team. I followed up as I would do in a Zoom call with the PM or an engineer. Then [ChatGPT] started [providing ideas for] product development and a breakdown of tasks. I realized it was going in the right direction because the task breakdown was very interesting and high-level. Then, I pushed the boundaries even farther to understand how smart ChatGPT-as-PM could be by providing it with more context. I shared: “These are the technologies that we're using.” Then the task breakdowns became even more detailed. 
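
(One way to picture this experiment is to run the same product question at increasing levels of context and compare the task breakdowns. The sketch below is a hypothetical reconstruction; the product, stack, and prompts are placeholders, not the real conversation.)

```python
# Sketch of the progressive-context experiment: ask for a task breakdown
# with no context, some product context, then product plus tech stack.
# Assumes the `openai` package (>=1.0); all details are placeholders.
from openai import OpenAI

client = OpenAI()

context_levels = [
    "",  # level 0: no product context at all
    "The product is a Jira companion that turns roadmap items into tickets.",
    "The product is a Jira companion that turns roadmap items into tickets. "
    "Stack: Python/FastAPI backend, React frontend, Postgres, Jira REST API.",
]

for level, context in enumerate(context_levels):
    messages = [{"role": "system", "content": "You are a product manager."}]
    if context:
        messages.append({"role": "user", "content": context})
    messages.append({"role": "user", "content": (
        "Break the next quarter's work on this product into discrete tasks."
    )})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    print(f"--- context level {level} ---")
    print(reply.choices[0].message.content)
```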

What tips would you provide for using ChatGPT? 

First, don’t start with the preconception that it's a machine [that] won't be able to do anything right. I really try to approach ChatGPT naturally, but without completely trusting the final outcome. (Which is basically the same thing that anyone would do when speaking with someone whose background [or expertise] you don’t know!) For example, when it comes to the idea of PM as a service, I tried to use real references to validate ChatGPT’s work. Sometimes I know straight away when it doesn't make sense because I have a strong tech background, but when it comes to very specific knowledge I try to find citations backing it up.

How do you validate these references?

Usually, the answer makes so much sense that I don't even need to validate it. It really depends on how much context you have on a specific topic. Sometimes, I ask ChatGPT, “Can you give me some reference about what you said?” Sometimes, it links to papers or blog posts. Just ask and it’ll be provided.

Do you ever share prompts with your engineering team?

Most of the time, I use ChatGPT as a personal assistant rather than sharing my prompts with the other engineers. The majority of the times I’ve shared my techniques, it was to prove to people (especially engineers) who don’t trust ChatGPT that it can work. But ultimately it’s on you: ChatGPT requires interaction, discussion, and discovery [to get it right].

I think our current expectations are too aligned with search engines. There, you provide a prompt and you’re given the answer. With ChatGPT, there’s a different way of using it. You also need to [be able to recognize] when you hit a wall, which can happen for multiple reasons. Most of the time, it’ll give you enough information to go outside of ChatGPT and take some extra steps forward, which is the most important thing.

Do you have a hot take on generative AI? 

It’s already really disruptive. The number of users that ChatGPT engages with in a single week, compared to all other products, is astonishing.

ChatGPT works right now—but it’s still mostly exchanging text-based messages. Text-only communication limits human interaction, though. When you’re in a discussion over Slack with your team, how many times do you decide to jump on a call because it will be faster? This is not something that you can really do, at least right now, with ChatGPT. (Though new features, like multimodal ChatGPT, are beginning to roll out to the public.)

I think generative AI will become more pervasive across human interactions in the future, especially in how we interact with other people. Just a couple of weeks ago, I saw a super-fast model doing text-to-voice translation in real time. I think that this is the direction we’re going in. The technology itself is groundbreaking, but unless it's going to be used by normal people, it's not a meaningful innovation.

To make this technology as widespread and available as possible to everyone, it needs to feel like something that isn’t extraneous. Right now, the way you interact with ChatGPT, you still feel that it's a machine. The direction we’re going in, it’ll feel more like a real person every day—and more like a part of day-to-day life. 

Sign up for more AI at Work

An occasional newsletter showcasing the latest conversations with leaders, builders, and operators who use generative AI to power their work.
