Feb 12, 2024
Tell us about yourself. What are you working on right now?
I’m a marketer with about 14 years of experience in the tech industry, spanning everything from traditional B2B SaaS applications to developer tooling to more formal data and AI products.
Right now I’m a consultant mostly advising smaller companies, working with engineering and product teams on their LLM strategies and marketing. Over the last year I’ve also been a director of product marketing at a Bay Area startup called Labelbox, a data platform for building AI models. Outside of work, I’m also very active with the Product Marketing Alliance, I speak on podcasts, I conduct workshops, and generally I’m always asking, “How can marketers use AI to be more effective at what they do?”
When was your first brush with AI?
Seven years ago I consciously decided to invest more and more time into AI. I led worldwide developer campaigns for Databricks at Microsoft, and I also started building AI models using Python. I felt it was a natural extension of my career—I’d been handling a lot of data-driven marketing and using it for everything from personalizing content to analyzing campaigns and informing strategies. My teams considered me a kind of shadow data analyst. I started dabbling with the idea of machine learning, and it turned into a full-blown love affair before I knew it.
I thought of it as a kind of arcane science at first. Only elite groups of scientists could unlock that knowledge for organizations because it was so hard to get into the depths of backpropagation, forward propagation, biases, embeddings—there were so many terms that made no sense to me, but I just said, “Wow, this still seems to work even though I don’t know what it does.” It was an amazing time.
Then I joined Meta, where I was fortunate that we had the resources to work with dedicated data scientists, and that it was an organization where data and marketing worked closely together. Models were very expensive to train back then, few could actually build them, and they also weren’t very powerful—this was 2019, back in antiquity—so the best they could do were basic tasks like count words in paragraphs, which seems laughable by today’s standards. But it was still completely revolutionary.
We built a very advanced model that could analyze thousands of hours of customer calls—petabytes of data—and I asked it some basic questions, like, “Hey, if I have X million customers in 40 countries, what topics do they care most about?” It took three months to answer that question—it was just that complex to do a TF-IDF analysis on all that data—and eventually it told us there were around a thousand topics, each of which our audience had roughly a one percent chance of caring about, and that one percent was the highest probability found across all possible topics. I said, “I’ll take that,” and built email campaigns around that one percent probability.
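For readers unfamiliar with the technique: TF-IDF (term frequency–inverse document frequency) scores how distinctive a word is to one document relative to the rest of the corpus, which is what makes it useful for surfacing topics in piles of call transcripts. Here is a minimal, self-contained Python sketch of the idea (the toy “call” documents are invented for illustration; the real system operated at a vastly larger scale):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF scores for each term in each tokenized document.

    TF = term count / document length; IDF = log(N / document frequency).
    """
    n = len(docs)
    # Document frequency: how many documents contain each term at least once.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

# Toy stand-ins for customer-call transcripts.
docs = [
    "pricing question about enterprise plan".split(),
    "bug report about login page".split(),
    "pricing discount for enterprise customers".split(),
]
scores = tf_idf(docs)
# Terms shared across documents ("about", "pricing") score low;
# terms distinctive to one document ("login") score highest there.
```

A real pipeline would add tokenization, stop-word removal, and clustering of the resulting vectors into topics, but the weighting at the core is just this.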
I realized that even with limited probabilities it allowed me to create tailored messages that were four to five times more effective than what I’d been sending out previously. That completely blew my mind. I was like, “Wow, imagine if we could actually personalize every touch point with a customer?” That’s been the holy grail for marketing from the beginning: how do you talk in a helpful and direct way to customers, instead of blasting them with messages? We never used to have the technology to do that.
And then I was at Google, where I led product marketing for TensorFlow, which was at one point the most popular framework for building deep learning applications. I led global product marketing, focusing on positioning, messaging, understanding the AI tooling landscape, informing product strategies, and running performance marketing to scale Google’s educational programs for developers. It was a lot of marketing, but at the same time that was all an incredible behind-the-scenes peek at the world of advanced AI applications, and my knowledge of AI was deepening.
How do you continue to use AI at work?
In 2023, I left Google around the same time GPT-4 launched, and I jumped in with both feet, using LLMs for everyday marketing work. I also diversified my arsenal of tools to include others like Bard, Claude, and Bing Chat, and found they were capable of doing what I call “assistant-level” tasks in marketing.
AI can answer complex questions a lot faster—I don’t use Google any more except for very simple factual questions, and I prefer Perplexity for the things I’m struggling with the most. I think that’s where product experiences are headed: an incredible alchemy of AI and product design.
I’ve also dabbled with using AI for code generation, creating scripts in Google Sheets and Excel to automate number-crunching. However, while some models can do data analysis directly now, I haven’t found that to be as impactful. The current crop of AI tools still has its limitations, and I say that both from the perspective of someone who’s an end user and someone who’s worked behind the scenes building AI technologies and marketing them. They do make up facts and hallucinate, and they’re not very good at understanding the context behind questions just yet—especially if you’re asking a one-shot question—so if you treat AI as an intelligent assistant, capable of truly understanding intent, then that’s where it tends to fall short.
For now, I use AI as a thought-starter, as a research tool, and then I do the heavy lifting of analyzing and sorting and arranging and categorizing things myself before turning back to the AI for things like proofreading and editing. We’re still some way away from truly mimicking a writer who has their own point of view, for example.
Do you have any tips you could share?
My secret weapon has been this kind of prompt: “Hey, explain this arcane topic to a five-year-old.” Those are the best explanations I think I’ve ever seen for complex technical topics.
How has AI transformed your typical product launch process?
I think the impact has been dramatic in terms of workflows—it doesn’t take four meetings to clarify an idea or concept before working with designers. One of the coolest things I did recently for a campaign was when I asked DALL-E, “This is the audience, this is the kind of message I want, can you give me some appropriate visual metaphors?” It did a phenomenal job, giving me four really cool concepts that I could share with our designers, who loved them and then were able to create something powerful as a result.
I can spend 10 minutes doing what might have previously taken weeks to communicate clearly. It’s dropped that intra-team communication barrier significantly.
In terms of my own productivity, and creating content, I honestly felt like I was only ever an okay—never a great—writer, but AI tools have given me the confidence to break past my own barrier. I have a smart assistant now who can help with idea generation and proofreading, and I’ve become very prolific as a result. It’s ironic, but I’ve found my own human voice is much more prominent after the advent of AI.
It sounds like you’re saying AI is an augmentation of human creativity for you and your teams, rather than a replacement—is that accurate?
It’s a very significant question. I feel like there are many minds greater than mine trying to answer that. To give you one cool example, I read something Yann LeCun—considered by many to be the “godfather” of modern AI—said recently: that today it takes millions of dollars to train LLMs on 10 trillion tokens of data, but by the time a child is only four years old they’ve already consumed hundreds of times more completely unstructured data, and they’re way more capable of finding connections and patterns. A four-year-old takes in around 20 megabytes of data per second from their eyes alone.
We’re realtime systems in a way—plugged into this world, learning from constant input and feedback. That paradigm just does not exist in present-day machine learning. Once a model is trained, it’s frozen, having learned a certain way of doing things—it’s a dead system that never relearns or readjusts itself based on new sensory data without expensive cycles of retraining. Without true self-learning they can’t replace true creativity, which often comes through unforeseen events and experiences.
So I feel like the risks are relatively limited. They are there, but replacing humans seems far-fetched to me. However, I do resonate with the idea that your job won’t be replaced by AI, it’ll be replaced by someone using AI. I think learning to work with models is now an essential skill for the workers of the future, because they make you a lot faster and more capable.
How have you changed your approach as a product marketer when dealing with products which integrate AI in some way?
The biggest change is that every product is clamoring to be an AI-enabled product. Partially that’s because of board pressure, but it’s just a reality that now your product has to have generative AI capabilities baked in. In terms of marketing, though, it’s super interesting, because as a marketer I’m always trying to draw the line between bad claims and good claims. I’ve seen a lot of bad claims in the industry—you know, “Our chatbot is smart enough to understand every request in a personalized way”—and that’s simply not true right now. There’s a massive gap the moment you step outside of ChatGPT’s own knowledge and try to add your own, or fuse your enterprise’s data with an LLM’s—I feel like we’re still some ways away from that level of sophistication.
I feel that I have a fairly good grasp of generative models and what they can do, and I follow the industry press so that I’m consistently on top of the latest developments. So I now tend to pay extra attention to a product’s roadmap and capabilities, being very, very careful not to miscommunicate or oversell the promise. This runs both ways: it helps keep me honest, so that I only produce credible, believable claims, but it also helps me debunk competitor claims. It’s way too common to see some of the largest, most well-recognized brands make completely outlandish claims about what their AI can do. It’s a competitive advantage against untruthful competitors.
It also helps me as a product marketer to get hands-on with the product itself. It used to be a lot harder in the past to work directly with engineers or product teams and get their bandwidth to answer even trivial questions, because they’re always too busy. Now, with generative tools and Q&A baked into many products, it’s easier for me to directly understand the difference between the new AI-enhanced product and the generic AI product.
Coordination was always a big challenge, and I think that always-ready access to information is a massively good thing that AI has enabled.
Do you have a hot take on generative AI?
I think we’ve just scratched the surface with LLMs. I have strong reasons to believe that Apple will launch its own AI this year. I also think that we’re in the infancy of video and image editing right now, and once we have truly generative video and image capabilities it’s going to dramatically change the way we do things.
However—and maybe this is my most controversial take on this—I do also think that we’re some way away from fully utilizing LLMs. Don’t believe the hype. People who are actually working on fusing their own data with LLMs are starting to realize it’s harder than it seems, and I won’t be surprised if it takes the better part of 2025 to get to the promise of truly intelligent assistants that have all of your and your organization’s data at their fingertips.