Jan. 4, 2024

The UX Research reckoning is here | Judd Antin (Airbnb, Meta)


Judd Antin has spent 15 years leading research and design teams at companies like Yahoo, Meta, and Airbnb. His direct reports have gone on to lead user research at Figma, Notion, Slack, Robinhood, Duolingo, AllTrails, and more. In our conversation, we unpack the transformation that the user-research field is experiencing. Specifically:

• Where user research went wrong over the past decade

• The three types of research—macro, middle-range, and micro—and the purpose of each

• How to effectively integrate researchers into the product development process

• The “user-centered performance” phenomenon and why it’s a waste of time

• Common tropes about PMs, from researchers

• The ideal ratio of researchers in a company

• Why Judd says NPS is useless, and what to use instead

Brought to you by Teal—Your personal career growth platform | Vanta—Automate compliance. Simplify security | Ahrefs—Improve your website’s SEO for free

Where to find Judd Antin:

• LinkedIn: https://www.linkedin.com/in/juddantin/

• Website: https://juddantin.com/

• Blog: https://medium.com/onebigthought

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Judd’s background

(04:16) Critiques and responses to Judd’s post “The UX Research Reckoning Is Here”

(07:33) The state of user research

(08:53) Macro, middle-range, and micro research

(14:05) What teams get wrong when it comes to research

(15:46) The importance of integrating research from the beginning

(17:30) Traits of great researchers

(19:53) Advice for evaluating user researchers

(21:10) Balancing business and product focus

(23:55) User-centered performance

(26:42) The role of intuition in product development

(30:15) Checking your gut instincts

(32:54) Common tropes about PMs, from researchers

(41:02) A/B testing vs. user research

(43:15) Hindsight bias and narrative fallacy

(44:55) Making recommendations based on research

(47:26) Advice for teams on how to leverage researchers

(51:18) How product managers can be better partners to user researchers

(56:53) The ideal ratio of researchers in a company

(59:43) Empowering user researchers to drive impact

(01:03:39) The limitations of NPS as a metric

(01:06:48) The risks of dogfooding

(01:08:51) Lightning round

Referenced:

• Matt Gallivan on LinkedIn: https://www.linkedin.com/in/mattgallivan/

• Janna Bray on LinkedIn: https://www.linkedin.com/in/janna-bray-a4046a25/

• Celeste Ridlen on LinkedIn: https://www.linkedin.com/in/celesteridlen/

• Rebecca Gray on LinkedIn: https://www.linkedin.com/in/rebeccagray2/

• Hannah Pileggi on LinkedIn: https://www.linkedin.com/in/hannah-pileggi-43169314/

• Louise Beryl on LinkedIn: https://www.linkedin.com/in/louise-beryl-13225833/

• The UX Research Reckoning Is Here: https://medium.com/onebigthought/the-ux-research-reckoning-is-here-c63710ea4084

• The end of the “free money” era: https://www.theguardian.com/technology/2023/apr/11/techscape-zirp-tech-boom

• Cognitive biases: https://en.wikipedia.org/wiki/List_of_cognitive_biases

• IDEO design thinking: https://designthinking.ideo.com/

• Everything Is Obvious: How Common Sense Fails Us: https://www.amazon.com/Everything-Obvious-Common-Sense-Fails/dp/0307951790

• Patrick Collison’s tweet: https://twitter.com/patrickc/status/1443215022029619200?lang=en

• Brian Chesky on LinkedIn: https://www.linkedin.com/in/brianchesky/

• Brian Chesky on Lenny’s Podcast: https://www.lennyspodcast.com/brian-cheskys-new-playbook/

• NPS: https://en.wikipedia.org/wiki/Net_promoter_score

• What is CSAT and how do you measure it?: https://www.qualtrics.com/experience-management/customer/what-is-csat/

• Michael Murakami on LinkedIn: https://www.linkedin.com/in/michaelhmurakami/

• Bad Leadership: What It Is, How It Happens, Why It Matters: https://www.amazon.com/Bad-Leadership-Happens-Matters-Common/dp/1591391660

• Demon Copperhead: https://www.amazon.com/Demon-Copperhead-Novel-Barbara-Kingsolver/dp/0063251922

• All Systems Red: The Murderbot Diaries: https://www.amazon.com/All-Systems-Red-Murderbot-Diaries/dp/0765397536

• The Last of Us on HBO: https://www.hbo.com/the-last-of-us

• Belay glasses: https://www.amazon.com/Belay-Glasses-Climbing-Comfortable-Sturdy/dp/B08GSBYDKQ/

• Epictetus: Control What You Can—Especially Yourself: https://www.shortform.com/blog/epictetus-control/

• The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change: https://www.amazon.com/Habits-Highly-Effective-People-Powerful/dp/0743269519

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Transcript

Judd Antin (00:00:00):
User-centered performance refers to customer obsession or user-centered practice that is symbolic rather than focused on learning. It's hugely common, I would argue. It's work we do to signal to each other how customer obsessed we are, not because we want to make a different decision. If your listeners are like, "I don't do that," I'm like, "Think about it for a second. This is extremely common." Every time a PM comes to a researcher at the end of a product process and says, "Can you just run a quick user study just to validate our assumptions," that's user-centered performance. It's too late to matter. We've got to ship it. What they want is to check the box. One of my big mantras was, "We don't validate, we falsify. We are looking to be wrong." Many PMs, many designers are not in that place. They do not want to be wrong. They're looking to validate, and that's user-centered performance.

Lenny (00:00:54):
Today my guest is Judd Antin. Judd helped build the user research practice at Facebook. He was a longtime head of research at Airbnb, and his direct reports have gone on to lead research teams at Figma, Notion, Slack, Robinhood, Duolingo, Faire, and other amazing companies. These days, Judd spends his time consulting, helping companies with organizational challenges, product strategy, design, research, hiring, onboarding, and crisis management. In our conversation, we unpack a conclusion that Judd has come to recently about how the user research field is going through a reckoning and what needs to change, both within the user research field and in how companies leverage user research going forward.

(00:01:37):
Judd shares what the user research field has gotten wrong over the last decade, how PMs and designers rely on user research too often and use it to answer the wrong questions, where user research will continue to provide significant value and how to best leverage your researchers, why it's important for researchers to think about business goals more versus just what users need, what to look for when you're hiring a user researcher, how PMs can be better partners to researchers, and also a phenomenon that I love, which Judd describes and often witnesses, that he calls user-centered performance, where everyone acts like they care about the user, but they're just doing it for show and already know what they want to do. This episode has a lot of spicy takes and will probably upset some people, but Judd is sharing some real talk here that I think we all need to hear. With that, I bring you Judd Antin, after a short word from our sponsors.

(00:02:27):
This time of year is prime for career reflection and setting goals for professional growth. I always like to spend this time reflecting on what I accomplished the previous year, what I hope to accomplish the next year, and whether this is the year I look for a new opportunity. That's where today's sponsor Teal comes in. Teal provides you with the tools to run an amazing job search. With an AI-powered resume builder, job tracker, cover letter generator, and Chrome extension that integrates with over 40 job boards, Teal is the all-in-one platform you need to run a more streamlined and efficient job search and stand out in this competitive market. There's a reason nearly one million people have trusted Teal to run their job search. If you're thinking of making a change in the new year, leverage Teal to grow your career on your own terms. Get started for free at tealhq.com/lenny. That's tealhq.com/lenny.

(00:03:18):
This episode is brought to you by Vanta, helping you streamline your security compliance to accelerate your growth. Thousands of fast-growing companies like Gusto, KOM, Quora, and Modern Treasury trust Vanta to help build, scale, manage, and demonstrate their security and compliance programs, and get ready for audits in weeks, not months. By offering the most in-demand security and privacy frameworks such as SOC 2, ISO 27001, GDPR, HIPAA, and many more, Vanta helps companies obtain the reports they need to accelerate growth, build efficient compliance processes, mitigate risks to their businesses, and build trust with external stakeholders.

(00:03:55):
Over 5,000 fast-growing companies use Vanta to automate up to 90% of the work involved with SOC 2 and these other frameworks. For a limited time, Lenny's Podcast listeners get $1,000 off Vanta. Go to vanta.com/lenny, that's V-A-N-T-A.com/lenny, to learn more and to claim your discount. Get started today. Judd, thank you so much for being here. Welcome to the podcast.

Judd Antin (00:04:22):
Lenny, thanks for having me.

Lenny (00:04:24):
It's my pleasure. So we actually worked together at Airbnb for many years. And as I was preparing for this, I realized how many of the people that you managed went on to do amazing things. So I'm just going to read a list of people that worked for you and what they do now. We had Matt Gallivan, who now leads research at Slack. We have Janna Bray, who leads research at Notion; Celeste Ridlen, who leads research at Robinhood; Rebecca Gray, who leads research at Faire; Hannah Pileggi, who I think was leading research at Duolingo; Louise Beryl, who leads research at Figma; and then Noam, who was leading research at Wealthfront. I think he moved on to something else. What a fricking crazy alumni community and group from this one team that you hired and incubated.

Judd Antin (00:05:11):
No, I've never looked at that list, but I'll tell you, I have been so privileged to work with all these amazing humans. I can't take credit for it. They're just outstanding people. And I'm glad the diaspora is out there, because these people are rock stars.

Lenny (00:05:23):
Okay. The main reason that I wanted to do a podcast episode with you is that you wrote this piece titled The UX Research Reckoning Is Here, which I understand caused quite a stir in the research community and, I think, adjacent communities. And let me just read one of your takeaways at the top of your post to give people a sense of what it was about. You wrote, "The user research discipline over the last 15 years is dying. The reckoning is here. The discipline can still survive and thrive, but we'd better adapt, and quick." Before we get into the meat of the piece, could you share a bit about just the reaction to this piece, and maybe whether it was a surprise and what you expected would happen when you put this out?

Judd Antin (00:06:06):
Yeah, I was definitely surprised. I wrote it because I wanted to start a conversation about something I was thinking about. I didn't really know who would read it. And in the end it turned out a lot of people read it. I learned that using the word reckoning may have been a mistake because it inspires a lot of drama in a conversation that I wanted to be really productive and positive. Overall, I would say, though, that the response was very positive. It seemed to resonate with a lot of people who reached out to me. I spent a lot of time talking to teams, to designers, to researchers, but there were also a ton of critiques. 

(00:06:43):
I would say some of it was like people thought I was throwing research or researchers under the bus, like, "It's researchers' fault. We're doing it wrong." Which I don't believe at all. And that I wasn't taking responsibility as a research leader or a design leader myself. And the most interesting one I would say was the anti-capitalist crew, because one of my points that we'll talk about is that I think researchers need to be more profit focused. And there are a lot of people out there who, I think they think that's not cool or not research's job, and I'm like, "Well, what are we doing then, if we're not helping businesses succeed?" But that was the most surprising critique, for sure.

Lenny (00:07:23):
I've worked with some of those people who are just like, "Why are we growing? Why do we focus so much on growth? Why do we need to grow this business?"

Judd Antin (00:07:29):
Yeah. 

Lenny (00:07:29):
So I get that. 

Judd Antin (00:07:30):
Maybe it's the wrong industry for them.

Lenny (00:07:32):
Yeah. I'm not a fan of that. Okay. Let's actually dig into the meat of your message, and the big takeaway, and the conclusion of what you're finding is happening in user research. And I know a lot of this comes from the fact that a lot of user researchers have been laid off at a lot of companies. It was one of the hardest-hit teams. And so I think a lot of this comes from that. So let's just start big and then see where it goes.

Judd Antin (00:07:53):
So yeah, everybody who's paying attention has noticed that there have been a bunch of layoffs. And I think back in the summer I was thinking, "Listen, this seems to be hitting UX and UX research particularly hard. Is there something going on? Is there a bigger picture?" The reason I use the word reckoning is because to me that's like, "Hey, a moment to take stock." And triggered by the fact that a lot of wonderful humans may have lost their jobs, and many more are afraid of losing their jobs. And so if it's a sign, the fact that research has been hit so hard, it's a sign of what? And so the thesis of my article is really, it's a sign that maybe the system is a little more broken than we think and that research is not driving the value or impact that it should or could. And that's for a bunch of reasons, I think. Some of it is stuff that research can do better, and a lot of it is how research is integrated and positioned in companies. 

(00:08:46):
And at the root of all that, I think, is that we're just doing too much of what I would consider the wrong type of research. And what I mean by the wrong type of research is I have this framework, and it's in the article, macro, middle range, and micro research, at least three ways to talk about it. And it's pretty simple, the intuition of what those are. So macro research is big picture, strategic, business focused, forward-looking innovation, look at the market, look at competitors, long-term research to understand where the product should go next, stuff like that. 

(00:09:20):
And then you have micro research, which a lot of really technical usability falls into this, all the beautiful stuff that researchers do to enable a really high quality, excellent, pixel perfect thing to go out the door, laser-focused research to understand AB test results, stuff like that. And then you have this middle range, which is this blobular place where the research questions are middle altitude and a lot of the core, let's say user understanding questions fall here. And a lot of what research is doing is research in that space. It's, "Let's take a group of people and ask some questions about how they think, feel, behave, how they're using a product or not using a product." And it's just this devastating mix of really interesting to many, including researchers, and not impactful enough for the business. That's the core thesis. Researchers do it because it's interesting, but honestly, and a thing we should talk about, Lenny, is researchers also do it because it's the kind of work we most often get asked to do. 

Lenny (00:10:22):
Yeah. That's exactly what I was thinking. As a PM, that's what I want to get answers to, is like, "How should we think about this one product?" And I totally get this.

Judd Antin (00:10:31):
Yeah. The questions turn out to be really interesting and there are many cases at many companies where it's super impactful. But the problem with those types of questions is they tend to, they trigger all the worst stuff that researchers experience. So they yield results which are interesting, but sometimes hard to operationalize. They trigger the post hoc bias, really, really, really well where a lot of people can say confidently like, "Oh, that was obvious. We knew that already." And they fulfill this need for us to feel and be customer obsessed, user-centered, without changing anything. So doing too much of that research to me is a symptom of a broken system, and where companies are really different from each other. I heard from so many after this article and they're like, "Well, my company and my industry is like this or not like this." 

(00:11:24):
But in tech, we spent the last many years hiring, hiring, hiring researchers. But, and I'm sure most of your listeners are familiar with the idea of a ZIRP, maybe it was a zero-interest-rate phenomenon, where it was okay, when the money was easy, to hire researchers, even though we were not setting them up properly. We were setting them up to fail. We set them up as a service function. We didn't know what research was for. We didn't know how to really drive impact with it. And that's where the reckoning comes from. It's like that era is over. Research, I think, is more crucial than ever. Good, great researchers are more impactful than ever. But it's in a new space. We're in a new space now.

Lenny (00:12:05):
I want to make sure people understand this framework. And specifically, how would you best describe the difference between this middle range research and macro research?

Judd Antin (00:12:16):
Middle range research is usually focused on a more specific set of research questions or a constituency. So if macro is like, "Let's understand the overall competitive landscape. Let's do a concept car type project where we really look ahead. Let's get involved with strategic planning," which is a wonderful thing for researchers to do, do TAM studies, other things like that, that stuff lives in the macro space. 

(00:12:42):
The middle range space is like, what's a good example? "We want to know how Airbnb hosts feel about their payment options." That's a really interesting, reasonable question. And we can go out and do research on that, but it's not that specific. It's not really targeted at a business problem yet. It could be. Maybe that's a result of the research, but it yields these middle range insights in which we've learned things like, "Well, hosts want flexibility about their payment options." I'm making this up. And that's a good example where it's like, "It's not that that's not an interesting set of questions, it's just not quite pointed enough." And it's not framed in the language of the funnel, or the business strategy, or the OKRs. It's not quite enough aligned enough to that. It's too blobular in that middle level and it ends up not driving impact.

Lenny (00:13:38):
I think it also leads to a lot of the things, as you described, that people don't like about research. It delays everything. You have to wait for the research to be done to have an answer, to make a clear decision. It also creates this issue that people complain about, that PMs and product teams don't want to just make a decision on their own. They're like, "I will get this additional data point and make sure research tells us this is the right answer instead of just trusting myself." God, I guess maybe along those lines, this might be going off track a little, but what's your advice there for, say, product managers or product teams, to not necessarily rely on research for that middle range?

Judd Antin (00:14:14):
I think the reason why so many PMs ask for those middle range questions is because they haven't really gotten deep with their researcher in a way which can leverage it for maximum impact. So if the question is like, "Hey, Judd, you just pointed out a bunch of problems, can you be more solutions oriented?" Well, the solution is simple but not easy to me. It's that we need to restructure the way we make products in a way which integrates research much more fully. It looks like consistent relationships in which researchers, and the work, and the insights they provide are a part of the process from beginning to end.

(00:14:51):
And I think, Lenny, you as a PM, that's how you worked. I remember you, I know who you worked with. You worked with great researchers. But honestly, most product processes are not that way. And so that's when research is a service function. It gets called in right at the end. It's reactive in the sense that a researcher in the room listening and participating in the conversation could have a ton of impact on framing exactly the right question that will drive maximum business impact, maximum product improvement at that moment, and then go do it quick, and get back, and we're onto the next. But they weren't there, the relationship wasn't there. They're not engaged in the project from the beginning. And that's the number one root of the problem. As long as research is a service discipline, I think we're going to be stuck in this spot.

Lenny (00:15:37):
As people hear this, on the one hand, it's that research has not been as helpful to teams as they thought, and researchers have been spending time on the wrong things. On the other hand, your advice is to integrate research from the beginning and make researchers more involved throughout. And I think that might confuse people. How should they square that: "Research is actually more important; you should integrate it more deeply"?

Judd Antin (00:15:59):
There's a vicious cycle that's been happening, from where I sit, and this is what I hear from many, many researchers and research leaders, which is a lot of companies hired a lot of researchers with great intentions and didn't quite know how to integrate them. And UX research is a newer discipline, so maybe that's not surprising. We're still learning how to use it. "Cool, let's evolve." But a lot of companies hired these people, and they hired them into kind of a service discipline, very reactive, not in the room, not integrated in the way I said.

(00:16:30):
And so they had less input on the questions to ask, or they're included, but only at the end. And then they're unable to build those direct relationships, to be there in the room to actually drive the questions and insert insights. Because a good researcher is like the repository of insights you need for growth, but they're not there. They don't participate in the decision. So they end up doing research. They have jobs to do, so they do research that is too reactive, it doesn't matter, and then it's less impactful. Executives conclude that therefore researchers are not as impactful and then they get sidelined or laid off and the cycle continues. 

(00:17:07):
So I think the short circuit is the constant engagement. If you take a great researcher and you insert them consistently in a product process, I feel confident that researcher will drive a product improvement, metrics impact, growth, all the things that you want to see as a PM and a product leader. It's just that's the exception, not the norm these days.

Lenny (00:17:31):
This may be a hard question to answer, but when people hear, "If you have a great researcher, here's how you approach it," what are signals that your researcher is great versus not great? What are some things people could look for to tell them, "Oh, maybe I have the wrong researcher on my team"?

Judd Antin (00:17:44):
The best researchers, I think, are first of all multi-method. The first iteration of user research was primarily a qualitative discipline. But a strong opinion that I have is that that is largely one of those models that needs to evolve. It's not that qualitative user research is no longer important. It's that the best researchers have five tools. I think they have five tools. And those five tools are, number one, what we would call formative or generative user experience research. So looking ahead, innovation focused, really open-ended, maybe more ethnographic: "Let's go out into the field and talk to hosts and guests on Airbnb. Let's see people using our product in the field," stuff like that. So that's formative.

(00:18:27):
The second type is evaluative, so more like usability testing. The third tool is basic, rigorous survey design. It's the best scaled way to get responses from communities small and large. You can get a lot out of really well-crafted surveys. But to do that, you have to have the fourth tool, which is applied statistics. The best researchers know a little bit of stats. You can't interact in a world of A/B testing without knowing basic statistics.

(00:18:56):
And then in the old version of this, the fifth tool was SQL, because I think good researchers need to be able to run their own queries. These days, so much of that is dashboarded that the fifth tool may now be prompt engineering, which is a thing we could talk about. Maybe the fifth tool is somewhere in the technical skills that fall in between querying your own data, understanding it very well in companies that are awash with data, and interacting with generative AI.
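To make the survey and applied-statistics tools concrete, here is a minimal Python sketch, with made-up numbers rather than anything from the episode, of the kind of basic check a multi-method researcher might run on a survey result before reporting it:

```python
import math

# Hypothetical survey result (illustrative only): 312 of 800
# respondents say they want more flexible payout options.
n, k = 800, 312
p = k / n

# 95% confidence interval via the normal approximation,
# the kind of "little bit of stats" Judd says researchers need.
se = math.sqrt(p * (1 - p) / n)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{p:.1%} of respondents, 95% CI [{low:.1%}, {high:.1%}]")
```

The point is not the arithmetic; it's that a rigorous survey (the third tool) is only as useful as the error bars you can put around it (the fourth).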

Lenny (00:19:24):
Amazing. That's such a cool list. Okay, so just to play it back: formative, generative, innovative skills to think bigger and come up with new ideas; usability.

Judd Antin (00:19:35):
Yep.

Lenny (00:19:35):
Yeah, usability. How did you describe it? I have a different word, here. Evaluate? Evaluative?

Judd Antin (00:19:41):
Evaluative, right.

Lenny (00:19:41):
Okay.

Judd Antin (00:19:41):
So we're evaluating products and doing more. Really that's the micro level of research. 

Lenny (00:19:47):
Survey design, being really rigorous about it, applied statistics, and then SQL/dashboard/prompt engineering. 

Judd Antin (00:19:53):
Right. 

Lenny (00:19:54):
Maybe just one last question along this thread, also a big question, but any advice for how to evaluate these skills/interview for them? I know this is its own deep topic, but any advice for someone trying to find this person?

Judd Antin (00:20:05):
I've interviewed hundreds or thousands of researchers, and the way I usually approach that is you want a researcher who's got a Swiss army knife, because if all you have is a hammer, then everything looks like a nail. And so, in the context of an interview, let's say you give a researcher a pretty juicy, open-ended research question, and you want to see how they handle it. A good answer is usually multi-method. We're not going to handle it in any one way. We're going to say, "Well, here's a couple of ways we could deal with this. Here's how we could do this in a day, or a week, or a month." We usually don't have a month, but sometimes big research projects go on for that long. "And here are the different sets of methods that we can use."

(00:20:44):
So see where they go. It's actually pretty simple. Most researchers are deeper in one than the others, and sometimes you can make up for those five tools with the team. So you have experts who are T-shaped, but maybe deeper in one or several of those ways. But when I built teams at Meta and at Airbnb, that was my goal: individually, researchers build up those tools, and then as a team we build deep expertise that fills all the gaps.

Lenny (00:21:10):
Coming back to the main premise of your post, one of your big takeaways is, "Researchers need to be much more business oriented, thinking about what helps the business versus the user." Which I think to a lot of researchers will feel really weird. Can you just talk about your takeaways there?

Judd Antin (00:21:25):
So much of user experience practice, not just research but design too, is focused on empathy and very user-centered. This is beautiful. I'm not saying that we should abandon that. I think what I'm saying is there's an overlapping Venn, where you have the user and profit or the business. And what researchers need to do is be way more explicit about finding that overlap. So when researchers ask for advice, they're like, "Well, what should I do to be more business or profit focused?" I say something like, "Did you read the last quarterly report, if it's a public company? Did you listen to the shareholder call?" And they're probably like, "No, it's full of a bunch of language I didn't quite get."

(00:22:12):
And I'm like, "Yep." So there you go. That's the language you need to learn. Scour your Google Drive folder, your internal folder, and look for all of the documents that are about this quarter's, this half's, or next half's strategy. What are the OKRs? Understand the metrics and the conversion funnel, know it backward and forward, because then, if you're in the active conversation, you're proposing. You're saying, "Cool, I hear you asking that research question. I've identified this as exactly the spot in the funnel where I think we need to do work. There's an opportunity here. Or that competitor is eating our lunch with this group of users. I know that because I read the competitive report and I understand it deeply." So those are skills that some researchers have and a lot are building these days, but historically, the last 15 years, it hasn't been a thing we've been as focused on, and I think that's an evolution that needs to happen.
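As a rough illustration of what knowing the conversion funnel backward and forward can look like in practice, here is a minimal Python sketch, with hypothetical step names and counts rather than real Airbnb data, of how a researcher might locate the biggest drop-off before proposing where to point a study:

```python
import pandas as pd

# Hypothetical funnel counts (illustrative only).
funnel = pd.DataFrame({
    "step": ["search", "view_listing", "start_checkout", "book"],
    "users": [100_000, 42_000, 9_500, 6_200],
})

# Step-to-step conversion (NaN for the first step) and
# overall conversion from the top of the funnel.
funnel["step_conv"] = funnel["users"] / funnel["users"].shift(1)
funnel["overall_conv"] = funnel["users"] / funnel["users"].iloc[0]
print(funnel)
```

With these made-up counts, the view_listing to start_checkout step converts at roughly 23%, which is exactly the kind of spot a researcher could point to when framing a pointed research question.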

Lenny (00:23:06):
I think a lot of PMs listening to this are going to be like, "Hallelujah." This is exactly what I've been trying to convince people of. It's what I've been trying to convince my researchers of, and design often falls into this.

Judd Antin (00:23:17):
But Lenny, the opposite is true, too, because you've got to take the average PM who lives in that land all day, every day, and what they do is not in the Venn. I think those are people who are also performing customer centricity and performing user-centeredness a lot, when they're really not interested. And so this is not about researchers. This takes two sides. Fixing this broken system takes everyone, researchers, PMs, designers, everyone at a company, but also the way that organization is structured and integrates itself in a different way. Everybody's got to come to the table.

Lenny (00:23:55):
Such a good point. And you have this actual term that you call user-centered performance, where it's the performance of being user-centered. Can you talk about that and then just what advice you'd give to PMs that, hearing this, are like, "Yes, I love everything you're saying," and then not realizing maybe they're too far in that extreme?

Judd Antin (00:24:11):
User-centered performance is a term I made up, because it's fun to make up terms. And it refers to customer obsession or a user-centered practice that is symbolic rather than focused on learning. So it's hugely common, I would argue. It's work we do to signal to each other how customer obsessed we are, not because we want to make a different decision. And if your listeners are like, "I don't do that," I'm like, "Think about it for a second." Because this is extremely common. It shows up in explicit ways and implicit ways.

(00:24:52):
So explicitly, I would say every time a PM comes to a researcher at the end of a product process and says, "Can you just run a quick user study just to validate our assumptions?" That's user-centered performance. It's too late to matter. That PM is not interested in being wrong at all. It's too late in the game for that. We got to ship it. What they want is to check the box. So any check the box style research is a wild example of user-centered performance. 

(00:25:22):
I would argue every researcher has probably had to do executive listening sessions because a lot of PMs, founders, product people, but designers, too, they want to get close to the customer. And so, like, "Can I do some focus groups? I want to be there. I want to ask them questions." This is 97% performance. It's well-intentioned, but it isn't focused on learning. It isn't going to drive better outcomes or more impact. 

(00:25:49):
And then there's all these implicit ways that people engage in that kind of user performance, too. A lot of it comes down to cognitive biases, confirmation bias, ego. One of my big mantras was, "We don't validate, we falsify. We are looking to be wrong." That is the mindset you should use when you're approaching insights and research. "I want to be wrong. I want you to do research that shows we were off base in the following ways. Tell me exactly how and why in a way that allows me to fix it quickly." But many PMs, many designers are not in that place. They do not want to be wrong. They're looking to validate. And that's user-centered performance.

Lenny (00:26:29):
Oh, man. I think a lot of people are hearing this and feeling exposed.

Judd Antin (00:26:33):
Exposed.

Lenny (00:26:35):
I feel like you're like this Deep Throat person coming forward, sharing these things people don't want to talk about at the office.

Judd Antin (00:26:35):
I know.

Lenny (00:26:42):
There's this quote in your post I'm going to read. "Product managers love to ask for middle range research that they can use to justify decisions they're reluctant to make on their own. UX designers love to ask for middle range research because it fits their model of what proper design process should look like. Executives love to ask for middle range because they don't really understand what research is for, and it helps them do performative user-centeredness. In the end, they will decide based on their own opinions."

Judd Antin (00:27:07):
There is an important place for intuition in product development, of course. The best designers, researchers, and product people develop strong intuition for the product. But you've got to understand, intuition is where all of those biases lie. It's where all your blind spots are. And what great insights people do, what great researchers do when they're next to you all the time, is they'll expose you. I don't have to be the Deep Throat, because you have somebody whose professional job is ... Keeping you honest is probably the wrong way to put it, but somebody whose capabilities are about expanding your horizons, making it so that your intuition is constantly improving, so you don't have to rely on it alone. When your intuition and the evidence collide in a way that either affirms or falsifies the product decision you made, now something really good is happening.

(00:28:00):
And the other thing that is inherent in that quote is I, at Airbnb, wore many hats over the years. I was head of research two different times. I was head of design for guest products. And my last job was I was head of the design studio, so UX research, UX design, writing, localization, they all reported up to me. So I've seen this from many disciplinary angles in the UX field. And researchers aren't the only ones who are guilty of this. I would say design has a ton of performance. And it comes from the fact that we have figured out user-centered design, this process, or design thinking, which IDEO popularized. Like, "That's what we're supposed to do, right? Bezos told us that we, as PMs, had to be customer obsessed. So that's what we're supposed to do."

(00:28:48):
It's a really common and damaging thing when we don't genuinely have that growth learning mindset, and it's easy to sideline researchers. We don't need them in that situation. We've got our guts. Isn't the gut what a great PM, a great founder needs to have? And they do, but they need to be open to the fact that your gut is limited, and biased, and narrow, and wrong sometimes.

Lenny (00:29:13):
The two sides of this are: trust your gut, "I don't need research, I don't need data. I have opinions and my own experience, and I'm going to use the product, and let's just go with what feels right to me," versus purely data-driven, research-driven. For designers and product managers that are maybe listening, do you have any advice for just where to fall on that spectrum and how to best leverage research to inform that opinion?

Judd Antin (00:29:36):
Yeah, I taught a class at UC Berkeley this semester on leadership, and we talk about that a lot, because great leaders develop intuition. It's the pattern-matching part of experience, where you develop heuristics which allow you to make good judgments even if you can't quite explain where that judgment came from. That's what the gut is. But it's also, like I said, where bias comes from, where all the cognitive biases live. There's a list of 151 of them on Wikipedia; I won't name them, but all those thorny things that lead us astray, the ones behavioral economists and social psychologists study, live in the gut. And so the advice is, when you are looking to check your gut, you have to do that thing. A lot of your listeners have probably read Thinking, Fast and Slow. System 1, System 2. Right?

Lenny (00:30:29):
I have it here, right under my laptop, actually, holding up my laptop screen. 

Judd Antin (00:30:32):
That's so appropriate, Lenny. So the secret is not that sexy. It's System 2. So you engage that slow, methodical process in which you do analytic thinking as a means of checking your gut. Slow in the grand scheme of things. Slow meaning not a split second decision, not like months of analysis. That's not what I mean. 

(00:30:54):
The other thing you can do, and there's really great research on this, is you bring in the wisdom of the crowd. So the wisdom of the crowd is a phrase a lot of people are familiar with, and it works in a specific situation. The wisdom of the crowd works when the people involved with the decision are bringing diverse sources of information and judgment to the table. Obviously, if everybody has the same sources of information, then it doesn't matter how many people are out there. So if you want to check your gut, get a bunch of different guts together, get a bunch of different people in the room who can bring evidence and intuition to bear, and have an open, direct, and kind conversation in which we might disagree. You know who's great at that? Researchers.

Lenny (00:31:38):
Leading those discussions essentially, and getting a bunch of people's opinions.

Judd Antin (00:31:41):
Yeah, this is the structural solution I'm talking about, Lenny. I never asked for research teams to have their own separate OKRs. I said two things. Number one, what are the team's? The PMs, the engineers, the designers, and the researchers, everybody should have the same set of metrics for success, because either we're doing it together or we're not. And then I said, "My metric for success is when they won't have that meeting without you." That's my metric for success. If they cannot have that decision-making meeting without the researcher there, that means you've developed influence and strong, trusting relationships; you're an active participant in the process, not just somebody who provides input into someone else's process. And that is when researchers can have huge impact.

Lenny (00:32:29):
I think of the PM role in a similar way, even though people won't have these meetings without PMs, because they're often at the center of a lot of the stuff, but you want to be a PM that people want on their team. There's a lot of teams that are like, "We don't want PMs, we don't need product managers. They just get in the way." And I find that that's only the case when the product manager's not great and not really good at their job, because most great PMs just make everyone's life easier.

Judd Antin (00:32:50):
They do. The grease, I-

Lenny (00:32:51):
The grease.

Judd Antin (00:32:53):
... love it.

Lenny (00:32:54):
You mentioned also, before we started recording, that the biggest challenge for user researchers is in their relationship with their product manager. Can you speak to that and what you've seen there?

Judd Antin (00:33:04):
I'm wary of overgeneralizing, but I can tell you that from my experience and from what I hear, the product research or product insights relationship is one of the most challenged. And I think it comes from the fact that fundamentally, many researchers are just not included in the process that PMs are running. And then, actually, I did some asking around before this podcast, and so I thought, "There are some tropes that researchers have about PMs that are worth PMs knowing, just like four or five of them, the things that researchers know PMs say, which drive us nuts because they're not true." 

(00:33:51):
So the first one is that research just slows us down, research is too slow. This is bullshit. A great research team can do research in a day, a week, or a month. It just depends on what you want to get out of it: How much detail do you need? How many people do we need to talk to? What is the depth or breadth? Do we need to go to seven different countries to talk to our constituencies in Latin America? Well, that's not going to happen overnight, but we don't often need that. The other way to look at it: is it slower to get it wrong and fix it, or to take a hot second to do the work to get it right the first time? So that's BS. Good research doesn't slow us down, it speeds us up.

Lenny (00:34:36):
And also just along those lines, a big part of your premise is you don't need to do as much research as people are doing, like this middle range research that so much time is put into.

Judd Antin (00:34:45):
Yeah. Research can go super fast. So the macro-level research, I hope, is tied to things like annual planning processes. We did a thing at Airbnb for several years that we called Insights 2019, Insights 2020. They were concept car projects. And we spent quite a long time synthesizing the entire year's worth of insights from every place we could get them and then developing with designers and engineers a concept car for five years in the future. So that's a long process.

(00:35:16):
But the micro level, there's so much business value to be derived there, so much business value, and it can go so fast, Lenny, it can go so fast. You can have results in 48 hours on these things. We did a thing at Airbnb. There's a famous story which I'll only tell in the abstract, because I don't want to out anything, but we call it the multimillion-dollar button. And basically we did research which revealed that people weren't going down the purchase funnel because they were afraid. The call to action on the button was making them afraid that it would initiate a purchase when really it was just taking the next step.

(00:35:58):
We changed the text on the button with help from our amazing content design, our UX writing team. We basically changed seven characters and made Airbnb millions of dollars, because what we found out was really simple. It was just like, "Hey, this button feels scary. The CTA on the button feels scary." So that's a great example of how micro research works. And that happened in like 48 hours; we discovered that insight basically overnight. And we were like, "Hm, maybe we should test some other CTAs." We added like 1% to conversion, which is really, really hard to do. So that's a quick example of how that type of quick research can drive a huge amount of business value.
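The real numbers behind the multimillion-dollar button weren't shared, but as a hedged sketch of the statistics underneath a claim like "we added like 1% to conversion," here is a standard two-proportion z-test in Python with hypothetical counts:

```python
import math
from statistics import NormalDist

# Hypothetical A/B test counts (illustrative only, not Airbnb's data).
n_a, x_a = 50_000, 6_000   # control: original button text
n_b, x_b = 50_000, 6_450   # variant: reworded, less "scary" CTA

p_a, p_b = x_a / n_a, x_b / n_b
pooled = (x_a + x_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"lift: {p_b - p_a:+.2%}, z = {z:.2f}, p = {p_value:.5f}")
```

With these made-up counts, the variant lifts conversion by about 0.9 percentage points and the result is comfortably significant, which is the shape of evidence that turns "this button feels scary" into a shipped fix.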

Lenny (00:36:39):
This episode is brought to you by Ahrefs. Many of you already know Ahrefs as one of the top tools for search engine optimization. It's used by thousands of SEOs and companies like IBM, Adidas and eBay. What you may not know is that there's a free version that was made with small website owners in mind. It's called Ahrefs Webmaster Tools. It's free and it can help you bring more traffic to your website. Ahrefs Webmaster Tools will show you keywords that you rank for and backlinks that you can get. It also performs automated site audits to find what issues prevent your website from ranking higher on Google. Every detected issue comes with a detailed explanation and advice on how to fix it. Visit ahrefs.com/awt, set up a free account, connect your website, and start improving it. That's A-H-R-E-F-S.com/A-W-T. 

(00:37:29):
So just to make this even clearer, I think this middle research zone is the stuff that does slow people down, I imagine. It's like, "What are the challenges hosts have with payments on Airbnb?" What you're basically saying is, "Spend your time doing the micro stuff like usability research, and then the bigger stuff that's part of overall planning, part of the planning cycle. It's not like every project you're working on needs a whole research project."

Judd Antin (00:37:55):
Exactly. The micro research should be much more common. A lot of researchers think that that's scut work, that usability is something junior researchers do. I completely disagree. I think we need to get back there as an industry and be like, "When you make a product easier to use, when you discover problems with functionality, business metrics we care about will go up." I've seen it happen. But that's not just work for interns and new grads, that's for sure. 

(00:38:24):
And then the planning process, absolutely. If we're integrated from beginning to end, we can help. And the thing about that middle range, I think you're right. That's the stuff that makes the stereotype that research is slow, and a lot of times it's also because it's just not pointed enough. The researcher can also say in that moment, "I have studied the business plan. I know exactly where, I've seen the metrics trend, I have an idea about exactly where that's going to go." We still need to do that middle range research. The question is valuable, but it's now very pointed and the time is worth it.

Lenny (00:38:58):
Amazing. Okay, I want to hear the rest of these tropes.

Judd Antin (00:39:01):
Okay, research is too slow is the first one. The second one: I can do my own research, why do I need researchers? And that's true; as product people, I hope you are engaging with customers and listening well. But no offense, garbage in, garbage out. The thing is, anyone can talk to a user. That does not constitute research or insights work, because one user can be powerful, but one user can be idiosyncratic. And a researcher knows how to get to the heart of that really quick. They know how to take that conversation, and understand it, and situate it. Sure, democratize research. That's happening. There are tools out there that will let anybody get customer feedback, voice-of-the-customer type stuff. But a researcher is there to help you turn garbage into something that's not garbage and avoid the bias that can come from you just reaching out to your cousin's family and then doing whatever they thought you should do to the product. So that's the second trope.

(00:40:08):
The third one is A/B test everything. And A/B tests are great, but one of my most painful things is to sit in a room full of PMs and data scientists who have just seen the results of an experiment that flipped stat sig, and they're like, "Cool, it was significantly down over this course of time for these users." And then they just start speculating about why that is, because the A/B test rarely tells you why it changed in the way it did. And then this endless flywheel of A/B testing goes on, and I'm like, "Hey, you don't have to guess. I know somebody who can get you an answer, or at least evidence that addresses the question of why we saw the test result we did, in a very short amount of time. Or you could use your customers as guinea pigs, and throw more experiments at them over and over, and spend a long time on it, and come to the same place in the end."

Lenny (00:41:03):
I think a similar critique that PMs often have is that A/B testing is conclusive scientifically and statistically, while user research is just talking to a bunch of people. Why would I trust that? What is your best way to help PMs realize that this is actually very valuable data and you should listen to it? It's not just, you know, a story here and there.

Judd Antin (00:41:26):
Yeah. No, I think they're both right. A/B testing is as close as we can get to making causal claims about products. Research is usually not oriented towards making causal claims, or it should not be. But those causal claims rarely tell you how and why things happen. And if you want to not make that mistake again in the future, you need to know how and why. If you want to build a better product in a way that doesn't just answer this narrow question that an A/B test answered, you need to know how and why. And so you need both. Beautiful partnerships between data scientists and research and insights people are, I think, what we're going to see in that next evolution. And if you set that virtuous cycle up, if you set up the engagement where those people are involved from the beginning, you don't make those mistakes. You get the causal relationship, which is valuable for one reason, and the hows and whys, which are valuable for other reasons.

Lenny (00:42:19):
Awesome. Okay. I think there's two more tropes you had. 

Judd Antin (00:42:23):
One of them is a simple one, which is that everyone loves to quote that, it turns out, totally apocryphal Henry Ford quote about "If I'd asked my users." It turns out, to the best of our knowledge, he did not say that. And-

Lenny (00:42:35):
Really? What?

Judd Antin (00:42:36):
Yeah, I know. Isn't that sad? 

Lenny (00:42:36):
I didn't know that.

Judd Antin (00:42:38):
I know. Sorry to burst your bubble, Lenny. 

Lenny (00:42:40):
Oh, wow. 

Judd Antin (00:42:41):
Who was-

Lenny (00:42:41):
Does anyone say anything? I feel like every quote is-

Judd Antin (00:42:44):
Is apocryphal, now? I know. 

Lenny (00:42:46):
Yeah. What is reality? Geez, can we? Well, let's just- 

Judd Antin (00:42:49):
Okay, maybe he said that. He certainly believed that. That's what the historians say. But the reason that makes researchers so angry is because that's not research. That's not what researchers do. A researcher who's going to ask customers what they want is a bad researcher. You need a different researcher. I've never done that in my career. No one on my team has ever run a study that's like that. So that just makes researchers mad.

(00:43:16):
And then the last one is about post-hoc bias. It's, "We knew this already. That was obvious." And I think a lot about this book, which I would recommend to your listeners. The author is a sociologist at UPenn named Duncan Watts, and the title is Everything Is Obvious: Once You Know the Answer. And it's about hindsight bias. He makes the argument that we rely too much on intuition, heuristics, and pattern matching in a way that is inappropriate to our experience, and it leads us astray. It's like a form of self-gaslighting. And it happens because we end up selectively remembering things and then constructing narratives around them in a way which makes us feel like we already knew that, when we in fact did not.

(00:44:05):
And he talks about another one of those cognitive biases, called the narrative fallacy, which is the idea that people love to make convenient, simple stories about the past. If I asked you about your career, Lenny, and how you got to be this amazing podcast host, you'd be like, "Well, let me tell you about this series of events." And we do that. It's part of how we make sense of our lives and the information around us, but it would probably be a lie in the sense that we all twist the evidence we have to fit the narrative we want to be true, because it's simple, and lovely, and makes us happy.

Lenny (00:44:36):
This is going to sound self-serving, but I find I'm the opposite. I'm like, "I have no idea how this all came about. Here's some things that happened, and somehow I ended up here." But maybe I'm being very modest and trying to not give myself any credit.

Judd Antin (00:44:49):
That's beautiful.

Lenny (00:44:50):
Thank you for these tropes, by the way. This was fun. I didn't know you were going to do that. So that's a fun little collection we've got here.

Judd Antin (00:44:54):
Thanks.

Lenny (00:44:56):
I wanted to ask about this tweet by Patrick Collison that I've brought up a couple of times on this podcast, that I think is really interesting. And his tweet is this: "In my opinion, the best product will stem from a very strong mental model of the domain and user. User research can help you get such a model and validate it along the way. But it's important to view the syllogism of UXR as user research, to improving your mental model of the user, to what product you should build, versus user research tells you what product to build." Does that resonate in any way? Thoughts on that way of thinking about user research?

Judd Antin (00:45:34):
Yeah, there's a double-edged sword we talk about a lot in the research community, which is about making recommendations for design. So the best research doesn't leave it at the what. It tells you the what, the so what, and then the then what. But the problem with that is some researchers go too far in the other direction, where they're like, "We ran this study, it yielded these insights, and therefore this is what we should build." And everyone else on the team is like, "Whoa, whoa, whoa. Glad to hear your thoughts on the matter, but there's a lot going on here. Maybe we should talk about it." And that makes perfect sense. That's a failure of communication.

(00:46:19):
And I think that speaks to the thing that Patrick is saying, is like, "Good research can sometimes tell us exactly what the problem is and exactly how to fix it." An example of that is the multimillion dollar button I told you about. But in a lot of the bigger picture questions, especially the macro ones and maybe also the really pointed middle range ones, the point isn't really, "This is exactly what we should do and this is exactly what we should build." It is, "Let us develop a framework which is based on actual evidence, and then together as a team figure out how we want to experiment our way to a successful product."

Lenny (00:46:53):
To close the loop on this specific thread, what is your advice to teams and researchers to help move out of this reckoning, to move forward, and to help the field, both from a user researcher's perspective and also from that of a company that maybe laid off a bunch of user researchers or is trying to decide what to do with their researchers?

Judd Antin (00:47:14):
Thank you for asking. I think I said this to you earlier: I feel some pressure, as this is maybe the first conversation that you've had specifically about research on this podcast.

Lenny (00:47:23):
Yeah, I think so. 

Judd Antin (00:47:25):
And I want to help. I believe so much in this discipline of research and insights, and I think when I said, "The UX research discipline of the last 15 years is dying," I didn't mean that I think research is dying, far from it. I think that there's a version of it, which we're now moving past and into a new version. We're going through an evolution, as many do. And so the question for me is like, "How can researchers, and the companies, and the other people with whom they work create a new version, a different version, an evolution, which is hugely impactful for the business?" 

(00:47:57):
And so the advice I'd give to researchers about that is: develop diverse research skills, remembering the five, or five and a half, tools that I mentioned earlier; really go deep on that business knowledge, speaking the language of product, and business, and metrics, and understanding exactly how to use your insights like a scalpel; and build those strong relationships, which is not a thing that researchers can do by themselves. It requires two-way engagement, and in a way which allows researchers to do fewer things better.

(00:48:30):
So most researchers that I know are working on teams where they're like, "I'm the only researcher, and I have seven PMs and 20 designers, and I'm trying to do 10 projects." And no one's going to do a good job that way. So researchers have to learn with their partners about how to say no and focus on the most important things. But that's only half of it, right? That's the research side. 

(00:48:51):
I have two thoughts about what companies should be doing. The first one is a little bit of an aside, but not really. One thing I learned through the responses to the article was that everybody came out of the woodwork from the variety of insights disciplines that are out there. I come from a tradition of user experience research, or user research, but there are many insights disciplines in many industries, and they all wanted to claim one type of research or another and say, "Oh, well, we over here in consumer insights or market research have been doing that well for years." And generally I think creating silos is stupid.

(00:49:33):
Actually, I'm curious what you think, because here's the number one thing I heard when I joined Airbnb, and you were there. I did a quick listening tour where I talked to a bunch of product people. And they all said the same thing. They were like, "Listen, we have all these different people throwing insights over the transom. And it's great. We want to hear from the data scientists, from the product specialists, from the customer service people, the voice of the customer, all that stuff. But they're all coming over the side and we don't know what to make of it. It's too much."

(00:50:03):
And that, as much as anything, is an argument for companies to stop siloing research disciplines. So when I joined Airbnb, I set out to create an integrated insights function where it's like, "Let's do UX research. Let's talk about the market and competitors when we have to. Let's integrate smartly with data science functions. Let's integrate all the stuff we're getting from customer service feedback." We brought over what was then the NPS program and said, "Hey, if we're getting customer feedback there, let's just use it all to fuel this one insights machine." So that's the first piece of advice I'd give companies.

(00:50:38):
And the second one, at the risk of sounding like a broken record, is to think differently about that broken cycle. So integrate researchers into a unified, lean process. If the researcher is not there from beginning to end, if there are not strong relationships between product people at every level, design people at every level, engineering people at every level, and somebody who's their insights partner, we're going to fall back into this problem where research is just a service discipline, we're not extracting the maximum value, it comes too late, we don't know what questions to ask, we're ignorant about what research can do. And so creating that integrated, lean process, where a researcher is arm in arm from the beginning, is the most important advice I'd give.

Lenny (00:51:19):
That last piece may be the answer to this next question, but the question is: how can product managers be better partners to user researchers and get more leverage out of them?

Judd Antin (00:51:31):
I think that is in many ways the answer: making sure that they are creating a process for their products that integrates user researchers and insights from beginning to end. Also, being willing to partner with the researcher on ruthless prioritization. I used to say that a full plate for a researcher was probably three things: two big projects and a small side project. More than that, and your researcher is probably not doing a very good job. And a project may take 48 hours; that's okay. But they need your help to prioritize, and they need you to participate. Great PMs will take the time to be with researchers, to go into the field, even to travel. Did you ever do that, Lenny?

Lenny (00:52:15):
I did. I went with Louise, who introduced us, and who basically told me to chat with you about this topic.

Judd Antin (00:52:23):
Thanks, Louise.

Lenny (00:52:24):
Thanks, Louise. We did a whole tour: the leads of our team went to Paris to do a bunch of focus groups and a bunch of user research behind actual mirrors. I'd never done that before that trip, and it was amazing. We learned a ton.

Judd Antin (00:52:38):
Can I tell you a quick story about behind the mirror? 

Lenny (00:52:40):
Please.

Judd Antin (00:52:41):
This is back from when I was at Facebook. Those were the high times there; it was like 2012, '13, News Feed was really taking off, and ads were going into News Feed. And I was a leader of a team that was working, among other things, on how to address post quality. Like, "How do we think about what's a good post and how do we get feedback about it?" One thing that you can do on Facebook is hide a post. So there was a team of engineers that thought, "This is easy. Let's look at the posts that are hidden the most and use that as the signal of what's a good post on Facebook." Seems reasonable. And something tripped me up on this one. And so I did two things.

(00:53:26):
So the first thing I did is I looked at the distribution of hiding by user, and found out that it's power law distributed, like everything on the internet. There are a few people on Facebook who hide a ton, and then most people don't hide at all. We called them super hiders. And so we said, "Let's find super hiders around the office, and we'll get a super hider in, and we'll do a really traditional user interview." We just wanted to see. So literally the first person who walked in, I remember, because this was a person who had those fingernails that are so long, you don't know how they can do touchscreens, but they did. They were amazing at it. And it was one of those rooms with the glass. And I insisted that the eng directors and the product people come watch, and they were willing, whatever.
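(For readers who want the mechanics: here's a minimal sketch of the kind of concentration check Judd describes, assuming nothing more than a log of user IDs, one entry per hide event. The function name, data shape, and the top-1% cutoff are illustrative assumptions, not Facebook's actual analysis.)

```python
# Hypothetical sketch, not Facebook's actual analysis. Assumes
# `hide_events` is a list of user IDs, one entry per hide action.
from collections import Counter

def hide_concentration(hide_events):
    """Summarize how concentrated hiding is across the users who hide."""
    counts = sorted(Counter(hide_events).values(), reverse=True)
    total = sum(counts)
    top_1pct = max(1, len(counts) // 100)  # the heaviest hiders
    return {
        "users_who_hide": len(counts),
        "total_hides": total,
        "median_hides_per_user": counts[len(counts) // 2],
        "share_of_hides_from_top_1pct": sum(counts[:top_1pct]) / total,
    }

# With a power-law-like distribution, a tiny share of "super hiders"
# accounts for most hides, so raw hide counts are a biased quality signal.
```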

(00:54:20):
So everybody's behind the glass, and I'm there with them, and the excellent researcher is in the room, and the participant comes in, and we're just doing a traditional think-aloud study. And so the researcher goes, "Hey, can you open up your Facebook app? We would just love to see what your experience is like." So she opens up Facebook, and we're watching, and she looks at the first story, and she hides it. She goes to the second story and she hides it. And this went on for a while.

(00:54:43):
And she's definitely using Facebook, but every time she'd finish with a story, she'd hide it. And the people in the back room were starting to chatter. They're like, "Wait, what? What is happening right now?" And like the good researcher they were, they let it continue, and then asked, "Whoa, can you tell me what you're thinking right now?" Come to find out, she was like, "Well, I hid that story because I'd seen it already." The model she was going for was inbox zero, which was sad, because the feed was infinitely scrolling. She would never get there.

(00:55:16):
And the reason I liked that story is because the people in the back room had their minds blown. It was not that we assumed that was common behavior (this person could have been unique), but because those people were there, experiencing the research, that N of one allowed them to burst their own bubble and realize, "Okay, we can't think so naïvely about hides as a signal anymore." And we came up with a better solution.

Lenny (00:55:47):
That is an awesome story and such a good example of how you don't need statistical significance to get massive insights. One example just gives you a, "Wow, this might be exactly what's happening. Let's go validate that," versus, "We are confident, 100%, this is what happened." I love that. It actually reminds me, in the mirror study I was talking about in Paris, there was a Facebook element to it, too. We were trying to figure out how to make hosts feel more comfortable accepting guests who book instantly. And one of our theories was that if they were connected on Facebook, they would be more comfortable letting someone book instantly.

Judd Antin (00:55:47):
Yeah.

Lenny (00:56:21):
And we're just like, "Hey, what if you were to connect Facebook and see if they're friends?" And everybody in Paris was very afraid of connecting and giving Facebook any data, way ahead of what the US hosts were feeling. 

Judd Antin (00:56:34):
Yeah.

Lenny (00:56:34):
So it just made it very clear that nobody wanted to actually give Facebook any data. It was very anti-Facebook at that point.

Judd Antin (00:56:40):
Yeah, that's so interesting. Germany and France were always our bellwethers for what the rest of the world would be thinking with data privacy concerns.

Lenny (00:56:49):
Oh, man. Okay, a couple more things. A lot of this started with the layoffs within user research. And I think, between the lines, there's a sense that teams don't need as many researchers as they hired during the ZIRP era. The question in everyone's mind is, "How many researchers do we need? What is a good ratio?" I imagine there's not a simple answer here, but what's your general advice to companies on how many researchers is right?

Judd Antin (00:57:15):
This is a thing I've thought a lot about, especially in my role as the head of the design studio; that was my fundamental question. It's like, "You have all these writers, designers, researchers. How do you structure them, how many, and where, and who works on what?" And the organizing principle for me was always relationships. You know you have enough when the people who need a consistent research partner have one. And I would much rather create pain in that situation than spread someone too thinly. So my advice was always like, "Don't try to stretch one researcher to cover an entire product space. Pair a researcher up with somebody who's going to involve them in a consistent, engaged process, and let them go to work, and see the impact they're going to have, but protect their time." And then other people are like, "Wait a second, that person's doing great work. I want some of that."

(00:58:02):
And creating that pain for them, because it's a pain of loss, is the number one way to grow headcount. That's how I always approached getting more headcount: not arguing abstractly for why research is important, but asking partners who wish they had it to do the arguing for me. And so you're right, there isn't a clean answer for like, "Hey, this is the right ratio," because it really depends on the nature of the product. Like, "Is it an early stage product? Is it a late stage product? Are we talking about a startup or a late stage company?" But I would argue there's always room for a researcher. Lenny, I'll tell you, and I used this in a keynote talk I gave recently. You recently published a list of, I think, about 20 B2B companies and their first 10 employees. Do you remember doing that?

Lenny (00:58:51):
Absolutely.

Judd Antin (00:58:52):
Do you remember how many researchers are anywhere on that list? I'll give you a hint.

Lenny (00:58:57):
Not too many.

Judd Antin (00:58:57):
It's between zero and two. It's one. There's one researcher on that list, anywhere. Anywhere. And that's messed up to me. Now, look, it's just these 20 companies, and each is in its own space, so I'm not going to overgeneralize. But a researcher can drive incredible value no matter what stage a company is at, because a good researcher makes you go faster, not slower, and they drive impact because they answer questions which are impossible to answer any other way. That's true if you're a startup. It's true if you're a late stage company. Now, if it's your first 10 employees, one researcher is going to go a long way. As you grow, making sure that you're matching up researchers so that they have strong partners in the key parts of the business is the best way to figure out if you have enough.

Lenny (00:59:45):
Interesting. So your advice is, as you're starting a company, that you'll have a lot more leverage and move faster hiring a researcher versus, say, another engineer, though you'd be trading off, essentially, since that's what most early hires end up being.

Judd Antin (00:59:58):
I am reluctant to overgeneralize, and I would say I know many founders in startup mode who are like, "I know what I need to build. The problem is that I need people who can help me execute." And I think that's right. Everything's a trade-off. But imagine that you could have that Swiss army knife at your disposal. Maybe you've got an MVP out the door and you're looking to make your first major iteration, or, like many startups, you need to pivot. This is where it's like, "Hey, you don't have to do that alone." We deify startup founders who pivot appropriately, but I think that is what we might call moral luck, where we deify the ones who got it right even though they made exactly the same decisions as the ones who got it wrong.

(01:00:46):
And the fact of the matter is, if you have an insights person with you who has that Swiss army knife of tools, you're not in it alone. You don't have to guess. Ultimately, it will still come down to a tough decision that you, the founder, have to make, but you can have evidence that bears on that decision, which you wouldn't be able to get any other way.

Lenny (01:01:05):
To close out on this thread, I have just a couple more questions. I think one of your big messages to researchers is, "You can be empowered. It's up to you to do the right sort of research and to move your career in the right direction, not to become a researcher people don't need." And there's this quote that you have at the top of your post, where, I guess the way you put it is, "I know what you're thinking: they just don't get it. We're so misunderstood. Our plight is to deliver insights that users use to drive business value while we're forgotten, never driving the roadmap, no seat at the table, consistently miscast, only to be laid off in the end." And what I'm hearing from you is, "You can change that. You can push back on doing research that isn't actually contributing." But let me ask you, what's the lasting advice you would leave researchers with to be successful?

Judd Antin (01:01:56):
Yeah, it's tough to be operating in a broken system, and so I feel that response, where you feel kind of powerless. But I think that's not likely to lead us past this moment to the next evolution of research. So it's like, I don't blame any researcher at all for being in the spot they're in. It's been a tough go. However, crying about our lot is not going to get us anywhere. The point of the article for me, and this is advice I give companies all the time when I consult with them, is like, "Hey, we can set this up in a different way, which responds to the current environment and will drive a huge amount of impact." Now, that takes companies making the right choices. It also takes researchers owning up and developing skills: pushing back, understanding which research can have the most value, developing the knowledge and language around the business, becoming more influential, being excellent communicators.

(01:02:55):
It's one of the things I would evaluate the most in hiring, especially for research leaders, because I needed them to show and teach by example: it isn't just about rigorous research. It's like the tree falling in the forest with no one there to hear it; you need to communicate research effectively, and you need to do it in a way that's appropriate to the audience. Because if I'm talking to you, Lenny, it's different than if I'm talking to Brian Chesky at Airbnb. And so I've got to be able to give that presentation effectively, get right to the heart of it, and speak the right language. So if you're a researcher, it's not hopeless. Actually, the future of the discipline is so bright, and we can help it along by continuing to develop these different skills as companies build a model that's more inclusive.

Lenny (01:03:38):
Awesome. Okay. I have one random, tangential question about NPS. You have strong opinions about NPS, and I just wanted to hear your perspective on the value of NPS and your experience with it.

Judd Antin (01:03:50):
Yeah. I do have a strong opinion about NPS. I like to say, "NPS is the best example of the marketing industry marketing itself." And the problem is this threatens many people's livelihoods, because there's an entire industry of consultants and software providers that want you to believe NPS is a useful and accurate metric. The problem is, the consensus in the survey science community is that NPS makes all the mistakes. It's a garbage in, garbage out problem. The likelihood-to-recommend question is bad for a whole variety of reasons. It's bad because it's a zero-to-ten scale. It's bad because it's usually unlabeled; we label the poles, but that's not the gold standard for research. It's bad because it's 11 items.

(01:04:37):
And there are a couple of problems with that. Number one, we find that precision goes down after five items on average, maybe seven. Number two, especially on mobile, if you're taking this survey, what percentage of those options are below the fold? We are not going to get accurate survey data. So from a survey perspective, it's really bad. There's also this intuition, which is like, "How likely are you to recommend Windows 11 to your friends and family?" I am not a person who goes around recommending operating systems. The question is fundamentally flawed.

(01:05:11):
The argument is that that question is a good indicator of loyalty, but there's a really simple solution, Lenny. Customer satisfaction, a simple CSAT metric, is better. It has better data properties, it is more precise, and it correlates better with business outcomes. I wanted to prove this. This is something survey scientists know and marketers don't want you to know. So we did the work with Mike Murakami, who led survey science at Airbnb, and he's still there, great researcher. We basically redid all that work to find out if all of it was true just for Airbnb. And it is. It's simple: don't ask NPS, ask customer satisfaction.
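(A quick illustration of how the two metrics differ mechanically, since the formulas rarely get spelled out. This is a generic sketch using the standard NPS buckets, promoters 9-10 and detractors 0-6, and a top-two-box CSAT on a 1-to-5 scale; these are common conventions, not Airbnb's or Mike Murakami's actual analysis.)

```python
# Generic sketch of the standard formulas -- not Airbnb's analysis.
def nps(answers):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from 0-10 likelihood-to-recommend responses."""
    promoters = sum(1 for a in answers if a >= 9)
    detractors = sum(1 for a in answers if a <= 6)
    return 100 * (promoters - detractors) / len(answers)

def csat(answers):
    """CSAT: share of respondents choosing the top two boxes (4 or 5)
    on a labeled 1-5 satisfaction scale."""
    return 100 * sum(1 for a in answers if a >= 4) / len(answers)

# Toy example with seven respondents per survey:
print(round(nps([10, 9, 8, 7, 3, 10, 6]), 1))   # 14.3
print(round(csat([5, 4, 4, 2, 5, 3, 4]), 1))    # 71.4
```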

Lenny (01:05:50):
And the customer satisfaction question, what's the actual question, just to make sure people have it?

Judd Antin (01:05:54):
Overall, how satisfied are you with your experience with Airbnb? Or it could be some version of that, like, "Overall, how satisfied are you with your experience with customer service when you had a problem?" So there could be a more specific version of that question, but those questions have better properties. And a lot of people say, "Well, hey, everybody's using NPS, so at least it gives me a benchmark, because I can compare my NPS to industry NPS." The problem with that is the research shows that NPS is idiosyncratic: it goes up and down in ways that we don't understand, and there's a lot of inconsistency in how it's asked, which creates variation in the data. That means it's not apples to apples, so you can't even compare your NPS meaningfully to somebody else's.

Lenny (01:06:37):
I love these hot takes. I'm curious to see who comes out of the woodwork, too, when-

Judd Antin (01:06:41):
People are going to be so mad, Lenny.

Lenny (01:06:42):
I love that. Yeah, I've heard this many times, and people don't talk about it. Okay. Is there anything else you want to share or leave people with before we get to our very exciting lightning round?

Judd Antin (01:06:54):
Can I? Yeah, I want to add one thing if I could, because this has come up on your podcast a few times recently, which is the idea of people doing their own product walkthroughs. Should a PM just rely on their own dogfooding of the product and their own walkthrough to figure out how to fix it? A couple of times recently this has come up, and the consensus seems to be, "Yes, this is a good thing." And I have a contrarian opinion there, too. I think it is really important for everyone to dogfood their own products. The problem is relying on your intuition about those products. The thing most PMs have trouble with is realizing: you are nothing like the user. You are nothing like them, in ways that will bias how you think about what's good and bad in your product, in ways that you can't necessarily recognize. Some problems with a product, you only need a pulse to recognize.

(01:07:57):
And most good PMs that I know have a pulse, so cool. But a lot of problems require context of use, priorities, and constraints that you just don't have and can't imagine purely on the basis of your own usage. So what I think that means is that you should definitely dogfood your own product. Doing product walkthroughs to identify lists of potential issues is a great thing to do. But prioritizing that list, figuring out which ones are more or less of a problem, and for whom, is an area where you should be extremely wary of relying on your own opinion, expertise, or intuition.

Lenny (01:08:39):
Thank you for sharing that. It's definitely come up a bunch on this podcast, so I think that's an important lesson for people to take away. Anything else before we get to our very exciting lightning round?

Judd Antin (01:08:49):
I appreciate you, Lenny. Thanks for having me on. 

Lenny (01:08:51):
I appreciate you, Judd. Well, with that, we've reached our very exciting lightning round. Are you ready?

Judd Antin (01:08:56):
I am ready.

Lenny (01:08:57):
What are two or three books that you've recommended most to other people?

Judd Antin (01:09:02):
I recently read a business book by Barbara Kellerman called Bad Leadership. And what I love about it is that we spend a lot of time talking about good leaders, and she really dives into the worst leaders and what makes them bad, in a way that I think is really valuable for everybody.

(01:09:19):
I'd also recommend, I read a lot of fiction, so two recommendations there. One, a recent Pulitzer Prize winner: Demon Copperhead by Barbara Kingsolver. It's an outstanding read that is also really sad, and moving, and illustrative, especially if you want to understand rural poverty. And then, on the completely other side of the fiction spectrum, if you're interested in science fiction, which I am, read The Murderbot Diaries. It's about a sarcastic killer robot, and who doesn't love them?

Lenny (01:09:48):
I love these fiction recommendations. I feel like we need more of these on the podcast, so thank you. 

Judd Antin (01:09:52):
Yeah. Everybody goes to business books.

Lenny (01:09:53):
Yeah, absolutely. What is a favorite recent movie or TV show that you really enjoyed?

Judd Antin (01:09:59):
We recently watched The Last of Us and it blew our minds. I watched it after I played the video game, at long last. If you are a person who plays video games and you haven't played The Last of Us, play it. If you don't know, the show is based on the video game, not the other way around.

Lenny (01:10:14):
Do you have a favorite interview question you like to ask candidates that you're interviewing?

Judd Antin (01:10:19):
Think of a topic that you had to explain lately that was the most complex, and then explain it to me like I'm five. There are a lot of ways to vary that question. I've asked it of VP and C-suite candidates in multiple disciplines, and sometimes it's related to the conversation; I might ask them to explain something complicated about quantum computing, or music theory, or a complex business decision. But the reason I like it is I want to see if somebody can break a complex problem down in a really simple way and give me an intuition for it in a short amount of time. I think that is a differentiator between good and great for many people.

Lenny (01:11:02):
Do you have a favorite product you recently discovered that you really like?

Judd Antin (01:11:06):
Yeah, this is a really weird one, but my whole family started indoor rock climbing recently, and there's a challenge you have when you're belaying on top rope, which is that you're looking up all the time. So they make these glasses, called belay glasses, that have an angled mirror embedded in the lens, so that you can look straight ahead and the view you see is up toward the person you're belaying. I just thought that product is so perfect for that. That's a niche problem, and there isn't a better way to solve it.

Lenny (01:11:37):
Do you have a favorite motto that you often come back to, that you share with friends either in work or in life that you find useful?

Judd Antin (01:11:44):
Yeah. This is going to seem like pandering, Lenny, but I don't know if you remember a conversation that you and I had, it must've been eight years ago. I remember where we were sitting. And it was about stoicism. Do you remember this? Anyway, we had this conversation.

Lenny (01:11:59):
I don't, but I was into stoicism for a while.

Judd Antin (01:12:01):
I know you were, because we talked about it. And so the motto comes from stoicism, which is basically, "Focus on the things you can control and ignore the rest." A lot of people think of this as the serenity prayer, but that was a 20th-century invention; Epictetus was writing about this nearly two thousand years ago. And I think about it all the time. So much of the stress, and pain, and worry that we have in life comes from things we can't control. So I try to let those things go.

Lenny (01:12:34):
Amazing. I learned that lesson from 7 Habits of Highly Effective People: the importance of thinking about these circles of what you can control, what you can influence, and what you have no control over, and there's no reason to think about that last one.

Judd Antin (01:12:46):
Absolutely.

Lenny (01:12:47):
Judd, this was everything I hoped it would be. We got into some really good stuff. I'm excited to hear how people react. Two final questions: where can folks find you if they want to learn about what you're up to these days, and how can listeners be useful to you?

Judd Antin (01:13:01):
Yeah. Thanks for asking those questions. People can find me at juddantin.com. That's the best way to find out what I'm up to. These days, I'm a consultant. I help people with UX strategy, org design, and crisis management. Somehow I love dealing with other people's dumpster fires, and I've found that I'm constitutionally good at it. So juddantin.com is the place to find out. I also write a Medium blog that you can find at onebigthought.com, where you'll find a lot of the topics we talked about today, including the original post that started all this. If there's one thing I could ask your listeners to do, it's to get next to your researcher. I just think if you build those relationships and involve a researcher or insights person early and often, beautiful things will happen for you and for the business. So that's the thing everyone can do for me.

Lenny (01:13:56):
I love that. I've always done that. I've loved the researchers I've worked with, many of them reporting to you. So, a beautiful takeaway. Judd, thank you so much for being here.

Judd Antin (01:14:07):
Lenny, thank you. It's been a pleasure.

Lenny (01:14:09):
Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.