May 2, 2024

This will make you a better decision maker | Annie Duke (author of “Thinking in Bets” and “Quit”, former pro poker player)


Annie Duke is a former professional poker player, a decision-making expert, and a special partner at First Round Capital. She is the author of Thinking in Bets (a national bestseller) and Quit: The Power of Knowing When to Walk Away and the co-founder of the Alliance for Decision Education, a nonprofit whose mission is to improve lives by empowering students through decision skills education. In our conversation, we cover:

• What Annie learned from the late Daniel Kahneman

• The power of pre-mortems and “kill criteria”

• The relationship between money and happiness

• The power of “mental time travel”

• The nominal group technique for better decision quality

• How First Round Capital improved their decision-making process

• Many tactical decision-making frameworks

Brought to you by:

Vanta—Automate compliance. Simplify security.

UserTesting—Human understanding. Human experiences.

LinkedIn Ads—Reach professionals and drive results for your business

Where to find Annie Duke:

• X: https://twitter.com/AnnieDuke

• LinkedIn: https://www.linkedin.com/in/annie-duke/

• Website: https://www.annieduke.com/

• Substack: https://www.annieduke.com/substack/

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Annie’s background

(03:53) Lessons from Daniel Kahneman: humility, curiosity, and open-mindedness

(09:15) The importance of unconditional love in parenting

(15:15) Mental time travel and “nevertheless”

(20:06) The extent of improvement possible in decision-making 

(24:54) Independent brainstorming for better decisions

(35:36) Making sure people feel heard

(42:41) The “3Ds” framework to make better decisions

(44:49) Decision quality

(55:46) Improving decision-making at First Round Capital

(01:05:05) Using pre-mortems and kill criteria

(01:10:15) Making explicit what’s implicit

(01:10:55) The challenges of quitting and knowing when to walk away

(01:19:23) Where to find Annie

Referenced:

• Daniel Kahneman, Who Plumbed the Psychology of Economics, Dies at 90: https://www.nytimes.com/2024/03/27/business/daniel-kahneman-dead.html

• Adversarial collaboration: https://en.wikipedia.org/wiki/Adversarial_collaboration

• Does more money correlate with greater happiness?: https://penntoday.upenn.edu/news/does-more-money-correlate-greater-happiness-Penn-Princeton-research#

• Income and emotional well-being: A conflict resolved: https://pubmed.ncbi.nlm.nih.gov/36857342/

• Strategic decisions: When can you trust your gut?: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/strategic-decisions-when-can-you-trust-your-gut

• Cass Sunstein on X: https://twitter.com/CassSunstein

• Dr. Becky on Instagram: https://www.instagram.com/drbeckyatgoodinside

• A framework for finding product-market fit | Todd Jackson (First Round Capital): https://www.lennysnewsletter.com/p/a-framework-for-finding-product-market

• First Round Capital: https://firstround.com/

• Brett Berson on X: https://twitter.com/brettberson

• Renegade Partners: https://www.renegadepartners.com/

• Renata Quintini on X: https://twitter.com/rquintini

• Roseanne Wincek on X: https://twitter.com/imthemusic

• Josh Kopelman on X: https://twitter.com/joshk

• Bill Trenchard on X: https://twitter.com/btrenchard

• Linnea Gandhi on X: https://twitter.com/linneagandhi

• Maurice Schweitzer on X: https://twitter.com/me_schweitzer

• Problems with premortems: https://sjdm.org/presentations/2021-Poster-Gandhi-Linnea-debiasing-premortem-selfserving~.pdf

• Create a Solid Plan on How to Fail Big This Year: https://www.forbes.com/sites/forbesfinancecouncil/2020/02/07/create-a-solid-plan-on-how-to-fail-big-this-year/

• Quit: The Power of Knowing When to Walk Away: https://www.amazon.com/Quit-Power-Knowing-When-Walk/dp/0593422996/

• Richard Thaler on X: https://twitter.com/R_Thaler

• Stewart Butterfield on X: https://twitter.com/stewart

• Glitch: https://en.wikipedia.org/wiki/Glitch_(video_game)

• How the Founder of Slack & Flickr Turned Colossal Failures into Billion-Dollar Companies: https://medium.com/swlh/how-the-founder-of-slack-flickr-turned-failures-into-million-and-billion-dollar-companies-7bcaf0d35d66

• The Most Fascinating Profile You’ll Ever Read About a Guy and His Boring Startup: https://www.wired.com/2014/08/the-most-fascinating-profile-youll-ever-read-about-a-guy-and-his-boring-startup/

• The Alliance for Decision Education: https://alliancefordecisioneducation.org/

• Make Better Decisions course on Maven: https://maven.com/annie-duke/make-better-decisions

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Transcript

Annie Duke (00:00:00):
It's so incredibly necessary in improving decision quality to take what's implicit and make it explicit. It's not that intuition is crap, your intuition is sometimes right. If you don't make it explicit, then you don't get to find out when it's wrong.

Lenny Rachitsky (00:00:12):
When you look at companies that have read your book, what do you find are the brainwashing tactics that really stick?

Annie Duke (00:00:16):
People generally think the purpose of a meeting is for three things: discover, discuss, decide. The only thing that's ever supposed to happen in a meeting is the discussion part.

Lenny Rachitsky (00:00:25):
Something that comes up in product a lot is this idea of pre-mortems.

Annie Duke (00:00:28):
So a pre-mortem, it's great only if you set up kill criteria. Commit to actions that you're going to take if you see those signals.

Lenny Rachitsky (00:00:35):
You have a very interesting framework for how to think about decision quality when the outcome is very long-term.

Annie Duke (00:00:40):
There is no such thing as a long feedback loop. And the way you choose to shorten the feedback loop is to say, what are the things that are correlated with the outcome that I eventually desire?

Lenny Rachitsky (00:00:53):
Today, my guest is Annie Duke. Annie is the author of the bestselling book Thinking in Bets, and also her more recent book Quit: The Power of Knowing When to Walk Away. She's also a special partner at First Round Capital, which we spent some time on and is incredibly fascinating. Prior to this part of her career, she was a professional poker player. She's won over $4 million in tournaments, including winning a World Series of Poker bracelet, and she's the only woman to have won the World Series of Poker Tournament of Champions and the National Heads-Up Poker Championship. Currently, she spends her time helping companies make better decisions. In our conversation, we cover the many lessons she's learned from her friend Daniel Kahneman, who recently passed away; the simple change she's found has the most impact on a company's ability to make better decisions; how to make better decisions quickly when the feedback loop is very long, and why she doesn't actually believe in long feedback loops; and how she changed the way the partners at First Round Capital make decisions, which is incredibly interesting.

(00:01:51):
Plus, why when you're thinking about quitting, that probably means you've already waited too long and you should have quit a while ago. I learned a ton from this conversation and this is definitely going to change the way I think about a lot of things. With that, I bring you Annie Duke after a short word from our sponsors. And if you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. It's the best way to avoid missing future episodes and it helps the podcast tremendously. This episode is brought to you by Vanta. When it comes to ensuring your company has top-notch security practices, things get complicated fast.

(00:02:27):
Now you can assess risk, secure the trust of your customers, and automate compliance for SOC 2, ISO 27001, HIPAA and more with a single platform, Vanta. Vanta's market-leading trust management platform helps you continuously monitor compliance alongside reporting and tracking risk. Plus, you can save hours by completing security questionnaires with Vanta AI. Join thousands of global companies that use Vanta to automate evidence collection, unify risk management, and streamline security reviews. Get $1,000 off Vanta when you go to vanta.com/lenny. That's vanta.com/lenny.

(00:03:11):
This episode is brought to you by UserTesting. Get fast feedback from real people throughout the development process so that you can build the right thing the first time. Companies are being asked to do more with less. They need to move more quickly while building experiences that meet changing customer expectations, all while minimizing risk and costly rework. With UserTesting, you have a trusted partner who provides user research teams with the skills, tools, and data to be able to articulate the value of user research to the business so that you can make an impact and build the best experiences with confidence. Get started today at usertesting.com/lenny. Annie, thank you so much for being here and welcome to the podcast.

Annie Duke (00:03:58):
Thank you so much for having me.

Lenny Rachitsky (00:03:59):
It's my pleasure. I was telling a bunch of people that you were coming on this podcast and every single one of them was so excited that we were doing this, so I'm quite excited that this is happening. I want to start with a question about Daniel Kahneman. I know you two were very close. You did a podcast together. I know you grew to be good friends over the past few years. Sadly, he passed away about a week ago at this point.

Annie Duke (00:04:21):
Yeah.

Lenny Rachitsky (00:04:21):
I'm curious, what are some lasting lessons that you took away from your time with him? Maybe one or two things that stick with you and you think will stick with you?

Annie Duke (00:04:31):
It's hard for me to describe properly how humble this man was, and it seems so impossible because his influence was so huge and his intellect was so huge and his insight was so change-making in the areas that he worked in. But he was so humble; he really wanted to hear what you thought. He would say, "I don't know a lot," and he would change his mind. It's funny, I got asked in a class that I was teaching about Thinking, Fast and Slow, and someone was talking about how there are a lot of studies in there that didn't end up replicating.

(00:05:19):
And what I've seen from a lot of other people in the field, whose work is defined by some line of research that didn't replicate, is that they're really arguing against the fact that it doesn't replicate, just trying to justify it or say, "But no," or this and that. There's this real lack of open-mindedness to the idea that, you know what? Maybe it turns out not to be true, and that should be okay. But the thing about Danny was, when you talked about it, he was the first one to tell you, "I wish priming wasn't in the book. It doesn't replicate." He was just so open-minded to the idea that he knows what he knows now, but he knows that he doesn't know everything.

(00:06:06):
Another thing that people don't know about him outside of academics, I think, is that he was one of the pioneers of something called an adversarial collaboration. So one of his favorite things, and he in some ways invented it, was to find someone who really disagrees with you and then write a paper with them and try to figure out with them what's the study that would resolve the issue. So an example of that was the work Danny did on happiness as it relates to wealth. Kahneman and a collaborator, Angus Deaton, published a result that got a lot of press about the relationship between money and happiness, and the idea was that money doesn't buy happiness beyond $75,000. That got published in 2010. I'm sure if you ran that exact same study, it would be more than $75,000 now because it would adjust for inflation. But the idea was that once your basic needs are met and you're not that worried about paying rent and so on and so forth, more money is not going to have that big of an effect.

(00:07:09):
But then a decade later, Matthew Killingsworth said, "That's not true; actually, money and happiness are correlated through all the income levels. The more money you have, the happier you tend to be." Okay, so this is a classic example where people will tear each other down sometimes, saying that guy's wrong and whatever. Instead, Kahneman was like, "All right, let's do it together." And they joined forces, along with Barb Mellers, to basically try to resolve that issue. He was just always seeking out, with everything, "Why am I wrong?" And he was eager and he was excited and he wanted to...

(00:07:56):
I remember having a lunch with him where I realized after the lunch that the whole lunch was him asking me questions about my work. Why is he asking me questions about my work? Right? He's Danny Kahneman. But he was just so curious about other people and he just made time for everybody. He just loved the people around him so much. And I think that in the end, it's hard to believe, but I just think that that dwarfs his body of work, which is spectacular. But he as a human being was just so spectacular. And I do think you hear all the time about people who are so incredibly brilliant. It's the brilliant but asshole thing, and it was just so the opposite with him. He was so kind.

Lenny Rachitsky (00:08:51):
That's amazing. And it feels like there's probably a strong correlation between being curious, always looking for how you're wrong, and finding time to spend with people to debate and discuss and ask questions. It feels like there's going to be a strong correlation between that and finding really interesting insights and uncovering new things.

Annie Duke (00:09:10):
Yeah, yeah, yeah. [inaudible 00:09:12].

Lenny Rachitsky (00:09:11):
Well, thank you. Thank you for sharing all that. I wish I had known him. Let's talk about decision-making. So there's two areas I want to spend time on. As one would not be surprised, decision-making and quitting. And I know that you've been on a lot of podcasts. I imagine many listeners have read your book or have heard a lot about the stuff that you teach. I'm going to try to come at this from a bunch of different directions and do it a little differently. First of all, I know you're a parent. As last I heard, you have four kids. I just had a kid, he's almost 10 months now.

Annie Duke (00:09:43):
Oh, congratulations.

Lenny Rachitsky (00:09:44):
Thank you. I'm curious, what frameworks do you find most helpful in parenting and in raising your kids? Which ones have stuck with you?

Annie Duke (00:09:53):
Well, let me just first off say I get a lot of people who are pregnant who come and ask me for my advice about parenting. And I'll tell you the two pieces of advice I give people. The first is this, there's all sorts of parenting books, there's all sorts of styles. A book needs to sell, so they need to sell you something new or different or whatever, okay. But the only thing that matters really is that your kids know you love them, really know deep in their bones, know you love them.

(00:10:32):
And we can argue breastfeeding versus not breastfeeding or co-sleeping versus not co-sleeping or attachment parenting or sleep training, we can talk about all that stuff, right? Do you homeschool? Do you send them to private school? Do you send them [inaudible 00:10:49]? We can have lots of [inaudible 00:10:51], but it's all dwarfed by your children knowing that you would lay down in the street in front of a bus if it meant that they would be alive and happy. That's the number one thing I say. Number two is: at some point, you will drop your baby on its head. So I bet this has happened to you already.

Lenny Rachitsky (00:11:18):
Well, yeah. Sort of. Okay.

Annie Duke (00:11:18):
Sort of, right.

Lenny Rachitsky (00:11:18):
Keep coming.

Annie Duke (00:11:20):
So here's the thing about babies: babies can't move. And so you get very used to it; as a mom or a dad, you're sitting on the couch and you're folding laundry and your baby is right here next to you and your baby can't move. And then one day, without warning, your baby can roll over. And you think it's fine because you put a pillow there or something, but you turn your back for a second because you're like, whatever, and then your baby falls on its head. It happened with every single one of my children. And they don't fall that far and whatever, and they're totally fine and you're not a bad parent. It's just this: there are things you don't know, and stuff's going to happen, and it's not going to be perfect. And every little thing isn't going to have some lasting impression on your child, because at some point, you will drop your baby on its head and you will not mean to, you didn't intend to.

(00:12:21):
But in my case, it was always like, oh my gosh, my baby can roll over. I did not think that was going to happen today. And gosh, by the time I got to the fourth one, it's like I'm chasing after one baby to try to get the diaper on and this and that, so then the other baby hits its head. So I think there's just a lesson in that which is like it's a small thing, don't sweat it and don't get mad at yourself because you made a mistake. It's like the mistake would be, oh, I realize my baby could roll over now. So now you should make sure that you're taking extra precautions about your baby rolling over off the couch. But it's really just a funny way for me to say your baby's going to hit its head or walk into a coffee table or get cut or whatever. It's like they're very resilient and you didn't do anything wrong as a parent, so don't beat yourself up about that. And then I do actually have one other thing I say.

Lenny Rachitsky (00:13:20):
Yeah, go for it.

Annie Duke (00:13:21):
Having had four, I can tell you: you have very little, basically no influence whatsoever, on what your child's personality is. What you can do as a parent is make sure they say please and thank you. So what I find with parents of one is, particularly if that one is very well-behaved, they think they're a great parent, and they may be. That would be resulting. They may be, but once you have four, what you realize is like, oh, my parenting really didn't have anything to do with it. But my children do say please and thank you.

Lenny Rachitsky (00:14:01):
This is really valuable, good advice. I need to hear this. My wife is going to need to hear this, too. She's always worried about the soft spot on our kid's head and him hitting it on things.

Annie Duke (00:14:09):
No, it'll be fine.

Lenny Rachitsky (00:14:10):
Okay, great.

Annie Duke (00:14:11):
It's actually part of why it's there. Really, your baby will be fine. I mean, that's the thing. We used to cart babies around in the wilderness, when we were living in caves. We're fine, we're built for it. We're built for a much rougher world than the world that babies actually grow up in now. I will say, my oldest child, when she was in high school, just for whatever reason, had no interest whatsoever in alcohol or drugs or anything like that. And I remember just feeling so smug about how well I had raised this child and how great she was, and oh, these other parents who clearly weren't doing a good job. I mean, I didn't really, but it's hard not to. And then my next child went through high school and I was like, [inaudible 00:15:02]. That's just a temperament thing.

Lenny Rachitsky (00:14:11):
It's a good example of resulting in action.

Annie Duke (00:15:08):
It is a good example of resulting, but all my kids have turned out fine and they're all very different, but yeah. But as far as decision-making is concerned, which is a totally different thing, I'll actually quote Danny Kahneman on this one. Nothing is as important as it seems when you're thinking about it, that's a really important one. One of the things that I used to try to do with my kids all the time was mental time travel, which is actually a very good decision tool. So they would be really upset about something and it could be something that happened at school or it could be like they were grounded for something or whatever.

(00:15:46):
And the thing is when you're in the moment, it just feels like so big. And the thing that I used to say and they say all the time now is this is going to be so great for you when you're 40 at Thanksgiving. You're going to be able to tell these stories to your children and it's going to be the best, like you should be thanking me so that you can talk about your crazy mother at the Thanksgiving table. And it allowed them to get some time and space from it to realize this is going to be funny at some point.

(00:16:19):
At some point, it's going to be a hilarious story that you tell people. No matter how horrible it seems now, it's going to be a hilarious story at some point when you're older. And it sounds like a fun thing I was doing with them, but it's actually a really good decision-making tool for us in general, right? Look, there are absolutely things that 20 years from now are going to matter, for sure. But most things, no. If you think about most things that ever happen to you, like you're 21 and your girlfriend breaks up with you or whatever and you just think it's the end of the world and you're never going to recover and all these things, but if you could get yourself like, well, how do I really think I'm going to think about this in 10 years? Am I still going to be heartbroken in 10 years?

(00:17:03):
Probably not. And I think that we need to get that perspective of time so that we can get out of the moment because in the moment it just feels so important and the feelings are so big. And it's like the focusing effect on this second and the feelings you're feeling right now are so huge that we forget the scope of time. And I think that that's absolutely one of the best things you can do with your kids, and you can do it in small ways too, right? You can choose to play this video game or you can choose to study for your test. A week from now when you get that test back, how do you think you're going to feel about those two choices? Right?

(00:17:43):
So that was a tool that I used a ton and it's just generally an amazing decision-making tool, just generally for people. It's one that I happen to use with my kids all the time. And then I would say that the other one was using the word nevertheless, which is this is a great leadership skill, the word nevertheless. So let's say my child got caught doing something wrong. I don't know, I found a bag of red solo cups in my backyard that they forgot to throw out after a weekend that I was away. That's hypothetical. So I'm grounding them and it's a lot of argument back, right? So they think this is a debate and they're giving me all of their input and their opinions on why it's unfair and all the other kids do it and the other parents don't get mad and whatever, whatever the argument is. You have to have the balance between them feeling heard, which I think is incredibly important that your children feel heard, and following through on what you know or believe is right.

(00:18:56):
So it's I hear you. Nevertheless, you're grounded for two weeks. I hear what you're saying and I understand. Nevertheless, this is what's going to happen. And obviously, the words that you can use for that might be different. You have so much more authority obviously over your children than you do in other places. But in the workplace, this is very good because employees gripe all the time at decisions that you make in leadership because they think they're right and they want their way. And to have the ability to say, I heard you and your input, trust me, was incorporated into the decision, nevertheless, this is the path we're going to take. Right or wrong, that's what we're going to do. So I think there's a lot of things that basically you can say. These are good things to do with kids, but they're actually just generally really good decision strategies.

Lenny Rachitsky (00:19:47):
I'm taking notes on all these things. These are going to be extremely useful.

Annie Duke (00:19:51):
Nevertheless is a really good one with children.

Lenny Rachitsky (00:19:53):
It connects to something Dr. Becky teaches, if you know her, of telling someone, "I believe you." When your kid says something and they're upset about something, start with I believe you.

Annie Duke (00:20:03):
Nevertheless.

Lenny Rachitsky (00:20:03):
And here's nevertheless, exactly. Fair enough. I think we're going to start having to pivot this podcast to a parenting podcast because there's so much stuff here, but I'm going to try to resist. So we've been talking about decision-making and frameworks and things like that. Something I wanted to ask you is how much better can someone get at making better decisions? So for example, say someone listens to all of your podcasts, reads your book, studies it intensely. What's the delta you find in somebody being able to make a better decision? And where [inaudible 00:20:35] comes from is Daniel Kahneman of all people. People ask him, "Do you just live such an optimal life now that you know all of these biases we have and all these mental errors we make?" And he's like, "No, I can't..." I think he said famously, "I don't, actually. I can't use these in practice. This is just stuff that I have learned, but it doesn't actually impact my day to day." So I'm curious just how much better can someone get? What kind of delta have you found in terms of making decisions?

Annie Duke (00:20:59):
Okay, so it depends on whether you actually do the things. I mean, I think that that's what the issue is. I think Kahneman did some work way back that started him on this journey. That was basically work on hiring, and it was taking it from completely unstructured, Lenny saying, "I just know a great product manager when I see one," whatever that means, to me going to Lenny and saying, "Okay, I understand that you think you know a great product manager when you see one, but can you explain to me what that means? In the abstract, what are the things that you're looking for in someone that you want to fill that role?" And we can then excavate that, right? You're applying some implicit model to how you're thinking about the person that you're interviewing and how they map onto the role that they're going to fill, but let's make that explicit.

(00:22:04):
So we can make that explicit, we can turn it into a decision rubric, we can create a structured interview process out of that. And then what he found was after you've gone through that process, if you then use your intuition after having done that, not before, that then you actually can really drastically improve your hit rate on hiring. And in that case, it was from about a 50% hit rate to 65. So that's pretty huge, so that's a really big difference. The problem is nobody does it. So I can tell you that most of the conversations that I had with Danny that were about my work in particular were just him saying, "How do you get anybody to do it?" Now that's not to say that I would be better at that than him, it's that he didn't do my job, right? So he was an academic doing research and so on and so forth, I'm living in companies embedding for years.

(00:23:02):
I've been with one client for five years, one client for four years, one client for four years also, almost four years, three and a half years. And then I just took on a new client who I've been with for six months, and that's my whole client roster because I stay with them so long. So if you do it, the answer is quite a bit. At least in terms of Danny's work in relation to something pretty noisy like hiring, you went from a 50% hit rate to a 65% hit rate, which is huge. It's enormous. The problem is that the way that you have to make decisions in order to be better at them is not natural to the way that humans make decisions.

(00:23:47):
I think we think a lot more highly of our intuition than we really ought to. We think very highly of our ability to notice things in the moment and act rationally toward them in a way that really we ought not to. We tend to think that we have insight that other people don't have when actually the other people probably have more insight into our situation than we do. So I think the answer is a lot with a big if, with an if you can get people to do it. And I think that's where the issue comes in. Now, the good news is that you can then reverse that and say if I'm willing to do it, think about what an edge I'm going to have over people who aren't willing to do it, right?

(00:24:36):
So we can think there's people who don't know about it, there's people who do know about it but don't do it, and then there's people who do know about it, but do do it. And it's that last group that's so tiny. So the answer is I think a lot, but nobody does it.

Lenny Rachitsky (00:24:54):
So maybe following that thread, when you look at companies that you've worked with, or companies that have read your book and really dove in and started to implement some of your advice to make better decisions, and if you could look back at the ones where it had an impact, what do you find are the mental models or frameworks or tactics that really stick and most often have the biggest impact?

Annie Duke (00:25:18):
I think the one that's easiest to implement is this. And it's so easy, and I just wish more people would do it. The best way to get somebody's opinion is independently of other people's opinions, independently and asynchronously. So the way that I put it is: I want people to stop talking to each other so much. When we think about what people generally think the purpose of a meeting is, they think it's for three things. Discovery, which is: I want to discover what your opinions are, what your judgments are of something, right? So I want to find out what you think about something. So for example, if we're in product development and we're trying to figure out a timeline, we're trying to develop a product roadmap and we're trying to figure out how long it's going to take to release certain features or something like that, we all come into a room and we start yelling it out together.

(00:26:29):
It's so bad for decision-making, I can't even tell you. There's cross influence, the loudest person in the room tends to then have an outsized influence on the decision, the most confident person in the room tends to have an outsized influence on the decision. And that's great if the most confident person is also right, but the problem is that's not always so. And so that's the discovery piece, right? And we tend to do that in a group. And then there's discussion, which is I've now discovered the way that Lenny is modeling this problem or what his judgments are about certain things or his forecasts are, his estimates are. And now, we're going to discuss those ideas and we're going to discuss your ideas in comparison to my ideas, and that's going to happen in the group setting. And then there's also now, we're going to decide in the group setting. So we're going to make some set of decisions about say what the roadmap is going to look like.

(00:27:26):
Here's the thing that I think is the easiest to change: it's to realize the only thing that's ever supposed to happen in a meeting is the discussion part. So we should absolutely be coming together as human beings to discuss everybody's judgments and opinions and the way they're modeling the problem and their forecasts. In particular, it's really good to come in and discuss the places where people are different. You've been in many meetings, I'm sure, where about 80% of the time is double-clicking: "Oh, I agree, and I want to..." Now we've literally just used different words to say the exact same thing because I'm in agreement.

(00:28:07):
If we can focus the discussion on places that people disagree, we're much better off. But so now we can take this, discover, discuss, decide. So I'm saying discuss is only supposed to happen in the meeting, that's the only thing that's supposed to happen in the meeting. So what's happening with the other stuff, the discover and the decide? Well, and this is the thing that I think I actually have been able to get people to do is discover what people think before you get in a room independently of each other. So how would you do that? Well, let's imagine that you're going to have a meeting about the product roadmap.

(00:28:43):
So you would say to yourself, in this meeting, what are the opinions that I need to get from the people in the room? It could be a brainstorm: what are all the different features that we could develop? So it could be a brainstorm. Fine. Write to them independently and say, "Hey, freeform, just come up with all the different features that you think would be reasonable for us to consider developing, and then give me a forced rank from best to worst of your own ideas, with some three-to-five-sentence rationale as to why you have these things in this order." And so I could do that, but maybe we already have a list. I can send that out for a forced rank to everybody: okay, so here's the final list of things we're really considering. Prioritize those for me, just force-rank them. And then again, give me a rationale.

(00:29:32):
Give me a little bit of free writing as to why you think this should be, that you have these things in this order. Maybe we've decided now on what our top five priorities are. Great. I could, before the meeting, send the top five priorities out. "Hey, we've decided on this stuff." For each thing, I'd like to understand what you think a reasonable timeline is, how many sprints it's going to take, so on, so forth, right? So we can ask for that type of information so that we can start making estimates because that's going to affect budget and what we're going back to the board with and so on so forth. But regardless, just figure out what is the thing that we're going to be discussing in this meeting? And I'm going to send it out to everybody independently and I'm going to say, "Don't reply all, just send it back to me independently."

(00:30:14):
If it's a repeated decision, you can actually create a rubric that lives on Airtable or Coda or Google Sheets or whatever, I don't really care. And people can input their decisions there where they can't see anybody else's decision. So with Google, you can use Google Forms in order to do it, and then it dumps into a spreadsheet that only you can see. That's a great way to do it. So anyway, so you do that and now you can now see everybody's opinions that you then now send out to the group and say, "Everybody, look this over before we come in and discuss." So notice you're still working as a group, but you're working as a group where you're not in the same room together and talking at the same time. And there's a word for that, which is nominal group. So it's a group that at that moment is working independently and asynchronously of each other.
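
To make that "discover" step concrete, here is a minimal sketch of how a facilitator might aggregate the independently collected forced ranks and rationales before circulating a summary to the nominal group. It is not from the episode; the feature names, respondents, and averaging scheme are invented for illustration.

```python
from collections import defaultdict

FEATURES = ["Bulk export", "SSO", "Mobile offline mode", "Usage analytics"]

# Responses collected privately (e.g., via a form that dumps into a sheet
# only the facilitator can read). Each "rank" list is best-to-worst.
responses = {
    "PM A": {
        "rank": ["SSO", "Usage analytics", "Bulk export", "Mobile offline mode"],
        "rationale": "Enterprise deals keep stalling on SSO.",
    },
    "Eng lead": {
        "rank": ["Mobile offline mode", "SSO", "Bulk export", "Usage analytics"],
        "rationale": "Offline mode unblocks the field-sales segment.",
    },
    "Designer": {
        "rank": ["Usage analytics", "Bulk export", "SSO", "Mobile offline mode"],
        "rationale": "We're flying blind on engagement.",
    },
}

def summarize(responses, features):
    """Average the independent ranks (lower = higher priority) and surface
    the spread, i.e. where people disagree most, which is what the meeting
    should spend its time discussing."""
    positions = defaultdict(list)
    for answer in responses.values():
        for pos, feature in enumerate(answer["rank"], start=1):
            positions[feature].append(pos)
    rows = []
    for feature in features:
        ranks = positions[feature]
        avg = sum(ranks) / len(ranks)
        spread = max(ranks) - min(ranks)  # big spread = real disagreement
        rows.append((feature, avg, spread))
    return sorted(rows, key=lambda row: row[1])

for feature, avg, spread in summarize(responses, FEATURES):
    print(f"{feature:20s} avg rank {avg:.1f}   disagreement spread {spread}")
```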

(00:31:03):
So if you can get people to do that (and I do have companies where I don't consult with them, I've just come in and talked to them briefly or whatever, that do actually implement that piece of it), that is a huge piece of it; that's ginormous. And then you do the same thing for deciding. So the decision should not be made in a room. It's made either by one decision maker (I love the one-decision-maker model, but not everybody's down with that) or through a vote forum where people go vote in private about the way they're leaning.

(00:31:39):
You can do a variety of things with that, but just don't do it. Don't do it in the meeting. And then just the last thing that I'll add, which is a muscle that you really have to exercise, is I think it's really important to understand that the word alignment, in terms of "we're all aligned as a group," right? The word alignment is stupid and it shouldn't be used. And I know I'm saying that very harshly, but it's true. It's dumb because it doesn't exist. You have 10 people in a room and they're all really different people with different opinions, and they're never going to come out of the room agreeing with each other. And it's really bad if the expectation is that they're supposed to, and it's really bad for a few reasons. One is it isn't reality, and I don't like coming out of things without reality actually kicking in.

(00:32:38):
So it's not reality. People don't actually agree, they're not actually aligned on the decision. That's just the thing that makes you feel better, right? So I think that that's problem number one. But problem number two is that if the goal is alignment, if the goal is agreement, then the meeting becomes coercive and you never want that. So the way that I'm supposed to talk about my ideas is to convey why I believe what I do, not to convince anybody that I'm right, because if I'm working to convince people that I'm right, it becomes coercive and that's horrible. So you have to get comfortable with walking out of the room. This is the nevertheless: walking out of the room understanding that once you have that discussion, it's not that your opinion can't change, Lenny. It could, right? I could say my thing and it could be just so damn brilliant that you change your mind somewhat, right?

(00:33:37):
So maybe you come out and you're like, "Oh, I'm thinking about this differently now, actually." But maybe you don't, and you still believe a thing that's very different than me. And leadership has to say, "That's fine. I've heard both of you and I know that this isn't going Annie's way. Nevertheless, trying to think about what all of our goals are, this is what the decision is going to be, and it's totally fine that you ended up not agreeing with each other, because that's reality. And what it allows me to do is get a better sense of what the space of decisions is." So those are things I have been able to get people to do, and they're actually quite impactful.

Lenny Rachitsky (00:34:17):
Imagine a place where you can find all your potential customers and get your message in front of them in a cost-efficient way. If you're a B2B business, that place exists, and it's called LinkedIn. LinkedIn Ads allows you to build the right relationships, drive results, and reach your customers in a respectful environment. Two of my portfolio companies, Webflow and Census, are LinkedIn success stories. Census had a 10X increase in pipeline with the LinkedIn startup team. For Webflow, after ramping up on LinkedIn in Q4, they had the highest marketing source revenue quarter to date. With LinkedIn Ads, you'll have direct access to and can build relationships with decision-makers, including 950 million members, 180 million senior execs and over 10 million C-level executives. You'll be able to drive results with targeting and measurement tools built specifically for B2B. In tech, LinkedIn generated two to five X higher return on ad spend than any other social media platform.

(00:35:14):
Audiences on LinkedIn have two times the buying power of the average web audience, and you'll work with a partner who respects the B2B world you operate in. Make B2B marketing everything it can be and get $100 credit on your next campaign. Just go to linkedin.com/podlenny to claim your credit. That's linkedin.com/podlenny. Terms and conditions apply. There's an implication here you touched on a bit that there's a DRI essentially, there's one decision maker. Sometimes people start to feel like, "Oh, my voice isn't heard. I don't have a lot of say, I can't be part of this decision." And you talked a bunch about just how to make people feel included, you get feedback along the way. But any advice there if you try to move to this model of making people feel like, "Okay, I actually have impact on where this goes"?

Annie Duke (00:35:58):
Yeah. So here's the really wonderful thing about moving from a coercive model to really, I guess you could say a model of curiosity, right? You want to be curious, not coercive, which means that the way that people in the meeting are talking about what their opinions are is in the mode of conveying information, not trying to convince anybody. So once we move away from that coercive model, and when I say coercive, I'm not saying anybody is purposely trying to set up a culture of coercion. Sometimes that's true, but for the most part, everybody's trying to do a good job and nobody's trying to set up a culture of coercion. But as soon as you say, "Are we all in agreement? Are we all in alignment?" And as soon as you're allowing people, and I'm sure you've been in these meetings, right? If you allow people to interrupt, if you allow people to say this, "I think you're wrong", "I disagree", "Here's why", those are all very coercive things to have happen.

(00:37:12):
Interrupting someone is silencing them. Saying, "You're wrong", well now, you become tribal and people aren't going to be open-minded and they're going to stake their ground and it's all really bad. So let's move away from that. Number one, that's already going to help. But when we think about the way that meetings normally happen, again, there's all this crosstalk and some people aren't speaking and some people are and so on, so forth. And then of course, not everybody feels equally heard. But when you work as a nominal group, let's talk about something as simple as we're going to make a forecast of how long it's going to take to launch this product feature. Okay. So I'm going to send out to everybody, what's your point estimate? What's your lower bound? What's your upper bound in terms of timeline?

(00:38:02):
And you could do it in some way like, how many sprints do you think it's going to take, whatever language you want to use. Now everybody independently gives their forecast, with a rationale for why they believe that. And then you come into the room and you run a discussion where everybody's getting to say what their estimate was and why they believe that, and people are getting to ask questions. "I have a question" is always a clarification; it's just, I don't understand. And as the leadership, the way that you would do it is, let me give you an example from a real one because that'll be easy. I did one of these discussions for a question about remote work, like what did the company want in terms of remote versus hybrid and that kind of thing. And there was a lot of disagreement about whether, whatever the policy was, it should be consistent across functions. Lots of disagreement.

(00:39:01):
So let's take somebody who was on the "no, it shouldn't be consistent across functions" side. They now say why they believe it shouldn't be consistent across functions, and they say things like, "Well, different functions have different requirements, right?" So there are some functions that have to be in the office, like IT as an example, but there are other functions which are more collaborative and creative and whatnot where it makes sense for people to be in the office, whereas for engineers it doesn't really matter, right? So there are different needs of different functions in terms of how much they need to be in the same space. I'm not agreeing or disagreeing with this, I'm just saying what somebody said. So now, as a facilitator, I never say, "Oh, I agree." What I say is, I just want to make sure I understood what you said, and I reflect it back.

(00:39:57):
So I say, "So what I heard you say is that not all functions are created equal in terms of how well they work remotely or how well they work in person or what the flexibility might be. So what you're saying is there're functions that have to be in person. Period. And then other functions where that collaborative element being in the same office would be more important versus some functions where being remote is totally fine. Is that what you meant?" And then they have the ability to say, "Yes, that is what I meant" or, "Actually, I meant something slightly different. Let me say what that is", and then I reflect that back. Okay, so literally you're just going around, you're calling on people to do that, and then you're reflecting back what they say without offering your own opinion. I don't know, Lenny. Tell me how someone doesn't feel heard in that situation.

Lenny Rachitsky (00:40:48):
Yeah. Absolutely. I would feel so good if somebody just clarified and made it clear they know exactly what I'm saying, even if the decision doesn't end up the way I want.

Annie Duke (00:40:57):
Right, so that's the thing. And then what actually ends up happening is that the people in the room feel more, the psychological term would be endowed, more endowed to the decision. In other words, they feel like they have ownership over the decision. And whatever the decision is, they generally will see themselves reflected in it because they were heard. And they also will generally understand that nobody, no one really ends up with exactly every single thing that they wanted to see in the decision, because they also get to see the true spread of opinions on the team. And what you see in that situation is that there's lots and lots and lots and lots of disagreement, which you don't see if you talk in a group. It narrows the space, right?

(00:41:48):
So for example, particularly if you're senior, if you say, "I think this is going to take three months to launch" and I was thinking four weeks, you're never going to hear four weeks from me, ever. But if we get those opinions independently, you will actually hear that I think it's going to take four weeks. You may tend to be more right there being more senior, but I may have something interesting to say. You should hear what I have to say. And this also allows me to learn from you, too. But because I haven't heard your opinion first, then I'm not going to conform my opinion to yours. So it actually spreads the surface area of disagreement that you see on the team, which then makes people feel much better about the decision not being exactly what they want because they recognize like, "Oh, this is actually a hard problem. People really disagree on this stuff."
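
A tiny illustration of that last point, with everything hypothetical (the names, the numbers, the weeks unit): collecting point, lower, and upper estimates independently preserves the "four weeks" answer that would otherwise be anchored away by the senior "three months" estimate, and the range of point estimates makes the real disagreement visible.

```python
# Hypothetical independent estimates (in weeks) for "how long to launch this feature".
estimates = {
    "Senior PM": {"low": 10, "point": 12, "high": 16},
    "Engineer":  {"low": 3,  "point": 4,  "high": 6},
    "Designer":  {"low": 6,  "point": 8,  "high": 12},
}

points = [e["point"] for e in estimates.values()]
print(f"point estimates range from {min(points)} to {max(points)} weeks")
for name, e in sorted(estimates.items(), key=lambda kv: kv[1]["point"]):
    print(f"  {name:10s} {e['point']:>2} weeks  (range {e['low']}-{e['high']})")
```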

Lenny Rachitsky (00:42:41):
Amazing. I think this is going to be really helpful to a lot of people. Just to close the thread on this, a little summary: maybe the core advice here is to brainstorm separately. So I guess first, there's discover, discuss, decide. You discover ideas completely independently, basically brainstorming. You're a big advocate of brainstorming independently, sitting on your own, thinking through ideas. And then bringing people together to discuss all the things they've come up with, and especially where they disagree. And then ideally having one person who makes the decision once she or he has taken all the input.

Annie Duke (00:43:12):
Yeah, you can have one person. I mean, I work with people where it's like a partnership and there are six people who are going to vote, but they still do it independently. They go to a forum, so I don't know what your vote is. And all six people don't have to agree. I mean, I just think it's really important that once you get above an n of one, you shouldn't necessarily expect the people to agree, which I think is just really important. And this is good for more than just brainstorming. It's for forecasting, any kind of project planning, budgeting. Yeah, I mean, we do this, for example, at First Round: we have a structured forum for evaluating a company in terms of whether you should invest in it, because there are facts and then there's the way that you model those facts. So it's a lot of, what is the investor's opinion of the founder and the product and that kind of thing.

(00:44:06):
And we just have structure around how we're eliciting those opinions, which is really helpful. The other thing that I'll just add to what you said is that you can actually be in the same room together and still discover information independently. So this will happen on the fly all the time, where people are talking and something comes up, like someone suggests some new feature or something like that, and then people start saying, "Yeah, but that's going to take..." And you go, "Stop. Okay, everybody take out a piece of paper." So you can still get that same independence on the fly as well as doing it in advance. But yes, that was a very nice summary.

Lenny Rachitsky (00:44:49):
If this is the one thing people take away from this conversation, I think that alone could have a big impact. Speaking of First Round, one, we actually just had Todd Jackson on the podcast talking about-

Annie Duke (00:44:59):
Love Todd.

Lenny Rachitsky (00:44:59):
... love Todd, talking about product market fit. That episode will have come out before this episode. Also, so I asked Brett Berson, your colleague at First Round, what to ask you.

Annie Duke (00:45:00):
Oh, okay.

Lenny Rachitsky (00:45:08):
Yeah, yeah. By the way, your title at First Round is amazing, special partner. I've never seen that before.

Annie Duke (00:45:15):
I am. It's a title just for me.

Lenny Rachitsky (00:45:17):
Oh, my God.

Annie Duke (00:45:17):
I adore Brett Berson.

Lenny Rachitsky (00:45:19):
Me, too.

Annie Duke (00:45:19):
So I'm interested, I'm interested in what he said you should ask me.

Lenny Rachitsky (00:45:23):
Nothing too spicy, it's along the lines of what you talked about. So you said that you have a very interesting framework for how to think about decision quality in the short term when the outcome is very long term. For example, investing. Also, many decisions we make in business, things you need to decide now that you only find out years from now. Can you talk about your advice there and your insights here?

Annie Duke (00:45:43):
Oh my gosh, I'm so happy that that's the question that he asked.

Lenny Rachitsky (00:45:45):
Yep, I see.

Annie Duke (00:45:47):
Okay, so can I give a tiny bit of background to this?

Lenny Rachitsky (00:45:49):
Absolutely.

Annie Duke (00:45:50):
So prior to talking to First Round, I have another client too who, they're amazing, Renegade Partners, with Roseanne Wincek and Renata Quintini. They're incredible. Before finally hooking up with them, and they work at different stages, First Round is seed, obviously, Renegade is more like A, B, a tiny bit of dabbling in seed. But before running into them, I talked to quite a few venture firms who were interested in talking to me post-Thinking in Bets having come out, so this would be 2018. And there was a theme, there was a theme across them all. The first one was, well, the kind of decision-making you're talking about we don't need to do because we just know a good founder when we see one. That is a sentence that came out of many people's mouths. And as I just said to you, okay, I have no doubt, but don't you want [inaudible 00:46:52] make that explicit?

(00:46:53):
There's all sorts of great things that come from making it explicit, not just in terms of the increase in decision quality in the moment, but it actually allows you to close feedback loops much better. So that was one thing that I found quite surprising. But the one that I really found very interesting was being told, "Well, what you're talking about doesn't apply because our feedback loops are a decade." And in poker, you got an answer right away. You won or lost the hand right away. So the way that you're thinking about decision-making doesn't really apply, until I met First Round and then Renegade where they actually heard what I had to say because I gave the same answer to everybody. So we'll just put aside that wouldn't you want to make that explicit. The first thing that I would say is, oh, poker is much noisier than you think because when I win a hand, I have no idea why.

(00:47:50):
So I do actually have to wait a long time because I have to play many, many, many hands before I actually know, do I actually have an edge? Because I actually don't know very much. On one hand, for one thing, I almost never see my opponent's card, so I'm left in a dust of uncertainty. But separately from that, the main thing that I said was, how could you possibly think that the feedback loop is 10 years? And this is what I think really caught First Round's eye because when I was talking to Josh Kopelman about it, he said, "Well, what do you mean? We don't get an exit for 10 years." And I said, "Oh, I'm sorry, do you invest? And then you go to sleep like Rip Van Winkle? And then you wake up 10 years later and you go, 'Hey, how'd that go?' Or are there all sorts of things that happen in between?" The simplest thing, the simplest thing is does it fund at Series A?

(00:48:46):
And the little pushback that I would get there is, but we're not investing for Series A. And I say, "Well, I know that, but have you ever had a company that exited for more than a billion dollars that did not fund at Series A?" And the answer is no. And I'm like, okay. So it sounds like that's necessary. Might not be sufficient, but it's necessary. And it's certainly a signal that is actually more highly correlated with exiting out well than the investment at seed. And then, oh, right, you have Series B. And that's separate from all the other things that you can look at, like what you talked to Todd Jackson about. Is it achieving product market fit? We know that eventually, for it to be successful, it's going to have to achieve product market fit, right? So you can look at what's happening with that, just general things about traction, what's happening with net new ARR, ability to retain top talent, churn.

(00:49:55):
I mean, there's so many different things that you can look at, all of which are things that you know must happen in order for the big thing to happen. Okay, so what that means is that this is the big, I'm going to make a bold statement here. There is no such thing as a long feedback loop. You can make a decision about how long the feedback loop is. That is your choice to live in a long feedback loop, and you can choose to shorten the feedback loop. And the way you choose to shorten the feedback loop is to say, what are the things that are necessary but not sufficient? That's one thing, for getting a good exit, or what are the things that are correlated with the outcome that I eventually desire? And what that means is that when you're at the decision point, right? Like in First Round's case, I'm going to invest in a company.

(00:50:51):
What you have to understand is that you are making a prediction about how the world is going to unfold, how the future is going to unfold. And those things that you're predicting, you can track, and you can track them back to the decision, and you can do it pretty darn fast, mind you. I mean, think about being in 2021. There were companies that were raising an A six months after seed. Today, it's a little more like 16 months-ish. But even so, let's just say that that was the only thing that you decided to do: I'm going to forecast the probability that this company's going to fund at Series A. And then obviously, those companies start to fund or not fund at Series A, and you're finding that out in 16 months. Here's my question for you, Lenny. Is 16 months shorter than 10 years?
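
One way to picture closing that shortened feedback loop is the small sketch below. It is an assumption-laden illustration, not a description of First Round's actual process: you record a probability for a proxy signal at decision time, then score the forecast once the signal resolves, using a Brier score as one common choice.

```python
# Hypothetical forecast log: at decision time, record the probability that the
# proxy signal happens (e.g., "raises a Series A within roughly 16 months");
# when the signal resolves, score the forecast. Brier scoring is one common
# choice, not necessarily what any particular firm uses.
forecasts = [
    # (company, forecast probability of the proxy signal, outcome: 1 = happened, 0 = didn't)
    ("Company A", 0.70, 1),
    ("Company B", 0.40, 0),
    ("Company C", 0.80, 0),
]

def brier(prob, outcome):
    """Squared error between the forecast and what happened; lower is better."""
    return (prob - outcome) ** 2

scores = [brier(p, o) for _, p, o in forecasts]
print(f"mean Brier score: {sum(scores) / len(scores):.3f}")
for (name, p, o), s in zip(forecasts, scores):
    print(f"  {name}: forecast {p:.0%}, outcome {o}, Brier {s:.3f}")
```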

(00:51:38):
So it's probably why he said to ask me that question because I just really do. I mean, I have a very strong opinion about this. The feedback loop is as long as you choose it to be. And if I take that back to some of the things that I heard early on when I was talking to people, what I would say is that I think that there is a certain amount of psychological safety in allowing the feedback loop to stay long because really of two main factors. One is that, look, if I was early into Uber and now I'm a celebrity investor or something, I don't really want to know if I'm good or not. Do I? Right? I don't really want the world to know that. I mean, if I'm good, that's great. But it feels like they already believe that I'm good because I happen to be early into Uber.

(00:52:47):
So since people already think I'm good, I'm just losing to that decision, psychologically speaking. Not investment quality speaking, but psychologically speaking. Okay, so here's the problem though. Why was I early into Uber? Did I have an insight into a real pain point in a developing market, blah, blah, blah? Or did my buddy start Uber and I was like, "Sure, I'll give you some money?" Right? I mean, obviously, I'm talking about the extremes here, but we don't actually know what the decision quality was, right? All we know is that you had a good result and given that you had a good result and people think very highly of you, what are you going to gain? Right? So it's so nice to just let that feedback loop sit there and allow people to have the opinion of you. That is really nice, feels good, right? And not actually find out the answer because why would I want to?

(00:53:51):
Unless you're really super focused on decision quality, in which case you would want to do that. So that's part of the psychological safety. And what that goes to, the real core of it, is that it's very, very difficult for human beings to deal with feeling wrong in the moment, even if it helps them in the long run. It's just hard. And the tighter the feedback loop, the more you risk finding out you are wrong in the short run. Now, that helps you to learn and improve your decision-making if you're focused on it and you're good at it, right? That's going to help you. And then in the long run, you're actually going to do better. But human beings are notoriously good at trading off the long run just to feel good in the short run. That's why we're all eating chocolate and cupcakes and stuff that we know is bad for us, because it feels good.

(00:54:45):
And so much of our decision-making is trying to advance this positive self-narrative. And to the idea that, yeah, we're going to have a more positive narrative about ourselves in five years if we do some stuff, most of us are like, "No, I don't really want to make that trade. I'd rather just feel good now." And I can use the fact that we are living under the influence of power laws to just confirm a lot of things that I wish to believe are true of me. And if you take that away from me, if you take the uncertainty away from me, it's going to be really hard. And I will tell you, that's what I love about both Renegade and First Round: they're just like, I want to know. It would be such a horror for me to think that I was making good decisions when I actually wasn't. And that's what really matters to me, and I think that it's just so special.

Lenny Rachitsky (00:55:47):
Hearing all this makes me want to be an LP in First Round. Not that they would let me in there, but just knowing that you're in there futzing with everything, and the way they're thinking, is really inspiring. I'm curious if you could share an example of anything that they tweaked, as a result of this analysis you did, in how they evaluate.

Annie Duke (00:56:05):
Well, first of all, let me just say they didn't really record a lot about their decisions when I first came in. They voted, and they had a record of the vote, so they knew who said yes and who said no, but they didn't have a lot of other information. So the first thing that happened was just that: okay, what do we really think is the way that you would model whether you should invest or not invest? Very broadly, you would say you're rating the market, the team, the founder, the product. We're going to make sure that those opinions aren't just "it's good" or "it's bad," but on a scale of one to seven, so that we can actually get some precision and some spread among the partners in terms of, say, strength of market. We're going to make sure that we have shared definitions of those things, which, you'd be surprised, people often don't.

(00:57:00):
So when I am thinking about market and market quality, I might be thinking about something very different than you are. So we want to make sure that we have a shared definition of that, and that's reflected in something that we would call mediating judgments, which are judgments that you make related to market prior to actually judging what you think of the market in general. So you could think of something like competitive landscape; you would judge that. Those mediating judgments, taken together, are basically an implied definition.

(00:57:30):
So you create that, and then you also think about what the important forecasts are. One thing that you already know is you're going to forecast the probability the company funds at Series A. So that was a huge change, just a very different way of making decisions. What we've been able to do with that now, because I've been there for five years, is we can actually take, say, the partnership as a whole and look at these ratings that they're making of the component pieces of how they're modeling what makes a good investment, and at these forecasts that they're making.

(00:58:03):
And we actually know how these companies have now unfolded. So in the simplest sense, we know whether hundreds of companies have funded at Series A or not. And now we can actually look and say, how good are the partners at forecasting this thing? Right? Are they random? Are they better than random? We actually know that, and we can feed it back to them so that they can understand their own accuracy around these pieces of the decision. Because the fact is that, whatever a seed investor tells you, whether that company is going to fund at Series A is part of their decision. It's included in the decision. So they're making that forecast whether they make it explicitly or not. So what we're saying over at First Round is, let's make it explicit, because you're doing it implicitly anyway, and then we can actually start to look at your accuracy.
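
To make the mechanics concrete, here is a minimal sketch in Python of what making those judgments explicit could look like as data. Everything in it is my own assumption rather than First Round's actual rubric: hypothetical field names, 1-to-7 ratings, an explicit Series A probability, and a Brier score as one common way to check forecasts against resolved outcomes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionRecord:
    """One partner's explicit judgments at the moment of a seed decision.

    Field names and scales are hypothetical illustrations, not an actual rubric.
    """
    company: str
    partner: str
    ratings: dict                            # e.g. {"market": 6, "founder": 5}, each on a 1-7 scale
    p_series_a: float                        # explicit forecast: probability the company raises a Series A
    funded_series_a: Optional[bool] = None   # filled in later, once the outcome resolves

def brier_score(records):
    """Mean squared error between forecasts and resolved outcomes (lower is better).

    Always guessing 50/50 scores 0.25, so beating 0.25 means beating "random".
    """
    resolved = [r for r in records if r.funded_series_a is not None]
    if not resolved:
        return None
    return sum((r.p_series_a - float(r.funded_series_a)) ** 2 for r in resolved) / len(resolved)

# Usage: record forecasts at decision time, fill in outcomes roughly 16 months later,
# then feed the score back to each partner as an accuracy signal.
records = [
    DecisionRecord("Acme", "partner_a", {"market": 6, "founder": 5}, 0.55, funded_series_a=True),
    DecisionRecord("Beta", "partner_a", {"market": 3, "founder": 6}, 0.30, funded_series_a=False),
]
print(brier_score(records))
```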

(00:58:58):
We can now feed that back to you and let you know how accurate you are, which will then help you to become more accurate, right? We can also look, because we know in any given vintage which are the best companies and which are the worst companies. Remember, we're having people do these ratings on a scale of one to seven of, say, the quality of the market, and we can look across the partnership and say, "Look, Lenny, when you think the market is great, how does that map onto how well the company does in the future? When you say the market is terrible, how does that map onto how that company ends up doing in the future?" And I can come to you and say, "Oh, Lenny, by the way, your judgments about market are amazing."

(00:59:47):
You know a good market when you see one, and it maps really well onto how that company unfolds, but you're not so great with founder. Or maybe you're great across the board, or maybe you're not. So we can now give the partners insight into their own decision-making, not only to allow them to improve their decision-making, but also to help them understand not to over-index on certain things that maybe aren't as predictive for them. We also can change the rubric based on the evidence now.

(01:00:22):
So the first version of the rubric is always taking the intuitions of the partners and making those explicit, but then we can start to loop the results back and understand, well, maybe this thing that the partners thought was important actually isn't predictive for any partner, for example. So we can start to develop the rubric based on the data. These were all things that weren't possible before, because prior to that, if I had come in and said, "Well, let's look at decision quality", how would I do that? I mean, I have no idea why people were... I just know whether they said yes or no. And so it's very difficult then to start to do some really serious refining of the decisions if I don't have that information.
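
The point about evolving the rubric based on evidence can be sketched the same way. The example below is hypothetical: made-up factor names and data, and a deliberately simple Pearson correlation as a rough screen for "is this factor predictive at all?"; a real analysis would worry about sample size, base rates, and per-partner splits.

```python
from statistics import mean, pstdev

def predictiveness(ratings, outcomes):
    """Pearson correlation between a factor's 1-7 ratings and binary outcomes.

    Near zero means the factor, as rated, carries little signal. This is only
    a rough screen, not a substitute for proper statistics.
    """
    mr, mo = mean(ratings), mean(outcomes)
    cov = mean((r - mr) * (o - mo) for r, o in zip(ratings, outcomes))
    denom = pstdev(ratings) * pstdev(outcomes)
    return cov / denom if denom else 0.0

# Hypothetical data: each column is one investment; outcome 1 = raised a Series A.
market_ratings  = [6, 5, 2, 7, 3, 4]
founder_ratings = [4, 6, 5, 3, 6, 2]
outcomes        = [1, 1, 0, 1, 0, 0]

# In this made-up sample, "market" ratings track outcomes and "founder" ratings don't,
# which is exactly the kind of evidence that would feed back into the rubric.
print("market:",  round(predictiveness(market_ratings, outcomes), 2))
print("founder:", round(predictiveness(founder_ratings, outcomes), 2))
```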

Lenny Rachitsky (01:01:08):
I desperately want to know which partner makes the best decisions. I know you're not going to share that.

Annie Duke (01:01:13):
No, I'm not going to. The partnership as a whole is excellent, as we know. But what I will tell you is that all partners have strengths and all partners have weaknesses, and they're not perfectly overlapping, which is wonderful, right? I mean, that's one of the things that's really wonderful, and I think it shows why you would have more than one person giving input into a decision: some people are very strong on rating a particular aspect of market, some people are very strong on rating a particular aspect of the founder. There's overlap, and then there are things where Todd is uniquely great at something, or Josh is uniquely great at something, or Bill is uniquely great at something. So that's the wonderful thing about it: everybody has strengths and everybody has weaknesses, and they're not perfectly overlapping. So this is where getting that spread of opinions and breaking the decision down into its component parts really shows you the value of diverse opinions as input into a decision.

Lenny Rachitsky (01:02:21):
I could talk about this thread forever. Maybe let me just ask one more question, just because I'm super curious. Is there anything surprising that stands out from this analysis so far? Just like, oh wow, maybe market isn't as important as we thought, or this person is amazing at-

Annie Duke (01:02:34):
Yes. So I think, just generally speaking, when you're creating the initial decision rubric, there are things that people are really pounding the table about that they think are especially important in making a decision. And one of the things that we found is that sometimes their intuition was absolutely right: the thing that they were pounding the table about is incredibly predictive, not just for them but for other partners. But sometimes it's not at all predictive, and these are equal table-pounding situations. So let's say you're pounding the table about something. Sometimes that thing is predictive for you and for all the other partners; it's actually quite predictive of how the company does. But sometimes when you're really pounding the table about something, it's not just that it's not predictive for the other partners, it's not predictive for you either.

(01:03:33):
And I think what's really important to understand about this, and this is why it's so incredibly necessary for improving decision quality to take what's implicit and make it explicit, is that our intuition is sometimes right and sometimes wrong. It's not that intuition is crap and just completely wrong. I mean, obviously, that can't be true; we would die, right? So your intuition is sometimes right, but it's also sometimes wrong. And if you don't make it explicit, then you don't get to find out when it's wrong.

(01:04:07):
You don't get to find out when it's off base, and that's a disaster. So the thing that I think is really interesting is that you have equal vehemence and confidence that this particular factor is really important, and sometimes it is and sometimes it isn't. And it's so surprising because we're talking about people who are true experts, who are great, and I think that we all just have intuition about intuition, right? You just intuit that if they're so amazing, clearly their intuition about what's important would have to be good. But not necessarily. That's the thing, not necessarily. Yeah, I think that was probably the most exciting thing.

Lenny Rachitsky (01:04:56):
Keeping it mysterious, but I still appreciate you sharing.

Annie Duke (01:04:59):
Well, I have to keep it mysterious.

Lenny Rachitsky (01:05:02):
Yes, I understand.

Annie Duke (01:05:03):
I can't give away the trade secrets.

Lenny Rachitsky (01:05:06):
I wanted to touch on a different framework that I've heard a lot of companies actually using, something that comes up in product a lot: this idea of pre-mortems, which is essentially thinking ahead of time about what might go wrong. Can you talk about this? Because I think it's something that's easy to implement, really powerful, and a lot of people are actually doing it.

Annie Duke (01:05:25):
Yes, okay. So a pre-mortem is great, but only if you attach a pre-commitment to it. I just want to be super clear about that. What you find with pre-mortems, and this is actual work I did with Maurice Schweitzer and Linnea Gandhi, who are both at Penn, is that when you have people do pre-mortems, it generally doesn't actually change their plan or their behavior very much. So I think we have the feeling that if you do a pre-mortem and you think about the ways that things might go wrong, that's going to change your plan. But probably not, unless you're specifically using it for that purpose and you say, "Okay, we're going to do this, but let's think about how we might change our plan in light of this information." But I think what's actually more important is that a pre-mortem allows you to set up kill criteria.

(01:06:18):
So kill criteria are just a set of signals that you might see that would tell you it's time to pivot or stop, because once we actually launch something, we're very, very slow to decide to quit. I'm sure that everybody has felt that way before. Things go on way too long, even when they're over budget and you've blown the timeline. And when you finally shut it down, you realize you should have done it a lot earlier. And this is true across the board because of a variety of biases. The most well-known, and probably the biggest influence, is something called sunk cost, which is that feeling of, but then I'll have wasted all the time and effort that I've put into this already. So it's taking into account what you've put in in the past when deciding whether to continue on in the future. So what we want to do is actually just get better at that thing. So understand that by the time you've gotten to the pre-mortem process, you probably are going to launch the thing; that's probably going to be the case.

(01:07:22):
Use the pre-mortem to set up kill criteria. So I'll give you an example from a sales team that I worked with. Basically, I sent them out a prompt, all it is is a prompt, that was: imagine that you got a lead through an RFP or RFI and you worked on it for six months. And now it's six months later and the deal is dead. Looking back, you realize there were early signals that that was going to be the case. What were they? So this is a pre-mortem. What are the things that you saw that would tell you this was going to go south? And they came up with all sorts of ideas. Notice this doesn't mean they're not going to start off pursuing the lead, right? But they came up with all sorts of signals. So I'll just give you three of them.

(01:08:12):
The RFP/RFI was clearly written with a competitor in mind, so they felt that was a very bad signal that things were probably going to go badly. Another one was the customer didn't want to demo, they only wanted to talk about price. Obviously, that's quite bad. And another one was, after the first few meetings, they couldn't get a decision maker in the room, right? So it was a much longer list than this, but those are three. For each of those, that signal now becomes a kill criterion: if I see this thing, I attach an action to it. So in the case of price, they actually just said, "We should kill it." If they literally don't want a demo and they're only asking about price, they're just trying to beat up somebody else on price and we're a box-checking exercise. So there, they just said, "We're going to kill it. We're not going to pursue the deal anymore."

(01:08:59):
So this is great, because salespeople will pursue deals forever, and leadership is like, "Well, why did you stop pursuing that deal?" And they get in trouble for it. So this is going to help with that problem, right? In the case where the RFP/RFI was written with a competitor in mind, they have an action associated with that as well, which is: ask them directly if they're working with a competitor and how far down the road they are, and depending on the answer, you would kill or pursue. In the case where they couldn't get a decision maker in the room: offer up executive alignment at the next meeting. If they say sure, great. And if they say no, kill.
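
To show the shape of that pre-commitment, here is a hypothetical sketch in Python of the sales checklist described above. The signal names and wording are a paraphrase of the three examples; the point is the structure, a signal paired in advance with an action, not the specific code.

```python
# A kill criterion pairs a signal you might observe with an action you committed
# to in advance, so you aren't relying on in-the-moment judgment.
KILL_CRITERIA = {
    "rfp_written_for_competitor": "Ask directly if they're working with a competitor and "
                                  "how far along they are; kill or pursue based on the answer.",
    "no_demo_only_price":         "Kill the deal; we're a box-checking exercise on price.",
    "no_decision_maker":          "Offer executive alignment at the next meeting; "
                                  "if they decline, kill.",
}

def review_deal(observed_signals):
    """Return the pre-committed actions triggered by the signals seen so far."""
    return [KILL_CRITERIA[s] for s in observed_signals if s in KILL_CRITERIA]

# Usage: after each touchpoint, log the signals and let the checklist speak.
for action in review_deal({"no_demo_only_price"}):
    print(action)
```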

(01:09:35):
So that's what I feel is the best use of a pre-mortem: to say, I'm going to try to figure out what the signals are along the way that things are going badly. And now, instead of just hoping that when I see those signals I actually act rationally, which is a hope that will not come true. That's why there are many people who climb Everest in the middle of a blizzard even though they shouldn't be doing that. Use the pre-mortem to create structure around those signals that you've spotted, and commit to the actions you're going to take if you see them. And I think that's the best use of a pre-mortem.

Lenny Rachitsky (01:10:15):
That's really helpful. And it's interesting how many of your examples come back to a framework that you often talk about, which is make explicit what is implicit. The First Round example is another great example of that, where it's just, here are all our assumptions, they just make them actually explicit, and it just shows how much [inaudible 01:10:32].

Annie Duke (01:10:32):
Right, then you can examine them. You can examine them, people can discuss them, you can figure out if they're wrong or right or whatever. It's like, I want to be very clear, I'm not anti your gut or your intuition. I think it's probably sometimes pretty good. I just want you to make it explicit, that's all.

Lenny Rachitsky (01:10:55):
Okay. So we didn't have time to get into quitting, which is your more recent book. Maybe we'll do a follow-up episode specifically thinking about quitting, but let me just ask-

Annie Duke (01:11:03):
It's my fault because my answers are long.

Lenny Rachitsky (01:11:05):
It's my fault.

Annie Duke (01:11:06):
I apologize to everybody.

Lenny Rachitsky (01:11:06):
No apology is necessary. We'll have plenty of time in the future, hopefully. Well, let me just ask one question. I found this one quote from you where you said you should assume that if you're thinking about quitting, it's already probably past the time that you should have quit. Do you still believe that? Is that generally a good rule of thumb? And just any takeaway, tip, lesson on quitting as our one question on quitting?

Annie Duke (01:11:29):
So the data is pretty strong that by the time you quit, it's probably long after the [inaudible 01:11:37]. And it's really just because, look, when we start things, we're starting them under difficult circumstances, which come from the uncertainty of the decision to start something. So when we start something, luck is going to have an influence on the outcome, which obviously we have no control over, because luck isn't in our control. And then there's also hidden information. So we know that after the fact we're going to learn new information, and that can make it very hard to start things, because we want to be more sure than we actually need to be. It's why Bezos has the 70% rule, to try to pull people back and get them willing to accept that uncertainty in the starting decision. Now, the good news is that when you learn that new stuff, and the new stuff that you learn is, "Ooh, if I had known this, I wouldn't have started it", you generally have the option to quit.

(01:12:28):
So that's the good news. The bad news is that the same difficulties that apply to the decision to start apply to the decision to stop. In other words, we're making that decision under uncertainty as well. So we're not going to know for sure whether it would've turned out well or poorly unless we continue to do the thing that we already started. And we don't like to walk away from things unless we know for sure. So as Richard Thaler put it, most people won't quit until it actually isn't a decision anymore. In other words, the whole thing is blown up, the startup has no money, or you're up on Everest and the blizzard is literally upon you and you're stuck in it. Or, I think as he said, until you've already fallen in the crevasse, then you'll make the decision to quit, because then you know it was really bad. So people generally don't quit their jobs, for example, until they feel they have no other choice, and the same goes for relationships, or projects, or products that they're developing. It all applies, because we want to know for sure.

(01:13:35):
And then on top of that is the fact that there is this issue of sunk cost, which is, when we walk away, we feel like we'll have wasted everything that we've already put into what we're doing. But of course, waste is a prospective problem, not a retrospective one, even though we treat it like a retrospective one. Because it's a prospective problem: if you wouldn't start this today, then that means everything that you're putting into this going forward is the actual waste, right? But we do that all the time. We go forward with things that we ought not to be going forward with because we're trying to protect the resources that we've already sunk into them in the past. Then there are other issues that have to do with, for example, endowment, the ownership over the things that we've built. This is particularly bad in product because we're building things.

(01:14:18):
And once we build things, we own them. And once we own them, we don't want to give them up, and we actually value them more highly than identical things that we don't own. And then there are issues of internal and external validation, which is really just a fancy way to say your identity. How do other people view you? How do you view yourself? Do you feel like you failed? And what that means is that there are so many biases against stopping that by the time you're actually even thinking about quitting, it's probably already past the time that you ought to have quit. But we'll still continue on until we know for sure we didn't have any choice, because here's the thing. When you walk away from something and someone's like, "Hey, why'd you stop that?" and you're like, "Oh, I had no choice", and you tell them everything that went wrong, nobody's going to question you.

(01:15:04):
They're going to be like, "Oh, well, it sounds like you put in your best effort." But if you walk away early, people are like, "What?" So just quickly, I'll tell you what I think is one of the best stories of this, which I've got in my pocket here. Let me pull it out before the end. So Stewart Butterfield creates a product called Glitch, and Glitch is a massively multiplayer online world-building cooperative game. He releases it, this is in the aughts, and it's like a huge hit with the critics. It's Monty Python meets Dr. Seuss, it's an incredible whatever. They're getting tons and tons of great word of mouth and PR without doing any paid marketing. They have incredible investors in Andreessen Horowitz and Accel, they have $6 million in the bank, and they have 5,000 diehard users, meaning users who play the game over 20 hours a week.

(01:16:04):
The issue is that customer acquisition was a beast: for every one person who was playing over 20 hours a week, there were between 95 and 99 people who came for five minutes and left. So obviously, this is a customer acquisition problem, which everybody knows. So they make an agreement in 2012 that they're going to do paid marketing, which they do for six weeks. And during those six weeks, their acquisition of new users is growing like 6 to 7% week over week, which is amazing. And at the end of those six weeks, this is November of 2012, that Monday morning, Stewart Butterfield writes a note to his investors and co-founders and says, "I woke up this morning with the dead certainty that Glitch was over."

(01:16:58):
Now, notice, nobody would do this in this case, right? But this is what happened. The issue is that you really have to ask, is this worthwhile or not? Would I start this today? That's a forecast of the future, right? And what he did was some back-of-the-envelope math, and he said, "Look, if we continue to acquire customers at the rate that we've been acquiring them, at the cost that we've been acquiring them, it's going to be 31 weeks until we break even." But that's an absurd assumption, because customer acquisition cost is going to go up. CAC has to rise because we're going to saturate the core gaming market, so it's got to rise. So what he realized at that point was that this was not a venture-scale business, and he was in this for a venture-scale business. So even though nobody else saw that he was supposed to shut it down, he saw that he was supposed to shut it down.
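
That back-of-the-envelope math can be made concrete with a small sketch. The numbers below are invented purely for illustration (the only figure from the story is the 31-week conclusion); what the sketch shows is how a break-even estimate built on a flat customer acquisition cost falls apart once CAC rises as the core market saturates.

```python
def weeks_to_break_even(fixed_burn, new_users_per_week, retention,
                        revenue_per_user_per_week, cac,
                        cac_growth_per_week=0.0, max_weeks=520):
    """Weeks until weekly revenue covers weekly spend (burn plus acquisition).

    All inputs are hypothetical. Retained users generate recurring revenue;
    cac_growth_per_week models acquisition getting pricier as the core market
    saturates. Returns None if break-even never arrives within max_weeks.
    """
    users = 0.0
    for week in range(1, max_weeks + 1):
        users = users * retention + new_users_per_week
        revenue = users * revenue_per_user_per_week
        spend = fixed_burn + new_users_per_week * cac
        if revenue >= spend:
            return week
        cac *= 1 + cac_growth_per_week
    return None

# With a flat CAC, these made-up inputs break even in about 35 weeks; let CAC
# creep up 4% a week (also made up) and break-even never arrives.
print(weeks_to_break_even(60_000, 2_000, 0.95, 3.00, 20))
print(weeks_to_break_even(60_000, 2_000, 0.95, 3.00, 20, cac_growth_per_week=0.04))
```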

(01:17:45):
And not only that, he saw that he was supposed to shut it down for his employees, who were working for equity, and he had now realized that the equity wasn't worth their time and that it wasn't fair to them to keep going with it. So he shuts it down. Obviously, that feels... Who does that? Right? And he will actually tell you that he knows he should have shut it down before the marketing push, but he needed the marketing push to prove to himself that he was seeing the future clearly. Now, the coda to this story, just quickly, is that two days later, he's like, "Well, I'm a startup guy. I want to start something", and he's got this internal communication tool that his team had been using to develop the product, and everybody loves it. And he says, "Actually, they really like that. Maybe that should be the next product."

(01:18:29):
So he goes and talks to the investors, and they roll their money over into that. And that thing, which had no name at the time, now gets a name, Searchable Log of All Conversation and Knowledge, which is Slack. And so I think the important thing to realize here is that we get so focused on, "But what about everything that I've put into it?" What we forget is that when you're doing something, there's not just the direct cost of doing something that's not worthwhile, but there's also the cost of not being able to devote your attention to other opportunities that might be available to you. And as smart as Stewart Butterfield is, he couldn't see Slack until he quit Glitch. And that is a true cost that he would've borne by continuing with Glitch. If he had continued with Glitch, Slack would not be something that we're all using today.

Lenny Rachitsky (01:19:21):
That is an incredibly beautiful way to end our conversation. I feel like I have at least a billion more questions to ask you and on the other hand, I feel like we've also helped a lot of people make much better decisions through this chat so I'm really thankful that you made time for this. Two last questions. Where can folks find you online if they want to learn about the stuff you're up to in case they want to work with you? And how can listeners be useful to you?

Annie Duke (01:19:43):
You can find me at annieduke.com if you're interested in working with me. I have a Substack, Thinking in Bets; please go check that out. I teach a class on maven.com twice a year on effective decision-making, so if you're interested in that, you can go to Maven and check it out. My next cohort at the moment is in September, although I might do one in May. I'm not sure. But in terms of how people can help me, I co-founded an organization called The Alliance for Decision Education. We're trying to bring the kinds of knowledge that we have about improving human decision-making in adults to K through 12 education, to make the world a better place. So I would love it if people could go look at that and, if you're interested in it, get the word out.

Lenny Rachitsky (01:20:26):
Amazing. So we'll link to all those things in the show notes. Annie, again, thank you so much for being here.

Annie Duke (01:20:32):
Thank you. Thank you so much, this was so fun.

Lenny Rachitsky (01:20:35):
Same. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.