Breaking the fourth wall of an interview


Shreyas Prakash

During a peak London summer, an unusually large number of men who had been eating ice cream drowned.

Because so many of the drowned men had eaten ice cream, it was concluded that eating ice cream led to drowning.

This sounded absurd to the researchers investigating the curious link between ice cream and drowning. Upon closer investigation, however, it became evident that this was a classic case of a confounding variable at play: the actual factor driving both the increased ice cream consumption and the rise in drowning incidents was the hot summer weather.

People were more likely to eat ice cream and to swim during the summer, hence the correlation. This illustrates how confounding variables can mislead us in everyday life. A more relevant example is the hiring process, where a candidate excels in interviews but performs poorly on the job. Here, the ‘experience of attempting interviews’ acts as the confounding variable.
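The ice cream example can be made concrete with a small simulation. The numbers below are entirely hypothetical: temperature (the confounder) independently drives both ice cream sales and drownings, so the two look correlated even though neither causes the other. Holding the confounder roughly constant (looking only at days within a narrow temperature band) makes the spurious correlation largely disappear.

```python
import random

random.seed(42)

# Hypothetical data: hot weather (the confounder) independently
# drives both ice cream sales and drownings.
days = []
for _ in range(1000):
    temp = random.uniform(10, 35)                # daily temperature in °C
    ice_cream = 2 * temp + random.gauss(0, 5)    # sales rise with heat
    drownings = 0.5 * temp + random.gauss(0, 3)  # more swimming, more risk
    days.append((temp, ice_cream, drownings))

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

temps, sales, drowned = zip(*days)
print(f"ice cream vs drownings (all days): r = {pearson(sales, drowned):.2f}")

# Condition on the confounder: within a narrow temperature band,
# the correlation between ice cream and drownings largely vanishes.
band = [(s, d) for t, s, d in days if 24 <= t <= 26]
s2, d2 = zip(*band)
print(f"ice cream vs drownings (24-26 °C only): r = {pearson(s2, d2):.2f}")
```

The raw correlation comes out strongly positive, while the within-band correlation hovers near zero, which is exactly the signature of a confounder rather than a causal link.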

Candidates who frequently attend interviews become adept at navigating them, which doesn’t necessarily reflect their actual job performance.

These ‘interview hackers’ develop strong interviewing skills that overshadow their actual professional abilities.

Assessing quality candidates has become particularly hard in a post-GPT world, where LLMs are getting increasingly better at drafting product strategies, writing PRDs, and thinking critically through scenarios that product managers usually lead end-to-end.

Help me with an AI experiment: Which of these two is the better answer for the task of developing a product strategy? Vote 🅰️ (left) or 🅱️ (right) in the comments.

Bonus: Let me know if you think the one you chose was AI 🤖 or human 🙍. pic.twitter.com/YfcFMbY3QO

— Lenny Rachitsky (@lennysan) June 5, 2024

In this experiment, which Lenny Rachitsky conducted, most evaluators ranked the AI-generated strategy as the better one (despite knowing it was indeed generated by AI).

I recently read about applications where AI provided real-time answers to interviewers’ questions, essentially rigging the system.

I faced a similar situation while drafting an assignment to assess a product manager’s skills. It’s highly likely that a candidate is using some version of ChatGPT or Claude to draft better answers. How do we then cut through the noise and understand their thinking process?

To answer this question, I’ve been documenting internal meta-notes to help me do a better job of distinguishing genuine candidates from interview hackers. These meta-notes are oriented towards product managers, though some could also apply to other domains.

Breaking the fourth wall

I sometimes subtly probe the candidates to go deeper. Sometimes, they break the ‘fourth wall’ and provide a spiky point of view.

Say, for example, you ask the candidate — ‘Tell me more about how you prioritise your time with an example?’.

The candidate usually starts with a project they’ve taken up, the frameworks they used, and how they approached prioritisation. This is where conversations usually go, and perhaps that might be it.

But some candidates reflect more critically on their own process, and even talk about places or scenarios where that specific prioritisation framework didn’t work. In reality, there are no blanket solutions.

Sometimes, through this exercise, a spiky point of view emerges: one rooted in their experience, yet one others can still disagree with. It captures attention because it stands out in the sea of sameness, and it provides a valuable signal that this candidate has lessons rooted in practicality.

Trees and Branches

I’ve been able to identify top-notch talent by asking this: ‘What was the hardest problem you’ve encountered, and how did you approach it?’. While the candidate narrates, I use the metaphor of a tree to weave questions around their narration.

When candidates go deep into one particular topic (say, metrics), I zoom out a bit and ask them about outcomes. I don’t need to know what the leaf looks like; I just want to see the overall outline of the tree, its branches and twigs.

Whenever the candidate goes too deep, I nudge them to go a bit broader. When they go too broad, I nudge them to go one level deeper. While doing this, I also check whether the candidate takes a holistic approach to problem solving. For example, if they’re building an electronic health record system, how are they thinking about the legal, data privacy, and ethical implications of collecting patient phone numbers?

Not just trees: blood vessels, roots, and river deltas follow similar patterns. Branching is an efficient pattern in a lot of contexts, even for an act such as interviewing.

Listening to respond

For listening skills, I give candidates constructive criticism at the end of the interview and watch how they respond. If they’re listening with an intention to learn more, I see that as a good sign. If they’re listening to respond, or even worse, to defend, that’s a red flag.

Narratives on lived experiences

I also frame questions slightly differently. Instead of asking ‘How should a product manager involve stakeholders?’, I reframe it as ‘Describe a challenging situation involving difficult stakeholders, and how you navigated it’.

I’ve seen the answers shift from theory to lived experience. Lived experience is very difficult to fake, and the more interviews one conducts, the better one gets at spotting fabricated narrations. They simply don’t pass the smell test.

Stretching to extremes

Another interview technique I recently adopted involves stretching an idea to its extremes. When a candidate describes a decision they previously made, I extend the scenario to extreme conditions. For instance:

  • What if the data is insufficient?
  • What if there are no insights from interviews on this process?
  • What if there is no clear roadmap?

By posing these hypothetical extremes, we can gain insight into the candidate’s internal decision-making model. This method mirrors a Socratic dialogue, primarily driven by ‘What if…’ questions.

The effectiveness of this technique lies in its ability to move beyond conventional responses. Many candidates are familiar with standard practices and may offer predictable answers when asked about them. However, critical thinking often emerges in response to extreme situations, outliers, and edge cases.

This approach helps identify candidates who can think critically and adaptively in unconventional scenarios, and it filters the ‘interview hackers’ out of the mix.
