Maxims for AI assisted coding

Shreyas Prakash

AI-assisted coding has this strange property of turning the 10x developer into a 100x one. For the rookie, it’s hit-or-miss, and you usually end up with a lot of slop and hallucinations. I’ve been building various tiny apps, scripts, and projects by vibe-coding them, and I seem to have got marginally better at it. Along the way, I’ve developed maxims that have proven effective in ‘taming the dragon’:

These principles are framework-agnostic and can be applied across different projects:

  • Run multiple instructions simultaneously by pressing CMD + T to open a new chat tab. If you’re on the Pro plan and terminal commands are slowing you down, this parallel approach saves valuable time. Some developers even open the same codebase in multiple Cursor windows to issue instructions in parallel.
  • When you’re new to an existing codebase, ask Cursor to create Mermaid diagrams of the codebase and chat with it. This helps you get familiar with the structure. If you have a GitHub repo you want to understand, replace ‘hub’ with ‘diagram’ in the URL (github.com → gitdiagram.com) to get a Mermaid visualisation.
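For instance, asking Cursor to “draw a Mermaid diagram of how the main modules depend on each other” tends to come back with something along these lines (the module names here are purely illustrative):

    graph TD
        CLI[cli.ts] --> Parser[parser.ts]
        CLI --> Config[config.ts]
        Parser --> AST[ast.ts]
        Renderer[renderer.ts] --> AST
        CLI --> Renderer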
  • Create AI-generated commits consistently to help retrace your steps if things go wrong. I’ve set up a keybinding in keybindings.json that stages everything, generates a commit message, commits, and syncs in one keystroke:
{
  "key": "ctrl+enter",
  "command": "runCommands",
  "args": {
    "commands": [
      { "command": "git.stageAll" },
      { "command": "cursor.generateGitCommitMessage" },
      { "command": "git.commitAll" },
      { "command": "git.sync" }
    ]
  }
}
  • Store Documentation Locally: Following Karpathy’s advice, store relevant documentation and example code in a .cursor/documentation directory for quick reference. It’s helpful to keep a variety of documents in this folder, such as the PRD, app-flow documents, the design system, the user schema, a styling document, etc.
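As an illustration, the folder might end up looking something like this (the file names are just examples of what I’d keep there, not a required layout):

    .cursor/documentation/
    ├── prd.md
    ├── app-flow.md
    ├── design-system.md
    ├── user-schema.md
    ├── styling.md
    └── api/
        └── stripe.md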
  • API integrations are often challenging with AI assistance due to hallucinations or outdated information. Use “@web” to fetch the most current documentation, then create dedicated markdown files for each API you’re using. For services like Stripe, either include the Stripe MCP server in Cursor or add code snippets from the latest documentation.
  • For simpler apps, focus on articulating your goals rather than providing precise instructions. This approach gives the AI room to suggest optimal solutions and frameworks it’s confident in.
  • Goal-Oriented Prompting: As one Reddit post wisely suggested:
Provide goals, not specific commands. 

Unless you can code, odds are you won't know the right instruction to give the agent. Give it a problem statement and outcomes. Then ask questions:

> How would you make this?
> What do you need from me?
> What are your blind spots?
  • At the end of a lengthy conversation in the Cursor chat window, I tend to use this prompt to summarise what just happened. I have to admit that I sometimes don’t really read all the details of the generated code, so this prompt helps summarise without shortening:
Your task is to create a detailed summary of the conversation so far, paying close attention to the user's explicit requests and your previous actions.
This summary should be thorough in capturing technical details, code patterns, and architectural decisions that would be essential for continuing development work without losing context.

Before providing your final summary, wrap your analysis in <analysis> tags to organize your thoughts and ensure you've covered all necessary points. In your analysis process:

1. Chronologically analyze each message and section of the conversation. For each section thoroughly identify:
   - The user's explicit requests and intents
   - Your approach to addressing the user's requests
   - Key decisions, technical concepts and code patterns
   - Specific details like file names, full code snippets, function signatures, file edits, etc
2. Double-check for technical accuracy and completeness, addressing each required element thoroughly.

Your summary should include the following sections:

1. Primary Request and Intent: Capture all of the user's explicit requests and intents in detail
2. Key Technical Concepts: List all important technical concepts, technologies, and frameworks discussed.
3. Files and Code Sections: Enumerate specific files and code sections examined, modified, or created. Pay special attention to the most recent messages and include full code snippets where applicable and include a summary of why this file read or edit is important.
4. Problem Solving: Document problems solved and any ongoing troubleshooting efforts.
5. Pending Tasks: Outline any pending tasks that you have explicitly been asked to work on.
6. Current Work: Describe in detail precisely what was being worked on immediately before this summary request, paying special attention to the most recent messages from both user and assistant. Include file names and code snippets where applicable.
7. Optional Next Step: List the next step that you will take that is related to the most recent work you were doing. IMPORTANT: ensure that this step is DIRECTLY in line with the user's explicit requests, and the task you were working on immediately before this summary request. If your last task was concluded, then only list next steps if they are explicitly in line with the user's request. Do not start on tangential requests without confirming with the user first.
   If there is a next step, include direct quotes from the most recent conversation showing exactly what task you were working on and where you left off. This should be verbatim to ensure there's no drift in task interpretation.

Here's an example of how your output should be structured:

<example>
<analysis>
[Your thought process, ensuring all points are covered thoroughly and accurately]
</analysis>

<summary>
1. Primary Request and Intent:
   [Detailed description]

2. Key Technical Concepts:
   - [Concept 1]
   - [Concept 2]
   - [...]

3. Files and Code Sections:
   - [File Name 1]
      - [Summary of why this file is important]
      - [Summary of the changes made to this file, if any]
      - [Important Code Snippet]
   - [File Name 2]
      - [Important Code Snippet]
   - [...]

4. Problem Solving:
   [Description of solved problems and ongoing troubleshooting]

5. Pending Tasks:
   - [Task 1]
   - [Task 2]
   - [...]

6. Current Work:
   [Precise description of current work]

7. Optional Next Step:
   [Optional Next step to take]

</summary>
</example>

Please provide your summary based on the conversation so far, following this structure and ensuring precision and thoroughness in your response.

There may be additional summarization instructions provided in the included context. If so, remember to follow these instructions when creating the above summary. Examples of instructions include:
<example>
## Compact Instructions
When summarizing the conversation focus on code changes and also remember the mistakes you made and how you fixed them.
</example>

<example>
# Summary instructions
When you are using compact - please focus on test output and code changes. Include file reads verbatim.
</example>
  • According to Anthropic’s whitepaper on agentic coding tools, positioning your most important goals at both the beginning and end of your prompts can be effective, as LLMs give more “attention” to these positions.
  • Use models with larger context windows (like Gemini 2.5 Pro or o3) when starting on a codebase that needs comprehensive understanding. For smaller, well-defined tasks, Claude 3.7 Sonnet or Claude 3.5 Sonnet can be more efficient.
  • For frontend implementations, v0.dev produces high-quality React/Tailwind components.
  • When words aren’t enough to convey your design intent, use Frame0 for low-fidelity mockups or Figma for higher-fidelity ones. That said, I’ve found I rarely need Figma these days—clear verbal direction is often sufficient.
  • Follow the Explore-Plan-Code Workflow: the TDD approach works best for side projects. Instead of jumping straight to code, have your AI assistant review the plan first, break it down into a prompt-plan.md file, and provide targeted instructions. Once the code for each section is generated, run rigorous tests to check that everything works as planned, and move forward only after the tests pass.
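A minimal sketch of what such a prompt-plan.md might contain; the sections are my own convention, so adapt them to the project:

    # prompt-plan.md

    ## Goal
    One paragraph describing the outcome, not the implementation.

    ## Steps
    1. Explore: summarise the relevant files and the data flow.
    2. Plan: propose an approach and list assumptions to confirm.
    3. Code: implement one section per prompt, smallest piece first.

    ## Tests
    - [ ] Unit tests for each new module pass
    - [ ] Manual check of the happy path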
  • If iterative prompting helped me fix a key issue, I want to store that learning in memory as a Cursor rule. To do this, I add this to the end of the chat conversation:
Please review our entire conversation in this thread, especially the debugging process we just went through.

Now create a new `.cursor/rules/*.mdc` rule that summarizes the mistake, the fix, and a reusable pattern that prevents future hallucinations like this in the same codebase.

Use this JSON format:

{
  "description": "One-line summary of the problem this rule prevents",
  "when": "Where or when this kind of bug would occur",
  "rule": "What to do instead, including any assumptions or validations needed",
  "examples": [
    {
      "before": "[Cursor's initial incorrect code]",
      "after": "[The working, correct code we ended up with]"
    }
  ],
  "tags": ["hallucination", "bugfix", "[add tool or domain name]"]
}

Only return valid JSON. Be concise and generalize the pattern so it applies anywhere in the codebase where similar logic is used.


Include a precise `when` clause scoping the rule to specific file paths or module names.

Include how to validate that the fix worked—such as a unit test, console output, or specific log check.

For rules of type “Agent requested”, also provide a “Description of the task this rule is helpful for” so that the agent can apply the rule accordingly.
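For illustration, a filled-in rule might look like the following; the file paths and helper names are hypothetical, not from a real codebase:

    {
      "description": "Prevent calls to the non-existent getUserById() helper; use fetchUser() from src/api/users.ts",
      "when": "Any code under src/api/ or components that load user data",
      "rule": "Import fetchUser from src/api/users.ts and check the returned object has an id field; validate the fix with the unit tests in tests/api/users.test.ts",
      "examples": [
        {
          "before": "const user = await getUserById(id)",
          "after": "const user = await fetchUser(id)"
        }
      ],
      "tags": ["hallucination", "bugfix", "users-api"]
    }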
  • Use .cursorignore to exclude files that don’t need to be indexed, such as /dist in JS projects.
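For a typical JS project, a minimal .cursorignore (it uses the same pattern syntax as .gitignore) could look like this:

    # build output and dependencies: nothing worth indexing here
    dist/
    build/
    node_modules/
    coverage/
    *.min.js
    # secrets should never reach the index
    .env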
  • Use gitingest to pull all the relevant files from a repo (filtered by extension or directory) into a single text dump, which you can then feed to ChatGPT and ask questions about.
  • Keep the context short; the longer the context, the more prone the AI is to hallucinations. It’s best to keep it to 10-15 conversation replies in the Cursor composer window. If you’re still not landing on a solution, open a new Cursor chat.
  • Claude, Gemini 2.5 Pro, etc. can help create a clear plan in markdown. Keep asking clarifying questions, use Socratic reasoning, and gradually improve the specs doc. Use multiple models to cover any gaps. In your chats, keep referring back to the @product-specs.md you created.
  • System prompt in “Rules for AI” in Cursor settings:
    • Keep answers short and concise.
    • Don’t be a sycophant who always answers agreeably; disagree if you think the reasoning is wrong.
    • Avoid unnecessary explanations.
    • Prioritize technical details over generic advice.
  • Use Context7, an MCP server for pulling in the latest documentation.
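Wiring Context7 into Cursor is a small mcp.json entry. At the time of writing the server is published as @upstash/context7-mcp, but check the Context7 README in case the package name or invocation has changed:

    {
      "mcpServers": {
        "context7": {
          "command": "npx",
          "args": ["-y", "@upstash/context7-mcp"]
        }
      }
    }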
  • The BrowserTools MCP can fully automate the analysis of browser logs (no more going to the console and copy-pasting logs into your chat).
