Andrey Markin

AI-Powered Development: A Deep Dive into Cursor's Features and Workflow

AI Development · LLMs · Vibe Coding
Published on May 20, 2025
I've noticed something interesting lately. My little bubble of the developer world is constantly tweeting about AI tools – everyone's using them to speed things up, explore ideas, learn or just hate and trash on them. But then I zoom out a bit and see that's not the case everywhere. There are still tons of teams and developers out there coding away like it's 2021, without any AI Agents.
Tweet by rez0: I saw a guy coding today. No cursor. No windsurf. No chatgpt. He just sat there. Typing code manually. Like a psychopath.

Tweet by rez0

And maybe you're already using something like Cursor, but you're curious if there's a better way to get the AI to actually do what you want, instead of just guessing. If either of those sounds like you, stick around. I'm going to walk you through how I use Cursor and other tools to make coding a bit less of a solo slog and a bit more of a collaborative effort with an (often quirky) AI partner.

Understanding AI Agents in Development: Strengths and Weaknesses

Before diving into how to use tools like Cursor, it's crucial to understand what AI models, the engines driving these tools, are actually good at and where they fall short in the context of software development. Thinking of them as powerful but imperfect assistants, rather than all-knowing oracles, will dramatically improve your experience and help you identify when and how to best leverage them.

Based on my experience and the insights from Theo's perspective, here's a breakdown:

Strengths of AI Agents in Development:

  • Handling Simple, Specific Requests Quickly: AI excels at taking a straightforward instruction and generating the corresponding code or information. Need an FFmpeg command with specific parameters? Want a basic for loop? These are tasks where AI can often provide the correct, ready-to-use solution in moments, saving you a trip to documentation or a search engine, especially when the LLM is already inside your IDE.
  • Boilerplate and Repetitive Code Generation: Writing standard code structures, component templates, or data models based on a clear definition. For example, in a simple marketplace where I already had components and functions for selling items, I recently needed to implement the same thing for showing user orders. I passed the Agent all the relevant code from Items plus the schema definition for orders, and within 5 minutes I had a working PR from the agent with the new feature.
  • Accelerating Initial Implementation Velocity: AI can help you get code written much faster than typing alone. As Theo mentioned, it can help developers feel like they're coding "as fast as they think," a feeling often only achieved by highly experienced developers with mastery of their tools. This allows for rapid prototyping and exploring ideas.
  • Providing Quick Explanations and Code Summaries: AI can act as a knowledge base or researcher thanks to tool calling, explaining how specific functions or code blocks work within a given context. This is useful for onboarding to a new codebase or understanding unfamiliar patterns.
  • Working with Structured and Type-Safe Code: When a codebase follows clear patterns, uses strong type safety (like TypeScript), and has well-defined inputs and outputs (like functional programming or tRPC), AI models are significantly better at understanding the code, generating correct suggestions, and identifying potential errors.
  • Performing Simple Refactoring and Transformations: Tasks like changing variable names, converting code structures (e.g., loop to map), or applying standard patterns within a limited scope are well within the AI's capabilities, especially when guided by tools like Cursor's Inline Edit.
  • Automating Tedious Tasks: Beyond code generation, AI can assist with mundane but necessary tasks like writing basic tests based on requirements, or even generating initial documentation drafts.
Tweet by r_marked: 'Meanwhile Claude 3.7', with an image of a horse drawn in careful detail except for a barely sketched head; then the caption 'Claude, make the head more detailed', and a second image with a detailed horse head on a dog's torso.

Tweet by r_marked

Weaknesses of AI Agents in Development:

  • Lack of Deep, Niche Understanding: AI models are trained on vast amounts of data, but they often struggle with highly specific or obscure information. They won't be reliable for debugging an error code from an undocumented API, resolving quirky browser behaviors only found through deep investigation, or understanding the nuances of a legacy system.
  • Difficulty with Complex, Interconnected Logic: While good at generating isolated pieces of code, AI struggles to grasp intricate relationships between different parts of a large application or navigate complex dependencies across numerous files and directories (especially in large monorepos). They often miss subtle side effects or architectural implications.
  • Hallucination and Fabrication: AI models can confidently generate code or information that looks correct but is entirely made up or factually incorrect. This is particularly prevalent in dynamically typed languages where the AI isn't immediately corrected by a type checker. They can invent function names, API endpoints, or data structures that don't exist.
  • Limited Understanding of High-Level Architecture and Intent: The AI's understanding of your overall project goals and architectural decisions is mostly unreliable without strong instructions. Complex agents like V0, Bolt, and Lovable work around this by shipping a predefined system architecture in the agent's system prompt. We can do the same – more on this in the Project Rules section.
  • Struggling with Undocumented or Poorly Structured Code: AI relies heavily on the patterns and explicit information it can read. Codebases that lack clear structure, comments, or type safety are much harder for AI to work with effectively, increasing the likelihood of errors and useless suggestions.
  • Potential for Generating Inefficient or Suboptimal Code: While AI can generate working code, it may not always be the most performant, maintainable, or idiomatic solution for a given problem or language. Human review is essential to ensure quality. But sometimes we need speed first, then quality.
  • Inability to Independently Verify Correctness Beyond Basic Checks: AI doesn't truly "understand" whether code works; it relies on tools like type checkers, linters, and tests, or on your feedback. So if you want to spend less time giving the agent feedback, make sure there are automated checks in place to do it for you.

AI agents aren't a magic bullet for every development challenge and have clear limitations. The good news is that we are not powerless in mitigating their weaknesses – we're engineers after all. Many of the very things AI struggles with – complex context, understanding intent, identifying logical flaws – are precisely the areas where human developers excel. And critically, we have a growing array of tools and techniques to help the AI bridge these gaps.

In the following sections, I'll dive deeper into practical strategies, like project rules, linting and type safety, prompting, Cursor's modes, and MCPs – all designed to hold the necessary context for the AI, provide guardrails against errors, and ultimately make these agents powerful, reliable collaborators in your development process.

How to use Cursor to help you code – All ways to interact with models

Now, let's talk about Cursor specifically. It gives you a few different ways to interact with its underlying AI models, each suited to a different kind of task, and you need to know all four before we can move on. If you're already an experienced user, skip ahead to Manual Modes or Background Agent.

Tab: Ghost Autocomplete

Source: Cursor Docs

Tab model – that ghost text autocomplete that pops up as you type – is the feature I miss instantly when I switch to a different editor. If you think Copilot does the same: no, Cursor is better (at least in my opinion at the time of writing). They actually bought a startup called Supermaven to build this out. Their autocomplete feels like it's actually reading my mind sometimes.

Of course, it's not magic. Sometimes it misses. If it does, give it a little nudge: sometimes, I'll drop a quick comment like // TODO: add useQuery fetching right before where I need the code. That gives the Tab model a hint. If it still doesn't quite get it, I'll start typing the function name or the first line of the loop, and then it usually clicks and offers the perfect suggestion.
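
Here's a sketch of what that nudge looks like in practice – `fetchOrders`, `userId`, and the `Order` type are hypothetical stand-ins, and everything below the TODO comment is the kind of completion Tab tends to offer:

```typescript
import { useQuery } from "@tanstack/react-query";

type Order = { id: string; totalCents: number };
declare const userId: string; // hypothetical stand-in
declare function fetchOrders(id: string): Promise<Order[]>; // hypothetical stand-in

// TODO: add useQuery fetching for the current user's orders
const { data: orders, isLoading } = useQuery({
  queryKey: ["orders", userId],
  queryFn: () => fetchOrders(userId),
});
```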

Tweet by dmytro @pqoqubbw: 'no way @cursor_ai tried to rickroll me 💀'.

Tweet by dmytro @pqoqubbw

But yeah, sometimes it completely misses the mark and suggests something wild or just a bunch of repetitive code I don't want, especially in text/markdown files. When that happens, I just ignore it or hit escape. You can change the model it uses or even disable it in the bottom right corner of the IDE UI, if it annoys you, but honestly? I just stick with the default and deal with the occasional weird suggestion. It's usually so good that the misses are forgivable.

If you still think Copilot is better in suggestions, you can change the Tab model from Cursor to Copilot.

Inline Edit (⌘K)

Cursor inline edit showcase

Think of Inline Edit – ⌘K on Mac, Ctrl+K on Windows/Linux – as your tool for surgical strikes. You select a specific block of code – maybe a function, a loop, or just a few lines – hit the shortcut, and then tell the AI exactly what you want to change within that selection.

This is perfect for those small, slightly annoying tasks that take you out of flow. Need to change the key casing on an object? Refactor a small helper function? Convert a loop into a map? Cmd+K is your friend.

You still need to be smart about what you select. Sometimes, you might need to include a couple of lines around the code you want to change so the AI has enough context to understand the surrounding logic, but generally, you select just the relevant bit. Because you're limiting the scope and the AI's task, this mode is usually super fast.
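
As an illustration, here's the kind of before/after you'd get from the "convert a loop into a map" request (the `users` array is a hypothetical stand-in):

```typescript
type User = { name: string };
declare const users: User[]; // hypothetical stand-in

// Before: select this block, hit ⌘K, and ask "convert the loop into a map"
const namesBefore: string[] = [];
for (const user of users) {
  namesBefore.push(user.name.toUpperCase());
}

// After: the kind of rewrite Inline Edit typically produces
const namesAfter = users.map((user) => user.name.toUpperCase());
```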

Chat (⌘I)

Cmd+I (Command+I / Control+I) is the full agent mode where you can provide a significant amount of context and give the AI more complex tasks or ask broader questions.

Source: Cursor Docs

Contexts in Cursor

The power here comes from what you feed it before you even ask the question. Cursor lets you add different types of context: @Files, @Folders, @Code, @Docs, @Git (diff), @Web (e.g. get latest libraries), @Link (e.g. specific article, or git repo), @Lint Errors, @Cursor Rules, @Terminals, and @Past Chats.

Chat also has /commands: Recent Context, Generate Cursor Rules, Iterate on Lints, Add Open Files to Context, Add Active Files to Context.

Mixing and matching these is how you really guide the AI. Adding an error message and the relevant file dramatically increases the chances of a useful fix suggestion.
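
For example, a debugging prompt mixing several context types might look like this (the file path and wording are hypothetical):

```
@Files src/api/client.ts @Lint Errors @Terminals

The request to /api/orders fails with the 401 shown in the terminal output.
Fix the auth header handling in the client and resolve the remaining lint errors.
```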

Cursor Modes

In the chat UI, you can choose different modes to run. Modes are defined by the models powering the agent, the tools the agent can use, and the ways of applying changes and instructions.

Agent Mode

For me, this is the default go-to mode and the most powerful way to interact with an LLM in Cursor. The Agent has access to all tools, can be powered by any model you've enabled in settings, and can execute a multi-step plan.

The ideal way to use it: provide a clear task and mention the relevant contexts (usually files and terminal). For complex tasks, I'd advise also providing a tech architecture description, either directly in chat or via cursor rules (more about rules later).

You can also break down the way you handle execution. For example, first you ask the agent to describe some part of your application, then you give it a problem and ask for a strategy to handle this problem, and then ask it to execute changes into your codebase.
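
A hypothetical three-step session along those lines:

```
1. "Describe how the checkout flow works: @Folders src/features/checkout"
2. "Orders with discounts get double-charged. Propose a strategy to fix this – no code yet."
3. "The strategy looks good. Implement steps 1-3 and run the tests."
```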

Tweet by @0xgaut: 'vibe coding and pushing to prod'

Tweet by gaut

Ask Mode

Basically, this is read-only mode, where you can explore the codebase and ask questions about feature implementations, function usage, architecture, and so on. I often find myself using this mode in parallel with Agent. For example, if the thing I'm working on touches parts of the codebase I'm not familiar with, I can use Ask Mode to gather the relevant files and summarize a feature my colleague implemented, so I can navigate the codebase more easily – and then return to the Agent with new context and new tasks based on what I've learned.

Manual mode

Here, the Agent has no access to Terminal or codebase search, so you would use it for precise changes to the files you provided to the Agent. I think of it as an extended version of ⌘K (inline edit) with a broader context.

Create your own Cursor Mode

Cursor Settings (⌘+Shift+J) → Features → Chat → Custom modes

After playing with default modes, you can expand Agent capabilities by building your own, where you can define the tools the agent has access to and custom instructions you give.

There is a resource where you can see examples and settings for custom modes like Plan, Audit, Teach, Architect, and others.

Cursor Background Agent beta

This is one of the newer and potentially most exciting features. The background agent is like giving the AI a task and letting it go off and work on it asynchronously. You tell it what you want done – maybe "Implement the user profile page based on the plan.md file" – and it works in the cloud, often creating a separate branch with its proposed changes.

The workflow looks something like this: You brief the agent, it goes off and codes, proposes a pull request (or branch with changes), you review it, maybe leave comments telling it what needs fixing or changing, it goes back and makes the revisions, you review again, and once you're happy, you merge it.

Now, imagine scaling that. What if you could launch 10 such agents on different tasks? Or write a higher-level agent that reads user stories from Jira and assigns them to these coding agents? Suddenly, your role shifts. You're spending less time typing boilerplate and more time designing the architecture, writing clear instructions (prompt engineering!), and doing code reviews. It's a glimpse into a potentially wild future of development.

Given the frequent updates and the amount of helpful tips and practical examples provided in the official documentation, I strongly recommend reading Cursor's documentation on these modes. This will ensure you have the most up-to-date information and a complete understanding of how to apply these tools effectively in your workflow.

How to help agents be better: hold context, see their errors, and so on

Even the best AI agent isn't psychic. Remember how I said they struggle with complex context and hallucinate? The biggest thing you can do to make them useful is to structure your project and workflow in a way that constantly feeds them accurate information and provides guardrails against their mistakes.

This whole section is about making your codebase AI-friendly. We want to hold context for them and make it screamingly obvious when they mess up.

Linting, typesafety, and other ways for the model to be better

If you're coding in languages known for their... flexibility, you hopefully know how easy it is to introduce subtle errors. That flexibility also gives AI models a huge playground to hallucinate. They can just make up object properties or function calls that don't exist.

That's exactly why things like strict types, linters, and formatters aren't just good practices for human developers; they're absolutely essential tools for keeping your AI agent in check.

  • TypeScript: If you're using TypeScript, make your tsconfig.json strict (see the sketch after this list). This forces you (and the AI) to be explicit about data shapes.
  • ESLint: Configure ESLint with strict rules. The AI will often try to follow these rules, and when it doesn't, the errors make it obvious where it went wrong, and it will retry.
  • Python: Use tools like Ruff and Ty for linting and type checking.
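
As a starting point, here's a minimal sketch of the strictness-related compiler options – the exact flag selection is a matter of taste:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true
  }
}
```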

Here are a couple of quick examples of how types help guide the model:

1. Define a variable that can only be one of a few specific values:

```typescript
type LLMModelName = "gpt-4o" | "claude-3-opus" | "gemini-1.5-flash";
interface APIResponse {
  model: LLMModelName;
  response: string;
}
// When the AI is working with an APIResponse object, it knows 'model' can
// ONLY be one of the three LLMModelName literals. It can't hallucinate 'gpt-3'.
```

2. Define a variable that's mostly one of a few specific values but allows flexibility, while still giving autocomplete:

```typescript
type SuggestedModelName = "gpt-4o-mini" | "claude-3-haiku" | "gemini-2.5-pro" | (string & {});
interface ModelConfig {
  preferredModel: SuggestedModelName;
  // ...other config
}
// Here, the AI (and you) get autocomplete suggestions for 'gpt-4o-mini', 'claude-3-haiku', 'gemini-2.5-pro',
// but you *can* still pass any other string like 'gpt-4.1'. The type system helps,
// but doesn't completely box you in.
```

Matt Pocock had a great tweet about this pattern

ORM (Drizzle / Prisma / Supabase Declarative Schema)

Another fantastic way to provide structure and type safety is by using an ORM like Drizzle, Prisma, or leveraging Supabase's Declarative Schema.

When you use an ORM, your database schema (tables, columns, relationships) is defined in code, often with strong type safety. You can provide the path to your schema file, and the agent can read the actual, correct definition of your data structure. It knows the table names, the column names, and their types.

Without an ORM, you could use an MCP tool to let the AI query the database schema directly (more on MCPs later), but that adds a step and uses more tokens. It also requires a specific rule telling the agent to use the MCP to check data structures, and you wouldn't get an explicit error in the IDE that the Agent can see and iterate on. Using an ORM that generates type-safe code makes this information available and structured for the agent.
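
As a small illustration, a Drizzle schema file like this (a hypothetical `orders` table) hands the agent the exact table name, column names, and types:

```typescript
import { pgTable, serial, integer, text, timestamp } from "drizzle-orm/pg-core";

// The agent can read this file and learn the exact shape of the table
// instead of guessing column names or types.
export const orders = pgTable("orders", {
  id: serial("id").primaryKey(),
  userId: integer("user_id").notNull(),
  status: text("status", { enum: ["pending", "paid", "shipped"] }).notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```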

tRPC

Taking the idea of type safety further, consider using something like tRPC for your API layer. Instead of just making generic HTTP requests with potentially undefined request/response shapes, tRPC lets you define strict types for your API endpoints.

This means the AI working on your frontend knows exactly what kind of data to send to a specific backend endpoint, and exactly what kind of data it will get back. It eliminates a big source of hallucination where the AI might invent payload structures or expect different return types. The type definitions serve as a clear contract that the AI can read and understand.
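
A minimal sketch of what that contract looks like – the `getOrder` endpoint is hypothetical:

```typescript
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

// Input and output shapes are explicit, so a frontend agent can't invent
// payload fields – the contract is spelled out in the types.
export const appRouter = t.router({
  getOrder: t.procedure
    .input(z.object({ orderId: z.string() }))
    .query(({ input }) => {
      return { id: input.orderId, status: "paid" as const, totalCents: 1999 };
    }),
});

export type AppRouter = typeof appRouter;
```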

Project Structure

Cursor has a setting you can toggle that will automatically pass the project structure (directory tree, file names) to the model. This gives the AI a high-level map of your project, helping it understand where different types of files live and generally how things are organized.

Cursor settings to pass project structure to AI Coding Agent

An alternative is using a tool like repo-to-text, which generates a text file representing your project structure. You could run this manually and provide the output as context, or even set up a Cursor rule (more on those in a bit!) that tells the AI it can run the repo-to-text command itself to get the structure.

Giving the AI this structural context helps prevent it from doing things like trying to write a new authentication hook when you already have one perfectly defined in ./src/auth/useAuth.ts. Claude 3.7, in particular, seems to love writing new implementations of things that already exist if you don't give it enough context.

Plan and Intent

This is huge, especially for complex tasks. When you're tackling something significant – investigating a tricky bug, implementing a new feature pipeline, refactoring a major part of the app – you need a plan. And crucially, the AI needs access to that plan.

I store these plans in markdown files (.md). These are great for general architectural decisions, high-level feature descriptions, or process outlines. Think docs/payment-processing.md or lib/ai-core/plan.md.

My markdown plans usually include:

  • Goal/Feature Description: What are we trying to achieve?
  • Key Decisions: Important choices or constraints made while following the plan – e.g., you decided to try a library that differs from the rest of the codebase (sometimes I move these into cursor rules .mdc files).
  • Action Plan: A step-by-step list of what needs to be done, often with numbered high-level steps and checkboxes `- [ ]` for sub-steps, so I can tell the Agent "we need to continue with step 4..."

This is where you can get really granular in guiding the AI. In my action plan steps, I'll often include the exact path to the file that the step relates to. For example:

```markdown
- [ ] Implement the data fetching hook: `src/hooks/useUserData.ts`
- [ ] Create the user profile component: `src/components/UserProfile.tsx`
- [ ] Add routing for the profile page: `src/app/router.tsx`
```

When I give this plan to the agent (by adding the markdown file as context), the AI doesn't have to guess which file to work on for each step. It has the direct path! Sometimes I'll add a comment at the top of a new file like // TODO: Implement user data fetching hook here based on plan.md and provide the file path in the plan. Then the AI knows exactly where to go and what the high-level goal for that file is.

As the Agent and I complete some of these steps and sub-steps, we not only check the checkbox in the markdown file, but also append new file paths that are related to those completed steps. This keeps the plan as a living document that reflects the current state of the project and the files involved.

example of plan.md file: smart-webp-resize/project.md

Test Driven Development

While the traditional practice of Test-Driven Development (TDD) isn't suitable for every project or team, especially in the fast-moving startup world where user feedback is the major test, it offers a valuable mental model for working with AI coding agents. The core idea isn't necessarily about strict TDD methodology, but about establishing effective feedback loops.

Think of it this way: how can we provide the agent with automated checks and criteria so it can iterate and refine its output with less manual intervention from us? TDD provides a clear example.

If you've already laid out your plan and architectural details, you can guide the agent to write tests first. You might point it to your plan file and the location for the planned code, and ask it to generate unit tests based on the requirements. Once you've reviewed and approved these tests as accurate reflections of your needs, you can then instruct the agent to implement the code, ensuring it passes those tests. The agent can then run the tests, see the results, and automatically iterate until the tests pass. This automates a significant part of the "check and fix" cycle.
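
For instance, you might have the agent draft tests like these first (a hypothetical `formatPrice` helper, using Vitest), review them, and only then let it implement the function and iterate until they pass:

```typescript
import { describe, expect, it } from "vitest";
// `formatPrice` is a hypothetical helper from the plan; the agent
// implements it only after these tests are reviewed and approved.
import { formatPrice } from "../src/lib/formatPrice";

describe("formatPrice", () => {
  it("formats cents as dollars", () => {
    expect(formatPrice(1999)).toBe("$19.99");
  });

  it("rejects negative amounts", () => {
    expect(() => formatPrice(-1)).toThrow();
  });
});
```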

But the concept extends beyond formal TDD. Consider my brother's simple script that checks the SEO health of articles on his website (video embedded below) – analyzing title and description length, Open Graph tags, and so on. This script, in essence, acts as an automated feedback mechanism. When he uses an AI agent to help write and publish articles, the agent is configured to run this script after an iteration. The script provides "test results" in the form of SEO errors or warnings. The agent can then see this feedback and make necessary adjustments to the article's metadata or content before my brother even needs to review it in detail. This significantly reduces the back-and-forth and allows the agent to produce better results autonomously.

When you're thinking about using AI agents for coding, the key takeaway from the TDD concept is to consider: What kind of automated feedback can you provide the agent so it can iterate effectively and deliver higher-quality results with less manual guidance from you? Identifying and implementing these feedback loops is crucial for leveraging the full power of AI in your development workflow.

Cursor Rules

If providing context via files and plans is like giving the AI a map and instructions for a specific trip, Cursor rules are like teaching it the general way of thinking and patterns for your project. They are a major way to inject significant, general context and direct agent behavior.

Cursor currently has two main types of rules: user rules and project rules. The old .cursorrules file is deprecated, so I won't dive into it – but you can still reuse examples written for .cursorrules; just put them into project rules .mdc files.

User Rules

These are set up in your Cursor application settings (⌘+Shift+J). They apply globally to any codebase you open and affect all interactions: Tab model, Inline edit, and all modes in Chat.

User rules are great for defining your personal preferences as a developer and the kind of output you generally want from the AI, no matter which project you're in. Things like coding style, preferred languages for comments, or general development principles.

Just like with good prompt engineering, using a structured format (like Markdown, which Cursor rules use) makes them easier to read and maintain.

Here are the user rules I currently use:

```markdown
## When writing code

### All languages

- In code, write comments ONLY in English.
- Functional programming approach where applicable.
- Make minimal changes to files - modify only what's necessary to complete the task. While ensuring the solution is complete, aim for the smallest possible number of line changes to maintain code clarity and minimize potential issues.
- Be very strict with types. Use explicit types unless inference is trivial.
- Use clean functions and components. Avoid hidden side effects where possible.
- Follow all linter rules.
- Follow best practices for the specific language/framework.
- Consider these software development principles:
  - TDD (Test-Driven Development): If tests exist, try to write code that passes them. If implementing new functionality, consider drafting tests first.
  - DRY (Don't Repeat Yourself): Identify and extract reusable logic or components.
  - KISS (Keep It Simple, Stupid): Prefer the simplest solution that meets the requirements.
  - YAGNI (You Aren't Gonna Need It): Don't add complexity or functionality that isn't strictly needed for the current task.

## Terminal

- Start all terminal commands with `pwd` and `ls` as separate calls to confirm the current directory context before executing the main command.
- For Node.js and frontend packages, use `pnpm` unless specified otherwise.
- For Python packages, use `uv` unless specified otherwise.
- If the task requires the current date (e.g., for a filename or log entry), execute the `date` command in the terminal *before* using the date, to get the precise current timestamp.

## Open AI

- The format `response_format={"type": "json_schema","json_schema": {"name": "name_schema","schema": schema_here}}` is a correct way to specify JSON schema output with OpenAI API.
- Model `gpt-4o` is a valid model name. (Note: Specific model availability can change but stop rewriting it already)

## Git

- Format commit messages following the conventional commits style, e.g., "feat: add user profile page" or "fix: correct model name in config".
```
Tweet by @theo: 'My body is a machine that turns clean code into dirty code'

Tweet by Theo

You can see I have a section for general coding rules, terminal behavior (super helpful!), and specifics for certain tools like OpenAI and Git. I used to have language-specific sections here, but I found it's better to put language or framework-specific guidance into project rules that I can reuse across relevant projects. Which brings us to...

Project Rules

This is where things get really powerful for team collaboration and providing context specific to a single codebase. Project rules live in a .cursor/rules directory at the root of your project.

The major advantages here are:

  1. Version Control: Since they're files in your project, they're checked into Git. The whole team can see, use, and contribute to the rules. You can track changes over time.
  2. Shareable: Any developer on your team using Cursor will automatically pick up these rules when they open the project. It helps ensure everyone's AI assistant is on the same page regarding project standards and context.
  3. File-Specific Context: This is the killer feature. You can define rules that only apply when the AI is working on specific files or directories.

For example, you could have a rule file like .cursor/rules/payment-processing.mdc with content like:

```markdown
---
description: Payment Service Structure and Architecture
globs: src/features/payments/*
alwaysApply: false
---

## When working in payments code

- All monetary values must be handled as cents (integers) to avoid floating point errors.
- Use the `stripe-node` library for all Stripe API interactions. Do NOT make direct HTTP calls.
- Ensure all API calls originating from this directory have robust error handling and logging.
- Be extremely cautious with personal payment information. Do not log raw card details.
```

You would configure this rule to apply only to files within the src/features/payments directory. Now, whenever you or a teammate asks the AI to modify code in that folder, these specific, crucial rules about handling money and using the Stripe library are automatically included as context for the AI.

This is where you can store all sorts of project-specific wisdom that the AI needs:

  • Framework-specific conventions: "When creating Next.js API routes, use the app directory structure".
  • Guidelines for interacting with specific internal libraries or services.
  • Key architectural constraints: "All data fetching must go through the api module".
  • Even pointers to important documentation: "Refer to docs/auth-flows.md for login logic".

Cursor Project Rules Examples

Here are some examples of project rules I've written for various projects. You don't need to copy them, but you can get ideas on how to structure your own.

Next.js
```markdown
---
description: 
globs: *.tsx,**.tsx,*.ts
alwaysApply: false
---
# Next.js 15.3 Rules

## React Rules

1. React Compiler
	- Use React Compiler for optimized performance
	- Leverage automatic memoization provided by the compiler
	- Avoid manual memoization (useMemo, useCallback) when possible

2. Component Props
	- Use readonly for passing props
	- Define prop types explicitly using TypeScript interfaces
	- Destructure props at the component level for clarity

3. Function Organization
	- Keep function logic outside of components
	- Call external functions from components instead of defining within them
	- Use custom hooks for reusable logic

4. State Management
	- Prefer React's built-in state management (useState, useReducer)
	- Use Context API for global state that doesn't change frequently
	- Consider server components where possible to reduce client-side state

## Server Components

1. Default to Server Components Architecture
	- Every component is a Server Component by default
	- Only add `'use client'` when necessary for client-side interactivity
	- Keep as much logic as possible in Server Components for better performance

2. Component Organization
	- Place shared components in `components/` directory
	- Use `components/ui/` for reusable UI components and Shadcn
	- Use `components/app/` for app-specific components

3. Data Fetching
	- Fetch data directly in Server Components using `fetch()` with no external libraries
	- Take advantage of automatic request deduplication
	- Use `revalidatePath()` or `revalidateTag()` for data revalidation
	- Cache data with `cache()` function when appropriate
	- Use `next/cache` for fine-grained cache control
	- Make sure to write optimal data requests: when fetching from the DB, use promises without blocking, and run them in parallel

4. Server/Client Boundary
	- Keep "use client" components as thin as possible
	- Pass data down from Server Components to Client Components
	- Avoid passing unnecessary data across the boundary

## Server Actions

1. Form Handling
	- Use Server Actions with `action` prop on forms
	- Define Server Actions using the `'use server'` directive
	- Always validate data server-side using a schema validation library like Zod

2. Action Organization
	- Group related actions in separate files with `'use server'` at the top
	- Remember that `'use server'` basically turns functions into something like an API, so we need to treat it properly: data validation, authorization, and authentication where needed
	- Place actions in `app/actions/` directory or co-locate with related components

3. Error Handling
	- Use try/catch blocks in Server Actions
	- Return structured responses with `{ success, data, error }` pattern
	- Use `redirect()` for redirects after successful actions

4. Optimistic Updates
	- Implement optimistic updates with `useOptimistic` hook
	- Provide realistic client-side UI updates before server response

## Route Handlers
1. API Routes
- Server Actions by default: try not to use API routes;
- Define API routes in `app/api/` directory;
- Use Route Handlers only when necessary (direct database access usually preferred)
- Return properly formatted Response objects

## Data Mutation
1. Prefer Server Actions over API Routes for data mutations
2. Use transactions when multiple database operations are required
3. Implement proper error handling and validation

## Component Structure
1. Keep component files focused on a single responsibility
2. Export named components instead of default exports
3. Group related components in directories with appropriate organization

## TypeScript
1. Use TypeScript for all components and actions
2. Define types for form data and action responses
3. Use Zod for runtime validation alongside TypeScript

## Performance
1. Utilize partial prerendering
2. Use dynamic imports for code splitting when appropriate
3. Implement proper caching strategies with cache tags and revalidation
4. Minimize client-side JavaScript by keeping components as Server Components when possible

## SEO and Metadata
1. Use Metadata API in layout.tsx or page.tsx files
2. Implement dynamic metadata using generateMetadata function
3. Set proper Open Graph and Twitter card metadata

## Good patterns
1. After each click the user should get an immediate UI update: use toast from sonner / loading.tsx / suspense / optimistic updates or other related patterns so the app feels good

## Additional
1. In Next.js 15+, we need to await `cookies()`
```

Architecture Example
```markdown
---
description: 
globs: **/platform/**
alwaysApply: false
---
# Architecture for platform Interfaces: Core Principles

## High-Level Architecture

- **Server-First Approach**: Leverage server components for data fetching and initial rendering
- **URL-Driven State**: Store filter/sort/pagination state in URL parameters
- **Hybrid Rendering**: Server components for data-heavy operations, client components for interactivity
- **Progressive Enhancement**: Works without JS, enhanced with client-side interactions

## Key Components Structure

- **Page Component (Server)**: 
  - Handles authentication/authorization
  - Extracts URL search parameters
  - Performs initial data fetching with filters
  - Passes data to client components

- **Data Table (Client)**:
  - Displays fetched data
  - Handles row interactions (click, select)
  - Manages pagination UI

- **Search/Filter Controls (Client)**:
  - Manages filter state with debounced inputs
  - Updates URL parameters on filter changes
  - Uses dropdowns for complex filtering options

- **Server Actions**:
  - Handle database queries with filtering/sorting/pagination
  - Return only necessary data to minimize payload size

## Data Flow Pattern

1. User visits page → Server fetches initial data based on URL params
2. User changes filter → Client updates URL → Server re-renders with new data
3. User interacts with table → Client handles interaction locally when possible

## Performance Optimizations

- **Debounced Search**: Prevent excessive server calls during typing
- **Server-Side Filtering**: Apply all filters at the database level
- **Pagination**: Limit data transfer to current page only
- **React Query**: Cache reference data (countries, categories, etc.)
- **Suspense Boundaries**: Allow partial loading of UI components

## URL Parameter Convention

- `query`: Text search term
- `page`: Current page number (1-based)
- `pageSize`: Items per page
- `sort`: Field to sort by
- `direction`: Sort direction (`asc`/`desc`)
- Entity-specific filters (e.g., `type`, `country`, `status`)

## Authentication & Authorization

- Server-side auth check before data fetching
- Role-based access control using helper functions
- Redirect unauthorized users to appropriate pages

## Implementation Checklist for New Entity Pages

1. Create server page component with search param extraction
2. Implement server-side data fetching with filters
3. Create client table component for displaying data
4. Build filter component with appropriate controls
5. Add server actions for specialized data operations
6. Implement pagination or infinite scrolling
7. Add proper loading states and error handling

## Tech Stack Integration

- **Next.js 15**: App Router, Server Components, Server Actions
- **Drizzle**: Database queries
- **Supabase**: Authentication
- **React Query**: Client-side data fetching and caching
- **Shadcn/UI**: Component library for consistent UI
- **TypeScript**: Type safety across components and data
```

Other Examples

Some other useful links to learn more about Cursor Rules:

Implementing project rules is probably the single most impactful thing you can do to make the AI agent a reliable and knowledgeable helper.

MCPs (Model Context Protocol)

So far, we've talked a lot about giving the AI text files, rules, plans, and terminal output. That's crucial. But what if the AI needs information that isn't just sitting in a file? What if it needs to know the current state of your database schema, or look something up in your company's internal documentation, or even trigger an action outside of the code editor, like writing an issue into your Jira?

That's where the Model Context Protocol (MCP) comes in. Think of MCP as a standardized way for AI models to interact with external data sources and services.

Cursor integrates with MCP, allowing you to configure these services at either a global level (applying everywhere) or, more commonly and powerfully, at the project level (living as .cursor/mcp.json in your project directory, just like project rules). This means the AI agent, when working on your project, knows it can make requests to these services you've defined via the MCP.

How does this work? You basically tell Cursor (and the underlying AI model) about the available MCP services. You define what information the model needs to provide to call a service (like arguments for a function) and what kind of data it gets back. The AI then, based on your prompt and context, decides whether calling an MCP service is relevant to fulfilling your request. And if you know better, just tell the Agent.
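
For reference, the project-level config is just a JSON file. Here's a minimal sketch of a .cursor/mcp.json wiring up a single server – the exact command comes from whichever server's README you follow (here I'm assuming the Context7 server discussed below):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```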

Context7 MCP - Documentation Made Easy

You've already seen that Cursor has built-in documentation indexing via the @Docs context. But to be honest, I don't use it at all: reindexing has to be triggered manually, and most of the time I'm too lazy to hunt down the right documentation link – especially since Upstash made the Context7 MCP. They keep fresh, reindexed documentation for 12k libraries.

You can read more about it in the repository with instructions or you can even browse indexed docs yourself.

Supabase, Postgres, and Other Databases

Database MCPs are another incredibly useful category. They allow the AI to interact with your database schema or even query data directly.

  • Schema Interaction: Using a Postgres MCP, the AI can fetch the actual, current database schema definition. This is a powerful alternative to ORM-generated types, especially if your schema changes frequently or you're not using a type-safe ORM.
  • Supabase MCP: this one is even more powerful than just reading the schema definition. Tools listed there: list_organizations, get_organization, list_projects, get_project, get_cost, confirm_cost, create_project, pause_project, execute_sql, list_edge_functions, deploy_edge_function, get_logs, get_project_url, get_anon_key, generate_typescript_types, create_branch, list_branches, delete_branch, merge_branch, reset_branch, rebase_branch – I'm not even sure it's good to have so many, but it's definitely impressive.

This drastically reduces hallucination about your data layer, ensuring the AI uses correct table and column names and understands the basic structure of your data.

Other: git, redis, sqlite, browserbase

The MCP ecosystem is growing. Beyond documentation and databases, examples are emerging for all sorts of services:

  • Git: All git commands but as MCP.
  • Redis/SQLite: Giving the AI access to specific data stores.
  • Browserbase: Potentially letting the AI interact with web pages to fetch data, perform actions, test your interfaces, and much more.

You can find a list of examples and ongoing work on the official MCP examples page. This is a space to keep an eye on, as more services become accessible to AI agents via MCP.

You can make any of your services / APIs into MCP that LLM can call

This is where the real customizability comes in. The Model Context Protocol is designed so you can essentially wrap *any* service or API your team uses and expose it to the AI agent as an MCP call.

Example repo from which you can start your custom MCP server: code example. You're limited only by the services your team uses and your imagination in wrapping them.
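
With the official TypeScript SDK, a minimal custom server is only a few lines. This sketch wraps a hypothetical internal orders API – the service URL and tool name are made up for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "orders-service", version: "0.1.0" });

// Expose one tool: the schema tells the model what arguments to provide.
server.tool(
  "get_user_orders",
  { userId: z.string() }, // what the model must supply
  async ({ userId }) => {
    // Hypothetical internal API – swap in whatever service your team runs.
    const res = await fetch(`https://internal.example.com/orders/${userId}`);
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Cursor talks to the server over stdio.
await server.connect(new StdioServerTransport());
```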

Helper tools

Working effectively with AI, whether in Cursor or elsewhere, often involves a few complementary tools that handle tasks beyond just coding, or help you bridge the gap between your thoughts and the AI's input. Here are a few I use regularly:

  • Superwhisper: I use this constantly for voice-to-text. Instead of typing out detailed instructions for the AI, explaining a bug, or outlining a plan, I can just say it. It's much faster and feels more natural, especially when the AI needs a lot of detailed context that's in my head.
  • Coderabbit: VSCode extension for automated code reviews. Sometimes it suggests silly things; sometimes it catches something I would never want the world to see. But most importantly, it builds a good habit: reviewing your own code closely – which we all should, given the new AI reality.
  • T3 Chat: An LLM client with most of the models you need – a great way to save money by not asking the AI in Cursor something you could ask T3 Chat for $8/month.

And of course, there are app builders like V0, Lovable, Convex, Bolt, and Firebase Studio. Since I've never tried Bolt or Firebase Studio, I'll skip them.

  • V0: I think it makes the best UI of all the app generators out there. You can deploy on Vercel and connect to DBs, AI infra, and storage, publish a project template to the community, and – most importantly – download any component it generates into your codebase using the `npx shadcn@latest add "chat_id"` command.
  • Lovable: Another AI chat tool that codes and deploys. From what I can tell, its Supabase integration is better, but the components aren't as beautiful as V0's.
  • Convex is a backend-as-a-service that provides database, auth, hosting, and so on. So when they made Chef – an AI agent like Lovable and V0 that uses their own backend – deployment became way easier. Contrast that with V0: if you ask it to set up Supabase Auth, it will do it, but the sign-in emails point to localhost:3000 instead of the V0 domain, because Supabase takes that link from its dashboard rather than an env variable. Downside: you're bound to Convex infra.
Tweet by @coolifyio: 'vibe coding frontend vs backend'

Tweet by Coolify

Cursor Pricing

AI-assisted development tools, especially those using powerful proprietary models, aren't free – we all love Claude, but these bills are insane sometimes.

Cursor for Pro: $20 / month

or $16 / month billed yearly

  • Unlimited completions (tab).
  • 500 requests per month.
  • Unlimited slow requests: Unlimited usage in the slow pool, but requests may occasionally be throttled during periods of high demand.
  • Max mode: An option for power users to turn on maximum context, intelligence, and tool use at token-based pricing – 0.4 cents per request.

The Team plan offers the same, plus a usage stats dashboard and SAML/OIDC SSO, at twice the price.

Tweet by Kirill Markin: 'Cursor Usage Dashboard example'

Tweet by Kirill Markin

And because 500 requests are not enough if you code a lot, you can enable usage-based pricing for premium models (this applies to models like Sonnet 3.5 and GPT-4o beyond the monthly requests included in your plan) and set a budget you're comfortable with.

From my personal experience as a heavy user who relies on Cmd+I with significant context and often uses Claude for more complex tasks, my monthly spend, including Pro subscription, can range anywhere from $20 to $100, purely depending on how much AI work I did that month and which models were used. Users who rely on Cursor as the only LLM interaction tool and use it also for article writing, social media analysis, and other MCP tools can spend up to $350 monthly.

Is it worth it? Yes. The speed at which I'm testing new ideas and shipping with this thing is something I couldn't dream of before. I run developments in parallel, then review PRs and merge. Where have you seen a process like that before? Right – when you were a team lead. Of course, it's not the same quality, but it's not the same price either.

Cursor Alternatives

While Cursor is the primary tool I've been talking about, it's not the only way to do AI-assisted coding. The space is evolving rapidly, and different tools offer different approaches. I talk about Cursor mainly because I've been using it for almost two years and don't know the other similar tools nearly as well. But mentioning them still seems fair:

UI Alternatives

The most direct alternatives often come in the form of other IDEs or editors integrating AI features into a graphical interface.

  • Windsurf: Another VSCode fork, like Cursor; in talks to be acquired by OpenAI for $3 billion; recently launched their own model, SWE-1.
  • Zed: An AI code editor from the creators of Atom and Tree-sitter – and it's open source.

CLI Alternatives

For those who live primarily in the terminal, there are also command-line tools.

  • OpenAI Codex: An open-source AI coding CLI, already with a bunch of forks – love to see it.
  • Anthropic Claude Code: The same idea, but closed source and limited to Claude models.

Outro and Further Reading

I've tried to mention and describe most of the things I use in everyday coding, and to touch on the extras that articles like this are expected to cover – alternatives, pricing, and so on. But there are still so many small pieces I didn't mention: Yolo mode, ignore patterns, partial Tab accept, terminal ⌘K, and so on.

So if you plan to bring Cursor into your daily coding, I would highly encourage you to dive deeper into:

  • Cursor Documentation: All the modes, available tools, custom settings – some of it could be really relevant to your use cases.
  • Cursor Changelogs: They ship fast – while I was writing this article, they made the Tab model able to jump between files, for example – so I'd advise rechecking the changelogs every two weeks or so.
  • Cursor Rules Collection
  • MCP examples
  • Other AI Building tools like V0, Lovable, Bolt (it's Open Source!), Chef, and so on, try to see if any of these tools can help you.
  • My brother's article on Cursor Rules
  • Playbooks: The community to build better apps using AI tools.

Andrey Markin

Full-Stack AI Developer and Consultant, helping businesses integrate AI and web technologies. He specializes in custom AI solutions, bot automation, and web development.

Published on May 20, 2025
