
Cursor AI Code Editor Review: Your Ultimate Guide + Two Experiments

Can AI build a full app? We tested Cursor AI in two experiments—building a chat app from scratch and enhancing an existing codebase. See the results!


Lana Ilić

Development
Cursor AI Code Editor

Summary

AI-assisted development is becoming more common, and Cursor AI is one of the latest tools designed to make coding faster and easier. But unlike a typical AI pair programmer, Cursor is a fully AI-powered code editor that runs locally on your machine. You can chat with it in natural language to generate code, refactor messy parts of a project, or even debug tricky issues.

In this post, we’ll take a closer look at two main ways you can use Cursor AI:

  1. Building a new application from scratch with AI-assisted prompting.

  2. Improving an existing codebase using Cursor’s debugging, refactoring, and code review features.

By the end, you'll know how to install Cursor, get it running, and integrate it into your workflow. Let’s dive in!

Experiment 1: Building an AI-Generated Chat App

For the first test, we set out to build a chat app from scratch using Cursor AI. The goal? To see how well Cursor could generate a functional application with just a well-structured prompt. What we aimed to build:

  • User authentication

  • Real-time messaging

  • Notifications system

Setting up Cursor AI

Let’s begin by downloading the free Cursor AI installer for your operating system (Linux, Windows, or macOS) from the official website. After installation, follow the setup instructions and log in. The Cursor IDE will feel familiar if you’ve used VS Code; you can even import your VS Code settings.

Since AI models work best with clear instructions, we decided to define some Cursor Rules—these act as system prompts that help steer the AI’s responses. To generate a new Cursor rule, we used Cmd + Shift + P → File: New Cursor Rule.

Our rule focused on enforcing TypeScript best practices for Next.js, including functional programming, modularization, and UI styling with Tailwind CSS.

Next.js React TypeScript

You are an expert in TypeScript, Node.js, Next.js App Router, React, Shadcn UI, Radix UI and Tailwind.

Key Principles

- Write concise, technical TypeScript code with accurate examples.

- Use functional and declarative programming patterns; avoid classes.

- Prefer iteration and modularization over code duplication.

- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

- Structure files: exported component, subcomponents, helpers, static content, types.

... 


Getting Started with Cursor's AI-Powered Features

Cursor comes packed with AI-driven tools that streamline coding and boost productivity. Here’s a quick rundown of how to get started:

  • Tab: Press the Tab key for smart code completions as you type.

  • Cmd/Ctrl + K: Quickly edit code inline with this shortcut.

  • Composer (Cmd + I): Open the unified AI interface with options to Ask, Edit, or use Agent mode.

The Composer is the core of Cursor’s AI assistance—it helps with writing, editing, and understanding code, all within your editor. It also allows you to set checkpoints, track history, and switch between different modes:

  • Normal mode: Best for quick, single-turn edits.

  • Agent mode: Uses reasoning and tools to handle more complex tasks.

  • Yolo mode: Lets the Agent run terminal commands automatically — great for automating test runs and project setup.

For this blog, we’ll be focusing on Composer and Agent mode, showing how they enhance both new and existing projects. 

Writing the First Prompt

To put Cursor AI to the test, we started by crafting a detailed prompt to generate our chat app. To get the best results, we outlined the tech stack (Next.js, Firebase, Zustand) and key features upfront. 

We ran the prompt using the Claude-3.5 model in Agent mode, allowing Cursor to process and execute complex instructions.

Project setup prompt

Once submitted, Cursor began generating the codebase and suggested various terminal commands to set up the project. To execute them, we simply clicked “Run Command” or pressed Cmd + Enter.

Of course, not everything worked perfectly. Cursor attempted to use an outdated command (npx shadcn-ui@latest init; the shadcn CLI has since moved to npx shadcn@latest init), so we had to step in and correct it manually. This served as a reminder that while AI can automate a lot, it still relies on developer guidance. To prevent the issue in the future, we could add the correct command to our Cursor rules.

Cursor also auto-generated the Firebase configuration and provided clear, step-by-step guidance on how to proceed—whether through the Firebase console or the CLI.

Getting started with Firebase

Before finalizing the generated code, we had the option to accept or reject individual files. We reviewed them and noticed an inconsistency in how message input validation was handled. Instead of manually fixing it, we tested another powerful feature — Code Editing.

By highlighting the relevant code and using Cmd + K, we prompted Cursor to refine the validation logic. Once the update was made, we could accept or reject the change with Cmd + Y/N.

Code Editing example
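To give a sense of the result, the refined logic looked roughly like the sketch below. The names and the length limit are ours for illustration, not the identifiers from the generated app:

```typescript
// Hypothetical message validation helper, similar in spirit to the logic
// we asked Cursor to refine. Names and limits are illustrative.
interface ValidationResult {
  isValid: boolean;
  error?: string;
}

const MAX_MESSAGE_LENGTH = 1000;

function validateMessage(raw: string): ValidationResult {
  const text = raw.trim();
  if (text.length === 0) {
    return { isValid: false, error: "Message cannot be empty." };
  }
  if (text.length > MAX_MESSAGE_LENGTH) {
    return { isValid: false, error: `Message is limited to ${MAX_MESSAGE_LENGTH} characters.` };
  }
  return { isValid: true };
}
```

Keeping the check in a small pure function like this makes it easy to reuse on both the client and the server side of the app.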

Next, we leveraged Code Questions (Cmd + L) to check if our input handling was secure. Cursor provided a detailed response, suggesting ways to improve validation by adding input sanitization, rate limiting, and additional security measures.

Code questions example
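As a minimal sketch of the sanitization idea Cursor suggested: escaping user input before rendering it. This is a hand-rolled example for illustration only; a production app should prefer a vetted library such as DOMPurify:

```typescript
// Minimal HTML-escaping sketch along the lines of Cursor's sanitization
// suggestion. Illustrative only; use a vetted sanitizer in production.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;") // escape & first so later entities stay intact
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```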

While the basic validation was in place, this insight confirmed that AI-generated code still benefits from human review. We could have prompted Cursor to implement these improvements automatically, but for now, we decided to move forward with the current setup.

With no major issues remaining, we accepted all changes and took a closer look at the generated state management logic.

Example of generated code (Cursor AI)

The code made effective use of Zustand for state management, combining AuthSlice and ChatSlice into a single store. Notably:

  • Firebase authentication errors were handled properly with clear user feedback.

  • The subscribeToMessages function listens for real-time chat updates using Firestore queries.

  • The implementation included clean-up logic, unsubscribing listeners when the authentication state changes or components unmount.
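As a rough, dependency-free illustration of that clean-up pattern: in the real app the listener comes from Firestore's onSnapshot, which returns an unsubscribe function; the callback registry below is a stand-in so the pattern can be shown on its own:

```typescript
// Sketch of the subscribe/unsubscribe pattern the generated store used.
// In the real app the subscription wraps Firestore's onSnapshot; this
// registry is a stand-in for illustration.
type Listener = (messages: string[]) => void;

const listeners = new Set<Listener>();

function subscribeToMessages(listener: Listener): () => void {
  listeners.add(listener);
  // Return an unsubscribe function, mirroring Firestore's onSnapshot API.
  return () => {
    listeners.delete(listener);
  };
}

function publish(messages: string[]): void {
  for (const listener of listeners) listener(messages);
}
```

Calling the returned unsubscribe function when the auth state changes or a component unmounts is what prevents the leaked-listener bugs the generated code correctly guarded against.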

The First Functional Version

Within minutes, we had a basic but fully functional chat app with authentication, messaging, and real-time updates.

Here’s what stood out:

  • Cursor quickly initialized the project.

  • It provided useful code explanations and debugging help.

  • The ability to accept/reject changes and iterate quickly made it easy to refine the app.

Generated application after the first prompt

Improvements

While our initial design was functional, it was a bit too simple. To make it more visually appealing, we decided to explore one of Cursor AI’s more impressive features—code generation based on images.

We sourced a design from Dribbble and used it as a reference for Cursor, prompting it to generate a UI that matched the visual style. This process tested Cursor’s ability to analyze color schemes, layout structures, and key design elements.

Update design based on image

The results were solid, but not perfect. Some elements weren’t aligned correctly, so we had to refine our prompts and manually adjust a few details. We also tweaked the color scheme to better fit the intended look. To add further functionality, we sent an additional prompt asking Cursor to generate a notification system and implement pin/unpin and chat search features.

The updated design was fully responsive and even included smooth animations—something Cursor implemented without us explicitly asking for it. There were a few minor z-index issues, but overall, the output was impressive.

As we iterated, we also leveraged Cursor’s commit message generation feature. It automatically summarized the changes we made, making it easier to track modifications and maintain clean version history.

Finally, to give the UI an extra layer of polish, we added a custom component from 21st.dev—simply by copying and pasting the prompt into Cursor. This last step tied everything together, reinforcing how easily Cursor can generate and integrate new features on the fly.

Final Thoughts on Experiment 1

Building a complete chat app from scratch in under 5 minutes with Cursor AI is no small feat. While it wasn’t flawless, human oversight and a few refinements quickly turned it into a solid working version.

Cursor took care of the setup and boilerplate code, allowing us to focus on what really matters—problem-solving, fine-tuning the app, and adding new features.

The entire codebase is available on GitHub 🚀

Experiment 2: Enhancing an Existing Codebase

For our second test, we wanted to see how Cursor AI performs when working with an existing, non-AI-generated codebase. The goal? To assess its ability to:

  • Explain existing code – Can it quickly break down complex logic and provide useful insights?

  • Debug an issue – How well does it detect and fix a real bug in our application?

  • Implement new features – Can it seamlessly integrate additional functionality into an active project?

We selected Cat Bot, an internal Slack bot used within our company. It’s written in Node.js and integrates with Slack’s API and Calamari to track employee data such as working hours and holidays, and to send out monthly reports. Since the bot is actively maintained, it provides a great real-world testing ground for Cursor’s capabilities.

Step 1: Understanding the Codebase

Before making any changes, we wanted to see how well Cursor understands unfamiliar code. We opened a key file in Cat Bot and used Code Questions (Cmd + L) to ask Cursor to explain what the core function does.

Example of Cursor AI explaining highlighted code

Cursor first indexed the project within seconds, then provided a clear breakdown summarizing the function’s logic, key dependencies, and expected outputs. This made it much faster to navigate the codebase without manually tracing every function.

Step 2: Finding & Fixing a Bug

We knew there was a long-standing issue where every monthly report was labeled incorrectly, regardless of the actual month. This seemed like a hardcoded bug, so we challenged Cursor to find and fix it.

We prompted Cursor:
"There’s an issue where all monthly reports are labeled as 'March.' Can you find the cause and suggest a fix?"

Finding and fixing the monthly report bug

Cursor scanned the code and correctly identified a section where "March" was hardcoded instead of using the current date dynamically. It suggested updating the function to pull the actual month from the system date.

We applied the fix, ran a test, and confirmed that the reports were now correctly labeled for each month.
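The fix boiled down to deriving the label from the system date instead of a string literal. A hedged reconstruction of that idea (getReportLabel is our illustrative name, not Cat Bot's actual function):

```typescript
// Derive the report label from the current date instead of a hardcoded
// "March". Function name is illustrative, not from Cat Bot's codebase.
function getReportLabel(date: Date = new Date()): string {
  return date.toLocaleString("en-US", { month: "long" });
}
```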

Step 3: Adding a New Feature

To further test Cursor’s capabilities, we decided to add a small but meaningful feature—sending employees a supportive message whenever they register a sick day.

We prompted Cursor:
"Modify Cat Bot to send a Slack message wishing employees a speedy recovery whenever they log a sick day."

Initial attempt (Cursor AI)

Cursor’s Initial Attempt: A Webhook That Doesn't Exist

Cursor immediately analyzed the codebase and identified that Cat Bot integrates with Calamari for absence tracking. It then attempted to register a webhook at /calamari-webhook to listen for sick-day events.

However, there was one major flaw: Calamari doesn’t support webhooks in its API. While Cursor structured the code correctly, it made an incorrect assumption about Calamari’s capabilities.

Guiding Cursor to a Working Solution

Since a webhook wasn’t an option, we had to adjust our approach. We rephrased our prompt:

"Since Calamari doesn’t support webhooks, modify Cat Bot to periodically check for new sick leave entries via the API and send a Slack message when a new entry is found."

Cursor quickly adapted and generated a new solution, where:

  1. A cron job runs every 10 minutes, querying Calamari’s API for new sick leave entries.

  2. It compares the fetched data with previously stored records to detect new sick days.

  3. If a new sick leave is found, it triggers a Slack message for the affected employee.

Fixed approach using polling (Cursor AI)
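Stripped of the Slack and Calamari specifics, the polling logic looks roughly like the sketch below. All names are illustrative; the real bot would call Calamari’s API in fetchLeaves and post via Slack’s chat.postMessage in notify, and a cron library could replace the plain setInterval:

```typescript
// Dependency-free sketch of the polling approach. Types and names are
// illustrative, not Cat Bot's real API.
interface SickLeave {
  id: string;
  employee: string;
}

// Detect entries we haven't seen before.
function findNewEntries(fetched: SickLeave[], seen: Set<string>): SickLeave[] {
  return fetched.filter((entry) => !seen.has(entry.id));
}

function startPolling(
  fetchLeaves: () => Promise<SickLeave[]>,
  notify: (leave: SickLeave) => void,
  intervalMs = 10 * 60 * 1000, // every 10 minutes
): ReturnType<typeof setInterval> {
  // Note: on the first poll everything looks new; a real bot would seed
  // this set from previously stored records.
  const seen = new Set<string>();
  return setInterval(async () => {
    const fetched = await fetchLeaves();
    for (const entry of findNewEntries(fetched, seen)) {
      seen.add(entry.id);
      notify(entry); // e.g. send the get-well Slack message here
    }
  }, intervalMs);
}
```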

How Well Did Cursor Perform?

Cursor’s second attempt was much more practical. However, we had to guide it to the right approach, which shows that:

  • AI tools don’t validate external API limitations—they rely on us to provide context.

  • Clearer prompts lead to better results—by specifying "polling instead of webhooks," we got a better solution.

Slack message sent successfully (Cursor AI)

Final Thoughts on Experiment 2

Cursor did a solid job overall—it helped us navigate the code, fix a real bug, and add a new feature. While the code wasn’t perfect and would need deeper developer oversight in a real-world setting, it gave us a strong starting point.

✅ Explained the code well, making it easier to understand the existing structure.
✅ Found and fixed the bug in the monthly report almost instantly.
✅ Implemented a new feature, even though it needed some fine-tuning.

For an initial test, Cursor got us 90% of the way there in minutes, leaving us to handle the refinements—a good boost to productivity overall.

Conclusion

After putting Cursor AI through two real-world tests—building a new app from scratch and improving an existing codebase—one thing is clear: Cursor AI can significantly speed up development.

It handled the heavy lifting of generating boilerplate code, setting up project structures, and even suggesting fixes for tricky bugs. It wasn’t perfect—we had to step in to correct outdated commands, refine UI layouts, and clarify some prompts. But even with those minor issues, the overall time savings were undeniable.

Here’s where Cursor really shines:
✅ Fast project setup – It got our chat app running in minutes.
✅ Smart debugging – It spotted issues and missing functions before they caused problems.
✅ Code understanding – It explained unfamiliar logic in seconds.
✅ Feature expansion – It helped us add new functionality to an existing bot with minimal effort.

That said, it’s not a “set it and forget it” tool. Precise prompting makes all the difference, and some AI-generated suggestions still need a developer’s judgment. But with the right approach, Cursor AI becomes a powerful extension of your workflow, helping you ship features faster and focus on what really matters—building great software.

If you’re a developer looking to cut down on repetitive tasks and boost efficiency, Cursor AI is definitely worth a try. 🚀
