Intent


The AI Coding Assistant arena is heating up. In recent months, excitement around Cursor IDE has sparked curiosity among developers. Rightfully so—Cursor’s entry into the IDE space with a deeply integrated AI experience turned heads. It hasn’t just plugged in AI; it’s trying to reshape the entire development experience around it, from agentic workflows to seamless chat-first coding.

Naturally, this raises a question for me: Is GitHub Copilot keeping up?

Before we think about shifting direction or adopting a Multi-IDE working model, we need a shared understanding of what’s already available to us—especially within the latest VS Code + Copilot ecosystem. Otherwise, we risk comparing tools unfairly, based on outdated versions or impressions shaped by trend cycles rather than facts. It's crucial that we evaluate them fairly—not against past memories, but against their most current, full-featured releases.

Let’s take a step back and look at what GitHub Copilot now brings to the table—especially the capabilities rolled out in recent months that may go unnoticed if your IDE isn’t up to date.

Along the way, I'll briefly unpack some of the recent concepts shaping AI-assisted development workflows, like Modes of Working with Code AI, Next Edit Suggestions (NES), Model Selection, and AI Rules.

This post is about clarity—not conclusions.

Model Awareness & Flexibility


In the past, GitHub Copilot was synonymous with “the OpenAI model behind the curtain.” But that’s no longer the case. Copilot has evolved into a model-flexible platform, supporting a wide range of top-tier models—empowering developers to tailor their experience based on speed, reasoning depth, task complexity, or preference.

🧩 Multi-Model Support

Copilot now gives you the flexibility to choose, switch, and experiment with models—especially useful when balancing latency, reasoning depth, or creative freedom. This shift is critical: just as developers choose libraries or frameworks, selecting the right model for the task is now part of the modern development flow.

Feel free to experiment with different models for various use cases—and share your observations as a post in our dev blog space!

Each model is known for certain strengths:

  • GPT-4.1 (OpenAI) [April 14, 2025] – Great at instruction following, frontend development, and long-context reasoning
  • GPT-4.5 (OpenAI) [April 2025] – Latest Copilot model, tuned for enterprise use
  • o3-mini (OpenAI) [April 16, 2025] – Low-latency, fast reasoning for rapid iteration loops
  • o1 models (OpenAI) – Earlier OpenAI reasoning models, kept available as a fallback option
  • Claude 3.7 Sonnet (Anthropic) [April 16, 2025] – Strong in structured code reasoning, SWE-bench performance, and agentic flows
  • Claude 3.5 Sonnet (Anthropic) – Reliable for day-to-day coding support and quick iterations
  • Gemini 2.5 Pro (Google) [April 16, 2025] – Excels in deep reasoning across code, math, and science
  • Gemini 2.0 Flash (Google) – Fast and multimodal, ideal for lightweight tasks and conversational coding

Cursor’s perspective on Models: Cursor started strong with high-quality default models (GPT-4, Claude) built in, focusing on experience over configuration. While model selection wasn’t always visible, recent updates now allow model switching. Small nitpick: model switching requires resetting the conversation, unlike Copilot’s in-chat model toggle.

Copilot gives you granular control per session—ideal for experimenting with speed vs depth vs creativity. Cursor simplifies the experience with optimized defaults. Both are converging toward flexible model fluency.

When I first compared Cursor and Copilot, it was before the new models were enabled in Copilot. The response styles were noticeably different, not just in tone but also in how they reasoned through problems. Now, when I switch to Claude in Copilot, I can feel the difference. The responses are more thoughtful and structured, and it’s clear: the model you choose affects the outcome.

With more model options available in Copilot, it’s finally possible to do an apples-to-apples comparison. It also made me realize that some past comparisons were closer to apples vs. oranges—comparing tools without matching the models beneath them.

Let’s now turn our attention to another hot topic in the Cursor world—Agent Mode.

Modes of Working with Code AI

As AI assistants mature, it’s no longer just about inline completions or answering a question in chat. The real leap is in how we collaborate with AI across different phases of the coding workflow—whether you’re asking a question, refactoring code, or letting an agent handle multi-step tasks.

Both Cursor and Copilot recognize this shift.

While Cursor had a head start, GitHub Copilot has rapidly caught up by releasing new capabilities, and in some areas I’ve noticed Cursor is also being influenced.

Cursor’s tab-based workflows set the bar early, but Copilot now matches it with an elegant, integrated UX. Interestingly, Cursor has since updated its interface to align more closely with Copilot’s mode-based structure—a sign of mutual influence and convergence.

If you walk through the features chronologically (I skimmed the release logs for both), you’ll notice it’s not just Copilot learning from Cursor; Cursor is learning from Copilot too. They’re clearly watching each other closely, and what we’re witnessing is a convergence of capabilities across both tools.

🤖 Agent Mode

Copilot’s Agent Mode was introduced as experimental in response to Cursor’s autonomous agent workflows and is now generally available in VS Code Stable. It enables multi-step, semi-autonomous workflows, including:

  • Editing files across the workspace
  • Running terminal commands with user approval (my guess is it follows a human-in-the-loop principle when executing commands autonomously, mainly from a security standpoint)
  • Fetching and inserting web content with #fetch
  • GitHub MCP Server support (a minimal config sketch follows this list)
  • A dramatically expanded set of context options available to Agent mode over the last two months
  • Switching modes mid-conversation (Ask → Edit → Agent) without losing context
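
To make the MCP bullet concrete, here’s a minimal sketch of what registering an MCP server in VS Code can look like. The package name, token handling, and exact schema here are assumptions for illustration; check the current VS Code and MCP documentation before copying anything.

```jsonc
// .vscode/mcp.json — illustrative sketch only; verify the schema and package name against current docs
{
  "servers": {
    "github": {
      // assumes a community-published GitHub MCP server available via npx
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        // placeholder only; in practice wire this through a secret/input rather than
        // hard-coding it, and never commit real tokens
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token-here>"
      }
    }
  }
}
```

Once a server like this is registered, its tools appear alongside the built-in ones in Agent mode’s tool picker.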


The available context list is growing on both sides—as each learns from the other.

✏️ Edit Mode

Edit Mode is built for targeted edits across one or more files: you describe a change (e.g., “Convert this to use async/await”), and Copilot proposes updates in diff format, complete with undo, keep, or refine controls. Known in VS Code as Copilot Edits, it’s optimized for structured code transformation. A sketch of that async/await example follows the list below.

  • Describe a change in plain English
  • See proposed edits across one or multiple files
  • Accept or undo changes granularly with the updated diff UX
  • Supports notebooks, comments, and image context
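
As a quick illustration of the async/await example above, this is roughly the kind of before/after change Edit Mode would present as a diff. The function and endpoint are hypothetical, used only for illustration:

```typescript
// Before: promise-chain style (hypothetical helper, for illustration only)
function loadUserName(id: string): Promise<string> {
  return fetch(`/api/users/${id}`)
    .then((res) => res.json())
    .then((user) => user.name);
}

// After: the async/await version Edit Mode might propose
async function loadUserName(id: string): Promise<string> {
  const res = await fetch(`/api/users/${id}`);
  const user = await res.json();
  return user.name;
}
```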

Recent updates:

  • Unified editing experience inside the chat panel (March ’25)
  • Edit Mode support extended to JetBrains IDEs (March ’25)
  • Multi-file preview for edits (Feb ’25)
  • Image context for vision models (Feb–Mar ’25)

Cursor introduced intuitive multi-file edits early on. Copilot now delivers that—and expands it across both VS Code and JetBrains IDEs.

If I had to nitpick on Copilot, Cursor’s refactor revert UX is more flexible: it treats refactors as checkpoints you can roll forward and backward. In Copilot, you can undo changes, but there’s no native “revert the revert” capability just yet.

🔍 Ask Mode

Designed for exploration, explanation, and problem-solving. In Ask Mode, you can:

  • Query your codebase (e.g., “Where is this class used?”)
  • Get explanations or quick-start snippets
  • Explore unfamiliar repos with help from chat participants like @workspace or #codebase
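
For a feel of how this reads in practice, a couple of typical Ask mode prompts might look like this (the identifiers are made up for illustration):

```text
@workspace Where is OrderRepository instantiated, and which services depend on it?
#codebase How does request authentication flow from the middleware to the controllers?
```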

Recent enhancements on the Copilot side include:

  • GitHub URL pasting as context — GitHub issues, discussions, and PRs [April 9, 2025]
  • Instant semantic indexing of the repo you’re currently working in (March ’25)

Cursor has various ways to build context for a question; one of them is linking to Git PRs and commits. GitHub Copilot’s URL-pasting context can help achieve similar behaviour.

All three modes—Ask, Edit, and Agent—now live in a unified Chat view in VS Code (starting v1.99 - April 4, 2025), allowing you to switch fluidly without losing context.

(I’ll park the discussion on the quality of these capabilities for another time; we’ll experiment more before judging quality.)

Other Mentions

Some features in Copilot may not make the headlines but are quietly reshaping how we work with code. Here are a few worth highlighting:

Next Edit Suggestions (NES)

Copilot’s NES predicts the next edit you're likely to make and suggests it proactively—right where it matters in the code.

  • Shows edit suggestions inline, with contextual cues
  • Now generally available with collapsed view and keyboard navigation
  • Designed for low-friction productivity during focused work sessions

NES is Copilot’s answer to Cursor’s predictive editing experience.

AI Rules (Personalization Layer)

With .github/copilot-instructions.md and prompt files (*.prompt.md), you can:

  • Define team-wide or project-specific guidance
  • Customize response behavior, code style, and structure
  • Scope prompts by file type, pattern, or directory
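
To make this tangible, here’s a minimal sketch of what a repository-level instructions file could contain. The contents are purely illustrative, not a recommended standard:

```markdown
<!-- .github/copilot-instructions.md — illustrative example -->
- We use TypeScript with strict mode; prefer explicit return types on exported functions.
- Follow the repo's ESLint and Prettier configs; don't introduce new formatting styles.
- New API handlers must validate input before touching the database.
- Tests live next to the source file as `*.spec.ts` and should cover at least one failure path.
```

Prompt files (`*.prompt.md`) work in a similar spirit, but are invoked on demand for specific, repeatable tasks rather than being applied to every request.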

Cursor introduced a similar feature much earlier via .cursor/rules, allowing project-wide instructions. Copilot is now matching this with GitHub-native configurations.

Final Word: The Gap Is Closing

This isn’t just a feature race—it’s a shift in how we build software.

Cursor showed the industry what’s possible. Copilot has responded with recent rollouts: multi-model support, agentic workflows, flexible context, and seamless IDE integration.

The development experience in VS Code can now be shaped around these capabilities, offering countless combinations to build effective and adaptive developer workflows.

Now that we have a clearer view of what’s available, we can begin experimenting—shaping our daily developer workflows and development phases to better reflect how we actually code, debug, and ship.

As I mentioned earlier, the intent of this post is to bring clarity on what Copilot is capable of today—not to decide which tool is better, or which one might win tomorrow. No sides. No hype.

I’m deliberately parking the discussion on the quality of these capabilities for another time. We will explore case by case. For now, I hope this post helps set a strong, objective starting point—a grounded view to help you make informed comparisons and decisions.

*P.S. Keep your IDE updated regularly to stay in sync with the rapid pace at which new capabilities and features are being rolled out.*