GitHub Copilot, the popular AI-powered coding assistant, has introduced a suite of new features aimed at developers seeking more efficiency and flexibility in their workflows.
A new Agent Mode aims to give developers a more interactive experience, while additional updates expand the tool’s AI capabilities with premium models and deep integration across multiple development environments.
Agent Mode: A New Era for Developers?
The most impactful announcement in Copilot’s latest update is the introduction of Agent Mode for VS Code users. Available now, this feature elevates Copilot from a passive assistant to an active coding agent. Rather than merely suggesting code, Agent Mode empowers Copilot to take action by running terminal commands, fixing issues, and even navigating through multiple files. GitHub says it allows developers to focus on higher-level tasks, leaving repetitive or error-prone activities to Copilot.
Agent Mode’s rollout is designed to work seamlessly within existing workflows, making it easier for developers to work with GitHub Copilot without changing how they approach their code. By offering hands-off solutions for basic fixes, developers can accelerate their coding process, especially when dealing with complex error logs or multi-file refactors.
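The propose-run-observe loop behind agent-style assistance can be sketched in a few lines. This is an illustrative Python sketch only, not GitHub's implementation: an "agent" tries a command, and if it fails, applies the next candidate fix and retries.

```python
import subprocess
import sys

def run_step(cmd):
    """Run a terminal command and capture its outcome, as an agent would."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

def agent_loop(candidates):
    """Propose-run-observe loop: try each candidate command in order,
    stopping at the first one that exits cleanly."""
    output = ""
    for attempt, cmd in enumerate(candidates, start=1):
        code, output = run_step(cmd)
        if code == 0:
            return attempt, output
    return None, output

# First attempt fails (missing module); the "fix" retries with working code.
attempt, out = agent_loop([
    [sys.executable, "-c", "import nonexistent_module"],
    [sys.executable, "-c", "print('build ok')"],
])
```

Real agent implementations add a model call between iterations to generate the fix from the observed error output; the retry structure is the same.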
This marks a significant evolution in how GitHub Copilot assists developers, offering far more than simple auto-completions or suggestions.
New MCP Server Option Gives Developers Local Flexibility
Alongside the broader Copilot expansion, GitHub has quietly released its official MCP Server repository, based on Anthropic’s Model Context Protocol, allowing developers to self-host their own Copilot-compatible extensions. The server is designed to bridge code editors with third-party model providers and systems, giving developers more control over how and where their AI workloads are handled.
GitHub describes the MCP server as a “convenience implementation” of the MCP specification, a standard that lets agents and extensions in editors like VS Code interact with language model APIs over a consistent protocol. The release makes it easier to test locally or to experiment with LLM integrations outside of GitHub’s own infrastructure.
Unlike GitHub’s hosted services, which route Copilot requests through their own MCP layer, the open-source MCP Server gives developers the ability to experiment with their own endpoints or adapt the server to their particular use case. It’s written in TypeScript, published under the MIT license, and supports streaming responses—one of the spec’s core capabilities.
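MCP itself is built on JSON-RPC 2.0, and its stdio transport exchanges newline-delimited JSON messages. As a rough sketch of what travels over the wire, here is how a client might frame a `tools/list` request asking a server which tools it exposes (the `initialize` handshake that precedes tool calls is omitted):

```python
import json

def frame(message: dict) -> bytes:
    """Serialize a JSON-RPC 2.0 message for MCP's stdio transport,
    which exchanges newline-delimited JSON."""
    return (json.dumps(message) + "\n").encode("utf-8")

# A client asking an MCP server which tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

wire = frame(request)
decoded = json.loads(wire.decode("utf-8"))
```

Because the framing is this simple, a self-hosted server can be driven from any language with a JSON library, which is much of the appeal of running the MCP Server locally.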
This release marks another step toward modularizing GitHub’s Copilot ecosystem. By exposing the underlying server logic as open-source, GitHub could be signaling a more interoperable future—where agents, actions, and AI integrations are no longer bound exclusively to its own infrastructure.
Premium AI Models and the New Pro+ Plan
Alongside Agent Mode, GitHub is rolling out premium AI models that extend Copilot’s capabilities via a new GitHub Copilot Pro+ plan. Pro+ makes Anthropic’s Claude 3.5 Sonnet, Claude 3.7 Sonnet, and Claude 3.7 Sonnet Thinking, Google’s Gemini 2.0 Flash, and OpenAI’s o3-mini generally available via premium requests. These models give developers more sophisticated AI-powered assistance, such as predictive editing, real-time suggestions, and contextual code recommendations.
GitHub Copilot Pro+ is priced at $39 per month and includes 1,500 premium requests per month, a noticeable upgrade from the basic Pro plan that gives developers access to next-level functionality. The inclusion of models such as GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash allows Copilot to be even more accurate in its code generation, offering tailored solutions that fit a developer’s specific needs.
For teams working with complex or large-scale codebases, the Pro+ plan’s premium models bring significant improvements to tasks like debugging, code reviews, and error tracking across files. With that, Copilot is positioning itself as a complete development assistant, capable of handling everything from basic completions to complex troubleshooting.
Copilot Code Review: Enhancing Quality Control
GitHub is also expanding Copilot’s capabilities with AI-driven code review, now moving from public preview to general availability. The feature is a valuable addition to the developer’s toolkit, automatically suggesting improvements, spotting bugs, and checking that code adheres to best practices. Copilot’s code review agent streamlines the review process by offering suggestions directly on pull requests.
While human reviewers are still essential to ensure the quality and context of code, Copilot acts as an intelligent assistant that can spot errors or inefficiencies, enabling quicker fixes and better collaboration among team members.
Multi-Model Integration: A More Robust Copilot
The integration of multiple AI models is another significant improvement to Copilot’s functionality. GitHub has moved beyond relying solely on OpenAI’s models, incorporating Anthropic’s Claude 3.5 and 3.7 Sonnet and Google’s Gemini 2.0 Flash. These models now sit side by side within Copilot, each offering distinct strengths that complement one another.
This multi-model setup ensures that Copilot is not limited to one AI approach but can switch between models depending on the task at hand. This means that developers can harness the most powerful models available, improving Copilot’s ability to suggest contextually appropriate code and adapt to different programming languages, frameworks, and development environments.
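One way to picture per-task model switching is a simple routing table. Everything below is hypothetical: the routing rules and the idea of automatic dispatch are illustrative, not Copilot’s actual selection logic (in practice, users pick a model from Copilot’s model picker).

```python
# Illustrative only: these routes are an assumption for the sketch,
# not Copilot's real behavior.
MODEL_ROUTES = {
    "completion": "gemini-2.0-flash",   # fast, low-latency suggestions
    "refactor": "claude-3.7-sonnet",    # multi-file reasoning
    "quick_fix": "o3-mini",             # cheap, targeted edits
}

def pick_model(task_type: str, default: str = "claude-3.5-sonnet") -> str:
    """Choose a model for a request based on the kind of task."""
    return MODEL_ROUTES.get(task_type, default)
```

The design point is that no single model needs to be best at everything; a cheap fallback handles the common case while stronger models take the harder tasks.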