
Critical Vulnerability Discovered in Popular AI Code Editor
The tech landscape continues to evolve, but so do the security challenges within it. Recently, cybersecurity researchers revealed a major vulnerability within Cursor, a widely-used AI code editor, that poses a serious risk of remote code execution (RCE). This flaw, identified as CVE-2025-54135, received a high severity score of 8.6 on the CVSS scale and has now been patched in version 1.3, which was released on July 29, 2025.
The Mechanism of Attack
According to Aim Labs, which previously highlighted similar issues through reports such as EchoLeak, the flaw operates through the Model Control Protocol (MCP) servers. When used with developer-level privileges, these servers can fetch untrusted external data, creating an opening for attackers to exploit these privileges. By exploiting this vulnerability, attackers can execute commands remotely under user privileges, potentially leading to severe outcomes such as ransomware attacks, data theft, or manipulations within AI outputs.
A Simple Yet Effective Exploit
The process for the attack is alarmingly straightforward. If a user configures a new Slack MCP server via the Cursor interface, an attacker can post a message containing a command injection payload in a public Slack channel. When the victim interacts with Cursor to summarize their Slack messages, the AI agent could inadvertently execute malicious commands interspersed with the user's requests, all without user confirmation.
The Importance of Updating Security Measures
This incident sheds light on the critical need for robust security measures in AI-assisted tools. As these systems increasingly interact with external, often untrusted data sources, developers and users alike must remain vigilant. Aim Security emphasizes the necessity for security models that proactively monitor not just internal processes, but also external influences on AI runtime operations.
The release notes for version 1.3 highlight improvements aimed at safeguarding users against similar breaches in the future. It’s crucial for users and organizations employing AI tools to ensure they are using the most up-to-date versions to mitigate existing threats.
Write A Comment