BBC investigation uncovers dangerous security flaw in popular AI coding tool
BBC investigation uncovers dangerous security flaw in popular AI coding too
A BBC investigation has exposed a significant, unpatched security risk within Orchids, a popular AI "vibe-coding" platform.
These tools allow people without technical skills to build complex software using simple text prompts.
However, researcher Etizaz Mohsin demonstrated how easily this autonomy can be weaponised.
By exploiting a secret weakness, Mohsin gained full access to a reporter’s laptop, changing the wallpaper and creating files without a single click from the user.
Orchids claims a million users, including staff at global giants like Google and Amazon. Despite its prestige, the platform’s security appears fragile.
"The whole proposition of having the AI handle things for you comes with big risks," Mohsin explained.
He discovered that by inserting a tiny line of malicious code into the thousands generated by the AI, an attacker could steal financial data, access internet history, or even spy through cameras and microphones.
The San Francisco-based firm, founded in 2025, initially ignored Mohsin's warnings, later claiming they were "overwhelmed" with messages.
While this specific flaw was found in Orchids, experts suggest it serves as a broader warning for all "agentic" AI tools that operate with little human input.
Kevin Curran, a professor at Ulster University, noted that without rigorous review, such code often fails under attack.
To mitigate these risks, specialists recommend that users only run experimental AI agents on separate, dedicated machines and use disposable accounts, ensuring that the quest for coding convenience doesn't result in a total digital compromise.