Cisco's president advocates 'background checks' for AI agents like human employees
Cisco's Jeetu Patel emphasises importance of background checks for AI agents
Artificial intelligence (AI) agents should be treated like employees who are vetted before being trusted with significant tasks, according to Cisco's president.
In an interview with Euronews Next, Jeetu Patel said that AI agents acting on our behalf "require background checks", just like human staff.
As companies rush to deploy autonomous AI systems that can write code, manage tasks, and make decisions, security measures must advance just as quickly, he said.
"It's crucial to defend the agent from global threats and also shield the world from a misbehaving agent," he remarked.
Patel's comments come as Cisco accelerates its own AI adoption. He expects that by the end of 2026, the company will have "at least half a dozen products" built entirely by AI, with no human-written code.
"Every developer at Cisco will integrate AI as a fundamental tool in their workflow," he said.
"The real concern isn't AI taking your job, but someone outperforming you by leveraging AI," he added.
StackBlitz CEO Eric Simons told Business Insider that he aims to have more AI agents than employees at his startup this year — a shift that reflects broader changes across the industry.
AI agents can write code, manage tasks, and coordinate across platforms. OpenClaw, a personal AI assistant that works across messaging platforms, illustrates how agents can handle tasks with minimal human oversight.
"This offers a glimpse into an unpredictable future," Simons told Business Insider in a feature published Wednesday.
"Your AI representatives will negotiate prices with other people's agents, check restaurant availability, and even debate political issues for you," he said.
Software stocks have dipped as investors weigh the possibility of AI replacing traditional software.
Nonetheless, there are warning signs. As agentic systems scale, they can become unstable: when more agents interact, small errors can compound into larger failures, said Nicolas Darveau-Garneau, a former Google executive and author of "Be a Sequoia, Not a Bonsai."
