

Serious AI bugs found in Meta, Nvidia, and Microsoft inference frameworks, researchers warn

Cybersecurity experts have uncovered major vulnerabilities across AI systems built by Meta, Nvidia, and Microsoft

By GH Web Desk

Cybersecurity researchers have issued a stark warning after discovering critical flaws across the inference engines of Meta, Nvidia, and Microsoft, raising fresh concerns about how widely deployed AI infrastructure handles untrusted data.

According to a new report from Oligo Security, the vulnerabilities stem from unsafe use of ZeroMQ messaging combined with Python's pickle deserialisation, a pattern that quietly spread across multiple frameworks through code reuse.

The most significant flaw originated in Meta's Llama Stack framework, where the recv_pyobj() function unpickled data arriving over the network, leaving systems vulnerable to arbitrary code execution.
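The pattern is straightforward to illustrate. In pyzmq, recv_pyobj() is a convenience wrapper that unpickles whatever bytes arrive on the socket, so a service that binds a reachable port effectively grants pickle-loading rights to anyone who can connect. The sketch below is a hypothetical service written for illustration, not code from any of the affected frameworks:

```python
# Minimal sketch of the unsafe pattern (hypothetical service, not vendor code).
import zmq

context = zmq.Context()
socket = context.socket(zmq.PULL)
socket.bind("tcp://0.0.0.0:5555")   # reachable socket: any client can send bytes

while True:
    task = socket.recv_pyobj()      # shorthand for pickle.loads(socket.recv());
                                    # pickle can run arbitrary code during loading
    print("received:", task)
```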

Although Meta patched the issue last October, researchers say the same insecure pattern appeared in open-source projects like vLLM and SGLang, amplifying the overall risk.

Meanwhile, inference tools linked to Nvidia and Microsoft were also found to be exposed through the same class of weakness, which the researchers have dubbed "ShadowMQ".

Security analysts have warned that attackers could exploit these openings simply by sending malicious serialised data to an exposed ZeroMQ socket, potentially compromising high-value AI workloads.
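The danger comes from pickle itself: an object's __reduce__ method tells pickle which callable to invoke when the bytes are loaded, so a crafted payload executes the moment it is deserialised, with no further interaction. A self-contained illustration, using a hypothetical payload demonstrated locally rather than over a socket:

```python
# Why unpickling untrusted bytes means code execution (hypothetical payload).
import os
import pickle

class Payload:
    def __reduce__(self):
        # pickle records this callable and its arguments;
        # pickle.loads() invokes it when the bytes are loaded
        return (os.system, ("echo arbitrary command executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)   # the command runs here, at load time
```

Against a service built on recv_pyobj(), an attacker would simply send bytes like these to the exposed port; the server's own receive call performs the load.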

While patches are now available, the findings have renewed debate over the security of rapidly evolving AI ecosystems.
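The usual remediation is to stop deserialising code-carrying formats from the network, for instance by switching to a data-only encoding such as JSON and binding sockets to trusted interfaces. Below is a sketch of that safer shape, assuming a simple message-passing service; it is not any vendor's actual patch:

```python
# Hardened sketch: data-only parsing, no code execution on load (illustrative only).
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.PULL)
socket.bind("tcp://127.0.0.1:5555")   # local interface only, not the open internet

while True:
    task = json.loads(socket.recv())  # JSON carries data, never executable objects
    print("received:", task)
```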

Researchers concluded that companies including Meta, Nvidia, and Microsoft must accelerate efforts to harden inference pipelines as reliance on large-scale AI systems continues to grow.