Checkmarx Surfaces Lies-in-the-Middle Attack to Compromise AI Tools
Checkmarx today published a technique it has uncovered that poisons artificial intelligence (AI) agents models in a way that convinces them to tell end users that certain activities and behaviors are safe when in fact they are high risk. Darren Meyer, security research advocate at Checkmarx Zero, a research arm of the company, said this..