Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We further show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
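
To make the bidirectional execution concrete, the following is a minimal sketch, assuming PyTorch; the class, layer sizes, and method names are illustrative assumptions, not the paper's implementation. It shows a small MLP whose shared weights are applied in the forward direction for a benign task and in transposed, reverse order for a hidden secondary task (e.g., reconstructing memorized samples from an index-like code):

```python
# Illustrative sketch only (assumed PyTorch setup), not the authors' code.
# The same two weight matrices serve a benign "forward" task and a hidden
# "transposed" task that maps an output-shaped code back to an input-shaped sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposableMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, in_dim) * 0.01)
        self.w2 = nn.Parameter(torch.randn(out_dim, hidden) * 0.01)

    def forward(self, x):
        # Benign direction: input -> class logits.
        return F.linear(F.relu(F.linear(x, self.w1)), self.w2)

    def backward_pass(self, y):
        # Transposed direction: same parameters, transposed and applied in reverse order.
        return F.linear(F.relu(F.linear(y, self.w2.t())), self.w1.t())

model = TransposableMLP()
x = torch.randn(4, 784)                     # e.g. flattened images
logits = model(x)                           # legitimate task output
code = torch.randn(4, 10)                   # e.g. an index encoding for retrieval
reconstruction = model.backward_pass(code)  # hidden task output
print(logits.shape, reconstruction.shape)   # (4, 10) and (4, 784)
```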

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even to train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
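
As a rough illustration of how the two directions could share one set of weights during training, the sketch below (an assumption about the setup, reusing the hypothetical TransposableMLP above; the paper's actual procedure and loss weighting may differ) combines a benign classification loss with a hidden memorization loss in a single update:

```python
# Hedged sketch of a joint training step (assumed setup, not the paper's code).
# The shared weights receive gradients from both the benign task and the
# hidden memorization task in every update.
import torch.nn.functional as F

def train_step(model, optimizer, x, labels, codes, targets, alpha=1.0):
    """One combined update: benign classification + hidden sample memorization.

    x, labels -- benign-task batch (e.g. images and class labels)
    codes     -- per-sample index encodings used to retrieve memorized data
    targets   -- the samples to be memorized (same shape as x)
    alpha     -- weight of the hidden objective (assumed hyperparameter)
    """
    optimizer.zero_grad()
    benign_loss = F.cross_entropy(model(x), labels)
    hidden_loss = F.mse_loss(model.backward_pass(codes), targets)
    loss = benign_loss + alpha * hidden_loss
    loss.backward()
    optimizer.step()
    return benign_loss.item(), hidden_loss.item()
```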
