Evasion, Poisoning, Extraction, and Inference: Tools to Defend and Evaluate
Published 2021-07-20