Agenda | Tutorials |
Tutorial K |
A Primer on Adversarial Machine Learning (AML) |
Fees |
$250 USD each |
Date |
Monday – February 24, 2025 |
Time |
1:00 PM – 4:30 PM PT |
Overview |
Adversarial machine learning (AML) is a field at the intersection of machine learning (ML) and computer security (cybersecurity). While ML model development prioritizes accuracy and efficiency, AML focuses on model security and robustness. This tutorial introduces core AML concepts, relevant governance documents, the taxonomy of attack techniques, and mitigation strategies.
Outline:
|
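As a concrete illustration of the evasion attacks found in standard AML taxonomies (not material drawn from the tutorial itself), the sketch below applies the Fast Gradient Sign Method (FGSM) to perturb an input so that a trained classifier becomes more likely to mislabel it. It is a minimal sketch assuming a PyTorch image classifier; the function name fgsm_attack and the epsilon perturbation budget are illustrative choices.

    # Illustrative FGSM sketch (assumes a trained PyTorch classifier).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Copy the input batch and track gradients with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step each pixel in the direction that increases the loss,
        # bounded by the epsilon budget, then clamp to a valid range.
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

    # Usage (illustrative): compare model(images) with model(fgsm_attack(model, images, labels)).

Mitigations covered in the AML literature, such as adversarial training, typically fold perturbed examples like these back into the training loop.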
Instructors | Ronald Nussbaum, Brian Tung, Andrew Brethorst, Nehal Desai, Dominic Berry, The Aerospace Corporation |
Biographies |
Ronald Nussbaum is a Project Leader in the Software Implementation and Integration Department at The Aerospace Corporation. In this role, he leads and supports a variety of programs related to artificial intelligence, data architectures, computer wargaming, and other areas. A self-described generalist, Ronald has research interests spanning the breadth of theoretical computer science. Possessing a strong affinity for hands-on programming, he has a passion for applying machine learning techniques to complex big data problems. Ronald holds a B.S. degree in computer science from Grand Valley State University, as well as M.S. and Ph.D. degrees in computer science from Michigan State University.
Brian Tung is a Senior Engineering Specialist in the Data Science and Artificial Intelligence Department at The Aerospace Corporation. He has led projects in machine learning, data science, natural language processing, distributed systems, and network security.

Andrew Brethorst is a Senior Engineering Specialist in the Data Science and Artificial Intelligence Department at The Aerospace Corporation. Mr. Brethorst completed his undergraduate degree in cybernetics from UCLA and later earned his master’s degree in computer science, with a concentration in machine learning, from UCI. Much of his work involves applying machine learning techniques to image exploitation, telemetry anomaly detection, and intelligent artificial agents using reinforcement learning.

Nehal Desai is a data scientist in the Data Science and Artificial Intelligence Department at The Aerospace Corporation who works on national security space (NSS) applications. Prior to Aerospace, Nehal worked at Los Alamos National Lab, Silicon Graphics, and IBM. Nehal holds a Ph.D. in mechanical engineering from North Carolina State University. In his spare time, Nehal enjoys raising hedgehogs and skeet shooting.

Dominic Berry is a cybersecurity researcher with the Cyber Assessments and Research Department at The Aerospace Corporation. Before coming to the company, Dominic earned his B.S. in computational modeling and data analytics, with a focus in cybersecurity and cryptography, at Virginia Tech. Outside of work, Dominic enjoys board games and spending time with his wife and son. |
Description of Intended Audience and Recommended Prerequisites |
This tutorial is intended for developers training, testing, or integrating artificial intelligence (AI) models, particularly generative AI (GenAI) models; developers designing, building, or maintaining data repositories; cybersecurity professionals; and personnel managing or providing oversight to such programs. |
What Can Attendees Expect to Learn |
Attendees will gain knowledge of adversarial machine learning concepts, an understanding of the importance of cybersecurity in all aspects of the generative artificial intelligence (GenAI) life cycle, hands-on experience with realistic scenarios as both attacker and defender, insight into what trends they should be monitoring, and a list of resources if they wish to learn more about AML. |