GeekPwn Contest & CAAD Village Schedule

We are excited to announce the speakers and topics for the GeekPwn Contest and CAAD Village at DEF CON 26.

  • Tips:
    GeekPwn Contest, Friday 10:00-13:00, Contest Stage at Caesars
    GEEKPWN Party, Friday at 20:30, Scenic at Flamingo (open to register)
    CAAD Village, Friday and Saturday 10:00-16:00, Lake Mead at Flamingo

GeekPwn Contest Schedule

10:00-10:10  Opening Remarks (Daniel Wang)
10:10-10:15  Introduction of CAAD (Judges and Contestants)
10:15-10:30  CAAD CTF Commentary (Haibing Wang, Bo Li)
10:30-10:50  Secure learning in adversarial environments (Bo Li)
10:50-11:15  Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization (Joey Bose)
11:15-11:20  Introduction of Robot Agent Challenge (Judges and Contestants)
11:20-12:50  Robot Agent Challenge Commentary (Paul Oh, Lisheng Lv)
12:50-12:55  CAAD CTF Finalists Announcement
12:55-13:00  Robot Agent Challenge Finalists Announcement

CAAD Village Schedule

Friday

10:00-13:00  CAAD CTF
14:00-14:30  Hardware Trojan Attacks on Neural Networks (Joseph Clements)
14:30-15:00  How to leverage open-source information to make an effective adversarial attack/defense against deep learning models (Wei Li, Xiaojin Jiao)
15:00-15:30  Recent progress in adversarial deep learning attack and defense (Wenbo Guo, Alejandro Cuevas)
15:30-16:00  Weapons for Dog Fight: Adapting Malware to Anti-Detection based on GAN (Zhuang Zhang, Bo Shi, Hangfeng Dong)
16:00-16:30  Boosting Adversarial Attacks with Momentum (Tianyu Pang)

Saturday

10:00-10:30  Transferable Adversarial Perturbations (Mengyun Tang)
10:30-11:00  Targeted Adversarial Examples for Black Box Audio Systems (Rohan Taori, Amog Kamsetty)
11:00-11:30  The Vanishing Trick for Self-driving Cars (Weilin Xu, Yunhan Jia, Zhenyu Zhong)
11:30-12:00  Practical adversarial attacks against challenging models and environments (Moustafa Alzantot, Yash Sharma)
12:00-12:30  Adversarial^2 Training (Yao Zhao, Yuzhe Zhao)

TALK ABSTRACTS AND BIOS

Secure learning in adversarial environments

Bo Li, assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign

Advances in machine learning have led to rapid and widespread deployment of software-based inference and decision making, resulting in various applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or classification models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors in classification through poisoning attacks. In addition, by undermining the integrity of learning systems, the privacy of users’ data can also be compromised. In this talk, I will describe my recent research on generating evasion attacks in adversarial environments. I will also provide some insights on how to leverage data properties, such as spatial consistency, as a potential detection approach against such adversarial attacks.
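
For readers unfamiliar with evasion attacks, the sketch below illustrates the basic idea with the fast gradient sign method (FGSM), one of the simplest test-time attacks. It is an illustrative example only, not Dr. Li's method; `model`, `image`, and `label` are hypothetical placeholders.

```python
# Minimal FGSM evasion-attack sketch (PyTorch). `model`, `image`, and `label`
# are hypothetical placeholders for a differentiable classifier and one
# batched example; this is illustrative, not the speaker's method.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, label, epsilon=8 / 255):
    """Return an adversarial copy of `image` inside an L-infinity ball."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```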

Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign, and is a recipient of the Symantec Research Labs Graduate Fellowship. She was a postdoctoral researcher at UC Berkeley, working with Professor Dawn Song. Her research focuses on both theoretical and practical aspects of machine learning, computer vision, security, privacy, game theory, social networks, and adversarial deep learning. She has designed several robust learning algorithms, a scalable framework for achieving robustness for a range of learning methods, and a privacy-preserving data publishing system.

Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization

Avishek (Joey) Bose, Master’s student at the University of Toronto

Adversarial attacks involve adding small, often imperceptible, perturbations to inputs with the goal of getting a machine learning model to misclassify them. While many different adversarial attack strategies have been proposed on image classification models, object detection pipelines have been much harder to break. In this talk, I propose a novel strategy to craft adversarial examples by solving a constrained optimization problem using an adversarial generator network. The approach is fast and scalable, requiring only a forward pass through our trained generator network to craft an adversarial sample. Unlike many attack strategies, we show that the same trained generator is capable of attacking new images without explicitly optimizing on them. The attack is evaluated on a trained Faster R-CNN face detector on the cropped 300-W face dataset, where it manages to reduce the number of detected faces to 0.5% of all originally detected faces. In a different experiment, also on 300-W, I demonstrate the robustness of our attack to a JPEG compression based defense: a typical JPEG compression level of 75% reduces the effectiveness of our attack from only 0.5% of detected faces to a modest 5.0%.
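
As a rough illustration of the generator-based approach described above (not the exact method from the talk), the sketch below shows one training step in which a hypothetical `generator` produces a bounded perturbation and a hypothetical `face_detector` returns differentiable detection confidences that the generator learns to suppress.

```python
# Sketch of one training step for an adversarial generator that suppresses
# face detections. `generator` and `face_detector` are hypothetical modules:
# the generator maps an image batch to a perturbation, and the detector is
# assumed to return differentiable per-detection confidence scores.
import torch

def generator_attack_step(generator, face_detector, images, optimizer,
                          eps=0.03, lam=1e-3):
    perturbation = eps * torch.tanh(generator(images))   # bound the perturbation
    adv_images = (images + perturbation).clamp(0.0, 1.0)
    scores = face_detector(adv_images)                   # confidences of detected faces
    # Push detection confidences down while keeping the perturbation small.
    loss = scores.sum() + lam * perturbation.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```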

Joey Bose (Twitter: @bose_joey) is a Master’s student at the University of Toronto under the supervision of Parham Aarabi. His current research interests involve crafting adversarial attacks on computer vision models using GANs. Beginning Fall 2018 he will start his PhD at McGill / MILA under the supervision of Will Hamilton and Jackie Cheung. He is also a research intern at Borealis AI, where he works on applying adversarial learning principles to learn better embedding models through adversarial sampling schemes.

Hardware Trojan Attacks on Neural Networks

Joseph Clements, PhD student at Clemson University, Dr. Yingjie Lao’s Secure and Innovative Computing Research Group

Driven by its accessibility and ubiquity, deep learning has seen rapid growth into a variety of fields in recent years, including many safety-critical areas. With the rising demands for computational power and speed in machine learning, there is a growing need for hardware architectures optimized for deep learning and other machine learning models, specifically in tightly constrained edge-based systems. Unfortunately, the modern fabless business model of manufacturing hardware, while economical, leads to deficiencies in security through the supply chain. In addition, the embedded, distributed, unsupervised, and physically exposed nature of edge devices makes various hardware or physical attacks on edge devices critical threats. In this talk, I will first introduce the landscape of adversarial machine learning on the edge. I will discuss several new attacks on neural networks from the hardware or physical perspective. I will then present our method for inserting a backdoor into neural networks. Our method is distinct from prior attacks in that it alters neither the weights nor the inputs of a neural network; rather, it inserts a backdoor by altering the functionality of the operations the network performs on those parameters during the production of the neural network.
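
As a purely software analogue of the idea (illustrative only; the talk concerns modifications made at the hardware level), the sketch below shows an operation whose functionality, rather than its weights or inputs, has been altered: it behaves like a normal ReLU until a hypothetical trigger pattern appears in the input.

```python
# Software analogue of a hardware-level backdoor: the operation itself is
# altered so it behaves normally unless a trigger condition is met. This is
# an assumption-laden illustration, not the method presented in the talk.
import torch
import torch.nn as nn

class TrojanedReLU(nn.Module):
    """Acts like ReLU unless a hypothetical trigger appears in the input."""

    def __init__(self, target_boost=10.0):
        super().__init__()
        self.target_boost = target_boost

    def forward(self, x):
        out = torch.relu(x)                      # normal behaviour
        # Hypothetical trigger: the first few input values are saturated.
        if bool((x.flatten()[:4] >= 0.999).all()):
            out = out.clone()
            # Silently inject a large activation on one channel, steering the
            # downstream prediction toward the attacker's target.
            out[..., 0] = out[..., 0] + self.target_boost
        return out
```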

Joseph Clements works with Dr. Yingjie Lao’s Secure and Innovative Computing Research Group, conducting research on adversarial AI in edge-based deep learning technologies. In the fall semester of 2017, Joseph joined Clemson University’s Holcombe Department of Electrical and Computer Engineering in pursuit of his PhD. He graduated with a bachelor’s degree in computer engineering from the University of South Alabama in May 2016. There, he engaged in research with Dr. Mark Yampolskiy on the security of additive manufacturing and cyber-physical systems. His research interests include machine learning and artificial intelligence, security, and VLSI design.

How to leverage open-source information to make an effective adversarial attack/defense against deep learning models

Wei Li and Xiaojin Jiao, NorthWestSec

Adversarial attacks and defenses against machine learning models in the digital world.

NorthWestSec is a team of independent security researchers concentrating on AI and security topics. They demonstrated hacking Google reCAPTCHA using deep learning technology at GeekPwn 2017 in Silicon Valley.

Recent progress in adversarial deep learning attack and defense

Wenbo Guo and Alejandro Cuevas, JD-Omega

In this talk, the speaker will introduce state-of-the-art techniques in both attack and defense. More specifically, he will summarize the most effective attack approaches and defense mechanisms. He will also share the approaches their team adopted for the competition.

Wenbo Guo is a Ph.D. student in the College of Information Science and Technology at Pennsylvania State University. Currently, he is a research intern at the JD security research center in Silicon Valley. Before joining Penn State, he received his Master’s degree from Shanghai Jiao Tong University in 2017. His research mainly focuses on deep learning as well as its applications in program analysis and security. He has published several research papers in high-quality journals and conferences such as KDD.

Alejandro Cuevas, originally from Paraguay, graduated in May 2018 from The Pennsylvania State University with a B.S. in Security and Risk Analysis. As an undergraduate, Alejandro co-authored two papers in different areas within computer security. At Penn State, Alejandro has worked on analyzing the challenges in the reproduction of crowd-reported vulnerabilities and is currently involved in a project presenting a novel RNN for memory alias analysis. Alejandro has also collaborated extensively with EPFL, exploring the security challenges faced by the ICRC and helping in the deployment of an anonymous communication protocol with provable traffic-analysis resistance. Alejandro is currently applying to Ph.D. programs and hopes to start in the fall of 2019.

Weapons for Dog Fight: Adapting Malware to Anti-Detection based on GAN

Zhuang Zhang, Bo Shi, and Hangfeng Dong, Tencent Yunding Lab (Twitter: @YDLab9)

Since malware first appeared, there has been a constant fight between malware and anti-virus software, and more and more machine learning based methods are being applied to detect malware. We will share how to detect polymorphic malware with a CNN, and then introduce a method that uses a generative adversarial network to generate adversarial malware examples that bypass machine learning based detection models.
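
In the spirit of MalGAN-style approaches (an assumption-laden illustration, not the speakers' implementation), the sketch below shows one generator update that adds features to binary malware feature vectors, such as API-call indicators, so that a hypothetical `substitute_detector` labels them benign.

```python
# Sketch of a MalGAN-style generator step: perturb binary malware feature
# vectors so a substitute detector labels them benign. `generator` and
# `substitute_detector` are hypothetical models; only feature additions are
# allowed so malicious functionality is preserved.
import torch
import torch.nn.functional as F

def malware_gan_step(generator, substitute_detector, malware_feats, optimizer):
    noise = torch.rand(malware_feats.size(0), 32)                 # random seed input
    added = generator(torch.cat([malware_feats, noise], dim=1))   # values in [0, 1]
    # Only add features (soft logical OR); never remove functionality-critical ones.
    adv_feats = torch.clamp(malware_feats + added, max=1.0)
    benign_label = torch.zeros(malware_feats.size(0), dtype=torch.long)
    loss = F.cross_entropy(substitute_detector(adv_feats), benign_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```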

Zhuang Zhang is a senior researcher at Tencent Yunding Laboratory.

Bo Shi is the Ecosystem Director of Tencent Yunding Laboratory.

Hangfeng Dong is a researcher at Tencent Yunding Laboratory.

Boosting Adversarial Attacks with Momentum

Tianyu Pang and Chao Du, Tsinghua University

Deep neural networks are vulnerable to adversarial examples, which poses security concerns for these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
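
The core idea, accumulating a normalized gradient with a decay factor and stepping along its sign, can be sketched as follows. This is a minimal illustration of a momentum iterative attack rather than the authors' released code; `model`, `x`, and `y` are placeholders for a differentiable classifier and one batch.

```python
# Minimal momentum iterative attack sketch (MI-FGSM style): accumulate a
# normalized gradient with decay factor mu and step with its sign.
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    x = x.detach()
    alpha = eps / steps
    g = torch.zeros_like(x)                 # accumulated momentum
    adv = x.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        # Normalize the gradient so the momentum term dominates the update.
        g = mu * g + grad / (grad.abs().mean() + 1e-12)
        adv = (adv.detach() + alpha * g.sign()).clamp(0.0, 1.0)
        adv = x + (adv - x).clamp(-eps, eps)   # project back into the eps-ball
    return adv.detach()
```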

Tianyu Pang is a first-year Ph.D. student in the TSAIL Group in the Department of Computer Science and Technology, Tsinghua University, advised by Prof. Jun Zhu. His research interests include machine learning, deep learning, and their applications in computer vision, especially the robustness of deep learning.

Transferable Adversarial Perturbations

Bruce Hou and Wen Zhou, Tencent Security Platform Department

State-of-the-art deep neural network classifiers are highly vulnerable to adversarial examples, which are designed to mislead classifiers with a very small perturbation. However, the performance of black-box attacks (without knowledge of the model parameters) against deployed models always degrades significantly. In this paper, we propose a novel way of crafting perturbations for adversarial examples that enables black-box transfer. We first show that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks. We also show that smooth regularization on adversarial perturbations enables transferring across models. Extensive experimental results show that our approach outperforms state-of-the-art methods in both white-box and black-box attacks.
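
A minimal sketch of the two ingredients mentioned above, maximizing distance in an intermediate feature map and smoothing the perturbation, is given below. `feature_extractor` is a hypothetical module returning a mid-layer feature map, and the loss is an illustration rather than the paper's exact formulation.

```python
# Sketch of a transfer-oriented attack loss: (1) push the adversarial example
# away from the clean image in an intermediate feature map, and (2) keep the
# perturbation spatially smooth. `feature_extractor` is hypothetical and is
# assumed to return a feature map of shape (N, C, H, W).
import torch

def transfer_attack_loss(feature_extractor, x_clean, x_adv, smooth_weight=1e-2):
    f_clean = feature_extractor(x_clean).detach()
    f_adv = feature_extractor(x_adv)
    feature_distance = (f_adv - f_clean).pow(2).mean()     # to be maximized
    delta = x_adv - x_clean
    # Total-variation style penalty encourages smooth, transferable perturbations.
    tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() + \
         (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    # Minimizing the returned value maximizes feature distance and smoothness.
    return -feature_distance + smooth_weight * tv
```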

Bruce Hou, a senior security researcher with more than four years of experience in the Tencent Security Platform Department, mainly focuses on the classification of images and videos, human-machine confrontation, and cyber security attacks and defenses.

Wen Zhou, a senior security researcher with multiple years of experience in the Tencent Security Platform Department, mainly focuses on research in computer vision and adversarial examples.

Tencent Blade Team was founded by the Tencent Security Platform Department, focusing on security research in AI, mobile Internet, IoT, wireless devices, and other cutting-edge technologies. So far, Tencent Blade Team has reported many security vulnerabilities to a large number of international manufacturers, including Google and Apple. In the future, Tencent Blade Team will continue to make the Internet a safer place for everyone.

Targeted Adversarial Examples for Black Box Audio Systems

Rohan Taori and Amog Kamsetty, undergraduates at UC Berkeley studying EECS

The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity after 3,000 generations while maintaining 94.6% audio file similarity.
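
The gradient-estimation half of such an attack can be sketched as a finite-difference estimator that only queries the model. The code below is an illustration under that assumption, not the authors' implementation; `score_fn` is a hypothetical black-box scoring function over a waveform (e.g., a loss toward the target transcription).

```python
# Sketch of query-only gradient estimation for a black-box audio attack:
# estimate the gradient of a scalar score returned by an ASR system using
# antithetic random sampling. `score_fn` is a hypothetical black-box function.
import torch

def estimate_gradient(score_fn, audio, n_samples=50, sigma=1e-3):
    grad = torch.zeros_like(audio)
    for _ in range(n_samples):
        noise = torch.randn_like(audio)
        # Query the black box at +/- perturbations and difference the scores.
        delta = (score_fn(audio + sigma * noise) -
                 score_fn(audio - sigma * noise))
        grad += delta * noise
    return grad / (2 * sigma * n_samples)
```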

Rohan Taori (Twitter: @rtaori13) is an undergraduate at UC Berkeley studying EECS with an interest in machine learning and AI. He heads the educational division at Machine Learning at Berkeley and is also a researcher at BAIR (Berkeley AI Research).

Amog Kamsetty is an undergraduate studying EECS at UC Berkeley, with an interest in both machine learning and systems. He is involved with Machine Learning @ Berkeley and is currently pursuing research at UC Berkeley RISE Lab.

The Vanishing Trick for Self-driving Cars

Weilin Xu, Yunhan Jia, and Zhenyu Zhong, Baidu X-Lab

We will introduce a magic trick that makes objects in front of self-driving cars vanish, using adversarial machine learning techniques.

Weilin Xu is an intern at Baidu X-Lab and a PhD candidate at the University of Virginia.

Yunhan Jia is a senior security scientist at Baidu X-Lab.

Zhenyu Zhong is a staff security scientist at Baidu X-Lab.

Practical adversarial attacks against challenging models and environments

Moustafa Alzantot and Yash Sharma, UCNESL

[Abstract]

Moustafa Alzantot is a Ph.D. Candidate in Computer Science at UCLA. His research interests include machine learning, privacy, and mobile computing. He is an inventor of two US patents and the recipient of several awards, including the COMESA 2014 innovation award. He has worked as an intern at Google, Facebook, and Qualcomm.

Yash Sharma is a visiting scientist at Cornell who recently graduated with a Bachelor’s and Master’s in Electrical Engineering. His research has focused on adversarial examples, namely pushing the state of the art in attacks in both limited-access settings and challenging domains. He is interested in finding more principled solutions for resolving the robustness problem, as well as studying other practical issues which are inhibiting us from achieving AGI.

Adversarial^2 Training

Yao Zhao and Yuzhe Zhao, from YYZZ Team

Targeted attacks on image classifiers are difficult to transfer from one model to another. Only strong adversarial attacks with knowledge of the classifier can bypass existing defenses. To defend against such attacks, we implement an “adversarial^2 training” method to strengthen the existing defenses.

Yao Zhao is an applied scientist at Microsoft AI & Research working on natural language understanding/generation and search ranking. During his Ph.D. at Yale University, he worked in the field of computer vision and optics.

Yuzhe Zhao is a software engineer at Google Research, working on natural language understanding. He recently earned his Ph.D. from Yale University. Previously, he received his undergraduate degree in mathematics and physics from Shanghai Jiao Tong University.


More info about GeekPwn: 2018.geekpwn.org

 
