Research group
We develop AI/ML models and algorithms for cutting-edge security solutions that protect critical infrastructures. In parallel, our research aims to prevent attackers from exploiting these same technologies to break in and cause harm, through, e.g., offensive and defensive evaluation strategies, security and privacy by responsible design, and network traffic analysis.
Our research aims to design, develop, and verify models and algorithms at the intersection of machine learning, anomaly detection, systems and AI security, and distributed systems. We focus on security and privacy problems that arise both in leveraging and in protecting artificial intelligence/machine learning for critical infrastructure systems. Over the past years, our interests have evolved to include topics in security and privacy analytics, responsible machine learning, fault detection and resolution, threat modeling, adversarial attacks (e.g., backdoors, bit-flips) and defenses, and private Internet communications.
A common theme in our most recent research is developing, improving, or rigorously analyzing machine learning models and algorithms to detect, prevent, and diagnose faults, failures, anomalies, or attacks, from a single system up to large-scale critical infrastructures. We have also been exploring offensive and defensive security solutions for AI systems against emerging attacks such as backdoors, bit-flips, poisoning, evasion, gradient leakage, and data leakage. This group is part of a larger research environment, the Autonomous Distributed Systems Lab.