Research Aims

Our lab advances practical security by protecting software and data in real-world environments. We study both the offensive and defensive dimensions of security: understanding attacks (e.g., code injection, code reuse), designing robust defenses (e.g., attack surface reduction, moving target defense), and investigating security-relevant artifacts (e.g., digital forensics). As new technologies emerge, they introduce novel attack vectors that call for additional, adaptive layers of defense.
To achieve these objectives, we investigate the essential stages in the lifecycle of an executable binary in order to understand its underlying code semantics: 1) code generation, which transforms high-level source into machine code; 2) obfuscation, which intentionally hinders analysis; 3) static and dynamic analysis, which reason about structural and behavioral properties; 4) bug discovery, which inspects program behaviors to uncover vulnerabilities; 5) patching, which fixes security flaws directly at the binary level; and 6) decompilation, which reconstructs higher-level representations to support reverse engineering. In the era of artificial intelligence (AI), one of our key research areas envisions unifying these traditionally separate tasks into a single AI-driven pipeline; such a holistic approach enables scalable and resilient analysis of both benign and malicious software.
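To make the static-analysis stage concrete, the sketch below shows linear disassembly of a few raw x86-64 bytes. It is a minimal illustration only, assuming the Capstone disassembler (`pip install capstone`); the byte string is a hand-written toy function, not drawn from our tooling.

```python
# Minimal sketch of the static-analysis stage: linear disassembly of raw bytes.
# Assumes the Capstone engine is installed; the byte string is illustrative.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Toy function body: push rbp; mov rbp, rsp; mov eax, 0x2a; pop rbp; ret
CODE = b"\x55\x48\x89\xe5\xb8\x2a\x00\x00\x00\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)       # x86-64 disassembler instance
for insn in md.disasm(CODE, 0x1000):   # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

In practice, such a linear instruction view is only the starting point; later stages reason over control flow, data flow, and learned semantic representations of the code.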
Another research direction lies in securing AI models themselves. We tackle security challenges and defenses spanning the AI lifecycle, from data collection and training to deployment and inference. Our research addresses threats such as poisoning, backdoors, evasion, extraction, inversion, and membership inference, and develops holistic defenses, including data sanitization, adversarial training, machine unlearning, and watermarking, to build robust, trustworthy, and privacy-preserving AI systems.
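As a concrete instance of the evasion threat (and of the perturbed inputs that adversarial training feeds back into learning), the following minimal sketch generates an FGSM adversarial example. It assumes PyTorch, and `model`, `x`, and `label` are hypothetical placeholders for an arbitrary classifier and a correctly classified input; it is not tied to any specific system of ours.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch illustrating an evasion attack.
# Assumes PyTorch; `model`, `x`, and `label` are placeholders for a real
# classifier, an input tensor in [0, 1], and its ground-truth class index.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """Perturb `x` by eps in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed-gradient step nudges the input toward misclassification.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```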

Funding (Past to Present)

```mermaid
gantt
    title Ongoing Funded Projects
    dateFormat YYYY-MM-DD
    axisFormat %m/%y
    section Funding
    [12] Modular AI Watermarking          : 2025-06-01, 36M
    [11] Binary Micro-patching            : 2024-06-01, 33M
    [10] Securing Memory-Safety Languages : 2024-06-01, 48M
```

[12] Research Laboratory for Modular AI Watermarking for Generative AI Compliance, Supported by NRF (National Research Foundation of Korea); A joint project led by Jonguk Hou at Hallym University.

[11] Binary Micro-Security Patch Technology Applicable with Limited Reverse Engineering, Supported by IITP (Institute of Information & Communications Technology Planning & Evaluation); A joint project led by Jooyong Yi at Ulsan National Institute of Science and Technology (UNIST).

[10] Development of an Integrated Platform for Expanding and Safely Applying Memory-Safe Languages, Supported by IITP (Institute of Information & Communications Technology Planning & Evaluation); A joint project led by Yuseok Jeon at Korea University.

[9] Proofs and Responses against Evidence Tampering in the New Digital Environment, Supported by IITP (Institute of Information & Communications Technology Planning & Evaluation); A joint project led by Hyoungkee Choi at Sungkyunkwan University (SKKU).

[8] AI Platform to Fully Adapt and Reflect Privacy-Policy Changes, Supported by IITP (Institute of Information & Communications Technology Planning & Evaluation); A joint project led by Simon Woo at Sungkyunkwan University (SKKU).

[7] A Generalizable and Continual Deep Learning Model for Inferring the Context of Binary Codes, Supported by NRF (National Research Foundation of Korea).

[6] Code Unpacking Intermediate Representation Conversion, Supported by NSRI (National Security Research Institute).

[5] A Metric to Measure the Quality of Decompiled Codes, Supported by NSRI (National Security Research Institute); A joint project led by Sungjae Hwang at Sungkyunkwan University (SKKU).

[4] Efficient Fuzzing for Internal Communication Protocols with Firmware of Unmanned Vehicles, Supported by NSRI (National Security Research Institute); A joint project led by Daehee Jang at Sungshin Women’s University.

[3] Semantic-aware Executable Binary Code Representation and its Applications with BERT, Supported by NRF (National Research Foundation of Korea).

[2] Autonomous Car Security as part of Advanced Research and Development for Next-generation Security, Supported by IITP (Institute of Information & Communications Technology Planning & Evaluation); A joint project led by Yuseok Jeon at Ulsan National Institute of Science and Technology (UNIST), Haehyun Cho at Soongsil University, and Dokyung Song at Yonsei University.

[1] Vulnerability Analysis on IoTivity Protocol Provisioning, Supported by NSRI (National Security Research Institute); A joint project led by Hyoungkee Choi and Jaehoon Paul Jeong at Sungkyunkwan University (SKKU).