AI for Security, Security for AI (ASSA 2026)

Artificial intelligence (AI), particularly large language models (LLMs), generative AI (GenAI), and multimodal systems, is rapidly transforming digital ecosystems. Alongside these advances, new vulnerabilities, attack surfaces, and security challenges are emerging. At the same time, AI is becoming a critical enabler for advanced cybersecurity solutions. This dual role of AI as both a tool for defense and a target of attacks has given rise to the paradigm of AI for Security and Security for AI.

This special session aims to bring together researchers, practitioners, and policymakers to explore cutting-edge advances at the intersection of AI and cybersecurity. We particularly encourage submissions that address real-world deployments, emerging threats, trustworthy AI, and secure AI system design.

Key topics of interest include, but are not limited to:

AI for Cybersecurity

  • AI/ML/LLM-based approaches for cyber threat detection, analysis, and response
  • Language models adapted to the security domain
  • AI-driven intrusion detection and prevention systems (IDS/IPS)
  • Malware detection, classification, and automated reverse engineering using AI
  • AI for cyber threat intelligence (CTI)
  • Autonomous cyber defense and agentic AI for security operations
  • AI for vulnerability discovery, exploit detection, and patch prioritization
  • AI-assisted digital forensics and incident response
  • Graph-based and multimodal AI for security analytics
  • AI for IoT security and firmware analysis
  • Applications of AI in combating misinformation, fraud, and social engineering attacks

Security for AI (Trustworthy and Robust AI Systems)

  • Security and privacy of LLMs, foundation models, and multimodal AI systems
  • Adversarial attacks and defenses for AI (e.g., prompt injection, jailbreaks, evasion attacks)
  • Data poisoning, backdoor attacks, and model extraction/inversion attacks
  • Red teaming and evaluation frameworks for AI systems
  • Secure and trustworthy AI deployment (including AI supply chain security)
  • Privacy-preserving AI: federated learning, differential privacy, secure multi-party computation
  • Robustness, generalization, and uncertainty estimation in AI models
  • AI alignment, safety, and risk mitigation in real-world applications
  • Detection and mitigation of hallucinations and unsafe outputs in LLMs

The list above is not exhaustive. However, submissions that do not address any topic in cybersecurity may be desk-rejected without full review.

Submission Guidelines

https://kse2026.kse-conferences.org/call-for-papers/

Session Organizers

Assoc. Prof. Nguyen Viet Hung

Le Quy Don Technical University (LQDTU), Vietnam, hungnv@lqdtu.edu.vn

Dr. Phan Viet Anh

Le Quy Don Technical University (LQDTU), Vietnam, anhpv@lqdtu.edu.vn

Dr. Cao Van Loi

Le Quy Don Technical University (LQDTU), Vietnam, loi.cao@lqdtu.edu.vn