Trustworthy AI through Machine Learning, Logic, and Neuro-Symbolic Integration (TA-MLNI 2026)
Artificial intelligence is increasingly deployed in high-impact domains, yet concerns about reliability, explainability, robustness, fairness, and accountability remain major barriers to wider adoption. While machine learning has achieved remarkable performance across many tasks, purely data-driven approaches often struggle to provide interpretable, controllable, and verifiable decision-making. Several directions have been explored to address these limitations. Knowledge-based methods, such as knowledge graphs, ontologies, logical reasoning, and other symbolic approaches, offer structured representations and explicit reasoning capabilities that complement statistical learning, alongside work focusing on robustness, evaluation, and human-centered design.
This special session aims to bring together researchers and practitioners working at the intersection of machine learning, knowledge representation, and trustworthy AI to advance the development of reliable, human-centered intelligent systems. It will explore how learning-based, knowledge-based, and human-centered methods can improve transparency, robustness, reasoning ability, and human trust in intelligent systems. The session also welcomes work on theoretical foundations, methodologies, tools, and real-world applications that support the design, evaluation, and governance of trustworthy AI.
Topics of Interest
Relevant topics include, but are not limited to:
- Trustworthy AI
- Explainable and interpretable AI
- Neuro-symbolic AI
- Symbolic reasoning for machine learning
- Hybrid AI systems
- Knowledge graphs for AI
- Ontologies and semantic technologies
- Knowledge-enhanced learning
- Logical reasoning in intelligent systems
- Causal and relational reasoning
- Robustness, fairness, and accountability in AI
- Verification and validation of AI systems
- Uncertainty-aware and reliable AI
- Human-centered and responsible AI
- AI governance and standards
- Knowledge-grounded large language models
- Neuro-symbolic methods for LLMs
- Graph machine learning for trustworthy AI
- Real-world applications in healthcare, education, cybersecurity, industry, and science
Submission Guidelines
https://kse2026.kse-conferences.org/call-for-papers/
Session Organizers
Teeradaj Racharak
Tohoku University, racharak.teeradaj.c3@tohoku.ac.jp
Watanee Jearanaiwongkul
Tohoku University, jearanaiwongkul.watanee.b2@tohoku.ac.jp
Prarinya Siritanawan
Shinshu University, prarinya@shinshu-u.ac.jp
Trung Vo
Japan Advanced Institute of Science and Technology, trungvo@jaist.ac.jp