This seminar explores security challenges and optimization techniques for large language models (LLMs) through a series of expert-led sessions.
It begins with a hands-on capture-the-flag (CTF) workshop on jailbreaking LLMs, equipping participants with both offensive and defensive techniques.
The sessions then delve into vulnerabilities such as remote code execution risks, along with evaluation strategies for ensuring AI reliability.
Afternoon talks cover advanced jailbreak tactics, mitigation strategies, and enterprise AI agent deployment, providing practical insights for securing and optimizing LLM applications.
Designed for developers, researchers, and security professionals, the seminar offers a comprehensive view of LLM security, evaluation, and real-world implementation.