Prompt & Protect: Mastering Security and Maximizing Value from LLMs

Hands-On Seminar - Participants are required to bring their laptops

Course ID

52059

Date

15-07-2025

Time

Daily seminar
9:00-16:30

Location

Daniel Hotel, 60 Ramat Yam st. Herzliya

Overview

This seminar explores the security challenges and optimization techniques for large language models through a series of expert-led sessions. It begins with a hands-on CTF workshop on jailbreaking LLMs, equipping participants with offensive and defensive techniques. The sessions then delve into vulnerabilities, including remote code execution risks and evaluation strategies to ensure AI reliability. Afternoon talks cover advanced jailbreak tactics, mitigation strategies, and enterprise AI agent deployment, providing practical insights for securing and optimizing LLM applications. Designed for developers, researchers, and security professionals, the seminar offers a comprehensive view of LLM security, evaluation, and real-world implementation.

Who Should Attend

  • Developers
  • Penetration Testing (PT) engineers
  • Researchers
  • Software Architects
  • IT professionals

Course Contents

  • Hands-On Workshop: Jailbreaking LLMs – Lecturers: Eran Shimony and Shai Dvash
    • A practical session where participants will apply jailbreak techniques and explore security vulnerabilities in LLM systems.
 
  • Anatomy of an LLM Remote Code Execution Vulnerability – Lecturer: Shaked Reiner
    • Exploring a critical security vulnerability that enables remote code execution via manipulated input. The session covers the discovery process, security implications, and mitigation strategies for AI-powered applications.
 
  • Squeezing the Lemon: How to Get More Out of Your LLM – Lecturer: Alex Abramov
    • A practical guide to maximizing LLM potential through prompt engineering, task decomposition, and system integration. The talk includes optimization techniques for improved speed, efficiency, and production deployment.
 
  • Breaking and Securing LLMs: Evolving Jailbreaks and Mitigation Strategies – Lecturer: Niv Rabin
    • An in-depth analysis of advanced LLM jailbreak techniques and defensive strategies. Topics include semantic fuzzing, iterative refinement mechanisms, evaluation methods, and hybrid mitigation approaches.
 
  • LLM Evaluation is the New Unit Tests: Making Data-Driven Decisions – Lecturer: Roy Ben Yossef
    • Examining structured evaluation methods for generative AI, ensuring reliability and performance in unpredictable AI environments.
 
  • Hello, this is Your Security Bot Speaking – Lecturer: Michael Pasternak 
    • A deep dive into the development of an AI security bot leveraging LLMs for enhanced decision-making, effectively doubling security team capacity.
 
  • How to Execute LLM Agents in Your Organization – Lecturer: Michael Balber
    • A case study on CyberArk’s development of an intelligent agent with RAG capabilities. The session covers retrieval optimization, dataset creation, evaluation methodologies, and improving AI-driven responses by 60%.
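To make the idea behind the "LLM Evaluation is the New Unit Tests" session concrete, here is a minimal sketch of treating LLM output checks like a unit-test suite. It is illustrative only and not taken from the seminar materials: the model call is a stub (`fake_llm`), and all function and case names are hypothetical; in practice you would swap in your real LLM client and richer scoring than keyword matching.

```python
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call, so the sketch is runnable.
    canned = {
        "capital of France": "The capital of France is Paris.",
        "2 + 2": "2 + 2 equals 4.",
    }
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "I don't know."

def keyword_eval(output: str, required: list[str]) -> bool:
    """Pass if every required keyword appears in the model output."""
    return all(word.lower() in output.lower() for word in required)

def run_eval_suite(cases):
    """Run each case and collect pass/fail results, like a test runner would."""
    results = {}
    for name, prompt, required in cases:
        results[name] = keyword_eval(fake_llm(prompt), required)
    return results

cases = [
    ("geo", "What is the capital of France?", ["Paris"]),
    ("math", "What is 2 + 2?", ["4"]),
]
```

Run against a regression set on every prompt or model change, such a suite turns subjective "does it still work?" judgments into pass/fail data, which is the data-driven decision-making the session title refers to.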
                   
