A hands-on masterclass for engineers, business leaders, and compliance professionals running AI in any form. Building practical AI security skills, from technical implementation to executive understanding.
Most organizations didn't choose to deploy AI - it crept in or was ordered in. Engineers use coding assistants with repo access. Vendors quietly add AI features. Customer-facing products gain agents that browse the web and call internal APIs. Compliance teams are asked to certify systems nobody fully understands.
This masterclass is built for the people in the middle of that reality. You'll work through practical AI deployments, discover how broken things are, fix and mitigate what you find, and learn how to communicate it - to engineers, to executives, to customers, and to regulators.
Work with vulnerable AI deployments - chatbots with account access, agents with tool use, coding assistants with repo permissions. Find prompt injection, confused deputy scenarios, supply chain weaknesses, and agentic exploitation. Then fix them.
Understand how AI risk maps to OWASP LLM Top 10, MITRE ATLAS, the EU AI Act, NIST AI RMF, and ISO 42001 - without drowning in compliance checklists. Build governance grounded in what you've actually tested and fixed.
Engineers, business leaders, and compliance professionals all face AI security from different angles. This masterclass brings them together to practice the conversations that each demand different language and a different risk appetite - board-level trade-offs, customer questionnaires, regulator-facing posture statements.
Leave with a working AI inventory methodology, a tested risk assessment approach, an acceptable use policy you actually believe in, and templates you can adapt to your organization the week you return.
Testimonials from "Cyber Everywhere All at Once" — the foundation masterclass that this AI-focused edition builds on.
Cybersecurity Masterclass, April 2026: "I would definitely recommend this workshop to others, I loved it especially how it made us wear different hats from Pentesters right to Compliance officers. I discovered the hats I am somewhat confident wearing and those that still need some work. I also loved how it refined our soft skills like tips on presentation and public speaking."
Cybersecurity Masterclass, April 2026: "The multiple perspectives were really useful in understanding what goes on in an organisation. It can get pretty intensive and hectic, probably because of how much is distilled into this short sessions. I would definitely recommend this workshop. There's a lot of gems and knowledge shared, and the multiple perspectives are truly appreciated."
Cybersecurity Masterclass, April 2026: "The hands-on approach and direct feedback during the exercises and from speakers were valuable and helped with absorbing and retaining the knowledge of the topics that were worked through. I would highly recommend this workshop to others, just because the experience was excellent and of high standard."
Cybersecurity Masterclass, Feb 2025: "The course is brilliant. The speaker spoke from experience he gathered over the past years, and most of the use cases are really realistic, to the point that it is quite scary to be an infosec manager. He's not only warning us about many things that can go wrong, but also telling us how to make things right. Simulation, practice, even the management roleplay he did is impeccable, as much of upper management didn't actually know what their own company is doing."
Cybersecurity Masterclass, April 2026: "Most valuable: Insights, anecdotes, guests, teamwork. Would recommend this workshop to anyone, 10/10 times. Solid 10/10."
Cybersecurity Masterclass, April 2026: "Very useful, handy and educational. This is a startup and should be raised on community level — every tech and dev company should be involved."
Cybersecurity Masterclass, April 2026: "All 4 days information and workshop are relevant to company-oriented problems. Advanced workshop with practical usage and examples. Yes, will recommend — very useful workshop with practical application."
This masterclass is intentionally cross-functional. AI security fails when engineers, business, and compliance work in isolation.
Three speakers confirmed. Additional guest experts to be announced as the curriculum is finalized in mid-July 2026.
Chief Information Security Officer
CISO at Blue dot and Sourcico, with experience across SaaS startups and large enterprises in the banking, telco, and energy sectors. Founder of BeyondMachines.net.
Announced mid-July 2026
Additional guest experts will be announced when the full curriculum is finalized.
Announced mid-July 2026
Additional guest experts will be announced when the full curriculum is finalized.
Announced mid-July 2026
Additional guest experts will be announced when the full curriculum is finalized.
When: November 2026 - exact dates announced mid-July 2026. Format: two weekends (Friday-Saturday), full days from 9AM to 6PM.
Where: Base 42 (Rimska 25, Skopje)
A practical AI security program built around simulated organizational environments, covering discovery, testing, defense, and governance through hands-on work and structured discussion.
Limited to 20 participants to ensure personalized interaction and feedback.
Includes: Pre-masterclass preparation week with OWASP LLM Top 10 and MITRE ATLAS material, mid-session bridge week with conference calls and homework, daily expert sessions, hands-on AI exploitation and defense exercises, ready-to-use templates, structured recovery week with small-group facilitated calls, and 30-day post-masterclass support.
"What AI is actually in your organization — including the parts you didn't authorize"
Most organizations don't start with an AI strategy. AI creeps in through employees, vendors, and developer tools long before anyone draws a diagram of it. Day 1 grounds participants in how AI systems behave differently from traditional software, where AI shows up across modern organizations, and how to find what's already there. The day combines foundational theory with hands-on discovery work using simulated organizational environments.
Topics covered: AI system fundamentals and trust boundaries, shadow AI discovery, AI in the software development lifecycle, AI inventory methodology, hands-on practice with simulated AI deployments.
"Structured testing of AI systems — and learning to communicate as things break"
Once you know what AI is in your environment, you need to understand how it actually fails. Day 2 covers the major classes of AI vulnerabilities — prompt injection, output handling, RAG exploitation, agentic tool misuse, and supply chain risk — and gives participants structured testing experience against the simulated environments. The day also includes a communication block addressing what happens when findings emerge: who needs to know, what gets written down, and how to work with legal during AI-related incidents.
Topics covered: OWASP LLM Top 10 and MITRE ATLAS in practice, AI-specific testing techniques, structured test planning and execution, incident communication with internal and external stakeholders, regulatory considerations.
"Fix what you can, build the practices that prevent recurrence"
Testing produces findings. Day 3 turns those findings into action. Participants work through defensive patterns for AI systems and apply them directly to the issues uncovered on Day 2. The work covers both technical fixes and the engineering practices that prevent the same problems from reappearing. Each team also produces internal guidance grounded in the choices they actually made, not generic policy language.
Topics covered: AI defensive patterns and architectural choices, agent authorization and scoping, human-in-the-loop design, monitoring AI-specific signals, secrets and data handling, prioritization and residual risk, engineering guidance development.
"Make AI security an ongoing practice — and explain it to people who don't work for you"
By Day 4, participants have an inventory, know how things break, and have started fixing what they found. The final day is about turning that into a sustainable practice and explaining it clearly to audiences with very different expectations. Theory covers the AI governance landscape and what to watch for in the next 18 months. The practical work has participants produce real artifacts — a risk assessment, an acceptable use policy, an external posture statement — and then defend them in simulated meetings with executives and customers.
Topics covered: AI governance frameworks (EU AI Act, NIST AI RMF, ISO 42001), risk assessment for probabilistic systems, acceptable use policy development, executive and board-level communication, customer and procurement conversations, regulatory posture.
Reserve your seat now at the current tier price - no payment required to reserve. Pricing is locked at the tier in effect when you reserve. Full registration and invoicing opens mid-July 2026.
Lock in the €500 Very Early Bird price before the end of August. No payment required to reserve.