How it works
Four tracks. One complete picture.
Each track builds on the last. Every topic appears exactly once. Click any module to see what's inside and open it directly.
A structured knowledge base covering every layer of the AI attack surface. Four tracks from threat fundamentals to hands-on labs with VectaX, AgentIQ, and DiscoveR.
Mirror Security
Documentation-style pages for each product: encrypted AI with VectaX, runtime guardrails with AgentIQ, and automated red teaming with DiscoveR.
Who it's for
Three roles, different entry points, same knowledge base.
Attack surfaces unique to LLMs, agents, and RAG pipelines. Test for prompt injection, data leakage, and model manipulation before they become production incidents.
Security principles designed into the architecture before deployment: secure model selection, inference endpoint hardening, agent tool permissioning, and encrypted memory.
Practical guidance on GDPR, the EU AI Act, ISO 42001, and the NIST AI RMF, with controls that actually apply to LLMs, agents, and generative AI deployments.
FAQ
AI security cannot be an afterthought. Start with the fundamentals and build up to hands-on red teaming.