
AISEC Certification Course
For Technology Professionals
The AISEC Certification provides the knowledge to sift through information, separate facts from the noise, and understand what’s behind AI's excitement.
It gives you an understanding of what hype is and what isn’t and, more importantly, where AI is going.
It explains the problems that can occur when AI isn’t properly managed and the approaches adversaries take to attack AI.
It also allows you to understand and set up the guardrails needed to ensure your business can use AI safely and responsibly.
February 2025 in Melbourne
Face-to-Face Training: $1,950 + GST. The fee includes Level 1 & 2 exams.

Trainer
Dr. Malcolm Shore
Malcolm is a Technical Director at Kode-1 and an adjunct Professor at the Centre for Cyber Resilience at Deakin University. Malcolm held the role of Director of Infosec at GCSB, the national security agency in New Zealand, for a decade and has subsequently held various CSO positions.
He represented both NZ and Australia on the ASEAN Cyber Security Strategy Committee, CSCAP, and subsequently had the opportunity to attend several Global Cyberspace Conferences. As part of his role in capacity building for cybersecurity in Australia, he initiated and helped develop Certificate IV in Cybersecurity for TAFE institutes across Australia.
aisec CERTIFICATION COURSE
-
Level 1 provides a solid introduction to the history and contemporary perspectives on AI and shows where each component of contemporary AI is positioned in the Gartner hype cycle. It lifts the hood on AI models and explains what they are underneath and how they were built.
AI threats, as detailed in the guidance documents from OWASP and MITRE, are explained with a practical demonstration of an attack. We also examine how to address the risk of AI and incident response.
There’s been an increasing level of government intervention in the use of AI, and we cover the requirements established by the ISO standards committee and the US and European governments. We show how to manage governance, maintain compliance with external obligations, and effectively manage risks to internal AI goals.
Australia is in the process of regulating Mandatory Guardrails for the use of AI in high-risk systems. We explain the concept of guardrails and the specifics of proposed Australian regulation. We also demonstrate how to use AI vulnerability scanners and guardrails and how they can effectively protect against AI mistakes.
On completing Day 1 AISEC, you can take a test to prove your knowledge of AI and be awarded the formal AISEC Level 1 Certificate.
-
To become competent AI cyber practitioners, we need to know and understand the concepts of AI and be able to apply them. The Level 2 certification picks up the training on AI from where Level 1 left off. We will dig deeper into the technology, threats, vulnerabilities, and countermeasures that are part of the AI discipline.
This level of certification teaches hands-on skills and requires attendees to be familiar with the Python scripting language. We’ll use the Google Colab scripting environment and run scripts locally on our PCs. We’ll also work with well-known AI repositories like HuggingFace and Ollama.
We’ll examine AI threats more closely and work through some labs. Here, we will learn how to use prompts and thought injections to access information that should not be available and how to compromise an AI model.
To defend against attacks, we’ll learn about monitoring our networks to detect AI usage and watch for attacks against our AI applications. We’ll also learn how to configure guardrails to protect against specific threats and scan AI models for vulnerabilities. We’ll address each of the Australian Mandatory Guardrails and guide satisfying our obligations and evidence compliance.
Completing Day 2 and passing the practical assessments will give you the formal AISEC Level 2 Certificate, proving your competence as an AI security practitioner.