View on GitHub

Lecture Series

Python, AI, and Cybersecurity Resources

AI Security

Focusing on the protection of AI systems from malicious actors and the ethical use of AI models.

1. Adversarial Attack Simulation (Red Teaming)

Goal: Proactively test models against sophisticated prompt injection and data poisoning attacks.

2. PII Masking & Data Privacy Guardrails

Goal: Prevent sensitive information from being processed by LLMs or stored in insecure logs.

3. Model Inversion & Extraction Defense

Goal: Prevent attackers from stealing model parameters or reconstructing sensitive training data.