Level 1 · Lesson 8

AI Safety & Security

AI is powerful, but like all powerful tools, it can be misused. This lesson teaches you about AI-powered threats—deepfakes, phishing, social engineering, misinformation, and scams—and how to protect yourself, your organization, and your customers. This is the final lesson of Level 1.


Lesson Overview

You have learned how AI works, how to use it effectively, how to act ethically with it, and how to think about data responsibly. One dimension remains: security, and how to protect yourself and your organization from AI-powered attacks and misuse.

What You Will Learn

  • AI-Powered Threats (8.1): Deepfakes and synthetic media, phishing enhanced with AI, social engineering at scale, misinformation and disinformation, AI-generated scams
  • Protecting Yourself & Your Organization (8.2): Security best practices, recognizing AI-generated content, verification techniques, organizational security policies
  • Safe AI Usage Policies (8.3): Acceptable use policies, what to share and not share with AI, data classification, incident reporting procedures
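
Part of the "what to share and not share" check from 8.3 can be mechanized before anything is pasted into an AI tool. The sketch below is illustrative only: the pattern list is a hypothetical starting point, and simple regular expressions catch obvious cases, not everything an acceptable use policy covers.

```python
import re

# Hypothetical patterns for data that should never be pasted into an AI tool.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories detected in `text`, if any."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarize this thread: jane.doe@corp.example asked about invoices."
if classify_prompt(draft):
    print("Stop: remove", ", ".join(classify_prompt(draft)), "before submitting.")
```

A real deployment would pair a filter like this with data-classification labels and the incident-reporting procedure described above; the filter is a last line of defense, not a substitute for judgment.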

Why AI Security Matters

AI does not create new threats—attackers have always used phishing, social engineering, and fraud. But AI amplifies these threats at scale. A phishing email that once took hours to craft can now be generated in seconds. A person's voice can be cloned from a short recording and used for fraud or impersonation. Understanding AI-specific threats is essential for staying safe.

How This Lesson Is Structured

Each chapter builds your security awareness. By the end of Level 1, you will understand both how to use AI responsibly and how to defend against AI-enabled attacks.

Three Dimensions of AI Safety

1. Technical Safety

AI systems themselves can be safe or unsafe. A poorly designed AI system can perpetuate bias, leak sensitive information, or behave unpredictably. Understanding how to evaluate AI systems for safety is important for organizations adopting new AI tools.

2. User Safety

You can use AI safely or unsafely. Sharing passwords with AI, pasting confidential information, or relying too much on AI for critical tasks are unsafe practices. This lesson teaches you how to use AI safely.

3. Defense Against AI-Powered Attacks

Attackers now use AI to make phishing, fraud, and social engineering more effective. You need to recognize and defend against these attacks. This is the focus of this lesson.

Five Principles for AI Safety

1. Verify, Do Not Trust

In the age of deepfakes and AI-generated content, never trust without verification. If something seems important (especially if it involves a request for action or sensitive information), verify it through an independent channel before responding.
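
Some verification habits can be partly automated. As a minimal sketch (the allowlist and the 0.8 threshold are illustrative assumptions, not a vetted detector), a helper using Python's difflib can flag sender domains that nearly match, but do not exactly match, a domain you trust:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of domains your organization actually uses.
KNOWN_DOMAINS = {"example.com", "example-bank.com"}

def lookalike_score(domain: str) -> float:
    """Highest string similarity between `domain` and any known-good domain."""
    return max(SequenceMatcher(None, domain, known).ratio()
               for known in KNOWN_DOMAINS)

def flag_sender(address: str) -> str:
    """Classify a sender's domain as trusted, a suspicious lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_DOMAINS:
        return "trusted"
    # A near-match that is not an exact match often signals a typosquatted domain.
    return "suspicious lookalike" if lookalike_score(domain) >= 0.8 else "unknown"
```

Here "ceo@examp1e.com" (with a digit 1) would be flagged as a suspicious lookalike, while an unrelated domain is merely unknown. Real phishing defenses layer mail authentication (SPF, DKIM, DMARC) on top of heuristics like this—and none of it replaces verifying the request through an independent channel.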

2. Assume Sophisticated Attacks

Attackers are using AI. They can create convincing phishing emails, videos, or voice recordings. Assume attacks are sophisticated and designed to fool you. Default skepticism is appropriate.

3. Defend in Depth

No single defense is perfect. Use multiple layers: technical controls (authentication, encryption), organizational policies, training, and personal vigilance. Defense in depth means attackers have to overcome multiple barriers.

4. Report Concerns

If you encounter something suspicious, report it. If you are unsure whether something is safe, ask. Organizations with strong security cultures encourage reporting concerns rather than hiding them.

5. Keep Learning

AI threats are evolving. Attackers develop new techniques. Defenses improve. Stay informed about emerging threats. What is safe practice today might not be safe next year as technology evolves.

Completing Level 1

Completing Lesson 8 finishes Level 1: AI Aware. You will have learned:

  • How AI works (Lessons 1-3)
  • How to use AI effectively (Lessons 4-5)
  • How to use AI ethically and responsibly (Lesson 6)
  • How to think about data (Lesson 7)
  • How to defend against AI-powered threats (Lesson 8)

This prepares you for Level 2: Practitioner, where you will learn more specialized skills for deploying AI in organizational contexts.