Lesson Overview
Ethics is not an abstract philosophical topic when you are working with AI. It is a practical, daily concern that affects how you use AI tools, what data you share, whether you disclose AI use to colleagues and customers, and how your organization builds trust and manages risk.
This lesson covers four interconnected domains of AI ethics that every AI-aware worker needs to understand:
What You Will Learn
- Privacy & Data Protection (6.1): What information is safe to put in AI prompts, what requires anonymization, and how regulations such as the EU's GDPR and California's CCPA affect your organization
- Intellectual Property & Attribution (6.2): Who owns AI-generated content, copyright concerns, proper attribution practices, and organizational IP policies
- Transparency & Disclosure (6.3): When and how to disclose AI use to colleagues, customers, and stakeholders; building trust through openness
- Building an Ethical AI Practice (6.4): Creating your personal ethical framework, aligning with organizational policies, raising concerns responsibly
Organizations increasingly evaluate employees and vendors on their AI ethics practices. Understanding these issues positions you as someone who can be trusted with sensitive projects, who thinks about implications and risks, and who can navigate the complex landscape of responsible AI use. This is not just the right thing to do; it is becoming a professional requirement.
How This Lesson Is Structured
Each chapter stands on its own, but the chapters build on one another. You will first work through the key ethical issues, then learn practical frameworks for making ethical decisions in your daily work.
Throughout this lesson, you will encounter real scenarios and case studies. AI ethics is not about following rules. It is about developing judgment. The goal is to help you internalize these principles so you can apply them in situations you have not encountered before.
Five Core Principles of Responsible AI Use
Before diving into specific chapters, here are five principles that thread through all the material:
1. Do No Harm
The foundation of responsible AI use is considering potential harms: to individuals, to the organization, to society. This includes obvious harms (exposing confidential information) and subtle ones (perpetuating discrimination through biased AI systems). When in doubt, ask: "Who could be negatively affected by this decision, and how?"
2. Transparency Builds Trust
Hiding AI use creates suspicion. People are more willing to accept AI in their lives when they understand how it is being used, what it does, and what its limitations are. Disclosure often takes more effort upfront, but it spares you enormous amounts of damage control later.
3. Respect Privacy and Data Rights
Information is valuable, both to you as an individual and to your organization. Treating other people's data (and your organization's data) with respect means thinking carefully about what information you share, with whom, and whether that sharing is authorized. Privacy is not about hiding wrongdoing; it is about respecting human dignity and autonomy.
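To make this habit concrete, here is a minimal sketch of one precaution: stripping obvious identifiers from text before it goes into a prompt. Everything in it is illustrative; the PII_PATTERNS table and redact_for_prompt helper are hypothetical names, the regular expressions catch only the most obvious formats, and real anonymization requires far more than pattern matching.

```python
import re

# Illustrative patterns only; real PII detection needs far more than regex.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_for_prompt(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text with an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "Customer Jane Roe (jane.roe@example.com, 555-867-5309) reported a billing error."
print(redact_for_prompt(draft))
# Customer Jane Roe ([EMAIL REDACTED], [PHONE REDACTED]) reported a billing error.
```

Notice that the customer's name slips through untouched. Simple filters miss a great deal, which is why the safest default is still to leave sensitive details out of prompts entirely.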
4. Give Credit Where Credit Is Due
If AI was meaningfully involved in creating something, acknowledge it. This applies whether you are using AI to write an email, generate code, create graphics, or analyze data. Attribution is not about assigning blame; it is about honesty and maintaining credibility.
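What acknowledgment looks like depends on the medium. For AI-assisted code, one lightweight approach is a short note recording what the AI drafted and what a human verified. The format below is purely illustrative (there is no single standard); adapt it to whatever convention your organization adopts.

```python
# AI-assistance note (illustrative format, not an official standard):
#   - First draft of parse_invoice() was generated with an AI coding assistant.
#   - A human engineer reviewed the logic, tested edge cases, and rewrote the error handling.

def parse_invoice(raw: str) -> dict:
    """Parse a raw invoice string into structured fields (stub for illustration)."""
    ...
```

The same spirit applies outside code: a one-line note such as "Drafted with AI assistance and reviewed by the author" is often all an email or report needs.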
5. Develop Judgment, Not Just Rules
You cannot memorize a rule for every ethical scenario. Instead, develop the ability to think through ethical dilemmas: consider multiple perspectives, anticipate consequences, and make decisions you would be comfortable explaining and defending. This is how ethical professionals operate in all fields, and it applies equally to AI.
The Role of Organizational Policy
Ethics does not exist in a vacuum. You operate within an organizational context that has its own values, policies, and constraints. A responsible AI practice means understanding your organization's AI policies and ethical guidelines.
However, organizational policy is not the end of the story. Sometimes your organization has not yet developed clear AI policies (many organizations are still figuring this out). Sometimes you encounter a situation that your organization's policies do not address. Sometimes you believe an organizational policy is wrong. In these cases, developing your own ethical judgment and knowing how to raise concerns responsibly becomes even more important.
There will be moments when your personal ethical judgment and organizational expectations do not align. This lesson equips you to navigate these tensions. Sometimes the answer is to comply with organizational policy even if you disagree. Sometimes it is to raise concerns through proper channels. And sometimes, if the issue is serious enough, it might mean escalating further or even leaving the organization. Developing this judgment is part of becoming an ethical professional.
How to Get the Most from This Lesson
Engage with the scenarios. Each chapter includes real or realistic scenarios. Do not just read them. Pause and think about what you would do before reading the analysis. This develops your judgment.
Reflect on your organization. After each chapter, mentally translate the concepts to your actual workplace. What policies does your organization have? Are there gaps? What have you seen happen that relates to these issues?
Start practicing now. Do not wait until you encounter a major ethical dilemma to think about these issues. Start making small conscious choices today about privacy, attribution, and transparency. Build habits that will serve you when bigger questions arise.
Continue learning. This lesson covers foundational AI ethics concepts. As you advance through higher levels of the CAP curriculum, you will develop more sophisticated understanding of algorithmic fairness, bias detection, AI governance, and compliance frameworks.