Level 1 · Chapter 6.3

Transparency & Disclosure

Hiding AI use creates suspicion and damages trust. Disclosing it builds credibility. Learn how to communicate transparently about AI with colleagues, customers, and stakeholders—and understand when disclosure is essential vs. optional.


The Trust Question: Should You Tell People?

You write a proposal using AI tools. It is good work—the AI gave you a solid structure, you refined it significantly with your own insights, and you believe it will genuinely help the client. Should you tell the client AI was involved?

Many people's instinct is no: why risk raising questions or appearing less creative? But that instinct is misguided. Disclosure, done well, maintains trust; hiding AI use until it is discovered damages trust far more severely.

This chapter teaches you how and when to disclose AI use. The goal is not to burden every interaction with disclosure discussions, but to be transparent about AI involvement when it matters—when external parties have a legitimate interest in knowing.

The Trust Principle

In any professional relationship, if something could affect the other party's evaluation or decision, you should disclose it. Conversely, if disclosure would not change their evaluation (using an AI spell-checker is technical, not creative), you do not necessarily need to mention it.

When Disclosure Matters Most

Client and External Work

If you are providing work to a client or external stakeholder, they have a right to know how that work was created. They may care for business reasons (Do they want AI-generated content? Are they concerned about outsourcing to AI?), for legal reasons (compliance or contract requirements), or for strategic reasons (competitive advantage). When in doubt, disclose.

A practical approach: Include a brief note in proposals or deliverables if AI was meaningfully involved. Something like: "This proposal's initial structure was developed with AI assistance, but all strategic recommendations and client-specific analysis are original." This signals transparency without being defensive.

Published or Public-Facing Content

If your work will be published, distributed widely, or posted publicly, disclosure becomes more important. Readers, listeners, or viewers increasingly expect to know if AI was involved in creating content. This is especially true in media, publishing, and creative industries.

Some organizations now require AI disclosure labels on content: "This article was written with AI assistance" or "This image was generated with AI." Even where not required, transparency is professionally safer.

Work That Influences Decisions About People

If AI-generated or AI-assisted work is being used to make decisions about people—hiring decisions, medical recommendations, performance evaluations, loan approvals—disclosure is essential. Decision makers need to understand what tools contributed to the decision so they can account for potential AI biases and limitations.

In these contexts, disclosure protects both the affected individuals and your organization. It says: "Here is what influenced this decision, and here are the limitations of that analysis."

High-Stakes or Sensitive Contexts

Legal work, medical analysis, financial advice, psychological assessment, safety-critical systems: in any high-stakes domain where errors matter tremendously, disclosure is important. The professional and legal standard is increasingly becoming: if AI was meaningfully involved, stakeholders should know.

When Disclosure Is Less Critical

Internal Productivity Tools

Using AI to help you draft an email to your team, organize a brainstorm, or clean up rough notes: this is AI as a productivity tool, not AI as a creator. You do not need to disclose to your team that you used an AI spell-checker or grammar tool to refine your message. This is like not mentioning that you used a word processor instead of writing by hand.

Routine Technical Assistance

Using AI to debug code, explain an error, or help troubleshoot: this is technical assistance. If the code works and you tested it, mentioning that you asked an AI for help is optional. The code's quality is what matters.

Internal Work Where You Add Significant Value

If you are using AI as a starting point for analysis, brainstorming, or planning, but you significantly revise, refine, and adapt the output, disclosure is less critical for internal use. However, if the work will be presented externally, reconsider.

How to Disclose AI Use Effectively

Be Honest and Specific

Vague disclosure is worse than no disclosure. Instead of "I used AI," say "I used ChatGPT to generate an initial outline and brainstorm ideas, then significantly developed the strategy section myself." Specificity gives people accurate understanding of the work's nature.

Explain the Role of AI

Different people will interpret "AI was involved" differently. Clarify: Did AI generate the entire first draft? Generate ideas that you then developed? Help you refine your own work? Providing specific examples helps:

  • "The code structure was written by me, but I used AI to debug the error in line 47."
  • "The research and analysis are entirely my own. I used AI to help organize the findings into a clear outline."
  • "The initial concept came from AI brainstorming. I then developed it, tested it, and adapted it for this specific context."

Emphasize Your Contribution

If you spent significant time on judgment, refinement, quality assurance, and adaptation, emphasize that. "I used AI to accelerate the initial drafting phase, which saved 6 hours of work, allowing me to spend 8 hours on strategic refinement and client-specific adaptation" tells a different story than "I used AI."

Address Potential Concerns Directly

If disclosing AI use to a client, anticipate concerns and address them proactively. If the client might worry about quality, emphasize your quality assurance process. If they might worry about unique thinking, emphasize the original analysis. If they might worry about confidentiality, explain what safeguards you used (enterprise version, data agreements, anonymization, etc.).

Real-World Disclosure Scenarios

Scenario 1: Client Proposal

Situation: You have prepared a proposal for a client using AI tools for initial brainstorming and structure.

Disclosure approach: "This proposal was developed using a combination of my strategic thinking and AI assistance for initial brainstorming and organization. All recommendations are based on my analysis of your business, and all client-specific content is original. I used AI tools to accelerate the planning phase, allowing me to dedicate more time to strategic refinement."

Why this works: It is specific, explains the AI's role, emphasizes your contribution, and addresses the client's likely concern (quality and originality).

Scenario 2: Published Article

Situation: You have written an article for publication using AI assistance for research synthesis and initial drafting.

Disclosure approach: Add a note: "This article was researched and written by [your name], with AI assistance for research synthesis. All original analysis, argumentation, and conclusions are my own."

Why this works: Readers appreciate transparency. It signals that you are confident in your work while being honest about your process.

Scenario 3: Code Review

Situation: You have written code using AI assistance for parts of the implementation.

Disclosure approach: Comment the code: "This algorithm was developed with AI assistance but has been thoroughly tested and verified by me." Or mention in code review: "I used AI to help with the initial implementation of the sorting function, which I then tested extensively."

Why this works: Technical teams understand that tools are used. What matters is that you tested the code and are confident in it. Transparency shows good practice.
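As a sketch of what this can look like in practice, here is a hypothetical Python file with an AI-assistance disclosure comment at the top. The file name, function, and comment wording are illustrative, not a prescribed format; what matters is stating specifically what AI contributed and what you verified:

```python
# sort_orders.py
#
# Disclosure: the initial implementation of merge_sort below was drafted
# with AI assistance. I reviewed the logic, added the edge-case handling
# for empty input, and verified behavior against the project test suite.

def merge_sort(items):
    """Return a new list with items in ascending order (stable sort)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves, preferring the left element on ties
    # so the sort stays stable.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

A comment like this costs nothing to write, survives alongside the code in version control, and gives reviewers exactly the context the scenario describes: the tool was used, and the author tested the result.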

Scenario 4: Internal Memo

Situation: You have prepared an internal memo using AI for drafting and organization.

Disclosure approach: No disclosure needed, unless the content is sensitive or will be shared externally. If disclosure seems appropriate, a brief note such as "Drafted with AI assistance" in a footer is enough.

Why this works: Internal communication is understood to use productivity tools. Disclosure becomes necessary only if the content will be used externally or if it contains analysis that warrants attribution.

Handling Concerns About Disclosure

"Won't Disclosing AI Use Make Me Look Less Creative?"

This is the most common concern, and it is actually backwards. Disclosing AI use while emphasizing your creative judgment makes you look thoughtful and strategic. Hiding AI use until discovered makes you look deceptive. Professional creativity in the AI age means knowing how to leverage tools intelligently, not pretending you did not use them.

"What If the Client Does Not Want AI?"

If the client has a preference against AI (or a requirement that work be human-created), disclosure ensures you find out early. Better to address it upfront than to have a client reject work because they discover AI was used. If the client does not want AI-assisted work, you can either redo it without AI or explain why AI assistance actually benefits them (efficiency, quality, cost) if true.

"Isn't This Over-Disclosure?"

There is a middle ground. You do not need to disclose purely technical tool use (spell-check, grammar tools, editing software). But when AI meaningfully contributes to content creation, analysis, or creative work, disclosure is appropriate. Use judgment: if it would change the other person's evaluation of the work, disclose it.

Organizational Approaches to Disclosure

Transparency as Standard Practice

Some organizations are adopting transparency as standard practice: whenever AI is meaningfully involved in external-facing work, it is disclosed. This removes ambiguity and builds credibility across the organization.

Client Communication Templates

Organizations can develop templates for disclosing AI use to clients. This ensures consistency and reduces the friction of disclosure. Something like: "How AI was used in this project" becomes a standard part of project documentation.

Internal Guidelines

Many organizations are developing guidelines specifying when disclosure is required and how to do it. If your organization has guidelines, follow them. If not, create them or suggest to management that the organization develop them.

Building a Culture of Transparency

Start with Leadership

If leaders in your organization transparently discuss AI use, it becomes normalized. If leaders hide AI use or discourage talking about it, a culture of secrecy develops.

Share Success Stories

When disclosure goes well—a client appreciates learning about AI use, transparency builds trust—share that example. It shows that transparency can be positive.

Address Fears Directly

If people are afraid that disclosing AI use will hurt their credibility or client relationships, address that fear with evidence. Many clients actually appreciate knowing that AI is being used thoughtfully and strategically.

Connect to Ethics

Frame transparency not just as risk management but as ethical practice. Transparency respects the intelligence and autonomy of colleagues and clients. It says: "Here is how I made this decision. You have the information to evaluate whether you agree."

Key Takeaway

Transparency about AI use builds trust and credibility far more than secrecy does. Disclose AI involvement when the work is external-facing, influences important decisions, or affects how others would evaluate the work. Be specific about what AI did and what you did. Emphasize your creative and critical contribution. Address potential concerns proactively.

Transparency is not about apologizing for using AI or diminishing your contribution. It is about honest communication. It says: "Here is how I created this. Here is what AI did. Here is what I did. This is the quality of work I am confident in." That approach builds professional credibility.

What Comes Next

With principles of privacy, intellectual property, and transparency understood, Chapter 6.4 brings everything together: building your own ethical AI practice. You will develop a personal framework for making ethical decisions, understand how to align with organizational policies, and learn how to raise concerns responsibly when you encounter ethical dilemmas.