Personal Security Practices
1. Verify Unusual Requests
If someone asks for something unusual (especially if it involves sensitive information, payments, or actions with consequences), verify the request through an independent channel. Call the person directly on a phone number you already know. Email them at an address you have on file. Do not reply to the email or call the number the requester provided—use contact details you already know are correct.
This is the single most effective defense against phishing, impersonation, and social engineering.
2. Be Skeptical of Video and Audio
Video and audio are no longer reliable evidence. If a video shows something important (especially something that would be surprising or consequential), assume it might be a deepfake until proven otherwise. If a voice call asks for something sensitive, verify through another channel.
3. Protect Your Identifying Information
Your photos, voice recordings, and video are valuable to deepfake creators. Limit what you post publicly. Be careful about video calls with strangers. Reduce the amount of easily accessible training material attackers could use to create deepfakes of you.
4. Recognize Social Engineering
Social engineers manipulate people by appealing to emotions: urgency, authority, fear, curiosity. If a message creates strong emotion or urgency, pause. Take time to think. Do not let urgency override judgment.
5. Use Strong Authentication
Use strong, unique passwords and multi-factor authentication (MFA) on important accounts. MFA makes it much harder for attackers to get in even if they have your password. This is one of the highest-impact security measures you can implement.
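One common second factor is the time-based one-time password (TOTP)—the six-digit code from an authenticator app. As an illustration of why it resists replay, here is a minimal sketch of the standard RFC 6238 algorithm in Python (the function name and defaults are ours; real systems should use a vetted library):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps
    since the Unix epoch, dynamically truncated to a short code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 s
# totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8) == "94287082"
```

Because the code depends on the current time step, a stolen code is useless within a minute—which is exactly what makes MFA so much stronger than a password alone.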
Detecting AI-Generated Content
Deepfake Detection
Visual signs: Deepfakes often contain subtle artifacts such as unnatural eye movements, inconsistent lighting, off lip-sync timing, and skin textures that do not look quite right. But modern deepfakes are becoming harder to detect visually.
Contextual checking: Ask: is the content consistent with what we know about this person? Would they say this? Is it consistent with their recent statements? Deepfakes can be technically flawless yet contextually wrong.
Source verification: Where did this come from? Is it from the official source, or did it come from social media? Official channels (official websites, verified social media accounts) are more trustworthy.
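As a small illustration of source checking, a link's hostname can be tested against an allow-list of official domains. The domains below are placeholder assumptions, and real verification should also consider redirects and look-alike characters; the suffix check matters because attackers register hosts like example.com.evil.net:

```python
from urllib.parse import urlparse

# Assumed allow-list; in practice this comes from your organization's policy.
OFFICIAL_DOMAINS = {"example.com", "example.org"}

def is_official_source(url):
    """True if the link's host is an official domain or a subdomain of one.
    Matching only exact domains or dot-prefixed subdomains rejects
    look-alike hosts such as example.com.evil.net."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

So `https://news.example.com/story` passes, while `https://example.com.evil.net/story` does not.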
Technical analysis: For high-stakes claims, use forensic tools to analyze video. These tools look for deepfake artifacts at a technical level. However, they require expertise and time.
AI-Generated Text Detection
AI-generated text is harder to detect than AI-generated media. Detection tools exist, but they are unreliable and prone to false positives. Instead of trying to detect AI text mechanically, evaluate the content itself: Does it make sense? Does it come from a trusted source? Does the writing style match the purported author?
Checking Claims and Sources
For important claims (especially if you are going to act on them or share them): verify independently. Check official sources. Use fact-checking websites. Look for multiple sources confirming the claim. If multiple sources disagree, investigate before acting.
Organizational Security
Technical Controls
Email filtering: Organizational email systems can block known phishing attempts and suspicious emails. These are not perfect, but they catch most automated attacks.
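As a toy sketch of how such filters score messages—the phrase list, weights, and sender check below are illustrative assumptions; production filters combine far more signals (sender reputation, SPF/DKIM authentication results, machine-learned models):

```python
import re

# Assumed examples of phrases that raise suspicion in phishing campaigns.
SUSPICIOUS_PHRASES = ("verify your account", "urgent", "wire transfer",
                      "password expires")

def score_email(sender, subject, body, trusted_domains):
    """Toy heuristic score: higher means more suspicious."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in trusted_domains:
        score += 1  # unfamiliar sender domain
    text = (subject + " " + body).lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links pointing at raw IP addresses are a classic phishing sign.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score
```

A message from an unknown domain, with urgent wire-transfer language and a raw-IP link, scores far above a routine internal email—which is the kind of gap a filter thresholds on.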
Endpoint protection: Software on computers and phones that detects and prevents malware and suspicious activity. Important for both AI-powered and traditional attacks.
Access controls: Limiting who has access to sensitive systems, data, and funds. If attackers cannot access important systems because most people do not have access, the damage is limited.
Policies and Procedures
Verification procedures: Formal processes for verifying unusual requests. For example: before approving large wire transfers, verify the request through a second channel.
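That two-channel rule can be sketched as a simple policy check. The threshold, field names, and channel labels here are illustrative assumptions, not a real treasury workflow:

```python
from dataclasses import dataclass, field

LARGE_TRANSFER_THRESHOLD = 10_000  # assumed policy threshold

@dataclass
class TransferRequest:
    amount: float
    requested_via: str  # channel the request arrived on, e.g. "email"
    confirmed_via: set = field(default_factory=set)  # channels used to confirm

def may_approve(req):
    """Large transfers require at least one confirmation on a channel
    different from the one the request arrived on (out-of-band)."""
    if req.amount < LARGE_TRANSFER_THRESHOLD:
        return True
    return any(channel != req.requested_via for channel in req.confirmed_via)
```

The point of the policy: a confirmation on the same channel the request arrived on proves nothing, because the attacker controls that channel.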
Incident response plans: What to do if a security incident occurs. Having a plan means you can respond quickly and limit damage.
Security training: Regular training on recognizing phishing, social engineering, and AI-powered attacks. Trained employees are your best defense against social engineering.
Communication and Culture
Reporting: Make it easy for employees to report suspected phishing, social engineering, or security concerns. Encourage reporting rather than punishing mistakes. The employee who reports a suspicious email is protecting the organization.
Leadership modeling: If leadership models good security practices, employees will too. If leaders ignore security recommendations, employees will as well.
If You Suspect an Attack
Immediate Actions
Do not comply with requests. If you suspect phishing or social engineering, do not follow the attacker's instructions. Do not click links, do not open attachments, do not provide information.
Report it. Report suspicious emails, messages, or calls to your organization's IT or security team. Include details: when you received it, what it asked for, who it appeared to come from. This helps the organization identify and block the attack.
Do not spread it. Do not forward phishing emails or suspicious messages to others (even as a warning) without removing the malicious links. Forwarding can help the attack spread.
Escalation
If a sensitive account is compromised: change your password immediately and notify your security team. If financial information is at risk: contact your bank. If you believe you have been scammed: file a report with your local police and the relevant consumer protection agency.
Defense in Depth
No single defense is perfect. The best approach uses multiple layers:
- Technical layer: Email filtering, antivirus, intrusion detection
- Procedural layer: Verification procedures, incident response plans
- Human layer: Training, awareness, judgment
- Organizational layer: A culture that values security and rewards reporting concerns
When one layer fails, others catch the attack. This is why organizations use multiple security measures rather than relying on any single one.
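A minimal sketch of the layered idea: each check runs independently, and a message is flagged if any layer catches it. The three lambdas below are stand-ins for real controls (URL reputation, payment policy, trained users), not actual implementations:

```python
def run_layers(message, layers):
    """Apply each independent check; return the names of layers that
    flagged the message. A single hit is enough to block or escalate."""
    return [name for name, check in layers.items() if check(message)]

# Illustrative stand-ins for real defensive layers.
layers = {
    "technical":  lambda m: "192.0.2.1" in m,              # known-bad address
    "procedural": lambda m: "wire transfer" in m.lower(),  # needs out-of-band approval
    "human":      lambda m: "urgent" in m.lower(),         # urgency is a red flag
}
```

An attack that slips past the technical check can still trip the procedural or human layer, which is the whole value of depth.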
Key Takeaway
Protecting Against AI-Powered Attacks
Protection combines personal practices (verify requests, be skeptical of media, strong authentication), technical measures (email filtering, endpoint protection), and organizational measures (policies, training, incident response).
Defense in depth means no single failure compromises security. The best defense remains human judgment: verifying requests, thinking skeptically, and reporting concerns. Combined with technical and organizational measures, this creates resilience against AI-powered attacks.