What We Mean by “Shadow AI”
Shadow AI refers to the use of AI tools and services without IT or security oversight, often through personal accounts, browser extensions or mobile apps.
For employees, these tools are fast and helpful – which is exactly why they get used outside governance, DLP, logging and legal controls. For credit unions, that creates a direct path for member personally identifiable information (PII) – SSNs, account numbers, loan details – to leave the environment with no audit trail and unclear retention or training practices.
So while Shadow AI may feel productive, it puts your credit union and its members at risk: a single account number pasted into a personal ChatGPT session can persist outside your control indefinitely.
The Scale (and Why Bans Don’t Work)
Multiple studies point to the same conclusion: employees will use AI – approved or not.
- Microsoft research shows unsanctioned AI usage is already widespread: 71% of workers report using unapproved AI tools at work.
- Software AG found that half of employees are Shadow AI users, and nearly half would continue even if explicitly banned.
- CybSafe and the National Cybersecurity Alliance report that 38% of employees share sensitive work information with AI tools without employer knowledge.
- Meanwhile, KnowBe4 found that only 18.5% of employees even know their company has an AI policy.
Practical takeaway: assume AI usage is happening and govern accordingly. Bans drive activity underground; sanctioned tools and visible rules bring it into the light.
Consumer vs. Enterprise AI: Data-Handling Realities
Not all AI tools treat data the same. Consumer chatbots – personal or free-tier accounts outside your credit union's control – can retain prompts and use them for model improvement, depending on settings; enterprise offerings – tools your credit union licenses and administers – typically exclude your prompts and outputs from training and provide administrative controls over retention and logging. If staff paste member data into consumer tools, you've lost control of where that data lives and how long it persists.
For example, if a loan officer pastes a member's application into ChatGPT to draft a denial letter, you have no way to recall that data, audit who saw it or verify it won't be used in future model training.
Red Flags That Shadow AI Is Already in Play
These indicators don’t prove misuse on their own, but they often surface early when unsanctioned AI use has taken hold.
- Unusual clipboard activity or large copy/paste patterns (extensions can capture clipboard contents).
- Traffic to AI domains from unmanaged devices or personal accounts.
- Suspiciously polished or formulaic language in emails (overly formal, generic tone or phrases like “I hope this message finds you well” from staff who don't normally write that way).
- Unapproved browser extensions requesting broad permissions (e.g., “read and change data on all sites,” scripting).
A Practical, Credit Union-Ready Response
Before jumping into a 30-day execution plan, it helps to align on what a realistic response looks like in a credit union environment – one that enables safe AI use rather than pushing it underground.
1) Provide a safe, sanctioned alternative
Give employees an approved path so they don't reach for personal tools: e.g., enterprise AI (Microsoft 365 Copilot, Google Workspace Gemini Enterprise or Claude for Work) where you control who can use it and what data they can input, and every prompt is logged. Tie it to your identity stack (SSO/MFA), apply role-based access, and publish simple "green/yellow/red" use cases: what's allowed, what requires review and what is out of bounds. A minimal policy-as-data sketch follows the examples below.
Examples:
- Green: "Draft a member newsletter about holiday hours"
- Yellow: "Summarize a regulatory update" (requires review for accuracy)
- Red: "Generate a loan denial letter using member data"
Align controls with the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) functions: Govern, Map, Measure and Manage.
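One way to keep the green/yellow/red model actionable is to publish it as data that both staff-facing pages and internal tooling can consume. A minimal sketch, assuming illustrative tiers and examples (your compliance team defines the real ones):

```python
# Illustrative "traffic light" AI-use policy expressed as data, so the same
# source can feed employee-facing docs and enforcement tooling.
# Tiers and examples are assumptions, not a vetted policy.
AI_USE_POLICY = {
    "green":  {"action": "allowed",         "examples": ["member newsletter drafts", "meeting summaries"]},
    "yellow": {"action": "review required", "examples": ["regulatory update summaries", "policy drafts"]},
    "red":    {"action": "prohibited",      "examples": ["member PII", "loan decisions", "denial letters"]},
}

def describe(tier: str) -> str:
    """Render one policy tier as a human-readable line."""
    rule = AI_USE_POLICY[tier]
    return f"{tier.upper()}: {rule['action']} (e.g., {', '.join(rule['examples'])})"

for tier in AI_USE_POLICY:
    print(describe(tier))
```

Keeping the policy in one machine-readable place means the intranet page, the LMS module and any future DLP rules all quote the same source of truth.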
2) Instrument for visibility
You can’t govern what you can’t see. Focus on:
- Network and DNS monitoring for AI domains and usage patterns
- Cloud access security broker (CASB) or inline data loss prevention (DLP) controls for uploads to unsanctioned AI services
- Browser extension governance: maintain an allow list of essential extensions, limit scripting permissions where feasible and review extension inventories regularly
These controls uncover risk early, before it becomes an incident.
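For the DNS piece, even simple log triage catches a lot. A minimal sketch, assuming a CSV resolver export with timestamp, client IP and query columns, and a hand-maintained watchlist (in practice, use your web filter's generative-AI category feed):

```python
import csv

# Illustrative watchlist of well-known AI endpoints; a category feed from
# your web filter or secure DNS provider will be far more complete.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def flag_ai_lookups(log_path: str) -> None:
    """Scan a CSV resolver log (timestamp,client_ip,query) for AI domain lookups."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            query = row["query"].rstrip(".").lower()
            if any(query == d or query.endswith("." + d) for d in AI_DOMAINS):
                print(f"{row['timestamp']}  {row['client_ip']} -> {query}")

flag_ai_lookups("dns_log.csv")  # hypothetical export path
```

Start by measuring volume and sources, not naming individuals – at this stage the goal is visibility, not enforcement.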
3) Update your policy – and make it impossible to miss
Keep it short, action-oriented and everywhere: onboarding, login banners, email signatures during AI awareness week, your learning management system (LMS), SharePoint and manager talking points for team meetings. Given that only about 18.5% of employees in one survey knew their company's AI policy, you need repetition and reinforcement across channels.
4) Coach for data hygiene, not just compliance
Train employees to:
- Never paste member identifiers (SSNs, account numbers, loan details) into consumer AI tools
- Verify outputs before use
- Remove metadata from documents
- Treat AI like any third-party data processor
Reinforce this with quick decision trees – “Is this okay to paste?” – and phishing-resistant workflows.
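To make the “Is this okay to paste?” question concrete, here is a minimal pre-paste check, assuming simple illustrative regex patterns (production DLP uses validated, tuned detectors):

```python
import re

# Illustrative patterns only; real DLP engines use validated, tuned detectors.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account number": re.compile(r"\b\d{8,17}\b"),  # broad on purpose; tune to your core system's format
}

def okay_to_paste(text: str) -> bool:
    """Return False if the text appears to contain member identifiers."""
    findings = [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]
    if findings:
        print(f"Stop: possible {', '.join(findings)} detected - use the approved enterprise AI tool instead.")
        return False
    return True

# A loan officer about to paste case notes into a consumer chatbot:
okay_to_paste("Member 123-45-6789 asked for a rate review on account 4401223344.")
```

The point isn't the regex; it's giving employees a reflexive checkpoint before member data leaves a managed system.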
5) Close vendor and model-risk gaps
If you're evaluating an AI vendor, ask these questions before signing:
- Will our data be used to train your models?
- Where is our data stored (geo-fencing requirements)?
- Can we export audit logs?
- What's your breach notification timeline?
- Who are your subprocessors, and can we approve them?
- Do you provide model-update transparency when algorithms change?
A 30-Day Fast Start for Credit Unions
Ownership typically sits with Information Security in partnership with IT and Compliance, with executive sponsorship to ensure adoption.
- Week 1 – Visibility: quick employee survey; review DNS/HTTP logs for AI domains; brief execs/board on risk and plan.
- Week 2 – Guardrails: publish a one-page AI acceptable use policy (approved tools, prohibited data types and consequences); stand up an approved AI option for low-risk tasks; enable DLP rules that block SSNs/account numbers headed to AI domains.
- Week 3 – Hardening: block or require approval for browser extensions; audit existing ones for permissions like “read all data” (a sketch of such an audit follows this plan); tighten CASB/MDM controls for unsanctioned uploads.
- Week 4 – Vendors & training: add AI clauses to vendor due-diligence questionnaires (DDQs) and data processing agreements (DPAs); launch five-minute scenario-based training (e.g., “A member asks you to look up their loan – can you use ChatGPT to draft the email?”); report early metrics and refine.
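For the Week 3 extension audit, a minimal local sketch, assuming a default Chrome profile path on Windows (enterprise fleets would pull the same inventory from Chrome Browser Cloud Management or your MDM instead):

```python
import json
from pathlib import Path

# Hypothetical profile location; adjust for your OS and managed-browser setup.
EXTENSIONS_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

# Permissions that warrant review: broad host access, scripting, clipboard.
RISKY = {"<all_urls>", "scripting", "clipboardRead", "tabs", "webRequest"}

def audit_extensions(root: Path) -> None:
    """Flag installed extensions whose manifests request risky permissions."""
    for manifest in root.glob("*/*/manifest.json"):  # <extension-id>/<version>/manifest.json
        data = json.loads(manifest.read_text(encoding="utf-8-sig", errors="ignore"))
        # Keep only string entries; some packaged-app manifests use object permissions.
        perms = {p for p in data.get("permissions", []) if isinstance(p, str)}
        perms |= {p for p in data.get("host_permissions", []) if isinstance(p, str)}
        flagged = perms & RISKY
        if flagged:
            # Note: some names are locale placeholders like "__MSG_appName__".
            print(f"{data.get('name', manifest.parent.parent.name)}: review {sorted(flagged)}")

audit_extensions(EXTENSIONS_DIR)
```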
Frameworks and Expectations
NIST's AI Risk Management Framework and NCUA AI guidance provide the structure; your job is to translate them into everyday decisions. Start with NIST's Generative AI Profile to identify which risks apply to your use cases, then map controls to your 30-day plan.
The regulators are clear: AI risk should be governed using the same risk-based, documented and auditable controls expected for any material third-party or data-processing technology.
Bottom Line
Shadow AI isn’t driven by malice; it’s driven by momentum. Employees want to serve members faster and better. The goal isn’t to stop AI use, but to channel it into safe, governed and auditable workflows that protect member trust and stand up to regulatory scrutiny. Optiri helps credit unions assess exposure, deploy approved AI safely and establish right-sized governance mapped to NIST and credit-union realities. Reach out to Optiri today to learn how we can help you replace Shadow AI with safe, sanctioned AI at your credit union.
Sources & Further Reading
- IBM: What is Shadow AI? (https://www.ibm.com/think/topics/shadow-ai)
- Securiti: Shadow AI Explained (https://securiti.ai/what-is-shadow-ai/)
- Microsoft & LinkedIn Work Trend Index 2024 (https://blogs.microsoft.com/blog/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/)
- ITPro: 71% using unapproved AI (https://www.itpro.com/technology/artificial-intelligence/microsoft-says-71-percent-of-workers-have-used-unapproved-ai-tools-at-work-and-its-a-trend-that-enterprises-need-to-crack-down-on)
- Software AG study coverage (https://www.eweek.com/news/unauthorized-ai-use-surges-at-work/)
- CybSafe/National Cybersecurity Alliance (https://www.cybsafe.com/press-releases/study-almost-40-of-workers-share-sensitive-information-with-ai-tools-without-employers-knowledge/)
- KnowBe4 2025 survey (https://www.knowbe4.com/press/knowbe4-research-uncovers-disconnect-between-ai-adoption-and-policy-awareness-in-the-workplace)
- Security.com: Malicious clipboard extensions (https://www.security.com/threat-intelligence/chrome-extensions-are-you-getting-more-you-bargained)
- Dark Reading: AI extensions risk (https://www.darkreading.com/cyber-risk/ai-browser-extensions-security-battleground)
- NIST AI RMF and GenAI Profile (https://www.nist.gov/itl/ai-risk-management-framework)
- NCUA AI Resources (https://ncua.gov/regulation-supervision/regulatory-compliance-resources/artificial-intelligence-ai)
- GAO 2025 AI in Financial Services (https://www.gao.gov/products/gao-25-107197)
- OpenAI Data Controls (https://platform.openai.com/docs/guides/your-data)
- Google Gemini for Cloud data use (https://docs.cloud.google.com/gemini/docs/discover/data-governance)
- Anthropic Enterprise Retention Controls (https://privacy.claude.com/en/articles/10440198-custom-data-retention-controls-for-enterprise-plans)
- Anthropic consumer policy update (https://www.anthropic.com/news/updates-to-our-consumer-terms)