As someone who has had the honor of delivering keynotes at both Oracle and SQL Saturday data events, I’ve spent time with professionals on both sides of the database world. I’ve held the titles of database administrator, developer, engineer, and architect across eight different database platforms (not counting cloud platforms), from structured, transactional giants to analytics-heavy ecosystems. My keynotes have often focused on a topic that unites us all today: the urgent need to address the threat of Shadow AI.
Shadow AI refers to the unauthorized or unmonitored use of artificial intelligence tools - particularly public, cloud-based generative AI. Employees and teams typically create this risk unintentionally and, as the name implies, without official IT oversight. It’s the new Shadow IT, but far more dangerous. In my keynotes on data protection and ethics in the AI era, I’ve emphasized that the biggest security gap in AI today isn’t the models - it’s the humans using them.
Recent research by Harmonic AI found that the share of prompts containing sensitive personal information (PII), including names, emails, and even health data, was estimated at roughly 8% in the fourth quarter of 2024 and upwards of 16% by the first quarter of 2025 in the free version of ChatGPT alone. Each of those prompts unintentionally exposes organizations and their customers to potential misuse, or to unintended retention in AI model training data. Add to that the authentication keys, source code, internal documents, and employee records being submitted for "help" or "summarization," and you’ve got a ticking compliance time bomb.
Database professionals know what’s at stake. We've spent decades architecting systems to protect PII and to meet HIPAA, PCI-DSS, and GDPR requirements, and now a single unauthorized API call to a free AI tool can bypass all of that governance built into our relational systems.
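To make that concrete, here is a minimal sketch of the exact pattern we need to prevent. The customers table, the sample row, and the personal API key are all hypothetical, and the public OpenAI chat completions endpoint appears purely for illustration; the point is that none of the database-side controls follow the data out.

```python
import json
import sqlite3           # stand-in for any governed relational system
import urllib.request

# Hypothetical governed data: in real life this would be a production table
# protected by auditing, row-level security, and encryption at rest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, diagnosis TEXT)")
conn.execute("INSERT INTO customers VALUES ('Jane Doe', 'jane@example.com', 'asthma')")
rows = conn.execute("SELECT name, email, diagnosis FROM customers").fetchall()

# One unmonitored HTTP POST and the PII leaves the governed perimeter:
# no audit trail, no SLA, no retention guarantees on the other end.
payload = {
    "model": "gpt-4o",
    "messages": [{
        "role": "user",
        "content": "Summarize these customer records:\n" + json.dumps(rows),
    }],
}
request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-PERSONAL-KEY",  # unsanctioned personal key (placeholder)
        "Content-Type": "application/json",
    },
)
urllib.request.urlopen(request)  # governance bypassed in a single call
```

Nothing in that script touches the controls we spent years building; the data is already gone by the time anyone thinks to look for it.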
Shadow AI poses a unique and unprecedented risk:
- It operates outside corporate audit trails.
- It often runs on systems that have no SLA, no enterprise support, and no logging.
- And it’s being fed the exact kind of sensitive data we’ve spent years safeguarding.
This isn’t a philosophical problem; it’s a very real and growing liability. The answer isn’t to stop AI innovation. It’s to approach it responsibly, with clear policies, well-communicated training programs, and enterprise-grade AI tools that are approved, governed, and monitored.
Organizations must:
- Define what AI tools are authorized.
- Train employees on how and when to use them safely.
- Block or sandbox access to public AI tools whenever data classification policies would be violated (a minimal screening sketch follows this list).
- Ensure enterprise tools are configured to comply with data governance and retention requirements.
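For that third point, here is a minimal sketch of what pre-submission screening can look like. The classification patterns, the approved-host list, and the screen_prompt helper are illustrative assumptions, not a real classifier; a production deployment would lean on an enterprise DLP or CASB product rather than hand-rolled regexes.

```python
import re

# Hypothetical data-classification rules: anything matching these patterns
# is treated as restricted and must never reach a public AI endpoint.
RESTRICTED_PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{20,}\b"),
}

APPROVED_HOSTS = {"ai.internal.example.com"}  # sanctioned, governed endpoints only

def screen_prompt(prompt: str, target_host: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt bound for target_host."""
    violations = [name for name, pattern in RESTRICTED_PATTERNS.items()
                  if pattern.search(prompt)]
    if target_host not in APPROVED_HOSTS:
        violations.append(f"unapproved endpoint: {target_host}")
    return (not violations, violations)

# Usage: block and record the attempt instead of silently forwarding it.
allowed, violations = screen_prompt(
    "Summarize: jane@example.com, SSN 123-45-6789", "api.openai.com"
)
if not allowed:
    print("BLOCKED:", ", ".join(violations))  # feed this into your audit log
```

Even a guard this simple turns an invisible leak into a logged, teachable event, which is exactly what the audit trail bullet above is asking for.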
The future of AI is powerful, but it must also be ethical, secure, and compliant. If you have access to critical data, the threat of Shadow AI is real, and it’s our collective responsibility to ensure innovation doesn’t come at the price of unmanaged risk. Let’s start talking about the risk of Shadow AI and build data-driven organizations with policies and protections in place to ensure it doesn’t bypass everything we’ve all worked so hard to secure at the database level.
DBAKevlar Out