Problems displaying this newsletter? View online.
Database Weekly
The Complete Weekly Roundup of SQL Server News by SQLServerCentral.com
Hand-picked content to sharpen your professional edge
Editorial
 

This Week's AI Trust Problem Became Everyone’s Problem

There’s a saying in security circles: the weakest link isn’t the lock on the front door but the spare key under the mat. This past week gave us two vivid, simultaneous demonstrations of that principle, and if you’re building anything in the AI space right now, both deserve your full attention.

The Mythos Leak and Accidental Transparency

Let’s start with Anthropic. On March 26, two security researchers, Roy Paz of Layer Security and Alexandre Pauwels of the University of Cambridge, discovered that Anthropic’s content management system had been misconfigured to make uploaded assets public by default unless explicitly marked private. Nearly 3,000 unpublished internal documents spilled out, including a draft blog post describing a next-generation model internally called “Claude Mythos,” part of a tier Anthropic calls “Capybara.”

The model is described as a significant step beyond the current Opus flagship, with stronger benchmark results across coding, academic reasoning, and, most notably, cybersecurity tasks. The leaked draft describes it as “currently far ahead of any other AI model in cyber capabilities” and warns it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

That’s Anthropic’s own language about its own model. Let that sink in.

To be clear, this was not a malicious breach. It was a CMS configuration error: a checkbox left unchecked, a default left unreviewed. But the lesson here isn’t about the model itself. It’s that Anthropic had been privately briefing government officials about Mythos’s cybersecurity implications for weeks before the accidental release; they knew there were security issues. The concern was real enough to warrant closed-door conversations at the highest levels, and then a content management oversight made those conversations public for everyone.

The AI industry has a complicated relationship with transparency. We tout openness and responsible disclosure, but we also operate under the implicit assumption that the most capable systems will be handled with proportionate care. A draft blog post about an “unprecedented cybersecurity risk” doesn’t belong in a publicly accessible CMS bucket. This is a process failure, not a technical one, and it’s the kind of failure that scales badly as AI systems become more powerful. My professional fear is that the speed of innovation in AI will always leave us more vulnerable to scenarios like this one.

The LiteLLM Attack and When Security Tools Become the Attack Vector

Now, with that said, let’s talk about what is, in my mind, the more instructive story of the week, and the one with direct consequences for anyone running AI workloads today.

LiteLLM is the connective tissue of the modern AI stack. If you’re building an application that calls OpenAI, Anthropic, Bedrock, Gemini, or virtually any other LLM provider, there’s a reasonable chance LiteLLM is sitting in the middle of it all as a unified proxy layer. It handles routing, fallback, and cost tracking, and, critically, it sits directly between your application and your API credentials. It is downloaded roughly 3.4 million times per day and is present in an estimated 36% of cloud environments.

On March 24, two malicious versions (1.82.7 and 1.82.8) were published to PyPI. They were available for approximately three hours before PyPI quarantined them.

What makes this attack genuinely sophisticated, and worth understanding in detail, is that TeamPCP, the threat group responsible, didn’t attack LiteLLM directly. Supply chain breaches are one of the areas I’m constantly fascinated by, and this one came up during a podcast I recorded with Hamish Watson earlier this week.

The hackers attacked Trivy, a widely used open-source vulnerability scanner, five days earlier by exploiting a misconfigured CI/CD workflow to exfiltrate its PyPI publishing credentials. LiteLLM’s own build pipeline used Trivy without a pinned version, so when that compromised scanner ran inside LiteLLM’s CI/CD process, the attackers inherited LiteLLM’s publishing credentials. One dependency and one unpinned version produced one painful chain reaction.

The payload they deployed was a three-stage attack: a credential harvester sweeping SSH keys, cloud credentials, Kubernetes secrets, .env files, and API tokens; a Kubernetes lateral movement toolkit deploying privileged pods across every node; and a persistent systemd backdoor polling a typosquatted domain for additional instructions. This wasn’t a smash-and-grab; it was designed for long-term persistence and expansion.

The nuance the headlines missed is this: the attackers specifically targeted security tools. Trivy is a vulnerability scanner, and Checkmarx KICS is an infrastructure-as-code security analyzer. These are the tools organizations trust to protect them. By compromising the guardians first, TeamPCP got the keys to the castle without ever having to knock on the front door. This is why I use the analogy of locking all the doors: the garage, the car doors, the trunk. Many breaches happen over time, from many angles and layers. You can dance like no one is watching, but you had better secure like everyone is.

What the Two Stories Have in Common

These events may look different on the surface (one is an accidental internal disclosure, the other a deliberate criminal supply chain campaign), but they share an underlying dynamic that matters enormously to anyone advising organizations on AI adoption: we’ve built the AI infrastructure stack on a foundation of implicit trust, and that trust is being systematically exploited.

LiteLLM is trusted because it’s popular and because it simplifies a real problem. Trivy is trusted because it’s supposed to be a security tool. Anthropic’s CMS is trusted by employees uploading internal materials. In each case, the trust was not unwarranted, but it was also unverified, unversioned, and inconsistently monitored.

In my advisory work, I often talk about AI being “duct-taped” as a layer on top of processes that haven’t been hardened for it. The LiteLLM attack is the technical illustration of that risk. When you route your OpenAI API keys, your Anthropic credentials, your AWS tokens, your Kubernetes secrets, and your database credentials through a single intermediary layer, that layer becomes the highest-value target in your stack. You’ve done the attacker’s aggregation work for them.

The Mythos leak, meanwhile, illustrates something different: the governance gap between what organizations know internally about AI risk and what their operational processes reflect. Credit where credit is due: Anthropic knew enough to brief government officials, but it either didn’t know enough, or hadn’t enforced rigorously enough, to keep unpublished documents out of a publicly accessible CMS bucket. The sophistication of the model and the simplicity of the failure are almost comically misaligned for an organization like Anthropic, one I respect more than any other AI vendor.

The Lessons That Actually Matter

A few things I’d take from this week:

Pin your dependencies. I know it sounds basic, and you’re right: it is basic. The LiteLLM attack was enabled by an unpinned Trivy version in a CI/CD pipeline. Version pinning is not optional in production AI workloads. Full stop.
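As a minimal sketch of what pinning looks like in practice (the package name, version number, and action reference below are illustrative, not recommendations):

```shell
# Pin Python packages to exact versions rather than floating ranges.
# (The version number here is illustrative.)
cat > requirements.txt <<'EOF'
litellm==1.81.0
EOF

# With hash checking, pip also verifies the downloaded artifact itself,
# so a tampered re-publish under the same version number is rejected:
#   pip install --require-hashes -r requirements.txt

# The same principle applies to CI tooling: reference actions and
# scanners by immutable commit SHA, never by a floating tag:
#   uses: aquasecurity/trivy-action@<full-commit-sha>   # not @master

# Count how many entries are exactly pinned:
grep -c '==' requirements.txt
```

The point of the exact-match `==` (and, better, `--hash` entries) is that your build consumes only artifacts you have already reviewed, so a malicious version pushed upstream cannot silently flow into your pipeline on the next run.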

Treat your AI gateway like a secrets vault. If LiteLLM or any similar proxy layer is sitting in your stack with access to multiple API providers, it needs to be treated with the same rigor you’d apply to your secrets manager. Audit it, monitor it, isolate it, and watch for unexpected outbound connections.
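One way to operationalize the "watch for unexpected outbound connections" advice, sketched here under the assumption that you maintain an explicit allowlist of provider endpoints for the gateway host (all hostnames below are examples):

```shell
# Egress allowlist check for the host running an LLM proxy layer.
# Hostnames are illustrative; in practice you would feed peers from
# `ss -tn` output, VPC flow logs, or your egress proxy's access log.
ALLOWED="api.openai.com api.anthropic.com bedrock-runtime.us-east-1.amazonaws.com"

is_expected() {
  # Return success only if the peer appears in the allowlist.
  case " $ALLOWED " in
    *" $1 "*) return 0 ;;
    *) return 1 ;;
  esac
}

# Two sample peers: one legitimate provider, and one suspicious domain
# of the kind the LiteLLM backdoor polled for instructions.
for peer in api.openai.com litellm-updates.example; do
  if is_expected "$peer"; then
    echo "ok: $peer"
  else
    echo "ALERT: unexpected egress to $peer"
  fi
done
```

A gateway like this should have a small, stable set of destinations; anything outside that set, especially a freshly registered lookalike domain, is exactly the signal the LiteLLM payload's command-and-control polling would have produced.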

Your security tools are part of your attack surface. This is the lesson that should keep security teams up at night. The tools you use to scan for vulnerabilities have privileged access to your build pipeline, your credentials, and your infrastructure. If they’re compromised, the attacker has everything they need without touching your application code, databases or other assets you’re commonly concerned with.

Governance can’t lag capability. Anthropic building a model it describes as posing “unprecedented cybersecurity risks” while simultaneously leaving a CMS misconfiguration that exposes internal documents is not a company-specific failure; it’s a symptom of an industry innovating faster than its own processes. This applies to every organization adopting AI right now. Your AI capabilities and your AI governance need to be on the same roadmap and in step with each other.

The week of RSA 2026 gave us a lot to think about. The models are getting more powerful, and the attackers are getting more sophisticated. Unfortunately, the infrastructure connecting them is as fragile as any software ecosystem we’ve ever built, maybe more so, given how much the hyperscalers have concentrated into it.

The question isn’t whether to adopt AI, but whether you’re being honest with yourself about what that adoption actually requires.

Peace out,

~DBAKevlar

Join the debate, and respond to the editorial on the forums

 
The Weekly News
All the headlines and interesting SQL Server information that we've collected over the past week, and sometimes even a few repeats if we think they fit.
Vendors/3rd Party Products

Redgate Flyway’s Product Updates – March 2026

This month we’re bringing you official GitHub Actions for Redgate Flyway, usability improvements in Flyway Desktop, and a look at what’s new and what’s in preview. Plus: earlier visibility of code-review results, helping teams keep quality high and reviews flowing smoothly as AI increases the volume of changes.

AI/Machine Learning/Cognitive Services

Why most enterprise AI projects fail – and how to fix them

The problem isn’t a data quality or infrastructure issue, but rather an architectural positioning one. If teams over-engineer complex models before knowing what can go wrong in production, they create more problems to solve while increasing the total cost of ownership (TCO) of the production environment in the process.

Brent Added AI to sp_BlitzIndex. I Had to Try It.

From SQLFingers

Six posts. That's how long it took me to go from s...

When AI Breaks the Systems Meant to Hear Us

From O'Reilly Radar - Insight

On February 10, 2026, Scott Shambaugh—a voluntee...

BMC’s Jennifer Margules on Intelligent Enterprise Orchestration

From Past News - RSS Feeds

In this episode of eSpeaks, Jennifer Margules, Dire...

Qwen3.5-Omni Debuts as Alibaba’s Most Advanced Multimodal AI Model Yet

From Past News - RSS Feeds

Alibaba unveils Qwen3.5-Omni, a fully omnimodal AI...

Keeping AI Queries Under Control

From Callihan Data

AI usage isn’t slowing down, and it continues to consume more and more electricity. Microsoft has recognized this and is acknowledging its own contributions thanks to the increased usage...

Administration of SQL Server

Everything you should know about the SQL Server Resource database

This article explains what the SQL Server Resource database is, why it exists, and how it affects patching, upgrades, and troubleshooting – without the mythology that often surrounds it.

Tips for tempdb Resource Governance

From Curated SQL

Rebecca Lewis shares a few tips: Someone runs a ma...

Hyperthreading and SQL Server Licensing

From Curated SQL

Joe Obbish provides a warning: Azure VMs with hype...

PSBlitz Updates

From Curated SQL

Vlad Drumea has a changelog: For anyone not famili...

Where SQL Server Meets AI: The Case for Hybrid Architecture

From Sherpa of Data

Part 4 in a series on evolving SQL Server environm...

Because 'It Seems Fine' Is Not a Strategy.

From SQLFingers

Most SQL Server problems do not arrive all at once...

Querying msdb: A Pre-Migration Audit for SQL Agent Jobs

From Andy Broadsword

Most SQL Server environments have more jobs, sched...

SQL Server 2025 CU4 Adds Automatic Updates

From Brent Ozar Unlimited

April 1, 2026: Big news for everyone who has to ma...

Read/Write Ratio in a SQL Server database

From Dr SQL

Repost (with some cleaning up, since the book referenced was posted 18 years ago) of Read/Write Ratio versus Read/Write Ratio?. And I kind of hate that title now that I read...

Updates to Straight Path Solutions sp_Check Procedures

From Curated SQL

Jeff Iannucci has some updates: This month though there are mostly a few small updates for the tools, as next month’s updates should also include…

Career, Employment, and Certifications

How to pass Microsoft certification exams: tips and guidance

From Simple Talk

Learn how to pass Microsoft certification exams with practical exam-day tips, question strategies, time management advice, and common pitfalls to avoid.… The post How to pass Microsoft certification exams: tips...

Cloud - AWS

Announcing managed daemon support for Amazon ECS Managed Instances

From AWS News Blog

Amazon ECS Managed Daemons gives platform engineers independent control over monitoring, logging, and tracing agents without application team coordination, ensuring consistent daemon deployment and comprehensive host-level observability at scale.

Conferences, Classes, Events, and Webinars

Tickets Now On Sale For All Three PASS Summit Events

That's right, the most community-driven data event is coming to Chicago, Frankfurt and Seattle this year. Futureproof your career with trusted high-quality training, and make genuine connections with the most welcoming community.

Are Your Monitoring Tools Creating More Risk Than They Solve?

Redgate’s upcoming Coffee & Clarity session dives into why legacy monitoring is creating hidden risk for DBA teams: blind spots, noise, and slow detection. On May 6, hear why leaders are re-evaluating their approach and gain practical, actionable insights to strengthen your monitoring strategy moving forward.

Data Mining / Data Analysis

How to set up a data analysis environment in esProc SPL (compared to Python)

This article is the first in this six-part “Moving from Python to esProc SPL” series. You’ll learn how to set up esProc SPL, install it on different operating systems, configure your development environment, and load your first dataset. You’ll also write your first SPL script and compare the setup process with Python. By the end of this first article, you’ll have a fully-functional esProc SPL environment and be ready to look into its capabilities in-depth.

Data Storytelling and Visualisation

Smoothed Lines and Data Visualization

From Curated SQL

Kerry Kolosko digs into data visualization theory: Power BI development is a relatively straightforward process when managed by one individual start to finish. But…

Database Design, Theory and Development

First Normal Form (1NF): Breaking the ‘unbreakable rule’ in database design

From Simple Talk

Learn what First Normal Form (1NF) means in databa...

MDX/DAX

The Third Edition of the Mastering DAX Video Course – unplugged

From Sqlbi

Alberto and I recorded an unplugged session to tal...

User-Defined Functions vs Calculation Groups in DAX

From Curated SQL

Marco Russo and Alberto Ferrari take a look back at calculation groups: The introduction of user-defined functions (UDFs) in DAX changes the way we think…

Microsoft Fabric ( Azure Synapse Analytics, OneLake, ADLS, Data Science)

Capacity Overage in Microsoft Fabric

From Curated SQL

Pankaj Arora has a new ‘give us money’ lever: ...

Generating Excel Reports via Fabric Dataflows Gen2

From Curated SQL

Chris Webb builds a report: So many cool Fabric fe...

Microsoft Fabric ETL and the Air Traffic Controller

From Curated SQL

Jens Vestergaard rethinks a metaphor: In February ...

Performance Tuning SQL Server

Diagnosing a textConnection() Slowdown in R

From Curated SQL

Yihui Xie looks into an issue: Running quarto render on a document with that single chunk took 35 seconds. The equivalent rmarkdown::render() finished in under half a second. As…

PostgreSQL

How User-Defined Types work in PostgreSQL: a complete guide

I’m sure I’m not alone when I say, sometimes I get sidetracked. In this particular instance, I hadn’t intended to start learning about User-Defined Types (UDT) in PostgreSQL – I just wanted to test a behavior that involved creating a UDT. But, once I started reading, I was hooked. I mean, four distinct UDTs with different behaviors? That’s pretty cool. Let’s get into it.

Cornelia Biacsics: Contributions for week 12, 2026

From Planet Postgres

From March 23 to March 26, the following contribut...

User-Defined Types in PostgreSQL

From Curated SQL

Grant Fritchey dives into functionality: I’m sur...

Richard Yen: The Hidden Behavior of plan_cache_mode

From Planet Postgres

Introduction Most PostgreSQL users use prepare...

Deepak Mahto: Why Ora2Pg Should Be Your First Stop for PostgreSQL Conversion

From Planet Postgres

I have been doing Oracle-to-PostgreSQL migrations ...

Lætitia AVROT: pg_service.conf: the spell your team forgot to learn

From Planet Postgres

I’ll be honest with you. I’m old school. My ID...

Umut TEKIN: Patroni: Cascading Replication with Standby Cluster

From Planet Postgres

Patroni is a widely used solution for managing Pos...

Antony Pegg: Replicating CrystalDBA With pgEdge MCP Server Custom Tools

From Planet Postgres

A disclaimer before we start: I'm product manageme...

Vibhor Kumar: pg_background v1.9: a calmer, more practical way to run SQL in the background

From Planet Postgres

There is a kind of database pain that does not arrive dramatically. It arrives quietly. A query runs longer than expected. A session stays occupied. Someone opens another...

Elizabeth Garrett Christensen: Postgres Vacuum Explained: Autovacuum, Bloat and Tuning

From Planet Postgres

If you’ve been using Postgres for a while, you?...

PowerPivot/PowerQuery/PowerBI

Setting up Power BI Version Control with Azure Dev Ops

From FourMoo

In this blog post is a way to set up version control ...

Professional Development

Finding a Burrito in Ireland

From Curated SQL

Andrew Pruski has my attention and my interest: A ...

Python

How to set up a data analysis environment in esProc SPL (compared to Python)

From Simple Talk

Learn how to install and set up esProc SPL for dat...

T-SQL and Query Languages

Define the question before writing the query

From Dr SQL

I am writing a presentation to do a couple of time...

Calculating Net Present Value and Internal Rate of Return in T-SQL

From Curated SQL

Sebastiao Pereira is back with more calculations: ...

Tools for Dev (SSMS, ADS, VS, etc.)

Protecting the SSMS Server List

From Curated SQL

Jens Vestergaard provides an update: Twelve years ...

How to Add Clippy Support to SSMS

From Erik Darling Data

Step 1: Install the Clippy digital assistant SSMS ...

 
RSS Feed | Twitter
This email has been sent to {email}. To be removed from this list, please click here. If you have any problems leaving the list, please contact the webmaster@sqlservercentral.com. This newsletter was sent to you because you signed up at SQLServerCentral.com. Note: This is not the SQLServerCentral.com daily newsletter list, and unsubscribing to this newsletter will not stop you receiving the SQL Server Central daily newsletters. If you want to be removed from that list, you can follow the instructions on the daily newsletter.
©2019 Redgate Software Ltd, Newnham House, Cambridge Business Park, Cambridge, CB4 0WZ, United Kingdom. All rights reserved.
webmaster@sqlservercentral.com

 

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -