Do you really know what’s happening with your server estate?

Managing an IT team is a tough job. Demands on services are increasing, both internally and externally, and the need to be aware of, and in control of, application and server performance is a constant pressure. Right now, there are four key areas of particular concern for those responsible for IT departments and teams.

1. Server estates are growing

The growth in data continues, as does the infrastructure required to manage and store it. IDC’s Global StorageSphere forecast, 2021-25, predicts that worldwide data storage capacity will increase from 6.7 zettabytes in 2020 to 16.1 zettabytes in 2025, a compound annual growth rate of 19.2%. That’s a lot of extra infrastructure and servers to accommodate and, whether it’s on-premises or in the cloud, all of that data will need to be managed and monitored.

As well as knowing how much space is required, IT leaders need to understand what is being stored and where, and they have important decisions to make over what needs to be stored, for how long, and who has access to it.

To meet the demands of development teams, and follow good DevOps practices, you also need to be able to provide development and test environments with data that closely matches production, without compromising security or confidentiality.

2. Server estates are becoming more diverse

Alongside the growth in the size of estates is the increase in their complexity. A good indicator is the Stack Overflow Developer Survey 2022, which reveals that developers are now working with an average of 2.7 database types, with open source databases like PostgreSQL and MySQL being used alongside the usual suspects like SQL Server and Oracle. The same survey also shows that developers are expected to take on multiple roles, so it’s not unusual for them to switch between coding a front-end application in JavaScript and then a back-end database in SQL.

This move from monolithic SQL Server or Oracle database estates to mixed estates with a variety of database types has been driven by two factors.

Firstly, in the quest to be more competitive, optimize performance, and release features to users faster, using the right database for the right job has become key. SQL Server, for example, is great for transaction processing and business intelligence purposes. For mobile and geospatial applications, PostgreSQL may well be more suitable.

Secondly, the rise in cloud usage has complicated the mix further. Flexera’s 2023 State of the Cloud Report reveals that 87% of businesses now use multiple cloud providers, so alongside SQL Server on-premises, organizations now use AWS, Azure and Google Cloud, with the choice once again made on particular use cases.

With data spread across different database types and server platforms, both on-premises and in the cloud, maintaining an overview of the health and performance of estates becomes a lot more difficult.

3. Speed of delivery has become the cornerstone of operations

While server estates are growing in size and complexity, this hasn’t slowed the speed at which new features and improvements are released. If anything, it has accelerated, with the adoption of the cloud and multiple database types offering new opportunities for organizations to take advantage of.

This presents a problem for IT teams, and particularly for DBAs, who are responsible for monitoring the ongoing performance of servers and identifying any problems that arise. With more servers and instances to monitor, across more platforms, it can be hard to spot a long-running query on an Azure Database for PostgreSQL instance over there while also identifying why a deployment failed on an on-premises SQL Server database over here.
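To make that concrete, here’s a rough sketch of the kind of ad-hoc checks this involves when done by hand, run separately on each platform. The five-minute threshold is just an illustrative assumption, and a monitoring tool would surface this for you automatically.

```sql
-- Rough sketch only: ad-hoc checks for long-running statements on each platform.
-- The 5-minute threshold is an illustrative assumption.

-- PostgreSQL: active sessions whose current statement has run for over 5 minutes
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 100)    AS query_snippet
FROM   pg_stat_activity
WHERE  state = 'active'
  AND  now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;

-- SQL Server: requests that have been executing for over 5 minutes
SELECT r.session_id,
       r.start_time,
       r.status,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       t.text                      AS query_text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.total_elapsed_time > 5 * 60 * 1000
  AND  r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;
```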

This is where having cross-database capabilities in place, and standardizing the way different database types are developed, deployed and managed, can help you maintain and even increase the speed of delivery.

4. Remaining compliant is keeping DBAs up at night

While the typical threat most people think about when considering data breaches is hackers, the Thales 2023 Data Threat Report found that, for the second year running, ‘Human error’ was the number one concern, ahead of ‘Hacktivists’ and ‘Nation-state’ actors.

The risk of any breach, whether caused by internal or external actors, brings the danger of reputational damage alongside any fines that may come from regulators, which can be hefty. And breaches don’t just happen in production environments. Development and test servers can be just as vulnerable, and are often an afterthought when it comes to security practices.

The key to resolving this is integrated DevOps pipelines for both application and database development that keep servers secure and reduce the risk of data breaches, yet still enable agile processes, particularly when they’re allied to a Test Data Management approach that uses methods like data subsetting, test data generation and data masking to sanitize sensitive data in the database copies used in development and testing.
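As a minimal illustration of the masking part of that approach, the sketch below overwrites obvious PII in a development copy of a database. The customers table and its columns are hypothetical, and a real Test Data Management tool would preserve data formats, distributions and referential integrity far more carefully.

```sql
-- Minimal sketch, assuming a hypothetical customers table in a development copy.
-- Run against the restored dev/test copy only, never against production.
UPDATE customers
SET    email         = CONCAT('user', customer_id, '@example.com'),
       full_name     = CONCAT('Customer ', customer_id),
       phone         = NULL,
       date_of_birth = NULL;
```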

Keeping on top of a server estate can be hard

So you’ve got a growing, diverse server estate that needs to enable fast service delivery while remaining compliant.

It’s important that you and your team know about, and can resolve, performance issues before they have an impact on your customers’ experience. If there’s an issue with any server, wherever it is, on-premises or in the cloud, you need to know about it, and be able to resolve the issue, fast.

You also need to know where your customer data lives, which parts of it are regarded as Personally Identifiable Information (PII) and, for compliance reasons, who has access to it. Yet at the same time, you need to speed up the development process with quick access to production-like copies of databases for use in development and testing that don’t expose that data.
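As a rough starting point, and no substitute for a proper data catalog, a query along these lines can be run against each database to flag columns whose names suggest PII. The name patterns are assumptions and will miss anything that isn’t helpfully named.

```sql
-- Sketch: search the catalog for column names that hint at PII.
-- information_schema works on both SQL Server and PostgreSQL;
-- the LIKE patterns are illustrative, not exhaustive.
SELECT table_schema, table_name, column_name
FROM   information_schema.columns
WHERE  LOWER(column_name) LIKE '%email%'
   OR  LOWER(column_name) LIKE '%name%'
   OR  LOWER(column_name) LIKE '%phone%'
   OR  LOWER(column_name) LIKE '%birth%'
   OR  LOWER(column_name) LIKE '%ssn%'
ORDER BY table_schema, table_name;
```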

The traditional approach is to use scripts to manually monitor your server status. Sure, you could hire an army of DBAs to do this, if only you could spin up database experts as quickly as you can spin up cloud servers.
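For a sense of what that manual approach looks like in practice, such a script might run dozens of checks like the one sketched below, on every server, every day. This example lists blocked sessions on SQL Server and is an illustration rather than a recommended monitoring strategy.

```sql
-- One of many ad-hoc checks a manual monitoring script might run:
-- list SQL Server sessions that are currently blocked, and by whom.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time            AS wait_time_ms,
       DB_NAME(r.database_id) AS database_name
FROM   sys.dm_exec_requests AS r
WHERE  r.blocking_session_id <> 0;
```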

The reality is that manual monitoring of your estate leaves you blind, vulnerable and slow. Your team aren’t aware of potential issues and instead have to react when they occur, spending a lot of time going through logs and reports to identify where and how the problem occurred, before spending even more time working out how to fix it.

Meanwhile, a vulnerable server could be leaking sensitive data without you knowing it, so that the first indication you get that something is wrong is the regulator knocking on your door with a hefty fine.

With monitoring, you’re in control

When you proactively monitor your estate with a third-party tool, you can be alerted to, and resolve, performance issues more quickly, before they affect your business. You also increase the visibility of the impact deployments have on server performance and availability, and ensure an effective feedback loop is maintained between teams.

At the highest level, a single pane of glass view gives you the immediate status of all of your servers, regardless of where they are and how they are hosted, and your team has at their fingertips the tools to drill down, find and fix issues. Powerful features like capacity planning, backup logs and security breach alerting also mean you can always be on top of your server estate.

When you combine this with the provisioning of sanitized database copies that allow teams to work on realistic but secure copies of your production database, you create an ecosystem that enables you to confidently manage your entire estate while delivering a speedy yet secure experience for your customers.

Ready to discuss how to monitor your server estate for performance and availability? Check out our solution page or get in touch to discuss your requirements.

 

Tools in this post

Redgate Monitor

Real-time SQL Server and PostgreSQL performance monitoring, with alerts and diagnostics

Find out more