Databricks

External Article

Data Streaming Databricks in Azure

The core functionality of Apache Spark supports Structured Streaming using either a micro-batch or a continuous processing method. The two most popular sources of input data are data lake files and Kafka events, and checkpoint files in the data lake keep track of which data has already been processed.
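
As a rough illustration of that pattern, here is a minimal PySpark sketch that reads Kafka events and writes them to a data lake path with a checkpoint location. The broker address, topic name, and storage paths are placeholders, and the Kafka source assumes the spark-sql-kafka connector is available on the cluster.

# Minimal PySpark Structured Streaming sketch (illustrative only).
# The broker address, topic name, and data lake paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()

# Read a stream of events from Kafka, one of the popular input sources.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "sales-events")                # placeholder topic
    .load()
)

# Write the stream to the data lake; the checkpoint location records which
# offsets have been processed so the query can resume where it left off.
query = (
    events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("delta")                                    # or "parquet"
    .option("checkpointLocation", "/mnt/datalake/checkpoints/sales")
    .option("path", "/mnt/datalake/bronze/sales")
    .trigger(processingTime="1 minute")                 # micro-batch trigger
    .start()
)
query.awaitTermination()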

2025-03-14

Blogs

Speaking at SQL Saturday Austin 2025

SQL Saturday Austin 2025 is in just a few days. I am honored to...

Updating SSMS is Easy (w/ v21)

I’ve been using the SSMS preview for v21. This is the next evolution of...

Execute Fabric Data Pipeline from Azure Data Factory

In the blog post "Call a Fabric REST API from Azure Data Factory" I...

Read the latest Blogs

Forums

SQL Query Performance with Left Join

By juniorDBA13

I have a query that is performing poorly. It has a left join to...

The Long Running Backup

By Steve Jones - SSC Editor

Comments posted to this topic are about the item The Long Running Backup

Interview Tips

By Steve Jones - SSC Editor

Comments posted to this topic are about the item Interview Tips

Visit the forum

Question of the Day

The Long Running Backup

I have a long-running backup of the Sales database running during a maintenance window on SQL Server 2022, and it is going to take another 2 hours to complete. I also need to perform some proactive work and add another data file (ndf) to the Sales database to handle the expected growth over the next month. Can I do both simultaneously?

See possible answers