Okay, just found one really interesting feature... described here: https://cloud.google.com/bigquery/docs/querying-wildcard-tables#limitations
It makes sense if your data pipeline generates something like CSV files with names like [standard file prefix]MMDDYY.
The part to keep in mind is that in BigQuery, you often upload a series of huge files/CSVs/whatever, each into its own table. In addition to filtering what's IN a table, in BQ you can filter which tables "participate" in the query, like this:
max != 9999.9  # code for missing data
AND _TABLE_SUFFIX BETWEEN '29' AND '35'
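For context, that fragment is the WHERE clause from the wildcard-table example in the linked docs, which queries the public NOAA GSOD weather tables (one table per year, named gsod1929, gsod1930, and so on). A fuller sketch of the whole query might look something like this (the dataset name comes from the docs; the column alias and GROUP BY are my own framing):

```sql
SELECT
  _TABLE_SUFFIX AS year_suffix,  -- '29' through '35', i.e. gsod1929..gsod1935
  MAX(max) AS max_temperature    -- hottest reading within each matched table
FROM
  `bigquery-public-data.noaa_gsod.gsod19*`  -- wildcard matches every table whose name starts with gsod19
WHERE
  max != 9999.9  -- 9999.9 is GSOD's code for missing data
  AND _TABLE_SUFFIX BETWEEN '29' AND '35'  -- prune the matched tables before scanning
GROUP BY
  year_suffix
```

The neat part is that the _TABLE_SUFFIX filter limits which tables get scanned at all, not just which rows come back.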
It's like partitioned tables without the hassle of partitioning. It also means you can import entire files with the same structure into their own tables, and then combine them with _TABLE_SUFFIX instead of writing a messy UNION ALL query that, in SQL Server, would require dynamic T-SQL. If you're doing time-based analysis (like changes in a value over time), you can query all the tables at once. (Super handy if you're testing a query: just tighten the _TABLE_SUFFIX filter so only the first table is returned.)

Funky, but I can definitely see how this would be helpful if you're running queries at massive scale. I seriously think it's time for a book! Otherwise, I might spend too much time thinking in terms of how SQL Server does things, and the two database engines are vastly different.
Once upon a time, I did cancer research, and the studies were all in separate databases/tables. It would have been pretty wild to use something like _TABLE_SUFFIX (I wonder if there's a PREFIX version of that) to get ALL the data from ALL the protocols of the same type (like "lung", "central nervous system", etc.) and do a super quick summary of *all* of them at once, just to look for patterns.
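As it happens, the wildcard itself is prefix-based: the * goes at the end of the table name, and _TABLE_SUFFIX captures whatever the * matched, so grouping studies under a shared name prefix gets you that for free. Sketching what that might have looked like (the project, dataset, table names, and columns here are entirely hypothetical):

```sql
-- Hypothetical layout: one table per study, named lung_p101, lung_p102, ...
SELECT
  _TABLE_SUFFIX AS protocol_id,      -- which study each summary row came from (e.g. 'p101')
  COUNT(*) AS enrolled,              -- hypothetical: one row per enrolled patient
  AVG(age_at_diagnosis) AS avg_age   -- hypothetical column
FROM
  `my-project.protocols.lung_*`      -- every lung-protocol table at once
GROUP BY
  protocol_id
ORDER BY
  protocol_id
```

One quick cross-study summary instead of one query per protocol.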