A Snowflake warehouse that is too small will cause all kinds of grief. Here’s how to detect and fix this problem and use AI to automate it all.
If you have large or complex Snowflake queries that are performing slowly, there could be all kinds of things wrong with them. We have a list of the top 8. Perhaps the most basic problem is an undersized warehouse. Imagine you tried to pour 10 ounces of milk into an 8-ounce glass. Of course you'd end up with 2 ounces spilled all over your table.
When a large query in Snowflake cannot fit into the memory configured for your warehouse, it "spills" bytes as well. First, it will spill them to the local disk. As you know, disk is much slower than memory, and as if that weren't problem enough, the query is now waiting on I/O back and forth, so performance will decrease dramatically. Even worse, if there isn't enough disk space locally, it will spill the bytes to remote storage, meaning the extra I/O has to crawl over the network as well.
Finding Snowflake spilled bytes
Fortunately, Snowflake tracks all this in snowflake.account_usage.query_history. If you have access to the Snowflake database (by default ACCOUNTADMIN is the only role with access), you can use this query to find the top 10 queries with spilled bytes in the last 30 days:
```sql
SELECT
    query_id,
    SUBSTR(query_text, 1, 50) AS partial_query_text,
    user_name,
    warehouse_name,
    bytes_spilled_to_local_storage,
    bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE (bytes_spilled_to_local_storage > 0
       OR bytes_spilled_to_remote_storage > 0)
  AND start_time::date > dateadd('days', -30, current_date)
ORDER BY bytes_spilled_to_remote_storage DESC,
         bytes_spilled_to_local_storage DESC
LIMIT 10;
```
You can change the date range by adjusting the -30 in the dateadd function to whatever you would like, and change how many results are shown by adjusting the 10 in the LIMIT clause. Once you know the offending queries, you can work to resolve the issue.
Fixing Snowflake spilled bytes
There’s really no magic to this, since you can’t violate the laws of physics. You basically have two options:
- Reduce query size. Try to get the 10-ounce query down to the 8-ounce capacity you have. This is easier said than done, especially if it isn’t your query. But it doesn’t hurt to ask whether the user really needs all those rows, joins, etc.
- Increase warehouse size. The most direct fix is simply to go up a size. Or you could consider using a Snowpark optimized warehouse.
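If you go the resizing route, it's a single statement. A minimal sketch, assuming a hypothetical warehouse named analytics_wh that is currently SMALL:

```sql
-- Bump the warehouse up one size (warehouse name is hypothetical).
-- The new size applies to newly submitted queries; queries already
-- running finish on the old size.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'MEDIUM';
```

Keep in mind each size step roughly doubles the credit consumption rate, so resize one step at a time and re-check the spilled-bytes query before going further.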
Snowpark optimized warehouses are designed for queries that use a lot of memory. Snowflake had machine learning use cases in mind for these, so they provide 16x more memory and 10x more local cache per node than a standard warehouse. So this might seem like a quick fix. But there are several factors to be aware of before switching to a Snowpark optimized warehouse:
- More expensive. Snowpark optimized warehouses are 50% more expensive than standard warehouses.
- Limited on the low end. The smallest a Snowpark warehouse can be is medium. This can prevent you from saving money if there is ever a time when a small or even x-small is appropriate for your workload.
- Downtime. You must suspend your warehouse before switching types and wait for the ALTER WAREHOUSE operation to complete, and this might be difficult for some warehouses.
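If you do decide to make the switch, the sequence looks roughly like this (the warehouse name is hypothetical):

```sql
-- The type can only be changed while the warehouse is suspended.
ALTER WAREHOUSE analytics_wh SUSPEND;

-- Switch to Snowpark-optimized; remember MEDIUM is the minimum size.
ALTER WAREHOUSE analytics_wh SET
  WAREHOUSE_TYPE = 'SNOWPARK-OPTIMIZED'
  WAREHOUSE_SIZE = 'MEDIUM';

-- Bring it back online once the ALTER completes.
ALTER WAREHOUSE analytics_wh RESUME;
```

Plan the suspend for a quiet window, since any queries routed to the warehouse in the meantime will queue until it resumes.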
The AI automated alternative
The problem with everything I just described is that you have to take time out of your schedule to do it. And of course it won’t be just once. Plus, you’ll need to keep an eye on warehouse size. Leave it too small and you’ll be crying over spilled bytes. Leave it too large and you’ll be crying over a larger bill. Do you really need to add this pressure to your daily work?
Keebo is the only fully automated Snowflake optimizer. Our patented machine learning algorithms work 24×7 to make the right adjustments at exactly the right time to save you money. This includes the warehouse size. We give you a slider bar setting so our robots know how to prioritize savings or performance to meet your needs. And we don’t need to look at any user data to do this: we look at 76 performance parameters, including bytes_spilled_to_local_storage and bytes_spilled_to_remote_storage. If you’d like to learn more, check us out. We’d love to show you a personalized demo and start you on a free trial.