※This is an English translation of the original Japanese article: https://dev.classmethod.jp/articles/snowflake-generation-2-standard-warehouses/

This is Sagara.

Snowflake has released new warehouses with improved performance, called "Generation 2 standard warehouses".

https://docs.snowflake.com/en/release-notes/2025/other/2025-05-05-gen2-standard-warehouses

I tried comparing the speed of these new warehouses with traditional ones, and I'll summarize the results here.

What are Generation 2 standard warehouses?

Below is a quote from the official documentation. In short, "Generation 2 standard warehouses" are the next generation of Snowflake's current standard virtual warehouses, focused on improved performance.

Generation 2 standard warehouses (Gen2) are an updated version (“next generation”) of Snowflake’s current standard virtual warehouses, focusing on improving performance for analytics and data engineering workloads. Gen2 is built on faster underlying hardware and intelligent software optimizations, including enhancements for delete, update, merge operations, and table scan operations. With Gen2, most queries complete faster, allowing more processing to be done concurrently.

https://docs.snowflake.com/en/user-guide/warehouses-gen2#label-gen-2-standard-warehouses-region-availability

Available Regions

This feature is Generally Available (GA) from its release, but at the time of writing it is offered only in the following regions:

  • AWS us-west-2 (Oregon)
  • AWS eu-central-1 (Frankfurt)
  • Azure East US 2 (Virginia)
  • Azure West Europe (Netherlands)

Cost

Looking at the Snowflake Service Consumption Table, the credit rates are as shown in the figure below. Compared to standard warehouses, credit consumption is 1.35 times higher on AWS and 1.25 times higher on Azure.

[Figure: Snowflake Service Consumption Table — Gen2 standard warehouse credit rates]
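If you want to confirm the actual credit consumption on your own account rather than relying on the rate table, the account_usage views can be used. Below is a minimal sketch; the warehouse names are the ones defined later in this article, and note that account_usage data can lag by up to a few hours.

```sql
-- Compare actual credits consumed by the two verification warehouses
-- (account_usage data may lag behind real time by up to ~3 hours)
select
    warehouse_name,
    sum(credits_used) as total_credits
from snowflake.account_usage.warehouse_metering_history
where warehouse_name in ('LARGE_WH_GEN1', 'LARGE_WH_GEN2')
  and start_time >= dateadd('day', -1, current_timestamp())
group by warehouse_name;
```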

Trying It Out

Let's actually check how the query speed changes compared to the previous warehouses.

Warehouse Definition

Execute the following queries to define the warehouses used for verification. To create a "Generation 2 standard warehouse", simply add resource_constraint = standard_gen_2 to the definition.

-- Define the traditional warehouse for comparison
create or replace warehouse large_wh_gen1
  warehouse_size = large;

-- Define the Gen2 warehouse
create or replace warehouse large_wh_gen2
  resource_constraint = standard_gen_2
  warehouse_size = large;
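According to the documentation, an existing warehouse can also be switched to Gen2 in place (and back again) with ALTER WAREHOUSE, so recreating it is not strictly necessary. A sketch:

```sql
-- Switch an existing warehouse to Gen2 in place
alter warehouse large_wh_gen1 set resource_constraint = standard_gen_2;

-- Revert it to a traditional (Gen1) standard warehouse
alter warehouse large_wh_gen1 set resource_constraint = standard_gen_1;
```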

Verification Queries

We will use sample queries for TPC-DS provided as Snowflake sample data for verification. Regarding data volume, the snowflake_sample_data.tpcds_sf10tcl.store_returns table has 2.9 billion rows, and the snowflake_sample_data.tpcds_sf10tcl.web_sales table has 7.2 billion rows.

Before executing each query, we disable the result cache at the session level and suspend/resume the warehouse, so that neither the result cache nor the warehouse's local cache affects the measurements.

  • Preparation Query Before Traditional Warehouse Verification
-- Disable the result cache at the session level
alter session set use_cached_result = false;

-- Suspend and resume the traditional warehouse
alter warehouse large_wh_gen1 suspend;
alter warehouse large_wh_gen1 resume;
use warehouse large_wh_gen1;

-- Specify the target schema for the query
use schema snowflake_sample_data.tpcds_sf10tcl;
  • Preparation Query Before Gen2 Warehouse Verification
-- Disable the result cache at the session level
alter session set use_cached_result = false;

-- Suspend and resume the Gen2 warehouse
alter warehouse large_wh_gen2 suspend;
alter warehouse large_wh_gen2 resume;
use warehouse large_wh_gen2;

-- Specify the target schema for the query
use schema snowflake_sample_data.tpcds_sf10tcl;
  • Query 1
with customer_total_return as
(select sr_customer_sk as ctr_customer_sk
,sr_store_sk as ctr_store_sk
,sum(SR_RETURN_AMT_INC_TAX) as ctr_total_return
from store_returns
,date_dim
where sr_returned_date_sk = d_date_sk
and d_year =1999
group by sr_customer_sk
,sr_store_sk)
 select  c_customer_id
from customer_total_return ctr1
,store
,customer
where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
from customer_total_return ctr2
where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
and s_store_sk = ctr1.ctr_store_sk
and s_state = 'NM'
and ctr1.ctr_customer_sk = c_customer_sk
order by c_customer_id
limit 100;
  • Query 2
with wscs as
 (select sold_date_sk
        ,sales_price
  from  (select ws_sold_date_sk sold_date_sk
              ,ws_ext_sales_price sales_price
        from web_sales
        union all
        select cs_sold_date_sk sold_date_sk
              ,cs_ext_sales_price sales_price
        from catalog_sales) x ),
 wswscs as
 (select d_week_seq,
        sum(case when (d_day_name='Sunday') then sales_price else null end) sun_sales,
        sum(case when (d_day_name='Monday') then sales_price else null end) mon_sales,
        sum(case when (d_day_name='Tuesday') then sales_price else  null end) tue_sales,
        sum(case when (d_day_name='Wednesday') then sales_price else null end) wed_sales,
        sum(case when (d_day_name='Thursday') then sales_price else null end) thu_sales,
        sum(case when (d_day_name='Friday') then sales_price else null end) fri_sales,
        sum(case when (d_day_name='Saturday') then sales_price else null end) sat_sales
 from wscs
     ,date_dim
 where d_date_sk = sold_date_sk
 group by d_week_seq)
 select d_week_seq1
       ,round(sun_sales1/sun_sales2,2)
       ,round(mon_sales1/mon_sales2,2)
       ,round(tue_sales1/tue_sales2,2)
       ,round(wed_sales1/wed_sales2,2)
       ,round(thu_sales1/thu_sales2,2)
       ,round(fri_sales1/fri_sales2,2)
       ,round(sat_sales1/sat_sales2,2)
 from
 (select wswscs.d_week_seq d_week_seq1
        ,sun_sales sun_sales1
        ,mon_sales mon_sales1
        ,tue_sales tue_sales1
        ,wed_sales wed_sales1
        ,thu_sales thu_sales1
        ,fri_sales fri_sales1
        ,sat_sales sat_sales1
  from wswscs,date_dim
  where date_dim.d_week_seq = wswscs.d_week_seq and
        d_year = 1999) y,
 (select wswscs.d_week_seq d_week_seq2
        ,sun_sales sun_sales2
        ,mon_sales mon_sales2
        ,tue_sales tue_sales2
        ,wed_sales wed_sales2
        ,thu_sales thu_sales2
        ,fri_sales fri_sales2
        ,sat_sales sat_sales2
  from wswscs
      ,date_dim
  where date_dim.d_week_seq = wswscs.d_week_seq and
        d_year = 1999+1) z
 where d_week_seq1=d_week_seq2-53
 order by d_week_seq1;
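To collect the elapsed times for each run, one option is the information_schema.query_history table function instead of reading them off the UI. A sketch, assuming the warehouse names defined above (the function only covers recent queries within its retention window):

```sql
-- Elapsed times (in seconds) of recent queries on the two warehouses
select
    warehouse_name,
    total_elapsed_time / 1000 as elapsed_s,
    left(query_text, 50) as query_head
from table(information_schema.query_history(result_limit => 100))
where warehouse_name in ('LARGE_WH_GEN1', 'LARGE_WH_GEN2')
  and query_type = 'SELECT'
order by start_time desc;
```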

Verification Results

Query 1

On average, the Gen2 warehouse was about 2.6 seconds faster.

| Warehouse Type | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Traditional | 8.7s | 9.4s | 8.9s | 10s | 8.7s | 9.14s |
| Gen2 | 7.4s | 6.2s | 6.1s | 6.4s | 6.7s | 6.56s |

Here are the query profiles as well. You can see that the proportion of time spent on Aggregate operations is lower for the Gen2 warehouse.

  • Traditional Warehouse

[Screenshot: Query 1 profile on the traditional warehouse]

  • Gen2 Warehouse

[Screenshot: Query 1 profile on the Gen2 warehouse]

Query 2

On average, the Gen2 warehouse was about 34 seconds faster. I was surprised by the significant difference compared to Query 1!

| Warehouse Type | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Traditional | 48s | 48s | 47s | 48s | 49s | 48.00s |
| Gen2 | 17s | 14s | 13s | 12s | 12s | 13.60s |

Here are the query profiles as well. You can see that the proportion of time spent on Aggregate and Join operations is lower for the Gen2 warehouse. Since Query 1 scanned 4.09 GB while Query 2 scanned 66.83 GB, it seems the performance benefits of the Gen2 warehouse become more apparent as scan volume grows.

  • Traditional Warehouse

[Screenshot: Query 2 profile on the traditional warehouse]

  • Gen2 Warehouse

[Screenshot: Query 2 profile on the Gen2 warehouse]

Conclusion

Snowflake has released the new 'Generation 2 standard warehouses' with improved performance, so I conducted a speed comparison with traditional warehouses.

The difference in Query 2 was particularly surprising. With processing time roughly 3.5 times faster (48.00s vs. 13.60s), the Gen2 warehouse clearly offers better cost efficiency even after accounting for the 1.35x (AWS) / 1.25x (Azure) price increase.
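As a rough back-of-the-envelope check (illustrative arithmetic only, using this article's Query 2 numbers): multiplying the Gen2 runtime ratio by the AWS price multiplier gives its per-query cost relative to the traditional warehouse.

```sql
-- Query 2 on AWS: Gen2 runtime ratio x price multiplier
-- (13.60 / 48.00) * 1.35 = 0.3825, i.e. ~38% of the traditional cost
select round((13.60 / 48.00) * 1.35, 2) as gen2_relative_cost;
```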

Although I only tested two queries this time, within this scope of verification the performance benefits seemed particularly evident for queries that scan large volumes of data and spend significant time on Aggregate and similar operations.

Gen2 warehouses are not yet available in the Japanese regions, but these verification results make their release there highly anticipated. Please give them a try once they become available in your region!