Joining tables raises an error because the join memory limit is exceeded; the join size limit needs to be increased

Yes, when will this be released? We are currently running into this issue, and it is highly frustrating because it breaks our automation and workflows.

Hi all, we are soon going to release a new feature that increases the SPICE JOIN limit on secondary table size. I would like to ask for your feedback here: what size do you need to unblock your use case? 5 GB, 10 GB, 20 GB?

Please note two things:
1/ The new feature applies only to joins between SPICE datasets. If you want to join data from different data sources, the limit is still 1 GB. You can ingest those tables into SPICE to be unblocked by the new size limit.
2/ The size limit applies to all the secondary tables added together; the primary (left) table has no size limit. The current limit is 1 GB.
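To make the rule concrete, here is a minimal sketch of the size check described above. It assumes you already know each secondary table's SPICE size in bytes (for example, from the QuickSight `DescribeDataSet` API's `ConsumedSpiceCapacityInBytes` field); the sizes and the helper function name are illustrative, not part of any QuickSight API.

```python
# Sketch: does a set of secondary tables fit under the join limit?
# Only the secondary tables count toward the limit; the primary
# (left) table is excluded, as described above.

JOIN_LIMIT_BYTES = 1 * 1024**3  # current limit: 1 GB total for secondary tables


def secondary_tables_fit(secondary_sizes_bytes, limit=JOIN_LIMIT_BYTES):
    """Return True if all secondary tables combined stay within the limit."""
    return sum(secondary_sizes_bytes) <= limit


# Two secondary tables of 600 MB and 300 MB fit under the 1 GB limit:
print(secondary_tables_fit([600 * 1024**2, 300 * 1024**2]))  # True
# Two tables of 600 MB each do not:
print(secondary_tables_fit([600 * 1024**2, 600 * 1024**2]))  # False
```

With a raised limit (say 20 GB), only the `limit` argument would change; the primary table still would not count.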

Thanks!


Hi @m0ltar, previously we treated all SPICE datasets the same way as cross-source datasets. When users requested a join with SPICE datasets, we basically had to load the secondary data into memory and run the join, similar to how we handle cross-source data. That's why we have the 1 GB limit.

We are soon going to launch the SPICE JOIN feature. With the new capability, we move the data onto the same node and run the JOIN entirely within the SPICE infrastructure, meaning we now treat SPICE datasets as a single source. That's why we will be able to support a much higher limit than 1 GB.

Can you share your use case? If we increase the limit from 1 GB, what new limit would unblock your use case? Thanks!


How much do you need @Viacheslav ?

@Ashokquicksight We are going to increase the limit on the total size of all secondary tables in a SPICE JOIN. What new size limit (the current limit is 1 GB) would work for you?

We no longer have the use case, as we had to refactor everything and move the JOINs back to the database. Beyond that, I am not sure. How can I measure the JOIN size? I think it depends on the data storage, the compression, and how QS uses it.
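One way to gauge a dataset's size is its SPICE footprint as reported by the QuickSight `DescribeDataSet` API (`DataSet.ConsumedSpiceCapacityInBytes`). This is a sketch under the assumption that this field is a reasonable proxy for the size the join sees; the hand-made response dict stands in for a real API call so the example is self-contained.

```python
# Sketch: read a dataset's SPICE footprint from a DescribeDataSet response.
# In practice the response would come from something like:
#   boto3.client("quicksight").describe_data_set(
#       AwsAccountId="...", DataSetId="...")
# Here we use a hand-made response dict instead of calling AWS.


def spice_size_bytes(describe_response):
    """Extract ConsumedSpiceCapacityInBytes from a DescribeDataSet response."""
    return describe_response["DataSet"]["ConsumedSpiceCapacityInBytes"]


sample_response = {"DataSet": {"ConsumedSpiceCapacityInBytes": 734003200}}
print(spice_size_bytes(sample_response) / 1024**2, "MB")  # 700.0 MB
```

Note that this reports stored (compressed) SPICE size, which may differ from the memory the join actually consumes at run time.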

@emilyzhu

Our workaround was to use custom SQL to merge the second-largest dataset with the largest one, so that the total size of the remaining datasets drops below 1 GB. Normally we would want to re-use that second-largest dataset in other child datasets, but with this approach we lose that re-usability.

If the total size of the remaining datasets outside of the largest one can be up to 20 GB, that would work for our use case.

Thanks @David_Wong and @m0ltar for the feedback! We are actually in the preview phase for the SPICE JOIN capability, with the limit increased to 20 GB. Would you like to try the preview feature?

@Viacheslav, @evanbetterc and @Ashokquicksight - also for your information.

Feel free to message me offline with your account ID and region, and we can enable it for you.

Thanks!

When do we plan to launch this feature for everyone? And where can we view the currently configured join limits?

Hi @VAMSI1, we will launch the feature in a few months. If you wish to preview it, please message me offline. Thanks!

@emilyzhu

While I'm combining different data sources to make a join, we are facing issues due to memory size limitations.
Error: Memory capacity limitation exceeded for data sources in the join configuration. Breakdown of data source memory capacity in bytes by LogicalTableId at the point of failure.

Can you enable the preview feature with the 20 GB SPICE JOIN capability for me?
Account ID: 884515407868
Region: India

Hi emilyzhu, I would also like to try the preview. How can I contact you offline?
Best regards,
Mattias