QuickSight ran out of join memory while processing your request

Suddenly, one of my datasets has started failing to refresh with this error. It started yesterday with no change to the structure of the dataset. I tried reducing the amount of data: previously it held sales records from 2018 until now. I limited it to 2023 and later and it still failed. When I limited it to July 2024 and later, it started working again.

Is there a way to know which table join in the dataset is causing this issue?

No change was introduced to this dataset before it started to fail, and the underlying data in the source tables did not change either.

@Ali_B ,

Have a look here: Joining dataset is not allowed due to capacity[High Priority] - #4 by Sanjeeb2022

Since the July 2024 update the limit is 20 GB; my older post refers to the previous 1 GB limit.

  • Datasets stored in SPICE – This type of join contains tables that are all ingested into SPICE. The combined size of all secondary tables in this join cannot exceed 20 GB.
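
If it helps, one rough way to check how close the joined tables are to that limit is to look at the SPICE size each dataset reports through the QuickSight API. This is only a sketch using boto3; the account ID and dataset IDs are placeholders you would need to replace with your own:

```python
# Sketch: report how much SPICE each dataset in the join consumes,
# to compare against the 20 GB limit on the combined secondary tables.
# Assumes boto3 credentials are already configured.
import boto3

ACCOUNT_ID = "123456789012"            # placeholder AWS account ID
DATASET_IDS = [
    "primary-dataset-id",              # placeholder dataset IDs in the join
    "secondary-dataset-id",
]

qs = boto3.client("quicksight")

for ds_id in DATASET_IDS:
    ds = qs.describe_data_set(AwsAccountId=ACCOUNT_ID, DataSetId=ds_id)["DataSet"]
    size_gb = ds.get("ConsumedSpiceCapacityInBytes", 0) / 1024 ** 3
    print(f"{ds['Name']}: {size_gb:.2f} GB in SPICE ({ds['ImportMode']})")
```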

Kind regards,
Koushik

Thanks for the reply.

I’m aware of this limitation. However, I doubt the data grew suddenly enough between July 30th and July 31st to exceed the limit. And even after restricting the data to 2 years’ worth of records (rather than the 7 years it was previously working with), it failed again.

This dataset was refreshing just fine when the limit was still 1 GB.

@Ali_B ,

If the data did suddenly grow, then analyze the model and check why it increased: could it be duplication? This would be more of a fix at your end, as QuickSight is just querying the underlying data sources.

Kind regards,
Koushik

No, the data didn’t suddenly grow; that’s what I was trying to say.

It was working just fine even before the SPICE joined-table limit increased from 1 GB to 20 GB. Only yesterday did it start throwing this error, and I can’t see a reason why. I tried scaling back the amount of data from 7 years to 2 years and SPICE threw the same error; only when I limited it to 1 year did it start working again, but we still need to see the full 7 years.

@Ali_B ,

If it was working even with the 1 GB limit, have a look at the datasets and the joins that have been defined and check whether one of them is causing a record explosion. A rough way to check for that is sketched below.
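
One way to spot join fan-out is to count how many rows each join key contributes in a secondary table before QuickSight joins it. This is only a sketch and assumes you can pull the secondary table into pandas (for example from a CSV export or a query result); the file name and column name are placeholders:

```python
# Sketch: look for join keys in the secondary (right-hand) table that repeat
# many times, since each repeat multiplies the matching rows from the primary
# table in the joined result.
import pandas as pd

# Placeholder: load the secondary table however you normally access it.
secondary = pd.read_csv("secondary_table.csv")

key_counts = secondary["join_key"].value_counts()
print("rows in secondary table:", len(secondary))
print("distinct join keys:", key_counts.size)
print("worst fan-out (rows for a single key):", key_counts.max())
print(key_counts.head(10))  # the keys most likely to blow up the join
```

A key that appears thousands of times in a secondary table can push the joined result past the join memory limit even if the raw data itself did not grow.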

Kind regards,
Koushik

Hi @Ali_B

It’s been a while since we last heard from you. Did you have any additional or follow-up questions regarding your initial topic? Were you able to try out the last approach that was shared?

If we do not hear back within the next 3 business days, I’ll go ahead and mark the solution to close out the topic.

Thank you!

Hi @Ali_B

Since we have not heard back from you, I’ll go ahead and close this topic. However, if you have any additional questions, feel free to create a new topic in the community and link this discussion for relevant information.

Thank you!