How do we get more details on a generic Something Went Wrong message from a dataset custom SQL apply, when all that is provided is the region, timestamp, and request ID?
@thudnut does your SQL work when run directly on the respective DB?
Yes.
The query joins 33 views (to combine 33 duplicate schemas in the same DB) with the use of QuickSight query parameters.
If I limit the custom SQL to 14 views joined, with 9 parameters used in each join, it succeeds. Adding just one more join makes the error pop up.
@thudnut you might be hitting the threshold.
Ref: Data source quotas - Amazon QuickSight
Quotas for SPICE are as follows:
- 2,047 Unicode characters for each field
- 127 Unicode characters for each column name
- 2,000 columns for each file
- 1,000 files for each manifest
- For Standard edition, 25 million (25,000,000) rows or 25 GB for each dataset
- For Enterprise edition, 1 billion (1,000,000,000) rows or 1 TB for each dataset
Thanks for the ref. This is a direct SQL query, not SPICE, against an AWS MySQL database.
Quotas for queries are as follows:
- 127 Unicode characters for each column name. The longest column name currently in use is 16 characters.
- 2,000 columns for each dataset. 37 columns total
- 2-minute quota for generating a visual, or an optional dataset sample. Possibly an issue, though the error pops up in less than 10 seconds
- Data source timeout quotas apply (varies for each database engine). Possibly an issue, though the error pops up in less than 10 seconds
Is there a way to confirm which Quota is being exceeded?
It can’t be a timeout on the QS side; I have custom SQL that runs for more than 30 minutes.
Your final output only has 37 columns after joining 33 views; I probably didn’t understand your scenario.
Clarification … 33 views (drawing from different schemas), all with the same 37 columns, are combined with UNIONs.
select * from view1
where col1 = <<$param1>> …
union
select * from view2
where col1 = <<$param1>> …
union
…
Hi @thudnut - The best way to test your SQL is to simply copy it into a SQL client tool and see how long it takes to return the complete data set. Alternatively, you can raise a ticket with AWS Customer Support and provide the request ID; they can check the internal logs and help you. To raise a ticket, please follow the link - Creating support cases and case management - AWS Support
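For example, one rough way to make sure the client actually produces the complete result (rather than just the first fetched page) is to wrap the query in a count. This is only a sketch, with a literal standing in for the <<$param1>> value:

-- Sketch only: counting forces the full result set to be produced, so the
-- elapsed time reflects the whole query rather than the first page of rows.
select count(*)
from (
    select * from view1 where col1 = 'param1-value'
    union
    select * from view2 where col1 = 'param1-value'
    -- ... remaining views ...
) as full_result;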
Regards - Sanjeeb
Maybe you are running into that.
Also, in your approach it looks like you are not caching data in SPICE, so every interaction runs real-time queries against the database. If real-time data is not required, I would create a view in SQL and then fetch all data from that view into QS at once with a specific refresh schedule. Filters will then filter the data at the SPICE level (the default/recommended behavior), which will be very fast and have very low latency.
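A minimal sketch of that approach, reusing the view1 … view33 placeholders from earlier (the consolidated view name is hypothetical, and the <<$param1>> filters are dropped so filtering happens in QuickSight instead):

-- Hypothetical consolidated view; UNION ALL skips the de-duplication pass
-- that plain UNION performs across the 33 per-schema views.
create view all_data_v as
    select * from view1
    union all
    select * from view2
    -- ... repeat through view33 ...
    union all
    select * from view33;

The dataset would then simply be select * from all_data_v, imported into SPICE on a refresh schedule, with the per-parameter filtering applied as QuickSight filters at the SPICE level.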
Hi @Sanjeeb2022. Thank you for the link. I’ve created a ticket to get the internal logs.
The SQL was crafted in my client tool before bringing it over to the dataset. It runs successfully there in under 10 seconds.
@neelay
The available SPICE storage shows as 59 GB and the dataset is over 505 GB. Increasing this space might be an option for us. Is 505 GB (and growing) reasonable for SPICE storage?
For the CustomSql required parameters and constraints:
- DataSourceArn = Is present.
- Name = Is present and less than 64 characters. I’ve tested names ranging from ‘A’ to ‘AL-ME’.
- SqlQuery = Is present and less than 65,536 characters. The query length with parameters is 20,784 characters in total.
- Columns = Is present and fewer than 2,048 items. The query currently returns 37 columns.
@thudnut probably.
For Standard edition, 25 million (25,000,000) rows or 25 GB for each dataset. For Enterprise edition, 1 billion (1,000,000,000) rows or 1 TB for each dataset.
You may also need to think about the cost of SPICE storage and the time for data transfer. If possible, do some aggregation before bringing the data into SPICE.
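For example, a rough sketch of pre-aggregating before the SPICE import, assuming a hypothetical consolidated view named all_data_v and made-up column names (report_date, col1, amount):

-- Hypothetical pre-aggregation to shrink the SPICE footprint; replace the
-- grouping keys and measures with the columns your dashboards actually need.
select report_date,
       col1,
       count(*)    as row_count,
       sum(amount) as total_amount
from all_data_v
group by report_date, col1;

Keeping only the granularity the visuals need reduces both the SPICE capacity consumed and the refresh time.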