@ajinkya_ghodake
How are you planning to consume the data after you export it? Excel can’t handle files with 10 million rows; its row limit is roughly the same as the QuickSight export limit - about 1 million rows.
Hi @ajinkya_ghodake - Thanks for posting the question. This is an interesting use case. If you are exporting the data as a CSV file (which is a tabular format), you do not need QuickSight to act as a data extraction tool. It is better to shift this workload to Snowflake as a data extract job; that way you can export the data quickly. Exporting 1M rows from a reporting or visualization tool is not recommended, and I agree with @David_Wong’s point.
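For reference, a Snowflake extract job can unload a filtered query result straight to an external stage as CSV, so QuickSight never has to serve as the export path. The snippet below is a minimal sketch using the snowflake-connector-python driver; the connection parameters, the `orders` table, the `@csv_export_stage` stage, and the date filter are all hypothetical placeholders, not anything from the original post.

```python
import snowflake.connector

# Minimal sketch of a Snowflake data extract job: unload a filtered query
# result to an external stage as gzipped CSV. All names and credentials
# below are placeholders for illustration only.
conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="YOUR_WAREHOUSE",
    database="YOUR_DATABASE",
    schema="YOUR_SCHEMA",
)

try:
    conn.cursor().execute("""
        COPY INTO @csv_export_stage/daily_extract/   -- hypothetical stage backed by S3
        FROM (
            SELECT *
            FROM orders                              -- hypothetical source table
            WHERE order_date = '2024-01-01'          -- the per-user filter
        )
        FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP FIELD_OPTIONALLY_ENCLOSED_BY = '"')
        HEADER = TRUE
        OVERWRITE = TRUE
    """)
finally:
    conn.close()
```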
Hi @ajinkya_ghodake - When you mention that the user needs to apply a filter and then export to CSV, is the filter fixed? If yes, you could build a data pipeline with AWS Glue that reads the data from S3, applies the filter for the user, and writes CSV files out for them; this does require some data engineering work. Another approach is a purely custom solution: provide an interface where the user supplies the filter, then run a custom PySpark job that generates the CSV files. This requires a bit of coding.
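To make the PySpark idea concrete, here is a minimal sketch of such a job. It assumes Parquet source data in S3 and a hypothetical `order_date` column; the bucket names, paths, and filter value are placeholders and would come from the user-facing interface in practice.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("filtered-csv-export").getOrCreate()

# Read the source data from S3 (Parquet assumed; adjust for your format).
df = spark.read.parquet("s3://my-source-bucket/sales/")

# Apply the user-supplied filter, e.g. a single day of data.
filtered = df.filter(F.col("order_date") == "2024-01-01")

# Write the result back to S3 as CSV. coalesce(1) produces a single file,
# which only makes sense if the filtered result fits on one executor.
(filtered
    .coalesce(1)
    .write
    .option("header", True)
    .mode("overwrite")
    .csv("s3://my-export-bucket/exports/order_date=2024-01-01/"))

spark.stop()
```

The same script can run as an AWS Glue PySpark job with the filter passed in as a job parameter, which avoids managing your own Spark cluster.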
QuickSight Team,
We’re encountering the same limitations. These days, working with datasets containing over 1 million rows is quite common and no longer considered large. Do you have any plans to increase this limit?
We’re working with a SPICE dataset that contains approximately 135 million records, and we need to export it to a CSV file to offload it into our Redshift database. Despite applying various filters, including filtering down to just one day of data, we’re still exceeding the 1 million row limit. Please advise.