Hi All,
I have a CSV data file in my S3 bucket, and I have connected it as a QuickSight dataset source using a manifest file. The problem I am facing is that every time I refresh the dataset, it creates double the number of rows that are in my S3 file. This is affecting my sums and other counts.
Any help? Do I need to change my JSON manifest file below?
Based on the above, you have just one file, which is loaded into SPICE and used in your dashboard.
Check the number of rows after the refresh completes. Can you post a screenshot of what you mean by double the number of rows?
Hi, no, it is not solved yet. I am still seeing duplicates in my analysis, even when I do a count, so I have to select distinct in these cases.
My S3 file has the correct records, but when I connect it to QuickSight, each record is duplicated and the output is doubled.
Can you:
1. Post a screenshot of your S3 bucket? Is there only one file with 4 records, and do you see 8 records once it is ingested into SPICE?
2. Post a screenshot of the data prep (we should be seeing only 4 records there)?
I would like to validate these two points before asking you to log a support ticket for further assistance.
Please remove the URIPrefixes entry from the manifest file and test the ingestion again. Let us know if that solves the issue; if not, I would recommend opening a support ticket.
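For reference, a common cause of exactly-doubled row counts is a manifest whose fileLocations section lists the same object under both a URIs entry and a URIPrefixes entry, so SPICE ingests the file twice. Below is a minimal sketch of what that could look like; the bucket and file names are placeholders for illustration, not your actual manifest.

```json
{
  "fileLocations": [
    {
      "URIs": [
        "s3://my-bucket/data.csv"
      ]
    },
    {
      "URIPrefixes": [
        "s3://my-bucket/"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "containsHeader": "true"
  }
}
```

If your manifest resembles this, keeping only one of the two entries (either the explicit URIs list or the URIPrefixes prefix, but not both) should stop each record from being loaded twice.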