My database contains more than 7k records, but only around 3k are ingested into QuickSight. There is no sign of the remaining ones, and there is no error during ingestion.
The dataset initially shows only around 3k records, which does not match the source.
What is causing this? Thank you
Please download the error log (skipped rows) and examine which specific field is causing the issue. It might be related to date fields having inconsistent formats, or to numeric, decimal, and integer fields with conflicting data types.
Hi @Xclipse,
Thank you for your answer.
The error log mentions only 10 rows; there is no mention of the remaining 4k+ records. Is there anything else to check?
I have tried uploading the same dataset as an Excel file, and all the records were ingested. The problem only occurs when I use an S3 manifest.
Files that you select for import must be in delimited text (for example, .csv or .tsv), log (.clf), extended log (.elf), or JSON (.json) format. All files identified in one manifest file must use the same file format, and they must have the same number and type of columns. Amazon QuickSight supports UTF-8 file encoding, but not UTF-8 with byte-order mark (BOM). If you are importing JSON files, then for `globalUploadSettings` specify `format`, but not `delimiter`, `textqualifier`, or `containsHeader`.
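For reference, a QuickSight S3 manifest for CSV files typically looks like the sketch below. The bucket name and prefix (`s3://your-bucket/data/`) are placeholders; the `globalUploadSettings` values should match how your files are actually formatted, since a mismatch (for example, a wrong delimiter or text qualifier) can cause rows to be silently skipped or merged:

```json
{
  "fileLocations": [
    {
      "URIPrefixes": ["s3://your-bucket/data/"]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "textqualifier": "\"",
    "containsHeader": "true"
  }
}
```

Note that every file matched by the prefix must share the same format and column structure; if a stray file with a different layout sits under the same prefix, it can interfere with the import.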
Please refer to the documentation below; it might be helpful for you.