SPICE ingestion error

Hi,
I am getting this error while trying to ingest a dataset into Spice

‘Error threshold exceeded for dataset (10000) (maxErrors = 10000)’

The dataset has 300k rows and is 9.7 MB.

How can I resolve the issue?

Best,

Artur

It seems like it is probably related to a data type mismatch, such as a field that is coded as a Number but has some characters in it, or a date field that has dates in different formats.
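
If it helps, one quick way to spot those rows before re-ingesting is to scan the CSV locally. A minimal sketch, assuming pandas is available; the file name and column names are placeholders, not taken from your dataset:

```python
import pandas as pd

# The file name and column names below are placeholders; adjust them for your dataset.
df = pd.read_csv("my_dataset.csv", dtype=str)

# Rows where a supposedly numeric column contains values that are not numbers.
numeric_col = "amount"
bad_numbers = df[pd.to_numeric(df[numeric_col], errors="coerce").isna() & df[numeric_col].notna()]
print(bad_numbers.head())

# Rows where a date column does not parse with the single format you expect.
date_col = "created_at"
bad_dates = df[pd.to_datetime(df[date_col], format="%Y-%m-%d", errors="coerce").isna() & df[date_col].notna()]
print(bad_dates.head())
```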


Thanks. Will investigate and check my dataset.

Hi, @ArtSal. Did @Jesse’s solution work for you? I am marking his reply as the solution, but let us know if this is not resolved. Thanks for posting your questions on the QuickSight Community Q&A Forum!

Hi @Jesse,
I have the same issue. In the past, I have been refreshing my QS dataset from an S3 .csv file. The .csv in S3 was produced by some Redshift queries. Now I have changed the Redshift table definition so that a specific field is of varchar datatype, whereas in the past it was numeric. I also relabelled the numeric field to get the varchar values for each row. The .csv file is now produced successfully in S3 (with the varchar values), but QS fails to refresh it and produces this error. Do you have any explanation? To me it is strange that the .csv is not loaded to QS as long as it exists in S3.

QuickSight datasets store the column type as part of the metadata. If there is a datatype mismatch, the scheduled ingestion will likely fail. You can try to update the field type in your dataset and trigger the ingestion again. Alternatively, you can create a new dataset pointing to the same S3 bucket, and then in your analyses replace the old one with the new one - that process is usually more forgiving.
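
If you want to re-trigger the ingestion from outside the console and see why it failed, the ingestion API can help. A minimal boto3 sketch, assuming API access; the account ID, dataset ID, ingestion ID, and region below are placeholders:

```python
import boto3

# Placeholders for illustration: replace with your account ID, dataset ID, and region.
quicksight = boto3.client("quicksight", region_name="us-east-1")

ACCOUNT_ID = "123456789012"        # your AWS account ID
DATASET_ID = "my-spice-dataset"    # the QuickSight dataset ID, not its display name
INGESTION_ID = "manual-refresh-1"  # any unique ID you choose for this refresh

# Kick off a new SPICE ingestion after fixing the field type in the dataset.
quicksight.create_ingestion(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    IngestionId=INGESTION_ID,
)

# Check the outcome; ErrorInfo carries the error type and message for a failed ingestion.
status = quicksight.describe_ingestion(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    IngestionId=INGESTION_ID,
)["Ingestion"]
print(status["IngestionStatus"], status.get("ErrorInfo"))
```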


Hi @Jesse. Now I am creating a new .csv file in an S3 path by copying it there from Redshift. I have checked that the Redshift table exists and all the data are there, and the same for the .csv in the S3 bucket. But when I try to import it into QS using a manifest pointing to my file, the import seems to be successful but no data appear at all! Why??

@Fotis_flex
When the import is successful, can you check how many rows have been imported? (Post a screenshot.)
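
If the console view is unclear, the same information is available from the ingestion API. A minimal sketch, with placeholder account and dataset IDs:

```python
import boto3

# Placeholders for illustration: account ID, dataset ID, and region are assumptions.
quicksight = boto3.client("quicksight", region_name="us-east-1")

resp = quicksight.list_ingestions(
    AwsAccountId="123456789012",
    DataSetId="my-spice-dataset",
)

# Pick the most recent ingestion by creation time and print its row counts.
latest = max(resp["Ingestions"], key=lambda i: i["CreatedTime"])
row_info = latest.get("RowInfo", {})
print("Status:", latest["IngestionStatus"])
print("Rows ingested:", row_info.get("RowsIngested"))
print("Rows dropped:", row_info.get("RowsDropped"))
```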


Out of curiosity, why aren’t you connecting directly to the Redshift table instead?

In your manifest, are you pointing to the bucket or folder? Or are you specifying the file name? I would recommend pointing it to the folder or bucket level.
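
For reference, this is roughly what a folder-level manifest looks like; it is built here in Python to match the earlier snippets, and the bucket and prefix names are made up for illustration:

```python
import json

# A minimal QuickSight S3 manifest that points at a folder (prefix) instead of a
# single file. The bucket and prefix names are placeholders.
manifest = {
    "fileLocations": [
        {"URIPrefixes": ["s3://my-bucket/exports/my-dataset/"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ",",
        "containsHeader": "true",
    },
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```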


Hello. That was a good tip! Everything now runs smoothly - thanks!
