Was there a recent update to how SPICE imports work, perhaps when used in conjunction with custom SQL? I’m seeing an odd bug involving column order, like so:
Given custom SQL that looks like this:
SELECT
    SOME_VARCHAR_COLUMN_A,
    SOME_DATETIME_COLUMN_B,
    SOME_INTEGER_COLUMN_C
FROM MY_TABLE
that returns rows like this:
SOME_VARCHAR_COLUMN_A | SOME_DATETIME_COLUMN_B | SOME_INTEGER_COLUMN_C
======================+========================+======================
abcdefg               | 1659355922000          | 1234
Some other string     | 1659432273000          | 192384719
And given a data set where I had previously renamed SOME_VARCHAR_COLUMN_A to “Column A”, I am now getting errors on SPICE refresh that say the following:
Error threshold exceeded for dataset (10000) (maxErrors = 10000)
SKIPPED ROWS
10001 rows where SOME_DATETIME_COLUMN_B field date values were not in a supported date format.
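For reference, the same failure details are retrievable programmatically. Here’s a minimal sketch (Python/boto3 is my assumption; the account and dataset IDs are placeholders) that lists recent SPICE ingestions for the dataset along with their error and row counts:

import boto3

ACCOUNT_ID = "123456789012"   # placeholder
DATASET_ID = "my-dataset-id"  # placeholder

qs = boto3.client("quicksight")

# List recent SPICE ingestions and print status, error, and row counts.
resp = qs.list_ingestions(AwsAccountId=ACCOUNT_ID, DataSetId=DATASET_ID)
for ingestion in resp["Ingestions"]:
    print(ingestion["IngestionId"], ingestion["IngestionStatus"])
    # ErrorInfo appears on failed or partially skipped ingestions.
    error = ingestion.get("ErrorInfo", {})
    print("  error:", error.get("Type"), error.get("Message"))
    # RowInfo shows how many rows were ingested vs. dropped.
    rows = ingestion.get("RowInfo", {})
    print("  rows ingested:", rows.get("RowsIngested"),
          "dropped:", rows.get("RowsDropped"))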
The error file that the refresh dialog provides looks something like this:
ERROR_TYPE     | COLUMN_NAME            | SOME_DATETIME_COLUMN_B | SOME_INTEGER_COLUMN_C | Column A
===============+========================+========================+=======================+=========
MALFORMED_DATE | SOME_DATETIME_COLUMN_B | abcdefg                | 1659355922000         | 1234
MALFORMED_DATE | SOME_DATETIME_COLUMN_B | Some other string      | 1659432273000         | 192384719
So, it appears that the fields coming back from the query are still served to SPICE in the same order (A, B, C), but the renamed column has been moved to the end of the parsing order (B, C, A), so each row’s varchar value lands in the datetime column’s slot and fails date parsing.
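To sanity-check that theory, the dataset definition itself can be inspected through the API. Here’s a minimal sketch (again boto3, with placeholder IDs) that prints each logical table’s DataTransforms and the final OutputColumns order the dataset exposes:

import json
import boto3

ACCOUNT_ID = "123456789012"   # placeholder
DATASET_ID = "my-dataset-id"  # placeholder

qs = boto3.client("quicksight")
ds = qs.describe_data_set(AwsAccountId=ACCOUNT_ID, DataSetId=DATASET_ID)["DataSet"]

# Any RenameColumnOperation lives in the DataTransforms list of a
# logical table in the LogicalTableMap.
for table_id, table in ds["LogicalTableMap"].items():
    print(table_id)
    print(json.dumps(table.get("DataTransforms", []), indent=2))

# OutputColumns reflects the final column order the dataset exposes.
print([col["Name"] for col in ds["OutputColumns"]])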
As a workaround, I changed my renamed columns back to their original names, but I’m still getting SPICE ingest errors. I’ve even confirmed via the CLI, with
aws quicksight describe-data-set --aws-account-id my-acct-number --data-set-id my-dataset-id
that the RenameColumnOperation blocks are gone from the DataTransforms section of the LogicalTableMap.
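And in case it helps anyone reproduce this, a refresh can also be triggered and polled programmatically. Below is a minimal sketch (boto3, placeholder account/dataset IDs, arbitrary ingestion ID) that kicks off a full SPICE refresh and reports the outcome:

import time
import uuid
import boto3

ACCOUNT_ID = "123456789012"   # placeholder
DATASET_ID = "my-dataset-id"  # placeholder

qs = boto3.client("quicksight")

# Start a full SPICE refresh under a fresh ingestion ID.
ingestion_id = str(uuid.uuid4())
qs.create_ingestion(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    IngestionId=ingestion_id,
    IngestionType="FULL_REFRESH",
)

# Poll until the ingestion reaches a terminal state, then report it.
while True:
    ingestion = qs.describe_ingestion(
        AwsAccountId=ACCOUNT_ID,
        DataSetId=DATASET_ID,
        IngestionId=ingestion_id,
    )["Ingestion"]
    if ingestion["IngestionStatus"] in ("COMPLETED", "FAILED", "CANCELLED"):
        print(ingestion["IngestionStatus"], ingestion.get("ErrorInfo"))
        break
    time.sleep(10)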