Are there any calculated fields in the dataset that reference fields you’re trying to change?
This error can be triggered by data mismatches: schema changes in the source (e.g., new columns, or type shifts such as string to decimal), invalid values (nulls, overflows), or calculated fields that fail at refresh time (e.g., division by zero, incompatible types in ifelse).
DATA_TOLERANCE_EXCEPTION: There are too many invalid rows. Amazon QuickSight has reached the quota of rows it can skip and still continue ingesting. Check your data and try again.
The DATA_TOLERANCE_EXCEPTION in this scenario suggests that while your individual tables are clean, the Join operation is generating a result set that the SPICE engine cannot process. This typically happens when the join creates a massive “data explosion” or when hidden characters in the Redshift string fields break the ingestion stream. Since the error occurs during the join mapping phase rather than row-by-row loading, the error log fails to generate, which is why your download link is greyed out.
What to Check Next:
Check for “Many-to-Many” Explosions: Ensure your join keys don’t have thousands of duplicate values (like blanks, NULL, or default zeros) in both tables. This creates a Cartesian product that can blow up a dataset from thousands of rows to millions, crashing the refresh.
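One way to spot this before refreshing is to count duplicate join-key values on each side in the Redshift console. This is a sketch with hypothetical schema, table, and column names (`schema_a.table_a`, `join_key`); substitute your own:

```sql
-- Keys that appear more than once; each duplicate here multiplies
-- against every matching duplicate on the other side of the join.
SELECT join_key, COUNT(*) AS dup_count
FROM   schema_a.table_a
GROUP  BY join_key
HAVING COUNT(*) > 1
ORDER  BY dup_count DESC
LIMIT  20;
```

Run the same query against the second table. Pay particular attention to NULL, empty-string, and default-zero keys near the top of the list, since those are the usual explosion culprits.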
Verify String Padding (CHAR vs. VARCHAR): Redshift CHAR fields are fixed-length and pad values with spaces. If one side is CHAR and the other is VARCHAR, the join may fail or behave inconsistently; use TRIM() in a Custom SQL statement to align them.
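A minimal Custom SQL sketch of the TRIM() alignment, assuming hypothetical tables `orders` (where `cust_id` is CHAR) and `customers` (where `cust_id` is VARCHAR):

```sql
-- TRIM() strips the trailing spaces that CHAR padding adds,
-- so the two key columns compare as equal values.
SELECT o.order_id,
       c.cust_name
FROM   orders o
JOIN   customers c
  ON   TRIM(o.cust_id) = c.cust_id;
```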
Scan for Hidden Characters: Check for line feeds or carriage returns (\n or \r) within the Redshift strings. These hidden characters can break the row delimiters during the transfer to SPICE, leading to “undefined” exceptions.
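You can hunt for these in Redshift with CHR() (CHR(13) is carriage return, CHR(10) is line feed). The table and column names below (`my_table`, `description`) are placeholders:

```sql
-- Rows whose string field contains a hidden CR or LF.
SELECT id, description
FROM   my_table
WHERE  description LIKE '%' || CHR(13) || '%'
   OR  description LIKE '%' || CHR(10) || '%'
LIMIT  50;
```

If offenders turn up, you can strip them in the same Custom SQL, e.g. `REPLACE(REPLACE(description, CHR(13), ' '), CHR(10), ' ')`.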
Test via Custom SQL: Replace the QuickSight UI join with a Custom SQL block. If the join works in the Redshift console but fails in QuickSight, you’ve confirmed the issue is with how SPICE is receiving the joined result set.
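A sketch of what that Custom SQL block might look like, combining the TRIM() and NULL-key guards from the checks above (all names hypothetical):

```sql
-- Custom SQL to replace the QuickSight UI join.
-- TRIM() neutralizes CHAR padding; the WHERE clause drops
-- NULL keys that would otherwise multiply in the join.
SELECT a.*,
       b.extra_col
FROM   schema_a.table_a a
JOIN   schema_b.table_b b
  ON   TRIM(a.join_key) = TRIM(b.join_key)
WHERE  a.join_key IS NOT NULL;
```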
Monitor SPICE Capacity: Verify that the projected size of the joined data doesn’t exceed your account’s available SPICE memory, as a large join can quickly exhaust your allocated GBs.
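You can estimate the joined row count in the Redshift console before triggering a refresh. Since a many-to-many join produces one row per pair of matching keys, summing the product of per-key counts gives the projected size (hypothetical names again):

```sql
-- Projected row count of the join: for each shared key,
-- rows on side A times rows on side B, summed over all keys.
SELECT SUM(a.cnt * b.cnt) AS projected_rows
FROM  (SELECT join_key, COUNT(*) AS cnt
       FROM   schema_a.table_a GROUP BY join_key) a
JOIN  (SELECT join_key, COUNT(*) AS cnt
       FROM   schema_b.table_b GROUP BY join_key) b
  ON   a.join_key = b.join_key;
```

If `projected_rows` is orders of magnitude larger than either source table, you have found your explosion, and likely your SPICE capacity problem as well.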