QuickSight Only Showing 8 out of 59 Columns from S3 CSV - No Errors Reported

I’m connecting CSV files from S3 to QuickSight using a manifest file, but only 8 out of 59 columns are showing up (random columns, not sequential), and QuickSight reports no errors, warnings, or skipped rows during ingestion. The CSV has 59 columns with comma delimiters, contains IoT sensor data with many empty values (consecutive commas), and the timestamp column format is 2026-01-07 00:01:00+05:30 (with a + symbol in the timezone offset). I’ve tried various manifest configurations including textqualifier set to ", ', empty string, and completely removed, but nothing works. I suspect the + symbol in timestamps might be causing parsing issues, but I cannot modify the source CSV files. Has anyone encountered this issue where QuickSight silently drops columns without reporting errors, and are there any manifest settings that could help parse all columns correctly?

Sample data:

timestamp,controltype,dispressure,head,ikwtr,kw,mode,nocondwpr,outputfrequency,status,sucpressure,tripstatus,componentsid,inwatertemp,noctr,...,component_name
2026-01-07 00:01:00+05:30,1.0,8.0,0.0,0.0003,0.03,1.0,1.0,0.0,0.0,20.8,0.0,bhb-tri_10,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,Condenser Water Pump 1

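To check whether the `+05:30` offset alone could be the problem: the format is valid ISO 8601 with a UTC offset, and for example Python's standard library parses it without issue (a quick sanity check on the format, not a statement about QuickSight's parser):

```python
from datetime import datetime, timedelta

# Parse the sample timestamp, including the "+05:30" timezone offset.
ts = datetime.fromisoformat("2026-01-07 00:01:00+05:30")
print(ts.utcoffset())  # 5:30:00
```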
Current manifest:

{
    "fileLocations": [{"URIPrefixes": ["S3addressplaceholder"]}],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ",",
        "textqualifier": "\"",
        "containsHeader": "true"
    }
}

Hi @listin

Welcome to the Quick community!

QuickSight can silently drop columns during S3 CSV ingestion via a manifest when parsing fails, for example because of inconsistent field counts from consecutive commas (empty fields) or complex values such as timestamps with timezone offsets.

Try removing textqualifier entirely: in an unquoted CSV with consecutive commas, a text qualifier can confuse field handling, and your sample data has no quotes around values. You can also add selectColumns to explicitly list all 59 column names from the header, forcing QuickSight to parse them.
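Before changing the manifest further, it may also be worth confirming that every row in the files actually parses to 59 fields. A minimal sketch using only the standard library (`field_count_histogram` is a made-up helper name):

```python
import csv
from collections import Counter

def field_count_histogram(path, delimiter=","):
    """Count how many rows have each field count. A well-formed CSV
    should yield a single key, e.g. {59: total_row_count}."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            counts[len(row)] += 1
    return dict(counts)
```

If the histogram shows more than one field count, some rows contain stray delimiters or quoting problems that could explain dropped columns.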

Please refer to the documentation and community post below, as they may help resolve the column parsing issue.

RESOLVED - The issue was related to SPICE storage capacity, not the manifest configuration.

After deleting some old datasets and data sources from SPICE storage and removing the textqualifier line entirely from the manifest file, all 59 columns are now loading correctly. Interestingly, I had tried increasing SPICE storage earlier (I have over 100 GB free in the account), but that didn’t resolve the issue - only deleting unused datasets worked.

This raises a follow-up question: Is there a limit to the total number of columns SPICE can handle across all datasets in an account (not just per dataset)? If so, how can I monitor or check if I am approaching this limit? The AWS documentation mentions a 2,048 column limit per dataset, but I’m wondering if there’s an aggregate account-level constraint that isn’t well documented.

Hi @listin

To monitor SPICE capacity, access the Manage Quick admin page and select SPICE capacity from the navigation pane. This shows total used/unused capacity (purchased + bundled) broken down by datasets. Note: Only admins can view this information. Configure SPICE memory capacity - Amazon Quick Suite

You can also check individual dataset column counts directly on the Datasets page. QuickSight enforces a 2,048-column limit per dataset; exceeding it causes ingestion failures. Data preparation limits - Amazon Quick Suite

To optimize SPICE usage, remove unnecessary columns using dataset recipes during preparation. Calculated fields created at the dataset level are stored in SPICE, while analysis-level calculated fields are computed at runtime. Item limits for Amazon Quick Sight analyses in the Quick Sight APIs - Amazon Quick Suite
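If you want to investigate an aggregate column count across datasets (per the question above), you could tally OutputColumns from the QuickSight API. A sketch assuming boto3 `describe_data_set` response shapes; `total_output_columns` is a hypothetical helper:

```python
def total_output_columns(describe_responses):
    """Sum OutputColumns across a list of DescribeDataSet API
    responses, each shaped like {"DataSet": {"OutputColumns": [...]}}."""
    return sum(
        len(resp.get("DataSet", {}).get("OutputColumns", []))
        for resp in describe_responses
    )

# With boto3 (not run here), the responses would come from something like:
#   qs = boto3.client("quicksight")
#   ids = [d["DataSetId"] for d in
#          qs.list_data_sets(AwsAccountId=account_id)["DataSetSummaries"]]
#   responses = [qs.describe_data_set(AwsAccountId=account_id, DataSetId=i)
#                for i in ids]
```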

Hi @Xclipse

Thank you for the detailed response!

Just to clarify - in my case, I’m well within the 2,048 column limit per dataset (only 59 columns), and we have over 100 GB of free SPICE capacity in the account. The strange behavior was that only 8 random columns were showing up without any ingestion errors or warnings.

What resolved it was deleting old unused datasets from SPICE, not adding more capacity. This makes me wonder if there’s some other constraint beyond total SPICE storage capacity - perhaps a metadata limit, total column count across all datasets, or some indexing threshold that isn’t documented?

Has anyone else encountered a situation where SPICE ingestion silently fails to recognize columns even when capacity is available, but works after cleanup? It seems like there might be an undocumented soft limit that gets hit before the hard capacity limit.

Hi @listin

When SPICE capacity approaches its purchased limit, metadata from unused datasets can accumulate and cause parsing issues during new dataset ingestion, even when individual datasets are well within column limits. Deleting unused datasets helps resolve these silent column recognition issues by reducing metadata overhead.

We recommend enabling SPICE auto-purchase capacity to avoid hitting hard capacity limits. Additionally, monitor SPICE usage through CloudWatch metrics to catch metadata-related anomalies early.

Please refer to the documentation below; it may be helpful for you.

Hi @listin

It’s been a while since we last heard from you. If you have any further questions, please let us know how we can assist you.

If we don’t hear back within the next 3 business days, we’ll proceed with closing/archiving this topic.

Thank you!

Hi @listin

Since we have not heard back from you, I’ll go ahead and close/archive this topic. However, if you have any additional questions, feel free to create a new topic in the community and link to this discussion for context.

Thank you!