Hi QS team!
We’ve been building a QuickSight Q flow on top of an existing dashboard to automate our weekly performance report. The flow has 11 nodes — it extracts data from multiple visuals, combines and formats the tables, and sends a styled HTML email.
When we first built and tested the flow, it worked well. We ran it successfully several times and the output was complete and correctly formatted. As we were about to deploy it, we noticed performance dropped significantly this week. We then invested significant time hardening the prompts — adding row-count verification, retry logic, anti-truncation guardrails, and exhaustive key-variant lookups — to make the flow more robust.
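For context, the verification/retry guardrail we added is roughly equivalent to the sketch below. The function names and the JSON payload shape are illustrative only, not actual QuickSight APIs — the point is just the pattern: re-run a node and compare the row count against what we expect before passing data downstream.

```python
import json


def extract_rows(payload):
    """Parse a node's JSON payload and return its rows (hypothetical shape)."""
    data = json.loads(payload)
    return data.get("rows", [])


def fetch_with_verification(run_node, expected_rows, max_retries=3):
    """Re-run a flow node until its row count matches expectations.

    `run_node` is a callable standing in for invoking one flow node;
    `expected_rows` is the count we verify against to catch truncation.
    """
    for attempt in range(1, max_retries + 1):
        rows = extract_rows(run_node())
        if len(rows) >= expected_rows:  # guard against silently truncated output
            return rows
        print(f"Attempt {attempt}: got {len(rows)} rows, "
              f"expected {expected_rows}; retrying")
    raise RuntimeError("Node output still truncated after retries")
```

Even with this guardrail in the prompts, runs still come back inconsistent, which is what prompted the questions below.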
However, we’re now seeing inconsistent behavior across runs using the exact same prompts. A few questions for the team:
- Is there a known issue with LLM context window or output truncation in multi-node flows, particularly when upstream nodes pass large JSON payloads?
- Are there any recommended patterns for building reliable multi-node flows on top of existing dashboards?
Thanks!