Working with Large Datasets and Creating a Filter

Hi QuickSight community, this is my first post and I am excited to be here.

I have a Postgres table (row count in the millions) that I’ve connected to, and I want to create an in-page filter to choose a specific company. When I try to create a filter on this company dimension, the filter times out. Can I create a dim table of company names/IDs in QuickSight based on my fact table, join it to my fact table, and use that dim table as the filter variable? Thank you!

Hi @jl-thirdwave We are excited to have you here!
My guess is that the number of unique companies is large, and that is what is causing the timeout. The best way to handle this is to create a second dataset that contains only the company field, de-duplicated. You can then add that second dataset to the analysis and build the filter from it. QuickSight will automatically map fields across datasets, so the filter will also apply to the visuals built on the original dataset. The filter should be much faster, since it only has to scan the short list of distinct company values rather than the full column. More on mapping fields below. I will mark this question as solved for now; please reply back if you need further help. Thanks!!

Thanks for the quick reply, I can already tell this is a strong community.

I understand that a unique table of company names is the way to go, thank you for clarifying, but I am wondering if I can build that dimension table in QuickSight as opposed to creating a new table in my database. Some unique or distinct function, perhaps?

This would be ideal because it means fewer tables to connect to, and as my fact table grows, the company dataset built in QuickSight would stay up to date as well.
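For what it's worth, QuickSight can build that de-duplicated dimension table for you without any changes to the database: when creating a new dataset from the same Postgres data source, choose the "Use custom SQL" option and define the dataset as a query. A minimal sketch, assuming a fact table named `public.fact_sales` with `company_id` and `company_name` columns (these names are illustrative, not from the original post):

```sql
-- Custom SQL for a new QuickSight dataset: one row per company.
-- Table and column names are placeholders; substitute your own.
SELECT DISTINCT
    company_id,
    company_name
FROM public.fact_sales;
```

Because the dataset is defined as a query against the live source, new companies appearing in the fact table show up automatically on the next direct query or SPICE refresh, so there is no separate table to maintain.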