I have created a visual that counts the total number of poles and those inspected so far.
I want to measure the number of poles in a specific timeframe (last week, last month, etc.).
The “problem” I have is that the ID field (which indicates the existence of the pole) as well as the inspection itself have a timestamp. So, let’s say, when I create a parameter, link it to the timeframe, and look at the last 7 days, it “deletes” the total pole count and only shows those inspected.
I presume that happens because the creation of the ID also has a timestamp, and it is prior to the last 7 days…
Can I “freeze” the timestamp measurement for certain fields only? That way the total number of poles would stay constant, and I could measure how many poles were inspected in each timeframe.
Thanks!
To keep the total number of poles per site fixed while varying the count of inspected poles by a selected timeframe in Amazon QuickSight, you can use one calculated field to “freeze” the total pole count and another to dynamically count the poles inspected within the selected timeframe.
Here’s a step-by-step guide on how to set this up:
Create a Calculated Field for Total Poles:
You need to ensure that your total count of poles remains constant regardless of the timeframe filter applied to the visualization. To do this, create a calculated field that sums the total number of poles ignoring the date filter. This can be achieved using a level-aware aggregation function or a window function depending on your dataset and database setup.
For instance:

```
sumOver({total_poles}, [], PRE_AGG)
```

This computes the total number of poles before aggregation. (Note that PRE_AGG calculations are still evaluated after regular filters; if the total must ignore filters entirely, the PRE_FILTER level can be used instead.)
Apply a Timeframe Filter for Inspected Poles:
To dynamically count the number of poles inspected in a specific timeframe, continue using the normal sum or count aggregation on the inspected field, but this time apply the date filters as desired (e.g., last 7 days, last month).
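As an alternative to a visual-level date filter, the timeframe condition can be built into the calculation itself. A minimal sketch, assuming hypothetical field names `ID` and `inspection_date` (adjust to your dataset):

```
distinct_countIf({ID}, dateDiff({inspection_date}, now(), 'DD') <= 7)
```

This counts distinct pole IDs whose inspection date falls within the last 7 days; the `7` could also be driven by a parameter so the reader can switch timeframes.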
Set Up Your Visualization:
Add both fields to your visualization (e.g., a bar chart). The first calculated field will always show the total poles, and the second field will adjust based on the timeframe filter applied.
Configure the Filter:
Add a date filter to your dashboard or analysis. Configure this filter to only affect the inspected poles field. This is crucial as it allows the total poles field to remain unaffected by the timeframe selection.
Testing and Validation:
Once set up, test by selecting various timeframes using your filter. The total poles should remain constant, and the inspected poles should vary according to the date filter.
Optimization and Performance:
Keep in mind that using calculated fields, especially with large datasets or complex calculations, can impact the performance of your dashboard. Monitor and optimize if necessary.
This setup should help you track the total number of poles consistently while allowing flexibility in analyzing the poles inspected during various timeframes.
Did this answer your question? If so, please help the community out by marking this answer as “Solution!”
Thanks, but unfortunately it does not work.
Currently, to count the number of assets I used distinct_count(ID); the reason is that if I do a “regular” count, it aggregates (the ID is a number).
Thus, when I do sumOver({ID}, [], PRE_AGG) it gives me a “wrong count” (12 billion…)
I also tried putting the calculated field (distinct_count(ID)) inside the sumOver, but it did not work…
Any ideas?
Is there another function where I can count with PRE_AGG?
Hi @Etai, this is indeed tricky: sumOver() sums the values themselves rather than counting distinct ones (which is why you see an inflated figure like 12 billion), and QuickSight restricts which aggregations can be nested inside level-aware calculations.
Since distinct_count(ID) gives you the count of unique IDs correctly, but sumOver() does not behave as expected when used directly, you might consider an alternative approach using calculated fields and level-aware aggregations.
Here’s a strategy you might try:
Create a New Calculated Field for Unique Identifier: Create a calculated field that returns a unique identifier for each row. If each row in your data source is unique, you might be able to use the row number as a unique identifier. Otherwise, you could concatenate multiple fields to form a unique string.
In level-aware functions such as sumOver(), the second argument is the list of partition fields; replace it with any fields you want to partition by (or leave it empty, [], to compute across the whole dataset).
Check Data Levels: Ensure that the calculated field is set at the correct level of detail. You may need to adjust the level of detail (LOD) to match the granularity at which you’re expecting to count distinct IDs.
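For the unique-identifier idea above, a minimal sketch, assuming hypothetical field names `site_id` and `ID` (substitute whichever fields together make a row unique in your data):

```
concat(toString({site_id}), '-', toString({ID}))
```

Concatenating the fields as strings avoids the numeric-aggregation problem entirely, since the result can only be counted, never summed.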
If these steps still don’t provide the correct results, it might be necessary to review the data model and how data is structured in your dataset. Sometimes, restructuring the data or adjusting the joins can resolve such issues.
Also, keep in mind that as of the latest updates, some functionalities like sumOver() with PRE_AGG for distinct counts might be limited, and it could be worthwhile to check the latest Amazon QuickSight documentation or the user forums for any new features or workarounds that might have been introduced.
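One function worth checking in the documentation is distinctCountOver(), which is the level-aware counterpart of distinct_count(). A sketch of how it might freeze the total, assuming your ID field is named `ID`:

```
distinctCountOver({ID}, [], PRE_FILTER)
```

The PRE_FILTER level is evaluated before the analysis filters are applied, so (unlike PRE_AGG, which runs after filters) this count should stay constant when a date filter or parameter changes, while your filtered distinct_count(ID) varies with the timeframe.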
We hope this solution worked for you. Let us know if this is resolved. And if it is, please help the community by marking this answer as a “Solution.” (click the check box under the reply)