I have a field "time_range" which holds the hour range in which each row was created (like 0:00 - 0:59, 13:00 - 13:59, etc.), and I want to visualize the average count for every time range. How can I do that? I also want to order the time ranges by hour.

Can you explain more of what you mean by average count of every time range? The average count of these three colors per time range, or the average count over all time ranges?

For each case you would do a countOver, but partition either by the group or by nothing.

countOver({metric}, [{color}]) or countOver({metric}, [])

For the second part, you will need to convert these groups to integers and then sort by that calculated field.

parseInt(substring({group_field}, 1, locate({group_field}, ':') - 1))

This will get the integer of the group and then you will sort by this field.
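The same extraction can be sketched outside QuickSight. Here is a minimal Python analogue of the substring/locate/parseInt logic (the sample labels are invented for illustration):

```python
def hour_of(time_range: str) -> int:
    """Extract the leading hour from a label like '14:00 - 14:59'."""
    # Mirrors substring({group_field}, 1, locate({group_field}, ':') - 1)
    return int(time_range[:time_range.index(":")])

labels = ["13:00 - 13:59", "0:00 - 0:59", "14:00 - 14:59"]
print(sorted(labels, key=hour_of))
# ['0:00 - 0:59', '13:00 - 13:59', '14:00 - 14:59']
```

Sorting by the parsed integer avoids the lexicographic ordering you would get from sorting the raw strings (where "13:00" sorts before "2:00").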

In my dataset, I have:
i) a field named "createdon" which stores the date on which the referral was created.
ii) a field named "hours_series" which contains the time range in which the referral was created. For example, if a user created a referral at 2:34 PM (i.e. between 2 PM and 3 PM), then the value of "hours_series" for that row will be the string "14:00 - 14:59".

In my visual, I want to show the average number of referrals per hour.
For that, we need to divide the count of referrals for that particular hour by the total number of days.
(Total number of days = most recent date in "createdon" − first date in "createdon")
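As a sanity check on that formula, here is a small Python sketch with made-up referral rows (the dates and counts are illustrative, not from the actual dataset):

```python
from datetime import date

# Hypothetical referrals: (created_on, hours_series label)
referrals = [
    (date(2023, 3, 1), "14:00 - 14:59"),
    (date(2023, 3, 2), "14:00 - 14:59"),
    (date(2023, 3, 3), "09:00 - 09:59"),
]

# Total number of days = latest createdon - earliest createdon
total_days = (max(r[0] for r in referrals) - min(r[0] for r in referrals)).days

# Count of referrals per hour label
counts = {}
for _, label in referrals:
    counts[label] = counts.get(label, 0) + 1

# Average referrals per hour = per-hour count / total days
avg_per_hour = {label: n / total_days for label, n in counts.items()}
print(total_days, avg_per_hour)
# 2 {'14:00 - 14:59': 1.0, '09:00 - 09:59': 0.5}
```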

Right now, I am fetching this data from the database with the following PostgreSQL query:

select count(*),
       cast(DATE_PART('day', '2023-04-07'::timestamp - '2023-03-01'::timestamp) as decimal),
       cast(DATE_PART('hour', P.CreatedOn) as integer) AS OnHour
from PatientInfo P, ReferrerDetail R, PatientDiagnosis L
where P.Id = R.PatientId and P.Id = L.PatientId
  and R.IsDeleted = false and P.IsDeleted = false
  and P.CreatedOn >= '2023-03-01' and P.CreatedOn <= '2023-04-07'
  and Urgency is not null and Urgency <> ''
group by DATE_PART('hour', P.CreatedOn);

But now I am switching my stats to QuickSight. How can I do this in QuickSight?

I am attaching output of that query and a sample of my dataset.

Kindly help, and thanks for the solution to the second part.


Have you tried my calculations?

Do you just need to add a division of total number of days?

After trying the calculations, we got close to the goal, but we still have an issue.

I am using count({hours_series}) / dateDiff(min(createdon), max(createdon)) to calculate the required average, but dateDiff() is returning a different value for each row. Since we want to divide count(hours_series) by the number of days, the value of dateDiff() should be the same for all rows to calculate an accurate average.

After observing the data, I perceive that the min() and max() functions are not returning the first and latest dates of the whole dataset; rather, they are returning the first and latest dates corresponding to each value of hours_series.
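That behavior can be reproduced in a few lines of Python: taking min/max inside each hours_series group gives a different day span per group than the single global span (the dates here are invented):

```python
from datetime import date

rows = [
    ("09:00 - 09:59", date(2023, 3, 1)),
    ("09:00 - 09:59", date(2023, 3, 5)),
    ("14:00 - 14:59", date(2023, 3, 10)),
    ("14:00 - 14:59", date(2023, 4, 7)),
]

# Per-group span: what min()/max() return when the visual groups by hours_series
per_group = {}
for label, d in rows:
    lo, hi = per_group.get(label, (d, d))
    per_group[label] = (min(lo, d), max(hi, d))
spans = {label: (hi - lo).days for label, (lo, hi) in per_group.items()}

# Global span: what a window min/max over the whole dataset returns
all_dates = [d for _, d in rows]
global_span = (max(all_dates) - min(all_dates)).days

print(spans)        # {'09:00 - 09:59': 4, '14:00 - 14:59': 28}
print(global_span)  # 37
```

The per-group spans (4 and 28 days) differ from the 37-day global span, which is why each row ends up with its own dateDiff() value.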

Is there any solution to this?

I am attaching the screenshot of data these calculated fields are returning.

1st column: hours_series
2nd column: count(hours_series)
3rd column: dateDiff(min(createdon), max(createdon))
4th column: count(hours_series) / dateDiff(min(createdon), max(createdon))

You can use minOver({createdon}, [], PRE_AGG).

Great, we got the correct dateDiff() after using minOver() and maxOver(). Now the last issue I am facing: when dividing count({createdon}) by dateDiff(minOver({createdon}, [], PRE_AGG), maxOver({createdon}, [], PRE_AGG)), it shows a mismatched aggregation error.

Is there any way to get around this?


Use countOver({hours_series}, [], PRE_AGG) as the numerator, so that both sides of the division are computed at the PRE_AGG level:

countOver({hours_series}, [], PRE_AGG) / dateDiff(minOver({createdon}, [], PRE_AGG), maxOver({createdon}, [], PRE_AGG))
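To see why computing everything before aggregation resolves the mismatch, here is a toy Python simulation: the window min/max over all rows gives one constant denominator, and dividing the per-hour count by it yields the per-hour average (the data and the per-hour partition of the count are assumptions for illustration):

```python
from datetime import date

rows = [
    ("14:00 - 14:59", date(2023, 3, 1)),
    ("14:00 - 14:59", date(2023, 3, 3)),
    ("09:00 - 09:59", date(2023, 3, 5)),
]

dates = [d for _, d in rows]
# Like minOver/maxOver with an empty partition: one value for every row
span_days = (max(dates) - min(dates)).days  # constant denominator

# Count per hours_series, computed over the unaggregated rows
per_hour = {}
for label, _ in rows:
    per_hour[label] = per_hour.get(label, 0) + 1

avg = {label: n / span_days for label, n in per_hour.items()}
print(span_days, avg)
# 4 {'14:00 - 14:59': 0.5, '09:00 - 09:59': 0.25}
```

Because both the count and the day span are plain row-level values here, the division is between two quantities at the same level, which is what the PRE_AGG form of the calculated field achieves.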

Thanks @Max for the help. The calculations are generating the expected output.