# Issue with Grouping Data and Calculating Sum per Group

Hello community,

I’m running into a problem with some data in my table: the data set can contain repeated values of a certain kind.

Example:

| id_group | id_empl | pieces |
|----------|---------|--------|
| 1        | 1       | 1      |
| 2        | 3       | 1      |
| 2        | 4       | 1      |

When working with the “pieces” field and performing a sum, I get a total of 3. However, my goal is to group the data by “id_group” and calculate the sum per group. In this structure, a group can have multiple records with different “id_empl,” but if the “id_group” repeats, the “pieces” values should be counted only once.
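A minimal sketch of the intended logic, using the hypothetical rows from the example above (plain Python, just to make the "count each group's pieces once" rule concrete):

```python
# Hypothetical rows from the example: (id_group, id_empl, pieces)
rows = [
    (1, 1, 1),
    (2, 3, 1),
    (2, 4, 1),
]

# A naive sum counts every row: 1 + 1 + 1 = 3
naive_total = sum(pieces for _, _, pieces in rows)

# Desired logic: keep one (id_group, pieces) pair per group, then sum
unique_per_group = {(id_group, pieces) for id_group, _, pieces in rows}
grouped_total = sum(pieces for _, pieces in unique_per_group)

print(naive_total, grouped_total)  # 3 2
```

With the duplicate `id_group = 2` collapsed, the sum drops from 3 to the desired 2.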

I understand this may seem simple, but I’m having difficulty ensuring that the visualization in my analysis reflects this logic. Any suggestions on how I can approach this efficiently?

Additionally, I have attached screenshots of my real scenario for reference.

The other screenshot:

@felipehaag.paytrack If you display all 3 columns, those values are not considered duplicates (as they aren’t); but if you display only id_group and pieces, then you can use the `distinct sum`.

Actually, I am using only the daily field (pieces in the hypothetical example) in the sum for a KPI-type visualization. I am using a sum and would like to group by the duplicate IDs. The KPI would then return 3 in my real scenario (as illustrated in the screenshots) and 2 in my hypothetical scenario (explained in the question).

It would be akin to executing:

```sql
SELECT SUM(pieces) AS total_pieces
FROM (SELECT DISTINCT id_group, pieces FROM test_12) AS subquery;
```
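That query can be checked end to end with an in-memory SQLite database (the table name `test_12` and the columns are taken from the post; this is only a sanity check of the SQL, not of QuickSight itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test_12 (id_group INTEGER, id_empl INTEGER, pieces INTEGER)"
)
conn.executemany(
    "INSERT INTO test_12 VALUES (?, ?, ?)",
    [(1, 1, 1), (2, 3, 1), (2, 4, 1)],
)

# DISTINCT collapses repeated (id_group, pieces) pairs before the outer SUM
total = conn.execute(
    "SELECT SUM(pieces) FROM (SELECT DISTINCT id_group, pieces FROM test_12)"
).fetchone()[0]
print(total)  # 2
```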

Hopefully I’m not too late. You could try a distinct_count for pieces if it’s always 1, or something like sum(max(pieces, [id])). It seems like a case for Using level-aware calculations in Amazon QuickSight - Amazon QuickSight
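For reference, the level-aware calculation suggested above, sum(max(pieces, [...])), takes the maximum of pieces within each partition and then sums those per-partition maxima. A rough Python emulation of that behavior, assuming id_group is the partition field (the suggestion writes `[id]`, so the exact field is an assumption here):

```python
from collections import defaultdict

# Hypothetical rows from the example: (id_group, id_empl, pieces)
rows = [(1, 1, 1), (2, 3, 1), (2, 4, 1)]

# Emulate sum(max(pieces, [id_group])): max of pieces within each
# id_group partition, then sum the per-group maxima
max_per_group = defaultdict(int)
for id_group, _, pieces in rows:
    max_per_group[id_group] = max(max_per_group[id_group], pieces)

total = sum(max_per_group.values())
print(total)  # 2
```

With one max of 1 per group, the sum is 2, matching the expected KPI value for the hypothetical data.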