Why are calculated fields missing/deleted after replacing a dataset with another dataset of the same columns, values, and structure?

Hi Team,

I have two datasets, a CSV file dataset and an S3 dataset, with the same column names, structure, values, and number of rows.

I created four calculated fields in the CSV file dataset.

After replacing the CSV dataset with the S3 dataset (the same CSV file placed in S3 and created as a dataset in QuickSight), the calculated fields are deleted or missing.

If I use any of the calculated fields in the visuals, it prompts me as below.

Why can't it recreate those calculated fields, as the Power BI tool does, given the same structure and column names?

Do I need to recreate those calculated fields every time I replace a dataset? If so, what if we have 100 or more calculated fields?

Hi @Venkat.Metadata

Based on your description, it appears you created a new dataset from the S3 file. Technically, that is a separate dataset, as opposed to replacing the CSV file with a file from the S3 bucket within your existing dataset.

Have you tried bringing your S3 file into your Quality_CSV dataset (using the Add Data button)? If you bring it in and then remove the CSV file from the dataset, you should end up with the same structure, and hopefully the calculated fields will work without issues.

Regards,
Giri


@Giridhar.Prabhu

FYI …

The Quality_CSV dataset was created from a CSV file on my local machine.

The Quality_Testing_S3 dataset was created from an S3 bucket in which I placed the same CSV file used in the Quality_CSV dataset.

Let me try your suggestion as well.

You were suggesting the following:

I have Quality_CSV and am adding Quality_Testing_S3 to it here.

However, I am not able to remove the Quality_CSV dataset.

I just created two datasets from S3 with the same columns and the same names.

I created a single calculated field, ParseDecimal2, in one dataset and pulled it into a KPI visual, as shown below.

While replacing the dataset with the other one, it asks for a mapping for ParseDecimal2:

ParseDecimal2 = parseDecimal(replace(replace({Value}, '$', ''), ',', ''))

I mapped ParseDecimal2 = Value, but the visual breaks.

What should I do here? Is there any solution?

Hi @Venkat.Metadata

I simulated your case. You are right; we are not able to remove the first table.

It may be because it is a CSV. I haven't done it in a while, but I have had cases where I made dataset changes involving relational tables, and I was able to remove them.

An alternative approach is to use CLI commands to update the dataset properties.

Personally, I haven't tried the CLI approach, so if you are familiar with CLI commands you may try this option and see if you can replace Quality_CSV with Quality_Testing_S3 that way.
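If it helps, here is a minimal dry-run sketch of that CLI route, expressed in Python. All IDs and file names below are hypothetical placeholders; a real `aws quicksight update-data-set` call needs your AWS account ID, your dataset ID, and a physical table map JSON repointed at the S3 source. The script only builds and prints the command rather than executing it:

```python
# Dry-run sketch: build (but do not execute) the AWS CLI call that would
# update the dataset in place. All IDs below are hypothetical placeholders.
account_id = "123456789012"        # hypothetical AWS account ID
dataset_id = "quality-csv-id"      # hypothetical QuickSight dataset ID

cmd = [
    "aws", "quicksight", "update-data-set",
    "--aws-account-id", account_id,
    "--data-set-id", dataset_id,
    "--name", "Quality_CSV",
    # PhysicalTableMap JSON repointed at the S3 source. Because the dataset
    # is updated in place (not recreated), the LogicalTableMap -- where
    # dataset-level calculated fields live -- can be preserved.
    "--physical-table-map", "file://s3-physical-table-map.json",
    "--import-mode", "SPICE",
]

# Print the command instead of running it.
print(" ".join(cmd))
```

To actually run it you could pass `cmd` to `subprocess.run`. The key idea is that updating the existing dataset in place, rather than creating a second dataset, is what lets the calculated fields survive the source change.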

Regards,
Giri

Hi @Venkat.Metadata

In this case, the Value field in your new dataset is coming in as a dimension (a string, I suppose). If you place it in the value field well without converting it, it will fail, because the aggregate function will treat the text column as invalid.
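For intuition, here is a pure-Python stand-in for what the ParseDecimal2 expression does (Python here is only an illustration; `parseDecimal` and `replace` in the formula are QuickSight's own functions). Trying to sum the raw strings would raise a TypeError, which mirrors why the KPI breaks when the string Value column is mapped directly:

```python
def parse_decimal(value: str) -> float:
    """Mimic parseDecimal(replace(replace({Value}, '$', ''), ',', '')):
    strip the currency symbol and thousands separators, then parse."""
    return float(value.replace("$", "").replace(",", ""))

raw = ["$1,234.50", "$765.50"]  # made-up sample values

# sum(raw) would raise a TypeError -- strings cannot be aggregated,
# which is exactly why the KPI visual fails when ParseDecimal2 is
# mapped straight onto the string Value column.
total = sum(parse_decimal(v) for v in raw)
print(total)  # 2000.0
```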

Regards,
Giri

@Giridhar.Prabhu ,

If I use the CLI approach, will the calculated fields disappear after replacing the dataset with another dataset of the same structure?

The Value field in both datasets is the same type, i.e. String. (All data types are the same for all columns.)

In Power BI, the calculated columns and measures carry over automatically even if we change the data source, as long as the new data source contains the same column names and data types. We don't need to recreate them each time we replace the data source.

Why is QuickSight unable to create the same ParseDecimal2 calculated field in the Quality_Testing_S3 dataset (which has the same string Value column) and place it in the KPI visual, avoiding the mapping step, so that the visual won't break or produce errors in the dashboard even across multiple dataset replacements?

Hi @Giridhar.Prabhu ,

With a single table, the Remove option is disabled.

After adding another dataset, the Remove option is enabled.

Both are S3 datasets.

I'm learning a lot about QuickSight with the help of the QS Community. Thanks!

Please help me become an expert in QS.

Hi Venkat,

Changing the source of the underlying data should not cause the calculated field to be affected.

In my tests, even removing the CSV file and replacing it with a different CSV file of the same structure did not affect the calculated field within the dataset. However, removing the old field did have an impact on the analysis. Hence, I suggested the CLI approach.

If the CLI command can change the reference in your data source from a local file to a file coming from your S3 bucket without issues, then things should work fine.
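One way to check whether the calculated fields survive such an update is to inspect the output of `aws quicksight describe-data-set`: dataset-level calculated fields appear as CreateColumnsOperation entries under the dataset's LogicalTableMap. Here is a sketch over a hypothetical response fragment (the IDs and table alias are made up; the nesting follows the DescribeDataSet API shape):

```python
import json

# Hypothetical fragment of an `aws quicksight describe-data-set` response.
# Dataset-level calculated fields show up as CreateColumnsOperation entries
# inside the LogicalTableMap's DataTransforms.
response = json.loads("""
{
  "DataSet": {
    "LogicalTableMap": {
      "tbl-1": {
        "Alias": "Quality_CSV",
        "DataTransforms": [
          {
            "CreateColumnsOperation": {
              "Columns": [
                {
                  "ColumnName": "ParseDecimal2",
                  "ColumnId": "col-1",
                  "Expression": "parseDecimal(replace(replace({Value},'$',''),',',''))"
                }
              ]
            }
          }
        ]
      }
    }
  }
}
""")

# Collect every calculated-field name defined in the dataset.
calc_fields = [
    col["ColumnName"]
    for table in response["DataSet"]["LogicalTableMap"].values()
    for transform in table.get("DataTransforms", [])
    for col in transform.get("CreateColumnsOperation", {}).get("Columns", [])
]
print(calc_fields)  # ['ParseDecimal2']
```

If ParseDecimal2 still appears in this list after the update, the calculated field was preserved and only the physical source changed.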

Regards,
Giri

Hi @Venkat.Metadata

I think that, conceptually, replacing the data source has the same impact in both QuickSight and Power BI.

The issue you are facing is that you are creating a different dataset, not changing the source alone.

Even in Power BI, if you create a different dataset/model, you will not get the measures automatically (assuming the measure was defined in your underlying data model).

Regards,
Giri


Hi @Venkat.Metadata

I agree this is clumsy. There may be some constraint due to which it is restricted, but the reason should be transparent, for example via a tooltip with an explanation.

Regards,
Giri

Hi @Venkat.Metadata,
It's been a while since we last heard from you on this thread. Do you have any additional questions regarding this post?

If we do not hear back within the next 3 business days, I’ll close out this topic.

Thank you!

I don't have any questions regarding this. The calculated fields are still missing when we replace the datasets!

Hi @Venkat.Metadata,
I would suggest filing a support ticket if the calculated fields are still missing in your instance.