After replacing the CSV dataset with an S3 dataset (the same CSV file placed in S3 and created as a new dataset in QS), the calculated fields are deleted or missing.
Based on your description, it appears you created a new dataset from the S3 file. Technically, that is a separate dataset, as opposed to replacing the CSV file in your existing dataset with a file from the S3 bucket.
Have you tried bringing your S3 file into your Quality_CSV dataset (using the Add Data button)? If you bring that in and then remove the CSV file from the dataset, you should end up with the same structure, and hopefully the calculated fields will work without issues.
I simulated your case. You are right, we are not able to remove the first table.
It may be because it is a CSV? I haven’t done this in a while, but I have had cases where I made dataset changes involving relational tables and I was able to remove them.
An alternative approach is to use CLI commands to update the dataset properties.

Personally, I haven’t tried the CLI approach. So if you are familiar with CLI commands, you may try this option and see if you can replace Quality_CSV with Quality_Testing_S3 via the CLI.
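For anyone who wants to try the CLI route, a rough sketch of what it could look like with the AWS CLI is below. The account ID, dataset ID, table key, data source ARN, and column list are all placeholders, not values from this thread; you would first inspect your real dataset and then reuse its `PhysicalTableMap` structure, swapping only the source.

```shell
# Inspect the existing dataset to capture its current PhysicalTableMap
# (account ID and dataset ID below are placeholders)
aws quicksight describe-data-set \
  --aws-account-id 123456789012 \
  --data-set-id quality-dataset-id

# Update the same dataset so its physical table points at the S3 data
# source instead of the uploaded CSV. The table key and the column
# names/types must match the old table for calculated fields to survive.
aws quicksight update-data-set \
  --aws-account-id 123456789012 \
  --data-set-id quality-dataset-id \
  --name "Quality_Testing_S3" \
  --import-mode SPICE \
  --physical-table-map '{
    "quality-table": {
      "S3Source": {
        "DataSourceArn": "arn:aws:quicksight:us-east-1:123456789012:datasource/quality-s3-source",
        "InputColumns": [
          {"Name": "Value", "Type": "STRING"}
        ]
      }
    }
  }'
```

The idea is that because the dataset ID, table key, and column structure are preserved, only the underlying source changes, so calculated fields defined on the dataset (and visuals built on it) should stay intact.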
In this case, the Value field in your new dataset is coming in as a dimension (a String, I suppose). If you place it in the value field well without converting it, the visual will fail, since the aggregate function will treat the text column as invalid.
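For reference, a QuickSight calculated field that converts the string column to a number could look like the following (the column name `Value` is taken from this thread; `parseDecimal` is QuickSight's built-in string-to-decimal function):

```
parseDecimal({Value})
```

`parseDecimal` returns null for rows it cannot parse, so aggregate functions in the KPI visual can then operate on the numeric result instead of failing on a string dimension.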
The Value field in both datasets is the same, i.e., String. (All data types are the same for all the columns.)
In Power BI, calculated columns and measures are recreated automatically even if we change the data source, as long as the new data source contains the same column names and data types. We don’t need to recreate them again and again, even if we replace the data source multiple times.
Why is QS unable to create the same ParseDecimal2 calculated field in the Quality_S3 dataset (which has the String ‘Value’ column) and place it in that KPI visual, avoiding the mapping concept, so that the visual won’t break and the dashboard avoids errors even if we replace the datasets multiple times?
Changing the source of the underlying data should not cause the calculated field to be affected.
When I tested, even after removing the CSV file and replacing it with a different CSV file with the same structure, the calculated field within the dataset was not impacted. However, removing the old field did have an impact on the analysis. Hence, I suggested the CLI approach.
If the CLI command can change the reference to your datasource from a local file to a file that is coming from your S3 bucket without issues then things should work fine.
I guess conceptually replacing the datasource has the same impact in both QuickSight and Power BI.
The issue you are facing is that you are creating a different dataset rather than changing the source alone.
Even in Power BI if you create a different dataset/model then you will not have the measures automatically (assuming the measure was defined in your underlying data model).
I agree this is clumsy. There may be some constraint due to which it is restricted, but QuickSight should be transparent about it, for example by providing a tooltip with an explanation.