Row-level security using a DynamoDB dataset

Hi, I am trying to achieve row-level security (RLS) in QuickSight. For the RLS dataset I am using DynamoDB in the background. My DynamoDB table has the columns GroupArn, UserArn, colA, colB. I want row-level security based on colA for a few datasets and on colB for a few others. So I am using the DynamoDB Athena connector to create a QuickSight dataset with the columns “GroupArn, UserArn, colA”, but when the dashboard loads, it takes the value of colB into account as well when filtering rows. This happens when I create the dataset through the CLI.
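(For concreteness, a hypothetical sketch of the RelationalTable definition described above — the column names come from this post, while the datasource ARN, table name, and schema are placeholders:)

```json
{
    "PhysicalTableMap": {
        "rls-physical": {
            "RelationalTable": {
                "DataSourceArn": "arn:aws:quicksight:eu-central-1:123456789012:datasource/<athena-datasource-id>",
                "Schema": "default",
                "Name": "rls_table",
                "InputColumns": [
                    { "Name": "GroupArn", "Type": "STRING" },
                    { "Name": "UserArn", "Type": "STRING" },
                    { "Name": "colA", "Type": "STRING" }
                ]
            }
        }
    }
}
```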

But if I create the RLS dataset by manually creating a new dataset in QuickSight and using the “exclude column” feature, this issue does not happen.

Can someone help here?


After the dataset has been created through the CLI:

  1. What output columns are shown when you make a describe-data-set API call?
  2. What columns are displayed in QuickSight data preparation mode when you edit the dataset you created through the CLI?


The output columns and the columns displayed in QuickSight data preparation mode are the same: the columns I selected in “InputColumns”, i.e. “GroupArn, UserArn and colA” in the above use case.

Also an update: when I use the CustomSql approach instead of the RelationalTable approach to create the dataset, this problem does not exist.
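(For reference, a hypothetical sketch of the CustomSql variant meant here — only the columns listed in the SQL ever reach the dataset. The datasource ARN, table name, and query are placeholders:)

```json
{
    "PhysicalTableMap": {
        "rls-custom-sql": {
            "CustomSql": {
                "DataSourceArn": "arn:aws:quicksight:eu-central-1:123456789012:datasource/<athena-datasource-id>",
                "Name": "rls_custom_sql",
                "SqlQuery": "SELECT GroupArn, UserArn, colA FROM rls_table",
                "Columns": [
                    { "Name": "GroupArn", "Type": "STRING" },
                    { "Name": "UserArn", "Type": "STRING" },
                    { "Name": "colA", "Type": "STRING" }
                ]
            }
        }
    }
}
```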


I have tested creating an RLS dataset on a Redshift table through the CLI and it works.

JSON definition of the RLS dataset (RelationalTable approach):

The table has 3 columns (username, product_category, cc_name); the LogicalTableMap with a ProjectOperation contains only the columns I require (username, product_category).

Note: without LogicalTableMap > ProjectedColumns, it will bring in all the columns of the table.

    "AwsAccountId": "123456789012",
    "DataSetId": "user_product_category_rls_mv_cli",
    "Name": "user_product_category_rls_mv_cli",
    "PhysicalTableMap": {
        "user-product-categoryrls-mv-cli": {
            "RelationalTable": {
                "DataSourceArn": "arn:aws:quicksight:eu-central-1:123456789012:datasource/5bc02bef-e0d7-4d89-8092-3b3e6fca8b64",
                "Schema": "public",
                "Name": "user_product_category_rls_mv",
                "InputColumns": [{
                        "Name": "username",
                        "Type": "STRING"
                        "Name": "product_category",
                        "Type": "STRING"
                        "Name": "cc_name",
                        "Type": "STRING"
     "LogicalTableMap": {
        "user-product-category-rls-mv-cli-logical": {
            "Alias": "user-product-category-rls-mv-cli-logical",
            "DataTransforms": [{
                "ProjectOperation": {
                    "ProjectedColumns": [
                        "username", "product_category"
            "Source": {
                "PhysicalTableId": "user-product-categoryrls-mv-cli"
    "ImportMode": "DIRECT_QUERY",
    "Permissions": [{
        "Principal": "arn:aws:quicksight:eu-central-1:123456789012:user/default/Admin-OneClick/xxxxx-Isengard",
        "Actions": [
    "DataSetUsageConfiguration": {
        "DisableUseAsDirectQuerySource": true,
        "DisableUseAsImportedSource": true

Data preparation then shows only the required columns.

In custom SQL, you are specifying the required columns directly, which has the same effect.

Thanks for sharing this. In my understanding, though, data preparation showed only the required columns even when we don’t use the LogicalTableMap and just list the columns in the InputColumns of the RelationalTable. But I believe using the LogicalTableMap would solve my problem above.
One follow-up question: would you happen to know what the purpose of InputColumns is in that case, when creating a dataset?

The input columns provide details on the columns of the underlying table (InputColumn - Amazon QuickSight). You can then reference them during transform operations (LogicalTable - Amazon QuickSight).
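To illustrate, a hypothetical LogicalTableMap fragment using the columns from the Redshift example above: the transforms reference columns declared in InputColumns, here projecting two of them and then renaming one (the “category” name is invented for the example):

```json
{
    "DataTransforms": [
        {
            "ProjectOperation": {
                "ProjectedColumns": ["username", "product_category"]
            }
        },
        {
            "RenameColumnOperation": {
                "ColumnName": "product_category",
                "NewColumnName": "category"
            }
        }
    ]
}
```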