When migrating assets across accounts, does the Asset Bundle Export job produce a manifest.json file?

Hi all,

I’m working with the QuickSight asset bundle APIs to migrate dashboards between accounts using StartAssetBundleExportJob and StartAssetBundleImportJob. A few questions have come up during testing:

1. Is manifest.json a required part of a valid asset bundle export?

In one of our export jobs (using valid-looking dashboard ARNs), the resulting .qs zip file includes dashboards/, datasets/, and analyses/ folders - but no manifest.json.

When attempting to import that file, QuickSight returns a misleading error:

"Zip file size less than minimum expected zip file size. Probably not a zip file or a corrupted zip file."

2. Should the manifest.json be generated automatically by a successful export, and is it strictly required for the import to succeed?

3. Does the import source for a bundle have to be an S3 URI?

The documentation shows that AssetBundleImportSource accepts a Body parameter (for raw file bytes), but I also see examples using S3Uri.

4. Are there any functional differences or limitations when using Body vs. S3Uri, particularly for larger bundles or CI/CD workflows?

Thanks in advance for any clarification!
Dave

I was able to resolve this; answers to each question are below:

BELOW ASSUMES YOU ARE USING BOTO3 - I USED A PYTHON SCRIPT FOR MY BUNDLE OPS

1. Is manifest.json a required part of a valid asset bundle export?
Not necessarily.

Many LLMs will suggest that this is a required file produced by the asset bundle export; it is not.

For S3 data sources it's possible a manifest is required, but I was able to bundle export/import a data source connection without one. manifest.json is not an expected part of the .qs file produced by the bundle export.
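
For context, here is a minimal sketch of the export side that produced my .qs file (the account ID, job ID, and dashboard ARN are placeholders, and the polling interval is arbitrary):

import time
import urllib.request

import boto3

qs_client = boto3.client("quicksight")

# Kick off the export; IncludeAllDependencies pulls in the datasets,
# data sources, etc. that the dashboard depends on.
qs_client.start_asset_bundle_export_job(
    AwsAccountId="account_id",
    AssetBundleExportJobId="EXPORT_JOB_ID",
    ResourceArns=["arn:aws:quicksight:us-east-1:account_id:dashboard/dashboard_id"],
    IncludeAllDependencies=True,
    ExportFormat="QUICKSIGHT_JSON",
)

# Poll until the job finishes, then download the .qs zip from the
# pre-signed DownloadUrl (only present on success, and short-lived).
while True:
    job = qs_client.describe_asset_bundle_export_job(
        AwsAccountId="account_id",
        AssetBundleExportJobId="EXPORT_JOB_ID",
    )
    if job["JobStatus"] in ("SUCCESSFUL", "FAILED"):
        break
    time.sleep(5)

if job["JobStatus"] == "SUCCESSFUL":
    with urllib.request.urlopen(job["DownloadUrl"]) as resp, open("bundle.qs", "wb") as f:
        f.write(resp.read())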

2. Should the manifest.json be generated automatically by a successful export, and is it strictly required for the import to succeed?
Again, no.

For S3 data sources this might be the case, but non-S3 data sources do not require manifest.json.

3. Does the import source for a bundle have to be an S3 URI?
No. The .qs file can be imported from any location; it does not have to be in an S3 bucket (though for more complex CI/CD use cases, S3 might be the best approach).
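
For a local file, note that Body expects the raw bytes of the .qs zip, not a path to it; a minimal sketch (the file path, account ID, and job ID are placeholders):

import boto3

qs_client = boto3.client("quicksight")

# Read the raw bytes of the exported bundle; boto3 handles the encoding.
with open("bundle.qs", "rb") as f:
    bundle_bytes = f.read()

qs_client.start_asset_bundle_import_job(
    AwsAccountId="account_id",
    AssetBundleImportJobId="IMPORT_JOB_ID",
    AssetBundleImportSource={"Body": bundle_bytes},
)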

4. Are there any functional differences or limitations when using Body vs. S3Uri, particularly for larger bundles or CI/CD workflows?
No. I will note that Body and S3Uri are mutually exclusive: you have to use one or the other, not both (which makes sense).
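
For completeness, the S3Uri variant looks like this (the bucket and key are placeholders); for CI/CD pipelines that already stage artifacts in S3, this avoids shipping the file bytes inline with the API call:

import boto3

qs_client = boto3.client("quicksight")

qs_client.start_asset_bundle_import_job(
    AwsAccountId="account_id",
    AssetBundleImportJobId="IMPORT_JOB_ID",
    AssetBundleImportSource={"S3Uri": "s3://my-bundle-bucket/bundle.qs"},
)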

Edge Case:
One edge case I do want to flag for anyone it may help: when I included the PrefixForAllResources parameter (under ResourceIdOverrideConfiguration in OverrideParameters) in the code below, the bundle import operation failed with the error "The provided subnets are invalid. Please make sure all subnets exist."

The documentation does not specify the accepted string format for PrefixForAllResources; it's possible the failure was caused by the trailing "-" in "import-" (in that scenario, the bundle job starts successfully and subsequently fails with the subnet error).

I would advise not using any special characters in PrefixForAllResources, and, if the subnet error is thrown, removing the argument entirely.

import boto3

qs_import_client = boto3.client("quicksight")

qs_import_client.start_asset_bundle_import_job(
    AwsAccountId="account_id",
    AssetBundleImportJobId="IMPORT_JOB_ID",
    AssetBundleImportSource={
        # Body takes the raw bytes of the .qs file, not a path string.
        "Body": open(".qs file location", "rb").read()
    },
    OverrideParameters={
        # 'ResourceIdOverrideConfiguration': {
        #     'PrefixForAllResources': 'import-'  # <--- Excluded due to error
        # },
        "VPCConnections": [
            {
                "VPCConnectionId": "VPC ID from your bundle",
                "SubnetIds": [
                    "subnet-1",
                    "subnet-2",
                    "subnet-3",
                ],
                "SecurityGroupIds": [
                    "sg-1",
                ],
            }
        ],
        "DataSources": [
            {
                "DataSourceId": "Data Source ID from your bundle",
                "Name": "New Data Source Name",
                "DataSourceParameters": {
                    "RedshiftParameters": {
                        "Host": "Target environment host address",
                        "Database": "Target environment database",
                        "Port": 5439,  # target environment port (int, not string)
                    }
                },
                "Credentials": {
                    "CredentialPair": {
                        "Username": "Database Auth Username",
                        "Password": "Database Auth Password",
                    }
                },
            }
        ],
    },
)
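
Because the import job starts successfully and only fails later (as with the subnet error above), it's worth polling describe_asset_bundle_import_job to surface the actual failure reason; a minimal sketch, reusing the client from the block above:

import time

while True:
    job = qs_import_client.describe_asset_bundle_import_job(
        AwsAccountId="account_id",
        AssetBundleImportJobId="IMPORT_JOB_ID",
    )
    if job["JobStatus"] not in ("QUEUED_FOR_IMMEDIATE_EXECUTION", "IN_PROGRESS", "FAILED_ROLLBACK_IN_PROGRESS"):
        break
    time.sleep(5)

if job["JobStatus"] != "SUCCESSFUL":
    # Errors carries the detailed failure messages (e.g., the subnet error above).
    print(job["JobStatus"], job.get("Errors"))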