Unlock data insights from custom Jira projects using generative BI in Amazon QuickSight

Extracting actionable insights from project management tools like Atlassian Jira has become crucial for organizations seeking to optimize their processes and drive innovation. However, analyzing custom Jira projects can be challenging, especially when dealing with complex data structures and diverse project configurations.

Amazon QuickSight and its generative business intelligence (BI) capabilities can transform how businesses interact with and analyze their Jira data. The QuickSight integration with Jira lets users explore their data with natural language queries, create visuals, and uncover patterns that might otherwise remain hidden. By combining the flexibility of custom Jira projects with the intuitive interface of QuickSight generative BI, organizations can now unlock valuable insights with ease and efficiency.

In this post, we explore how QuickSight generative BI can help you extract meaningful insights from your custom Jira projects. We demonstrate practical use cases, showcase the technology’s capabilities, and provide guidance on how to implement this solution in your environment.

Solution overview

This solution integrates Jira with QuickSight to provide a powerful, automated pipeline for visualizing project management data. By using AWS services such as AWS Lambda, Amazon Simple Storage Service (Amazon S3), AWS Glue, Amazon Athena, and QuickSight, this architecture extracts data from Jira, processes it, and creates real-time visualizations that offer actionable insights into project progress and team performance. With QuickSight generative BI capabilities, users can ask natural language questions about their Jira data, empowering both technical and non-technical users to uncover insights quickly.

This custom approach offers increased flexibility over Amazon AppFlow, which typically restricts extraction to standard fields and might not capture custom Jira fields or hierarchical relationships between fields. By orchestrating the data flow through Lambda, Amazon S3, AWS Glue, Athena, and QuickSight, the solution delivers a comprehensive and tailored dataset—empowering teams with updated, actionable insights for enhanced project visibility and data-driven decision-making.

The following architecture diagram shows the flow of each step of the data pipeline, from triggering a daily event to pull data from Jira to updating dashboards in QuickSight. This automation makes sure that teams have access to the latest project data without manual updates.

The workflow consists of the following steps:

  1. Amazon EventBridge is configured to trigger the data ingestion process daily. This trigger can be further customized and set to the required refresh frequency for your use case. EventBridge triggers Lambda to initiate data extraction from Jira.
  2. Lambda pulls data from Jira using its API when triggered by EventBridge. The Lambda function retrieves relevant project data from Jira based on specified parameters.
  3. The raw data from Jira is stored in the S3 raw data bucket. This bucket serves as the storage location for unprocessed data, providing a centralized repository for all incoming Jira data.
  4. AWS Glue processes and transforms the raw data stored in Amazon S3. AWS Glue performs data cleaning, structuring, and cataloging, making the data ready for further analysis.
  5. The transformed data is saved into the S3 processed data bucket. This processed bucket holds structured and cleaned data, optimized for querying and analysis.
  6. Athena is configured to query the transformed data stored in the processed S3 bucket. Athena enables SQL-based querying, and custom views can be created to support hierarchical structures like themes, epics, and tasks.
  7. QuickSight connects to Athena to visualize the processed Jira data. QuickSight dashboards allow users to explore data through charts and graphs, drill down into hierarchies, and use Amazon Q in QuickSight for natural language querying, enabling real-time insights.

The following sections walk through the steps for extracting, transforming, and visualizing Jira data using AWS services. By following these steps, you can automate data retrieval from Jira, transform it for analysis, and create meaningful visualizations in QuickSight.

Prerequisites

To enable the integration between QuickSight and Jira, you need the following:

  • AWS account – An active AWS account with access to Amazon Athena, Amazon EventBridge, AWS Glue, AWS Lambda, Amazon QuickSight, and Amazon S3. If you don’t have an account yet, you can create one on the AWS website.
  • QuickSight subscription – Obtain a subscription to QuickSight.
  • QuickSight access to Amazon S3 and Athena – For instructions, see Authorizing connections to Amazon Athena.

For Jira access, you need the following resources:

  • A valid Jira account (compatible with Jira Cloud and Jira Server versions 8.0 and above). Jira Cloud and Jira Server both provide robust REST APIs, but they differ primarily in authentication and endpoint structure. Jira Cloud uses cloud-based endpoints and requires API tokens for secure access, whereas Jira Server might rely on alternative authentication methods (such as basic auth or session-based tokens) and can have subtle variations in API endpoints and functionality. Review the specific API documentation for your Jira environment to achieve a seamless integration.
  • A Jira API token with appropriate permissions.
  • API access to relevant Jira projects.
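
Before building the pipeline, you can verify that your token and domain work with a quick call to the Jira REST API. The following is a minimal sketch, assuming Jira Cloud; the domain, email, and token values are placeholders:

import requests
from requests.auth import HTTPBasicAuth

# Placeholder values -- replace with your own Jira Cloud site, email, and API token
DOMAIN = "https://your-site.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

# /rest/api/3/myself returns the profile of the authenticated user
response = requests.get(
    f"{DOMAIN}/rest/api/3/myself",
    headers={"Accept": "application/json"},
    auth=HTTPBasicAuth(EMAIL, API_TOKEN),
    timeout=10,
)
response.raise_for_status()
print(f"Authenticated as: {response.json()['displayName']}")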

Make sure you have the necessary AWS Identity and Access Management (IAM) roles and policies set up with permissions for the following actions:

Create and manage Lambda functions:

  • lambda:CreateFunction
  • lambda:UpdateFunctionCode
  • lambda:InvokeFunction
  • lambda:DeleteFunction

Read from and write to S3 buckets:

  • s3:GetObject
  • s3:PutObject
  • s3:ListBucket
  • s3:DeleteObject

Create and run AWS Glue jobs:

  • glue:CreateJob
  • glue:StartJobRun
  • glue:GetJobRun
  • glue:DeleteJob

Execute Athena queries:

  • athena:StartQueryExecution
  • athena:GetQueryExecution
  • athena:GetQueryResults
  • athena:ListDatabases
  • athena:ListTableMetadata
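
As a reference, the following sketch shows one way these actions could be grouped into a single policy document; the wildcard resources are placeholders, and in practice you should scope them to your specific functions, buckets, and jobs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LambdaManagement",
      "Effect": "Allow",
      "Action": ["lambda:CreateFunction", "lambda:UpdateFunctionCode", "lambda:InvokeFunction", "lambda:DeleteFunction"],
      "Resource": "*"
    },
    {
      "Sid": "S3DataAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:DeleteObject"],
      "Resource": "*"
    },
    {
      "Sid": "GlueJobs",
      "Effect": "Allow",
      "Action": ["glue:CreateJob", "glue:StartJobRun", "glue:GetJobRun", "glue:DeleteJob"],
      "Resource": "*"
    },
    {
      "Sid": "AthenaQueries",
      "Effect": "Allow",
      "Action": ["athena:StartQueryExecution", "athena:GetQueryExecution", "athena:GetQueryResults", "athena:ListDatabases", "athena:ListTableMetadata"],
      "Resource": "*"
    }
  ]
}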

Extract Jira data using Lambda

The first step in this solution is to set up a Lambda function that periodically retrieves data from Jira and stores it in an S3 bucket. Alternatively, you can add this code to your extract, transform, and load (ETL) script, which eliminates the need for a separate Lambda function.

Complete the following steps:

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose Create function.
  3. Leave the default as Author from scratch.
  4. Enter pull-jira-data for Function name, choose Python 3.13 for Runtime, and choose Create function.
  5. Use the following Lambda function code, updating it according to your requirements. This function fetches issues from Jira using the Jira API, then saves the data in Amazon S3. The API token, email, domain, project key, and bucket name are read from environment variables, which you set in a later step.
  6. Choose Deploy.

The following is the sample code for the Lambda function:

# Import required libraries
import json
import boto3
import requests
import os
import time
import random
from requests.auth import HTTPBasicAuth
from datetime import datetime
from requests.exceptions import RequestException

# Configuration variables
API_TOKEN = os.environ.get('JIRA_API_TOKEN')
EMAIL = os.environ.get('JIRA_EMAIL')
DOMAIN = os.environ.get('JIRA_DOMAIN')
PROJECT_KEY = os.environ.get('JIRA_PROJECT_KEY')
S3_BUCKET = os.environ.get('S3_BUCKET')

# Initialize AWS S3 client
s3_client = boto3.client('s3')

# Function to fetch issues from Jira API with retry logic
def get_jira_issues():
    url = f"{DOMAIN}/rest/api/3/search"
    query = {'jql': f'project = {PROJECT_KEY}', 'startAt': 0, 'maxResults': 100, 'fields': '*all'}
    headers = {'Accept': 'application/json'}
    
    # Retry configuration
    max_retries = 3
    retry_delay = 1  # Initial delay in seconds
    
    for attempt in range(max_retries):
        try:
            # Added timeout parameter to prevent resource exhaustion
            response = requests.get(
                url, 
                headers=headers, 
                params=query, 
                auth=HTTPBasicAuth(EMAIL, API_TOKEN), 
                timeout=10  # 10 second timeout
            )
            
            response.raise_for_status()  # Raise exception for 4XX/5XX responses
            
            if response.status_code == 200:
                return response.json()
        except RequestException as e:
            print(f"Request failed (attempt {attempt+1}/{max_retries}): {str(e)}")
            if attempt < max_retries - 1:
                # Exponential backoff with jitter
                sleep_time = retry_delay * (2 ** attempt) + (random.random() * 0.1)
                time.sleep(sleep_time)
            else:
                print("Max retries reached. Unable to fetch Jira issues.")
                return None
    
    print(f"Failed to fetch issues: {response.status_code}, {response.text}")
    return None

def save_to_s3(data, filename):
    # Function to save data to S3 bucket
    try:
        # Upload JSON data to specified S3 bucket and filename
        s3_client.put_object(Bucket=S3_BUCKET, Key=filename, Body=json.dumps(data), ContentType='application/json')
    except Exception as e:
        print(f"Failed to save to S3: {str(e)}")

# Main AWS Lambda function handler to fetch Jira issues
def lambda_handler(event, context):
    issues_data = get_jira_issues()
    if issues_data:
        timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
        filename = f'jira-data/raw/{timestamp}_issues.json'
        save_to_s3(issues_data, filename)
        return {'statusCode': 200, 'body': json.dumps('Jira data fetched and stored successfully!')}
    else:
        return {'statusCode': 500, 'body': json.dumps('Failed to fetch Jira data')}
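
The sample function retrieves only the first page of results (up to 100 issues). For larger projects, you would page through the search results by advancing startAt. The following is a minimal sketch of that loop, reusing the configuration, imports, and credentials from the sample function (retry logic omitted for brevity):

def get_all_jira_issues():
    # Page through the Jira search results by advancing startAt
    url = f"{DOMAIN}/rest/api/3/search"
    headers = {'Accept': 'application/json'}
    all_issues = []
    start_at = 0
    while True:
        query = {'jql': f'project = {PROJECT_KEY}', 'startAt': start_at, 'maxResults': 100, 'fields': '*all'}
        response = requests.get(url, headers=headers, params=query,
                                auth=HTTPBasicAuth(EMAIL, API_TOKEN), timeout=10)
        response.raise_for_status()
        page = response.json()
        issues = page.get('issues', [])
        all_issues.extend(issues)
        start_at += len(issues)
        # Stop when all issues have been collected or a page comes back empty
        if not issues or start_at >= page.get('total', 0):
            break
    return {'issues': all_issues}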

Update the function role permissions and set environment variables

The next step is to enable your Lambda function to read and write to Amazon S3 by adding permission to access Amazon S3 to the function’s execution role:

  1. On the Lambda console, navigate to the function you created.
  2. On the Configuration tab, choose Permissions.
  3. Choose the function’s role (pull-jira-data-role) to open the role in IAM.
  4. Choose Add permissions and choose Create inline policy.
  5. Create a new policy using the JSON editor by choosing JSON in the Policy editor row.
  6. Enter a modified version of the sample policy shown after these steps, applying the principle of least privilege, and choose Next.
  7. Name it JiraLambdaS3Policy and choose Create policy.

Next, update the function timeout and set the environment variables:

  1. Navigate back to the Lambda Configuration tab and choose General configuration.
  2. Choose Edit and increase the Timeout setting to 30 seconds.
  3. Choose Save.
  4. Choose Environment variables on the Configuration tab.
  5. Add the required key-value pairs for JIRA_API_TOKEN, JIRA_EMAIL, JIRA_DOMAIN, JIRA_PROJECT_KEY, and S3_BUCKET.
  6. Choose Save.
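
If you prefer to script this configuration, the same timeout and environment variables can be set with boto3. The following is a sketch with placeholder values; for production workloads, consider storing the API token in AWS Secrets Manager rather than a plaintext environment variable:

import boto3

lambda_client = boto3.client('lambda')

# Placeholder values -- replace with your own Jira and S3 configuration
lambda_client.update_function_configuration(
    FunctionName='pull-jira-data',
    Timeout=30,
    Environment={
        'Variables': {
            'JIRA_API_TOKEN': '<your-api-token>',
            'JIRA_EMAIL': 'you@example.com',
            'JIRA_DOMAIN': 'https://your-site.atlassian.net',
            'JIRA_PROJECT_KEY': 'PROJ',
            'S3_BUCKET': 'jira-data-raw'
        }
    }
)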

The following is the sample IAM policy for the Lambda function (replace the <> placeholders in the log group ARN with your AWS Region and account ID):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::jiro-data-raw",
        "arn:aws:s3:::jiro-data-raw/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:<>:<>:log-group:*:*"
    }
  ]
}

Add dependencies to your Lambda function

The requests module must be added as a Lambda layer or included in a .zip deployment package. If you added any other dependencies to your Lambda function, do the same for each module.

  1. Open a terminal (PyCharm, Visual Studio, or another code editor of your choice) and navigate to your project directory.
  2. Create a new directory for your Lambda function by entering mkdir jira_quicksight_lambda_directory and cd jira_quicksight_lambda_directory.
  3. Install the requests module in the function directory: pip install requests -t .
  4. Create a Python file for your Lambda function: touch lambda_function.py
  5. Open lambda_function.py in the code editor and add your function code.
  6. Zip the function and dependencies into a deployment package: zip -r ../jira_lambda.zip .
  7. On the Lambda console, choose Upload from to upload the .zip package directly to your Lambda function.

For more information on how to add modules to your Lambda function using .zip packages, refer to Creating a .zip deployment package with dependencies.

Automate Lambda execution using EventBridge

To make sure your Lambda function automatically runs, you will need to add an EventBridge trigger:

  1. On the Lambda console, navigate to your function.
  2. Choose Add trigger.
  3. Choose EventBridge (CloudWatch Events), then choose Create a new rule.
  4. Enter Trigger-Jira-Data-Pull as the rule name and an optional description, such as Triggers daily at 9 AM UTC.
    • This scheduled trigger can be customized further to run at a specific or desired interval.
  5. Set Rule type to Schedule expression and use cron(0 9 * * ? *).
    • Alternatively, configure a Jira webhook to send a custom event to EventBridge whenever a specific workflow update or issue change occurs, triggering the Lambda function immediately.
  6. Choose Add to save.
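
If you prefer scripting over the console, the same trigger can be created with boto3. The following sketch uses the function and rule names from this post; the Lambda function ARN is a placeholder:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Create the daily schedule rule (9 AM UTC)
rule = events.put_rule(
    Name='Trigger-Jira-Data-Pull',
    ScheduleExpression='cron(0 9 * * ? *)',
    State='ENABLED',
    Description='Triggers daily at 9 AM UTC'
)

# Point the rule at the Lambda function (replace the ARN with your own)
events.put_targets(
    Rule='Trigger-Jira-Data-Pull',
    Targets=[{'Id': 'pull-jira-data', 'Arn': '<your-lambda-function-arn>'}]
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName='pull-jira-data',
    StatementId='eventbridge-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)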

Test your Lambda function

Each time the Lambda function runs, it stores the raw JSON data in your S3 bucket under a timestamped key (in the sample code, jira-data/raw/{timestamp}_issues.json). This organization helps with data retrieval and management.

In this step, we test the Lambda function to make sure it’s working properly:

  1. On the Lambda console, navigate to your function.
  2. On the Test tab, leave the Test event action as Create new event and enter a name for Event name.
  3. Enter the JSON data to mimic the event payload that triggers your function: {"trigger": "manual_test"}.
  4. Choose Save.
  5. Choose Test.
  6. Review the Execution results to check if the function ran as expected.
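
You can also invoke the function programmatically with the same test payload; the following is a minimal boto3 sketch:

import json
import boto3

# Invoke the function with the same test payload used in the console
lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='pull-jira-data',
    Payload=json.dumps({'trigger': 'manual_test'})
)
print(json.loads(response['Payload'].read()))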

Transform data using AWS Glue

Next, you can use AWS Glue to transform your data into a format that is ideal for visualization:

  1. On the AWS Glue console, choose Crawlers underneath Data Catalog in the navigation pane.
  2. Choose Create crawler, enter a name, and choose Next.
  3. Leave Not yet selected and choose Add a data source.
  4. Leave the default settings, choose Browse S3, choose the bucket with your raw Jira data (for example, s3://jira-data-raw-<>), choose Add an S3 data source, and choose Next.
  5. Choose Create new IAM role (this role needs access to the S3 bucket) and choose Next.
  6. Choose Add a database, enter a unique name (the sample ETL script that follows uses jira_data_db), and choose Create database.
  7. Navigate back to the Set output and scheduling tab.
  8. Choose the refresh icon and select the recently created database.
  9. Change the frequency to Daily and enter a time after the EventBridge trigger for the Lambda function.
  10. Review the crawler configuration settings and choose Create crawler, then choose Run crawler.
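
If you prefer to define the crawler programmatically, the following boto3 sketch creates and starts an equivalent crawler; the crawler name, role, and schedule are hypothetical, and the bucket path is the example used earlier:

import boto3

glue = boto3.client('glue')

# Hypothetical names -- replace the role, bucket path, and schedule with your own
glue.create_crawler(
    Name='jira-raw-data-crawler',
    Role='AWSGlueServiceRole-JiraData',
    DatabaseName='jira_data_db',
    Targets={'S3Targets': [{'Path': 's3://jira-data-raw-<>/'}]},
    Schedule='cron(30 9 * * ? *)'  # Daily, after the 9 AM UTC Lambda run
)
glue.start_crawler(Name='jira-raw-data-crawler')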

Create an AWS Glue ETL job

Now you can create an ETL job that will automatically create a cleaned version of your data and load it into an S3 bucket:

  1. On the AWS Glue console, choose ETL jobs in the navigation pane, then choose Script editor.
  2. Leave the default settings and choose Create script.
  3. For Name, enter a name for your job (for example, Jira_Transformation_sample_job).
  4. Enter the code shown after these steps into the Script section.
  5. On the Job details tab, choose the same IAM role you used for the crawler and choose Save.
  6. Choose Run.

If your job runs successfully, you will have cleaned data in the S3 bucket specified in your script. Your ETL script will need to be customized to handle your specific Jira data based on your requirements. The following code is a starting point.

Update the following starter code with the correct database, table, and output bucket names. The code flattens nested data; you can add further transformations as needed.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import functions as F
from datetime import datetime
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def process_jira_data(glueContext, output_bucket):
    try:
        # Create dynamic frame from catalog
        data_source = glueContext.create_dynamic_frame.from_catalog(
            database="jira_data_db",
            table_name="jira_data"
        )

        # Convert to DataFrame and explode issues array
        df_exploded = data_source.toDF().select(
            F.explode(F.col("issues")).alias("issue")
        )

        # Flatten nested fields
        df_flattened = df_exploded.select(
            F.col("issue.id").alias("issue_id"),
            F.col("issue.key").alias("issue_key"),
            F.col("issue.fields.summary").alias("summary"),
            F.col("issue.fields.project.name").alias("project_name"),
            F.col("issue.fields.issuetype.name").alias("issue_type"),
            F.col("issue.fields.issuetype.id").alias("issue_type_id"),
            F.col("issue.fields.issuetype.hierarchyLevel").alias("hierarchy_level"),
            F.coalesce(F.col("issue.fields.parent.id"), F.lit(None)).alias("parent_id"),
            F.coalesce(F.col("issue.fields.parent.fields.summary"), F.lit(None)).alias("parent_summary"),
            F.col("issue.fields.status.name").alias("status"),
            F.col("issue.fields.created").alias("created"),
            F.col("issue.fields.updated").alias("updated"),
            F.col("issue.fields.customfield_10014").alias("epic_link"),
            F.col("issue.fields.customfield_10019").alias("rank"),
            F.lit(None).alias("initiative_summary"),
            F.lit(None).alias("initiative_id"),
            F.lit(None).alias("theme_summary"),
            F.lit(None).alias("theme_id")
        )

        # Generate output path
        output_path = f"s3://{output_bucket}/cleaned-data/{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}/"
        
        # Write to S3
        glueContext.write_dynamic_frame.from_options(
            frame=DynamicFrame.fromDF(df_flattened, glueContext, "df_flattened"),
            connection_type="s3",
            connection_options={"path": output_path},
            format="json"
        )
        
        return True

    except Exception as e:
        logger.error(f"Error processing Jira data: {str(e)}")
        raise

def main():
    try:
        # Get job arguments
        args = getResolvedOptions(sys.argv, ['JOB_NAME', 'output_bucket'])
        
        # Initialize Spark and Glue contexts
        sc = SparkContext()
        glueContext = GlueContext(sc)
        spark = glueContext.spark_session
        
        # Initialize job
        job = Job(glueContext)
        job.init(args['JOB_NAME'], args)
        
        # Process the data
        process_jira_data(glueContext, args['output_bucket'])
        
        # Commit the job
        job.commit()
        
    except Exception as e:
        logger.error(f"Job failed: {str(e)}")
        raise

if __name__ == "__main__":
    main()
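
You can also start the job programmatically and pass the output bucket that getResolvedOptions expects. The following boto3 sketch uses the job name from this post and the example bucket placeholder:

import boto3

glue = boto3.client('glue')

# Start the ETL job, passing the output bucket as a job argument
run = glue.start_job_run(
    JobName='Jira_Transformation_sample_job',
    Arguments={'--output_bucket': 'cleaned-jira-data-<>'}
)

# Check the run status
status = glue.get_job_run(JobName='Jira_Transformation_sample_job', RunId=run['JobRunId'])
print(status['JobRun']['JobRunState'])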

Crawl the cleaned data

Now that the cleaned data has been loaded into an S3 bucket, the next step is to crawl it to add it to the AWS Glue Data Catalog:

  1. On the AWS Glue console, choose Crawlers under Data Catalog in the navigation pane.
  2. Choose Create crawler, enter a name, and choose Next.
  3. Leave Not yet selected and choose Add a data source.
  4. Leave the default settings, choose Browse S3, choose the bucket with your cleaned Jira data (for example, s3://cleaned-jira-data-<>), choose Add an S3 data source, and choose Next.
  5. Choose the same IAM role that you used with the first crawler and choose Next.
  6. For the target database, choose the same database (jira_data_db).
  7. Change the frequency to Daily and enter a time after your AWS Glue ETL job has run.
  8. Review the crawler configuration settings, choose Create crawler, then choose Run crawler.

Query cleaned data using Athena

The next step is to create a view with the cleaned data using Athena:

  1. On the Athena console, navigate to the query editor.
  2. Leave Data source as AWSDataCatalog and choose the database you created.
  3. Enter the following SQL code into the Query section to create a schema: CREATE SCHEMA IF NOT EXISTS jira_data_db;
  4. Choose Run to execute the query.
  5. In the Query section, enter the following SQL code to create the view. The table name (cleaned_data) is generated by the crawler from the S3 folder name, and the column list must match the fields produced by your ETL script:
CREATE VIEW jira_data_db.jira_view_for_quicksight AS
SELECT 
    "issue_id", 
    "issue_key", 
    "summary", 
    "project_name", 
    "issue_type", 
    "issue_type_id", 
    "hierarchy_level", 
    "parent_id", 
    "parent_summary", 
    "status", 
    "created", 
    "updated", 
    "epic_summary", 
    "epic_id", 
    "initiative_summary", 
    "initiative_id", 
    "theme_summary", 
    "theme_id"
FROM jira_data_db.cleaned_data;
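
To confirm the view works before connecting QuickSight, you can run a quick aggregation against it, for example:

SELECT status, issue_type, COUNT(*) AS issue_count
FROM jira_data_db.jira_view_for_quicksight
GROUP BY status, issue_type
ORDER BY issue_count DESC;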

Create datasets in QuickSight

After you set up the Athena views, connect QuickSight to Athena using the following steps:

  1. On the QuickSight console, choose Datasets in the navigation pane.
  2. Choose New dataset.
  3. Choose Athena as the data source, enter a name for the dataset, and choose Create data source.
  4. On the Database: contain sets of tables dropdown menu, choose the database you created (jira_data_db).
  5. Choose the view you created (jira_view_for_quicksight).
  6. Choose Edit/Preview data.
  7. Review the dataset you created and then choose Save & Publish.
  8. The final step in automating your data ingestion pipeline is configuring your dataset to refresh automatically. For instructions, see Refreshing a dataset on a schedule. This automatic refresh makes sure your dashboards reflect the latest updates from Jira, providing timely insights without the need for manual refreshes.
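
If you want to trigger a refresh on demand (for example, at the end of the AWS Glue job), the QuickSight API also supports programmatic ingestions. The following is a sketch with hypothetical account and dataset IDs:

import uuid
import boto3

quicksight = boto3.client('quicksight')

# Hypothetical IDs -- find the dataset ID on its detail page or via list_data_sets
quicksight.create_ingestion(
    AwsAccountId='123456789012',
    DataSetId='<your-dataset-id>',
    IngestionId=str(uuid.uuid4())
)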

Although QuickSight offers direct Jira integration for immediate data access, this approach limits access to nested data structures. One strategy is to create two datasets: one that connects directly to Jira and another that connects to the cleaned data from Athena or Amazon S3.

Create a QuickSight analysis and dashboard

For instructions to create an analysis, refer to Starting an analysis in Amazon QuickSight. After you have created the analysis, you can publish it as an interactive and sharable dashboard. For instructions, see Publishing dashboards.

For individuals who are new to QuickSight and looking to build stunning dashboards, the following QuickSight Author Workshop provides step-by-step instructions to create your dashboard from a basic to advanced level. The following screenshot is an example Summary dashboard that contains insights about the Jira projects to give you some inspiration.

The Detail dashboard shows a more granular view of the Jira data.

Use generative BI in QuickSight with Jira data

Now that you have built and published your dashboard, let’s enable Amazon Q in QuickSight to start using its generative BI capabilities.

Enable Amazon Q in QuickSight Pro features

Complete the following steps:

  1. On the QuickSight console, choose your user profile, then choose Manage QuickSight.
  2. Choose Manage Pricing in the navigation pane and check whether Amazon Q in QuickSight is marked as Active. If so, skip the next step.
  3. Choose Manage Users in the navigation pane. For the user account that you are currently using, choose the dropdown menu under Role and switch the role from Admin to Admin Pro.

This role switch retains the same admin permissions as a regular QuickSight account while adding the generative BI Author Pro features.

Now that we have Amazon Q in QuickSight enabled, let’s move to creating our first Amazon Q topic.

Create a topic

For instructions to create a topic on QuickSight, see Creating Amazon QuickSight Q topics. Make sure to enable Use new generative Q&A experience so that you can interact with the multi-visual Q&A experience of Amazon Q in QuickSight.

Make your topic natural-language-friendly

After the topic has been created, you can provide more information about your datasets and associated fields so Amazon Q can better interpret your data. For instructions, see Making Amazon QuickSight Q topics natural-language-friendly.

Link the topic to the analysis

Complete the following steps to link the topic to the analysis:

  1. On the QuickSight console, on the analysis page, choose the options menu (three dots) next to Build visual, then choose Manage Q&A.
  2. Choose whether you want to use datasets for your visual or a linked topic. For this use case, select Use a linked topic for Build visual and Q&A.
  3. Use the dropdown menu to choose the Amazon Q topic that you created.
    • The other option of Use datasets for Build visual and Q&A is called the dashboard Q&A experience, which is essentially a topic-less Q&A experience, in which you can select datasets that you want to use to gain insights from your QuickSight dashboard.
  4. Choose Apply changes.

Now that your Amazon Q topic and analysis are linked, you can explore the Amazon Q in QuickSight features. Make sure to publish the dashboard before moving to the next section.

Use the multi-visual Q&A experience

To ask questions with Amazon Q in QuickSight, choose the Amazon Q bar located at the top of the dashboard. This will open the Q&A interface, where you can ask questions of your data.

For example, we ask the question “List the issue id, summary, epic summary, initiative summary and theme summary for issue type story.”

For additional best practices and more information about creating and enhancing Amazon Q topics, refer to the Explore & Enhance Topic workshop.

Use the Executive Summary feature

Let’s say that we want to discover key points in our dashboard that might otherwise be overlooked. The Executive Summary feature extracts these key insights from your dashboard automatically.

In the dashboard, choose the Build dropdown menu, then choose Executive Summary.

Within a few seconds, a summary response will be generated. The following is an example of an executive summary generated from the Jira dashboard.

Use the Data Story feature

Let’s say that we want to share these insights with stakeholders in the form of an email or presentation. The Data Story feature of Amazon Q will not only save time creating these documents, but also allow for effortless sharing of the same visuals from your analysis.

  1. To create a data story, choose the dropdown menu Build, then choose Data Story.
  2. A new tab opens where you can enter a prompt describing the data story that you want to create. You can also add supporting visuals from your existing dashboard for the data story to summarize.
  3. Choose Build and wait a few moments for the data story to completely generate.

For more information on creating and using data stories, refer to Creating a Data Story with Amazon Q in QuickSight.

In the following example, we use the prompt “Write a story to provide a comprehensive overview of the Jira project’s key statistics and team workload distribution and uncover valuable insights to help drive project management and team productivity.”

The following is a screenshot of the data story created using the prompt in the previous step.

Clean up

When you’re done using this solution, clean up the resources you created to avoid ongoing charges. Use the following steps to remove all resources deployed by this solution:

  1. Delete QuickSight assets
    In the QuickSight console, delete the following resources:

    • Dashboards created from Jira data
    • Analyses related to Jira datasets
    • Datasets connected to Athena views
    • Athena data source connections
  2. Delete Athena resources
    Run the following SQL statements in the Athena query editor to remove Athena resources:
    DROP VIEW IF EXISTS jira_data_db.jira_view_for_quicksight;
    DROP DATABASE IF EXISTS jira_data_db CASCADE;
  3. Delete AWS Glue resources
    In the AWS Glue console, delete the following:

    • ETL job (Jira_Transformation_sample_job)
    • Crawlers used for Jira data ingestion
    • Glue database (jira_data_db)
  4. Delete Amazon S3 buckets
    Empty and delete the S3 buckets used for raw and processed Jira data.
  5. Delete AWS Lambda function
    In the Lambda console, delete the function (pull-jira-data).
  6. Delete Amazon EventBridge rule
    In the EventBridge console, delete the rule (Trigger-Jira-Data-Pull).
  7. Delete IAM roles and policies
    In the IAM console, delete:

    • Lambda execution role associated with your function (pull-jira-data-role)
    • Inline policy attached to the Lambda execution role (JiraLambdaS3Policy)

This cleanup does not affect your Jira subscription or QuickSight account settings.
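
Part of this cleanup can also be scripted. The following boto3 sketch removes the Lambda, EventBridge, and AWS Glue resources created in this post, using the resource names from the earlier steps:

import boto3

# Delete the Lambda function
boto3.client('lambda').delete_function(FunctionName='pull-jira-data')

# Remove the EventBridge rule and its target
events = boto3.client('events')
events.remove_targets(Rule='Trigger-Jira-Data-Pull', Ids=['pull-jira-data'])
events.delete_rule(Name='Trigger-Jira-Data-Pull')

# Delete the AWS Glue job and database
glue = boto3.client('glue')
glue.delete_job(JobName='Jira_Transformation_sample_job')
glue.delete_database(Name='jira_data_db')

# S3 buckets must be emptied before deletion; QuickSight assets are removed in the console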

Conclusion

The integration of a Jira project’s hierarchical data with QuickSight generative BI capabilities represents a significant leap forward in project management analytics. By using AWS services such as Lambda, Amazon S3, AWS Glue, and Athena, organizations can create a robust, automated pipeline that transforms raw Jira data into actionable insights.

This solution offers several key benefits:

  • Enhanced data accessibility – By making complex Jira data structures queryable through natural language, teams across the organization can gain valuable insights without needing advanced technical skills.
  • Real-time insights – The automated data pipeline makes sure that QuickSight dashboards reflect the most up-to-date information from Jira, enabling timely decision-making.
  • Customization and flexibility – The solution can be tailored to accommodate unique Jira configurations and project structures, making it adaptable to various organizational needs.
  • Scalability – Built on AWS services, this solution can scale to handle growing data volumes and increasing numbers of Jira projects.
  • Cost-effective analysis – By using serverless and pay-as-you-go AWS services, organizations can implement powerful analytics capabilities without significant upfront investment in infrastructure.

As businesses continue to rely on data-driven decision-making, tools that bridge the gap between complex data sources and intuitive analysis become increasingly valuable. The combination of Jira’s project management capabilities with QuickSight generative BI offers a powerful solution for organizations looking to optimize their processes, improve project outcomes, and drive innovation. Teams can spend less time wrestling with data and more time acting on insights, ultimately leading to more efficient project management, better resource allocation, and improved business outcomes. As the field of generative AI continues to evolve, we can expect even more powerful capabilities to emerge, further enhancing our ability to extract value from project management data.

In our next post, we will discuss how to deploy this solution using infrastructure as code (IaC).

Your thoughts and questions are important to us. If you have feedback on this integration or need clarification on how to use QuickSight with Jira, please leave a comment.

For more in-depth discussions, or to connect with other users facing similar challenges, we encourage you to explore the Amazon QuickSight Community. This vibrant forum is an excellent resource for sharing experiences, discovering best practices, and getting answers to your specific questions from both AWS experts and fellow QuickSight users. Stay curious, keep exploring, and happy analyzing!


About the Authors

Jacob Grant is a Solutions Architect at AWS, based in Atlanta, Georgia, with nearly four years of AWS experience. He is currently focused on helping SMB customers build innovative solutions. Jacob has a passion for building solutions in the machine learning and artificial intelligence domain and has helped customers integrate Amazon Q in QuickSight into their analytics workloads. Passionate about empowering customers to unlock insights from their data, Jacob combines deep technical knowledge with practical business solutions. Outside of work, Jacob enjoys spending time with his wife and their two young daughters, embracing family adventures whenever possible.

Sabbah Haq is a Solutions Architect at AWS, based in Los Angeles, California, specializing in Analytics. She is currently focused on the Amazon QuickSight service and building reusable solutions that can help multiple customers and individuals become more innovative. Sabbah strives to bridge the gap between business and technical knowledge to ensure solutions are not only purpose-built but also meaningful. Outside of work, Sabbah enjoys traveling and spending time with her family.

Salim Khan is a Specialist Solutions Architect for Amazon QuickSight. Salim has over 16 years of experience implementing enterprise business intelligence (BI) solutions. Prior to AWS, Salim worked as a BI consultant catering to industry verticals like Automotive, Healthcare, Entertainment, Consumer, Publishing and Financial Services. He has delivered business intelligence, data warehousing, data integration and master data management solutions across enterprises.

