Datagrid, a Procore Company
© 2026 Datagrid. All rights reserved.

Connector

Amazon Aurora + Datagrid Integration

Connect Amazon Aurora with Datagrid to give AI agents read and write access to your relational database for autonomous data processing workflows.


On this page

  • Overview
  • How to integrate Amazon Aurora with Datagrid
  • Why use Amazon Aurora with Datagrid
  • What you can build with Amazon Aurora and Datagrid
  • Resources and documentation
  • Frequently asked questions
  • Similar integrations
  • Browse by category

Overview

What is Amazon Aurora: Amazon Aurora is a fully managed relational database engine within the Amazon RDS family. It supports MySQL-Compatible and PostgreSQL-Compatible editions, replicates data six ways across three Availability Zones, and scales storage automatically. Organizations use Aurora for enterprise applications, SaaS platforms, distributed systems, and generative AI workloads using the pgvector extension for vector similarity search.


How to integrate Amazon Aurora with Datagrid

Teams that store operational, financial, or supply chain data in Aurora use the Datagrid integration to let AI agents query, transform, and write records without manual export/import cycles. Agents pull structured data from Aurora tables, cross-reference it against 50+ other connected sources, and return processed results to the database as part of automated workflows.

The setup has four parts. First, connect your Aurora database by providing the cluster endpoint and credentials through Datagrid's Connect Apps flow. Second, confirm your network prerequisites so Datagrid can reach the Aurora instance. Third, authenticate with database credentials that have the required permissions. Finally, configure your data sync settings to define how Datagrid reads from and writes to your database.

Connect your Amazon Aurora database

  1. Open Datagrid and click + Create → Connect Apps

  2. Search for the Amazon Aurora integration

  3. Select the engine type that matches your Aurora cluster (Aurora PostgreSQL or Aurora MySQL)

  4. Enter the Aurora cluster endpoint URL (e.g., mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com). Use the cluster endpoint for read/write operations or the reader endpoint for read-only access.

  5. Provide the database name, port number (3306 for MySQL, 5432 for PostgreSQL), and administrator credentials

  6. Test the connection and click Save
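The endpoint entered in step 4 determines both the access mode and the AWS region. Before saving, the hostname can be sanity-checked locally. This is a sketch, not part of Datagrid: it assumes the standard Aurora endpoint naming, where cluster (read/write) endpoints contain `.cluster-` and reader (read-only) endpoints contain `.cluster-ro-`.

```python
import re

# Default ports per Aurora engine, as listed in step 5.
DEFAULT_PORTS = {"Aurora MySQL": 3306, "Aurora PostgreSQL": 5432}

def parse_aurora_endpoint(endpoint: str) -> dict:
    """Classify an Aurora endpoint as cluster (read/write) or reader
    (read-only) and extract the AWS region. Raises ValueError for
    hostnames that do not follow Aurora's endpoint naming."""
    m = re.match(
        r"^(?P<cluster>[\w-]+)"          # DB cluster identifier
        r"\.(?P<kind>cluster-ro|cluster)-\w+"  # endpoint kind + AWS-assigned id
        r"\.(?P<region>[\w-]+)\.rds\.amazonaws\.com$",
        endpoint,
    )
    if not m:
        raise ValueError(f"not a recognized Aurora cluster endpoint: {endpoint!r}")
    return {
        "cluster": m.group("cluster"),
        "access": "read-only" if m.group("kind") == "cluster-ro" else "read/write",
        "region": m.group("region"),
    }
```

For example, the endpoint from step 4 parses as a read/write cluster endpoint in us-east-1, while the same hostname with `.cluster-ro-` parses as read-only.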

Confirm your prerequisites

Before connecting, confirm the following in your AWS environment:

  • Your Aurora cluster is reachable from Datagrid. Aurora instances are VPC-bound by default. External connections require either public accessibility with IP-restricted security groups, VPC peering, or VPN.

  • Your database user has the required permissions for the operations you intend to run. Network reachability alone is not enough: if the database user lacks the necessary privileges, read and write operations will still fail.
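The second prerequisite can be satisfied with a least-privilege database user rather than an administrator account. The sketch below generates per-engine setup SQL; the exact privileges Datagrid requires are an assumption here (SELECT for read workflows, plus INSERT/UPDATE for write-back), so trim the list to match the operations you actually run.

```python
def grant_statements(engine: str, user: str, database: str, writable: bool) -> list:
    """Return CREATE USER / GRANT statements for a least-privilege
    Datagrid connection user. Privilege choices are assumptions, not
    Datagrid-documented requirements."""
    privileges = "SELECT, INSERT, UPDATE" if writable else "SELECT"
    if engine == "Aurora MySQL":
        return [
            f"CREATE USER '{user}'@'%' IDENTIFIED BY '<strong password>';",
            f"GRANT {privileges} ON `{database}`.* TO '{user}'@'%';",
        ]
    if engine == "Aurora PostgreSQL":
        return [
            f"CREATE ROLE {user} LOGIN PASSWORD '<strong password>';",
            f"GRANT {privileges} ON ALL TABLES IN SCHEMA public TO {user};",
        ]
    raise ValueError(f"unsupported engine: {engine}")
```

Run the emitted statements against your cluster as an administrator, then supply the new user's credentials in the Datagrid connection form.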

Authenticate with database credentials

Datagrid authenticates with database username and password credentials. For setup, use administrator credentials or another database user with the required permissions.

You can manage the Aurora username and password outside of Datagrid using AWS services such as AWS Secrets Manager for automatic rotation or IAM database authentication for temporary tokens. Datagrid still requires a valid database username and password (or token) at connection time, regardless of how you manage those credentials on AWS.

Configure your data sync settings

Datagrid supports separate read and write access for Aurora, so teams can decide which workflows only query data and which ones update records after processing. The sync behavior is summarized below.

  • Supported engines: Aurora PostgreSQL, Aurora MySQL

  • Read direction: Aurora → Datagrid (tables, views, query results)

  • Write direction: Datagrid → Aurora (inserts, updates)

  • Sync type: Separate read and write access, configured per workflow

  • Data objects: Tables and views; access to other objects depends on the database engine and configuration

  • Supported data types: Common MySQL and PostgreSQL scalar and JSON types, including VARCHAR, INTEGER, TIMESTAMP, BOOLEAN, DECIMAL, and JSON/JSONB

  • Trigger options: Scheduled runs and configured automations/triggers
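Because reads can return driver-level Python objects for DECIMAL, TIMESTAMP, and JSON/JSONB columns, downstream workflows often need values normalized into JSON-safe types. The mapping below is a sketch of one reasonable approach, not Datagrid's documented behavior.

```python
import datetime
import decimal
import json

def to_json_safe(value):
    """Normalize common Aurora column values into JSON-serializable types.
    These mapping choices are illustrative assumptions."""
    if isinstance(value, decimal.Decimal):
        return str(value)          # preserve exact precision as a string
    if isinstance(value, (datetime.datetime, datetime.date)):
        return value.isoformat()   # ISO 8601 dates and timestamps
    return value                   # str/int/float/bool/None/dict pass through

def normalize_row(row: dict) -> str:
    """Serialize one result row to a JSON string for downstream tools."""
    return json.dumps({k: to_json_safe(v) for k, v in row.items()})
```

Rendering DECIMAL as a string avoids silently losing precision to floating point, which matters for the financial records these workflows typically carry.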

You can use the following configuration pattern when setting up the integration in Datagrid:

    {
      "source": "Amazon Aurora",
      "engine": "Aurora PostgreSQL | Aurora MySQL",
      "endpoint": "mydbcluster.cluster-c7tj4example.us-east-1.rds.amazonaws.com",
      "database": "<database name>",
      "port": "5432 | 3306",
      "authentication": "database username and password",
      "access_mode": "read/write through cluster endpoint | read-only through reader endpoint"
    }

This setup gives Datagrid a direct connection target and sets whether workflows send read and write traffic through the writer endpoint or send read-only traffic to Aurora replicas. Once this behavior is defined, Datagrid runs scheduled or triggered workflows against Aurora data without manual exports between systems.
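That routing decision can also be made programmatically when configurations are generated for many clusters. This is a minimal sketch under assumptions: `build_connection_config` is a hypothetical helper, not a Datagrid API, and `<id>` stands in for the cluster-specific identifier AWS assigns to your endpoints.

```python
def build_connection_config(cluster: str, region: str, engine: str,
                            database: str, read_only: bool = False) -> dict:
    """Assemble a connection config following the pattern shown above.
    Substitute your real endpoint for the '<id>' placeholder."""
    kind = "cluster-ro" if read_only else "cluster"     # reader vs writer endpoint
    port = 3306 if engine == "Aurora MySQL" else 5432   # engine default ports
    return {
        "source": "Amazon Aurora",
        "engine": engine,
        "endpoint": f"{cluster}.{kind}-<id>.{region}.rds.amazonaws.com",
        "database": database,
        "port": port,
        "authentication": "database username and password",
        "access_mode": ("read-only through reader endpoint" if read_only
                        else "read/write through cluster endpoint"),
    }
```

Read-only workflows get routed to the reader endpoint so queries load-balance across replicas instead of competing with writes on the primary.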

Why use Amazon Aurora with Datagrid

Teams that keep operational data in Aurora use Datagrid to execute repeatable workflows against that data, connect it to other systems, and return processed outputs to the database. Datagrid's agentic AI agents execute the data movement and analysis so project teams can focus on decisions, exceptions, and next actions.

  • Database access for AI agents: Datagrid's agentic AI agents read structured data from Aurora tables and write processed results back through configured integration actions, with no manual export/import cycles required.

  • Cross-platform data blending: Agents combine Aurora records with data from cloud storage, project management, and analytics tools to build complete operational pictures.

  • Autonomous data transformation: Datagrid's agentic AI agents process, clean, enrich, and reformat Aurora data into business-ready outputs without human intervention at each step.

  • MySQL-Compatible and PostgreSQL-Compatible coverage: The integration works with both Aurora engine types, covering the full range of Aurora deployments across your organization.

  • Workflow automation on schedules and configured triggers: Agents run automated data processing flows on schedule or through configured automations, keeping downstream systems current with Aurora data.

  • Serverless-ready architecture: Aurora Serverless v2 scales capacity to match Datagrid agent activity, from idle periods to full throughput during batch processing runs.

What you can build with Amazon Aurora and Datagrid

Connect Amazon Aurora to Datagrid and put your relational data to work. Here are practical ways to use Datagrid's agentic AI agents with your transactional records, operational metadata, and cross-system workflows:

  • Automated project data reconciliation: An AI agent queries Aurora for financial records and material specifications, cross-references them against submittals and RFIs stored in Autodesk Construction Cloud, flags discrepancies, and writes reconciliation results back to a dedicated Aurora table for project managers to review.

  • Autonomous supplier record enrichment: Datagrid agents pull vendor and supplier records from Aurora, compare them against external datasets and project files in cloud storage, validate contact details and certifications, and write enriched records back to Aurora, keeping your supplier database accurate without manual data entry.

  • Scheduled operational reporting pipeline: An agent runs on a nightly schedule, queries Aurora for the day's transactional data across multiple tables, transforms raw records into structured summaries, and delivers formatted reports to project teams through communication tools, replacing manual SQL queries and spreadsheet assembly.

  • AI-driven anomaly detection on transactional data: Datagrid agents regularly read time-series data from Aurora, apply pattern detection logic to identify outliers in financial transactions or operational metrics, and route flagged records to the appropriate team through project management tools with full context attached.
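The anomaly-detection pattern above reduces to: read transactional rows, flag statistical outliers, and stage the flagged records for routing. The sketch below uses sqlite3 as a local stand-in for an Aurora connection (with Aurora you would connect through the reader endpoint instead); the z-score threshold and table layout are illustrative assumptions.

```python
import sqlite3
import statistics

def flag_outliers(conn, table: str, column: str, z_threshold: float = 3.0):
    """Return (id, value) pairs whose z-score exceeds the threshold."""
    rows = conn.execute(f"SELECT id, {column} FROM {table}").fetchall()
    values = [v for _, v in rows]
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [(rid, v) for rid, v in rows if abs(v - mean) / stdev > z_threshold]

# Local stand-in data: twenty ordinary transactions and one obvious outlier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO transactions (amount) VALUES (?)",
                 [(100.0,)] * 20 + [(9000.0,)])
flagged = flag_outliers(conn, "transactions", "amount")
```

In the Datagrid flow, the flagged rows would then be routed to the appropriate team through a connected project management tool rather than printed locally.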

Resources and documentation

  • Connecting to an Aurora DB cluster: Endpoint types, connection methods, and driver options

  • Aurora endpoint types reference: Cluster, reader, custom, and instance endpoint details

  • IAM database authentication for Aurora: Token-based auth setup

  • Aurora security best practices: VPC, encryption, and access control guidance

  • Aurora tutorials hub: Step-by-step guides for common Aurora operations

Frequently asked questions

What Aurora database engines does the Datagrid integration support?

The Datagrid integration supports both Aurora PostgreSQL-Compatible and Aurora MySQL-Compatible editions. Both engines are available in provisioned and Serverless v2 configurations. Aurora PostgreSQL offers additional capabilities relevant to AI workloads, including the pgvector extension for vector similarity search.

How does Datagrid connect to an Aurora database inside a VPC?

Aurora instances are VPC-bound by default. To connect Datagrid, you can configure public accessibility with IP-restricted security groups, set up VPC peering, or use a VPN. If connections time out, check your VPC routing table and security group rules.
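When diagnosing a timeout, a plain TCP check against the endpoint and port separates network problems from credential problems: a refused or timed-out connection points at security group rules or VPC routing, not the database user. A minimal diagnostic sketch:

```python
import socket

def check_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to the Aurora endpoint; True if it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False
```

Run it with your cluster endpoint and port (5432 or 3306) from a network position comparable to Datagrid's; if this returns False, fix the security group or routing before revisiting credentials.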

What authentication methods work with the Amazon Aurora integration?

Datagrid authenticates using database username and password credentials. AWS separately supports IAM database authentication and AWS Secrets Manager for managing and rotating credentials, but these are AWS-side features for credential management rather than built-in Datagrid capabilities. Datagrid still requires a valid username and password at the time of integration.

Can Datagrid both read from and write to Amazon Aurora?

Yes. Datagrid supports reading from Aurora and writing data back through configured integration actions.

Which Aurora endpoint type should I use when configuring the Datagrid integration?

Use the cluster endpoint for workflows that require both read and write access because it routes to the current primary writer instance. For read-only agent workflows, the reader endpoint load-balances queries across available Aurora Replicas. Aurora also supports up to 5 custom endpoints per cluster for workload-specific routing.

Similar integrations

  • Amazon RDS: AWS-managed relational database supporting MySQL, PostgreSQL, SQL Server, MariaDB, and Oracle, with more engine options than Aurora.

  • Amazon AWS S3: AWS object storage used as a staging layer for Aurora data exports (Parquet, CSV) and bulk imports.

  • Amazon Timestream: AWS time-series database for IoT and operational metrics that complements Aurora's relational workloads.

Browse by category

  • Database

  • Data Warehouse

