Release Notes - Daton - March 2026

Need help with something?

Talk to a data expert

🛠️Fixes

Northbeam — Metrics Selection Fix

What Was the Issue?
When selecting Northbeam metrics (such as Attributed Revenue) during integration setup, the selected metrics did not consistently appear as selected when navigating back to a previous setup step, even though the selections were saved correctly in the backend.
What We Fixed
We corrected the underlying script that processes metric labels. The previous script was directly modifying the original metric data when cleaning special characters from labels, which caused a mismatch between what was stored and what the UI displayed, leading to selected metrics appearing unselected. The fix ensures metric labels are processed on a separate copy of the data, preserving the original selections intact.
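The class of bug described above can be illustrated with a minimal Python sketch; the function and field names are hypothetical, not Daton's actual code:

```python
import copy
import re

def clean_labels_buggy(metrics):
    # BUG: mutates the caller's objects in place, so the stored
    # selections no longer match the original metric labels.
    for m in metrics:
        m["label"] = re.sub(r"[^A-Za-z0-9 ]", "", m["label"])
    return metrics

def clean_labels_fixed(metrics):
    # Clean a deep copy for display; the original data stays intact,
    # so the UI and the saved selections remain consistent.
    display = copy.deepcopy(metrics)
    for m in display:
        m["label"] = re.sub(r"[^A-Za-z0-9 ]", "", m["label"])
    return display

selected = [{"label": "Attributed Revenue ($)", "selected": True}]
display = clean_labels_fixed(selected)
# selected[0]["label"] is still "Attributed Revenue ($)"
```

The fixed version leaves the caller's list untouched, which is exactly the behavior the fix restores.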
Why This Matters
This fix ensures that the metrics you select for Northbeam are accurately reflected in the table view and remain selected when navigating between setup steps and during edit flow. You can now configure your Northbeam integration with confidence that your metric selections are saved and displayed correctly.
Do You Need to Do Anything?
No action is required. The fix has been applied automatically. If you previously experienced missing or inconsistent Northbeam metrics in your table view, they should now appear correctly. If you notice any remaining issues, please reach out to support.

FBAAmazonFulfilledShipmentsReport — Low Data Volume

What Was the Issue?
We observed a drop in data volume from the FBAAmazonFulfilledShipmentsReport table, where limited or no order data was returned. This started around March 6, 2026 and persisted for about two weeks.
This was identified as an issue from Amazon’s side, and they have since confirmed that it has been resolved.
What We Fixed
  • We rolled back and reprocessed all affected integrations
  • Missing data has been successfully backfilled and recovered
Why This Matters
This ensures that your shipment and order data is now complete and accurate, maintaining consistency in reporting and downstream analysis.
Do You Need to Do Anything?
No action is required from your end.
However, we recommend validating the data to confirm everything looks correct.

PanEuropeanEligibilityFBAASINs — _offer_status Field Issue

What Was the Issue?
In the PanEuropeanEligibilityFBAASINs table (supported only for EU region integrations), the _offer_status field was not being populated.
This was due to a language mismatch between the expected field name and the source data. The system expected the English field _offer_status, while the source data provided the column as “Stato dell'offerta” (Italian), resulting in failed mapping.
What We Fixed
  • Implemented proper field mapping/alias configuration to map “Stato dell'offerta” → _offer_status
  • Enabled languageToMapColumns, ensuring correct handling of localized column names
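Conceptually, the alias mapping works like this minimal sketch (the dictionary and helper are illustrative, not Daton's actual configuration format):

```python
# Hypothetical alias table mapping localized headers to canonical names.
COLUMN_ALIASES = {
    "Stato dell'offerta": "_offer_status",  # Italian source header
}

def normalize_columns(row):
    # Rename any localized column to its canonical field name,
    # leaving already-canonical keys untouched.
    return {COLUMN_ALIASES.get(key, key): value for key, value in row.items()}

row = {"Stato dell'offerta": "Eligible", "asin": "B000EXAMPLE"}
normalized = normalize_columns(row)
# normalized["_offer_status"] == "Eligible"
```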
Why This Matters
This fix ensures that the _offer_status field is now correctly populated, improving data completeness and accuracy for EU marketplace reporting.
Do You Need to Do Anything?
No action is required from your end. The fix has been applied, and data is now flowing correctly.

NetSuite Connector — Fixed Timeout Failures on Long-Running Jobs

What Was the Issue?
Some NetSuite integrations experienced timeout failures during long-running data operations. These timeouts occurred particularly during the result-fetching phase, where the connector's existing timeout handling did not provide sufficient coverage. The issue was more likely to surface during periods of high system load or when querying complex database objects in NetSuite.
What We Fixed
We added socket and query timeout parameters at all connection levels to better handle long-running operations. We also extended the custom timeout handling to cover the result-fetching phase, which was previously unprotected. This ensures the connector manages slow responses gracefully throughout the entire data retrieval process.
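To illustrate the newly protected result-fetching phase, here is a sketch of a deadline-guarded fetch loop; the names, timeout values, and fake cursor are assumptions for demonstration, not the connector's actual implementation:

```python
import time

class QueryTimeoutError(Exception):
    """Raised when the result-fetching phase exceeds its budget."""

def fetch_all_rows(cursor, query_timeout_s=600, batch_size=1000):
    # Give the fetch loop its own deadline so a slow response fails
    # fast with a clear error instead of hanging indefinitely.
    deadline = time.monotonic() + query_timeout_s
    rows = []
    while True:
        if time.monotonic() > deadline:
            raise QueryTimeoutError("result fetch exceeded time budget")
        chunk = cursor.fetchmany(batch_size)
        if not chunk:
            break
        rows.extend(chunk)
    return rows

class _FakeCursor:
    # Stand-in for a DB-API cursor, for demonstration only.
    def __init__(self, chunks):
        self._chunks = list(chunks)
    def fetchmany(self, size):
        return self._chunks.pop(0) if self._chunks else []

rows = fetch_all_rows(_FakeCursor([[1, 2], [3]]))
# rows == [1, 2, 3]
```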
Why This Matters
NetSuite queries can take longer than usual depending on system pressure and the complexity of the data being requested. With this fix, your NetSuite syncs are less likely to fail due to timeouts, resulting in more reliable and consistent data replication without manual reruns.
Do You Need to Do Anything?
No action is required. The fix has been applied automatically to all NetSuite integrations. Your existing jobs will continue to run as expected with improved stability.

RDBMS Destinations — Fixed New Column Detection for Deeply Nested Tables

What Was the Issue?
Integrations replicating data to RDBMS destinations (such as Snowflake) were failing with invalid column errors. When the source data contained nested structures at the maximum supported depth and new columns appeared at that depth level, the system silently ignored them. This caused SQL compilation errors during data loading because the destination table was missing columns that the incoming data expected.
What We Fixed
We corrected the column detection logic for max-depth nested tables. The system now properly identifies newly detected columns at the maximum nesting depth and either updates existing columns or creates new ones in the destination table as needed. This fix applies to all connectors replicating to RDBMS destinations (Snowflake, PostgreSQL, MySQL, SQL Server, etc.).
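The shape of the fix can be sketched as follows; the flattening scheme and separator are hypothetical simplifications, not Daton's actual schema logic:

```python
def flatten(record, max_depth=3, _depth=0, _prefix=""):
    # Flatten nested objects up to max_depth; anything nested deeper
    # is stored as-is under its parent column (illustrative sketch).
    cols = {}
    for key, value in record.items():
        name = f"{_prefix}{key}"
        if isinstance(value, dict) and _depth < max_depth - 1:
            cols.update(flatten(value, max_depth, _depth + 1, f"{name}_"))
        else:
            cols[name] = value
    return cols

def new_columns(record, known_columns, max_depth=3):
    # The bug was skipping this comparison for keys that first appear
    # at the deepest level; comparing against the fully flattened
    # record catches new columns regardless of their depth.
    return set(flatten(record, max_depth)) - set(known_columns)

record = {"a": 1, "b": {"c": {"d": 2, "new_field": 3}}}
new_columns(record, {"a", "b_c_d"})
# -> {"b_c_new_field"}
```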
Why This Matters
Source data schemas evolve over time — new fields can appear within deeply nested objects. Previously, these new columns at the maximum depth were silently skipped, leading to integration failures and data not being loaded. With this fix, your destination tables automatically stay in sync with schema changes in your source, even for complex nested data structures, ensuring uninterrupted data replication.
Do You Need to Do Anything?
No action is required. The fix has been applied automatically. If you previously experienced invalid column errors on tables with deeply nested data, those integrations should now run successfully on the next sync.

🚀Enhancements

Shiphero — Quantity Field Addition To LineItem Table

What We Improved:
We have added the “quantity” field and its corresponding “id” field to the LineItem table as “shipped_quantity” and “shipped_line_item_id”. Since the ShipHero API follows a GraphQL structure, both “quantity” and “id” are returned within a nested node. To avoid confusion and duplication, we have renamed these fields accordingly. You can find and select them in the UI.
How It Works:
After this addition, the warehouse data will be populated individually for each active warehouse on a daily basis, rather than being provided in an aggregated format as it was previously.
Why This Matters:
This change lets you fetch shipment-level line item details from ShipHero. The quantity field at edges → node → quantity is required because it provides the actual quantity per shipment line item, whereas line_item.quantity represents the aggregated quantity across shipments. This ensures accurate shipment-level quantity tracking.
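Under the GraphQL structure, the per-shipment values live inside the nested edges → node wrapper. A minimal extraction sketch, with the response layout simplified and field paths assumed from the description above:

```python
def shipped_line_items(shipment):
    # Walk the GraphQL edges -> node wrapper and emit the renamed
    # fields as they appear in the LineItem table.
    items = []
    for edge in shipment["line_items"]["edges"]:
        node = edge["node"]
        items.append({
            "shipped_line_item_id": node["id"],
            "shipped_quantity": node["quantity"],  # per-shipment quantity
        })
    return items

shipment = {"line_items": {"edges": [{"node": {"id": "LI-1", "quantity": 2}}]}}
shipped_line_items(shipment)
# -> [{"shipped_line_item_id": "LI-1", "shipped_quantity": 2}]
```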
Do You Need to Do Anything?
Please select both the “shipped_quantity” and “shipped_line_item_id” fields in the UI.

AppLovin — Campaign_Id Field Addition To Ecommerce_Advertiser Table

What We Improved:
We've added the campaign_id field to the Ecommerce_Advertiser table, significantly improving data quality and usability. Previously, the connector only retrieved campaign_id_external, a hashed value that does not match the campaign ID displayed in the AppLovin UI. This made it difficult to reconcile warehouse data with UI reports and created challenges during validation and debugging.
With the inclusion of campaign_id:
  • You can now directly map warehouse data to the campaign IDs shown in the AppLovin UI.
  • Data validation has become more accurate and straightforward.
  • Debugging and issue resolution are faster, as campaigns can be identified without relying on hashed values.
  • Overall trust in reporting has improved since the data can be cross-verified with the source system.
How It Works:
  • The connector now fetches both campaign_id_external (hashed) and campaign_id from the API.
  • campaign_id matches the AppLovin UI, enabling direct mapping.
  • Both fields are stored in the warehouse for flexible use in reporting and validation.
Why This Matters:
  • Enables direct matching with the AppLovin UI using campaign_id
  • Simplifies data validation and reconciliation
  • Reduces dependency on hashed identifiers
  • Improves reporting accuracy and overall data trust
Do You Need to Do Anything?
Please add the campaign_id field from the UI.

Byrd — Upgraded to Latest API Version

What We Improved
We've upgraded the Byrd connector to use the latest version of Byrd's APIs. This update brings expanded data coverage with three supported tables — deliveries, products, and shipments, along with improved error handling, automatic token refresh, and incremental data loading for more efficient syncs.
How It Works
Authenticate using your Byrd API Key and API Secret, select from the available tables (deliveries, products, and shipments), and configure your replication frequency. The connector now supports incremental loading for deliveries and shipments based on their last updated timestamps, so only new and modified records are synced on each run. Products are fully refreshed each sync cycle. The connector also automatically handles rate limiting, server errors, and token refresh — so your pipeline runs reliably without manual intervention. Please read the documentation here.
Why This Matters
With the updated API integration, you now get richer fulfillment data, including shipment tracking checkpoints, carrier cost breakdowns, container details, customs information, and packaging options, all loaded directly into your warehouse. Incremental loading reduces sync overhead, and built-in retry logic means fewer pipeline failures and less time spent troubleshooting.
Do You Need to Do Anything?
If you want to set up Byrd, you'll need your Byrd API credentials — contact Byrd support or your account manager if you don't have them yet.

Amazon Vendor Ireland Marketplace — Now Enabled

What We Improved
Support for the Amazon Vendor Ireland marketplace has been enabled in both Daton and Pulse.
How It Works
Data for the Ireland marketplace will now be automatically ingested and processed alongside your existing Amazon Vendor marketplaces.
Why This Matters
This ensures complete marketplace coverage, allowing you to track performance and reporting for Ireland without any gaps.
Do You Need to Do Anything?
No action is required from your end. The Ireland marketplace is already enabled and will start reflecting in your data automatically.

Amazon SP-API Inbound Optimization — Reduced API Usage

What We Improved
We optimized the handling of inbound APIs by eliminating redundant data fetch cycles that were causing excessive API usage.
How It Works
Previously, the same data was fetched repeatedly, leading to unnecessary API calls.
Now, we retain the data and use a flag to fetch only new or updated records, avoiding duplicate requests.
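The fetch-flag idea can be sketched as a simple per-record watermark check (hypothetical helper, not the actual SP-API client code):

```python
# Hypothetical per-record state: record ID -> last seen update timestamp.
_seen = {}

def should_fetch(record_id, last_updated):
    # Skip records already pulled at this version; fetch only when
    # the record is new or its update timestamp has advanced.
    if _seen.get(record_id) == last_updated:
        return False
    _seen[record_id] = last_updated
    return True
```

Repeated syncs of an unchanged record then cost zero API calls, while any update is picked up on the next run.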
Why This Matters
  • Reduces ~50% of redundant Amazon SP-API calls
  • Optimizes Daton processing and warehouse usage
  • Improves overall efficiency and performance of data pipelines
Do You Need to Do Anything?
No action is required from your end. The optimization is automatically applied.

Amazon Sponsored Display — Report Scheduling

What we improved
We optimized how Amazon Sponsored Display ad report jobs are scheduled and retried. The system now intelligently runs data loads only when report data is actually available from Amazon, reducing unnecessary job attempts and improving overall sync speed.
How it works
The pipeline now starts after 2:00 AM in your source region's timezone. If Amazon's API indicates data isn't ready yet, the system automatically reschedules rather than marking the job as failed. Once all attribution data is loaded, the next full sync is scheduled for the following day.
Note:
  • If intraday refresh is enabled, reports will continue to refresh every 2 hours as usual.
  • Raw tables will continue to follow the set frequency and run normally.
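The scheduling rule described above can be sketched as follows; the retry interval and function are assumptions for illustration, not the actual scheduler:

```python
from datetime import datetime, time, timedelta, timezone
from zoneinfo import ZoneInfo

def next_attempt(now_utc, region_tz, data_ready, retry=timedelta(hours=1)):
    # Do not start before 02:00 in the source region's timezone; if
    # Amazon reports the data isn't ready, reschedule instead of
    # failing the job. Returns None when the load can run now.
    local = now_utc.astimezone(ZoneInfo(region_tz))
    if local.time() < time(2, 0):
        return local.replace(hour=2, minute=0, second=0, microsecond=0)
    if not data_ready:
        return local + retry
    return None

now = datetime(2026, 3, 10, 0, 30, tzinfo=timezone.utc)
next_attempt(now, "UTC", True)
# reschedules to 02:00 local time rather than running early
```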
Why this matters
This means fewer skipped jobs and better visibility into the jobs that have processed your Amazon Sponsored Display data.
Do you need to do anything?
No action required. This optimization is applied automatically.

Amazon Ads — Upgraded Store, Portfolio and PortfolioEx Tables

What we improved
We upgraded the data pipelines for the Store, Portfolio, and PortfolioEx tables across all three Amazon Sponsored Ads connectors (Sponsored Products, Sponsored Brands, and Sponsored Display). These pipelines have been migrated to Amazon's latest Ads API version, replacing the previously deprecated endpoints.
How it works
Our system now uses Amazon's latest recommended API for pulling Store, Portfolio, and PortfolioEx data. This migration was seamless as the data schema remains identical, so your existing reports, dashboards, and downstream workflows will continue to work exactly as before with no disruptions.
Why this matters
Amazon deprecated the older API version, which was causing intermittent throttling errors and occasional pipeline failures. With this upgrade, your Amazon Ads data syncs are now more reliable, faster, and future-proofed against further deprecation issues.
Do you need to do anything?
No action required on your end. This is a backend improvement that takes effect automatically.

ClickPost — Added Shipment_TRACK_ORDER_REPORT to Connector

What We Improved
We've enhanced the ClickPost connector by adding support for a new report table — Shipment_TRACK_ORDER_REPORT. This update expands shipment tracking visibility by providing detailed order-level shipment information, including shipment status updates, courier details, tracking events, and delivery timelines. This addition helps you monitor shipment performance more effectively and improves reporting accuracy across your logistics operations.
How It Works
Using your existing ClickPost credentials, you can now enable the Shipment_TRACK_ORDER_REPORT table in the connector configuration. Once enabled, the connector automatically retrieves shipment tracking updates from ClickPost and syncs them into your data warehouse. The table supports incremental data loading, meaning only new or updated shipment records are processed during each sync cycle. This ensures faster synchronization and efficient data processing without requiring manual intervention.
Why This Matters
The new Shipment_TRACK_ORDER_REPORT table provides greater visibility into shipment progress and delivery outcomes. With access to detailed tracking events and status changes, your team can better analyze delivery performance, identify delays, and improve customer communication. Incremental syncing also reduces system load and improves overall data reliability for operational and reporting needs.
Do You Need to Do Anything?
The new Shipment_TRACK_ORDER_REPORT table is now available for selection. If you would like to start using this dataset, enable the table by editing your integration.

✨New Features

Airtable — Now Live

What's New
You can now connect Airtable as a source in Daton. This connector replicates data from your Airtable tables directly into your data warehouse, no manual exports or custom scripts required.
How It Works
Authenticate using your Airtable Personal Access Token and provide your Base ID and Table ID(s). You can sync up to 5 tables per integration from a single Airtable base. The connector dynamically creates columns in your warehouse based on the fields present in your selected Airtable tables, so it adapts to any base structure, whether you're tracking inventory, CRM records, project tasks, or custom workflows. Data is replicated incrementally based on each record's last modified time, keeping your warehouse in sync with minimal overhead. Please read the documentation here.
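The incremental replication step works roughly like this client-side watermark sketch (record layout and helper are illustrative; the real connector pushes the filter down to the Airtable API rather than fetching everything):

```python
from datetime import datetime, timezone

def incremental_sync(records, watermark):
    # Keep only records whose lastModifiedTime is newer than the
    # stored watermark, and advance the watermark for the next run.
    fresh = [r for r in records
             if datetime.fromisoformat(r["lastModifiedTime"]) > watermark]
    new_watermark = max(
        (datetime.fromisoformat(r["lastModifiedTime"]) for r in fresh),
        default=watermark,
    )
    return fresh, new_watermark

watermark = datetime(2026, 3, 1, tzinfo=timezone.utc)
records = [
    {"id": "rec1", "lastModifiedTime": "2026-03-02T00:00:00+00:00"},
    {"id": "rec2", "lastModifiedTime": "2026-02-20T00:00:00+00:00"},
]
fresh, watermark = incremental_sync(records, watermark)
# only rec1 is replicated; the watermark advances to its timestamp
```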
Why This Matters
Teams often manage critical operational data in Airtable, but getting that data into a warehouse for reporting and analytics previously required manual effort. This connector automates the pipeline and handles schema differences automatically: since Airtable doesn't enforce a fixed schema, the connector dynamically maps your fields so you don't have to worry about structural changes breaking your sync.
Do You Need to Do Anything?
To get started, navigate to the Sources section in Daton, select Airtable, and provide your Personal Access Token, Base ID, and Table ID(s). Choose the tables you'd like to replicate and configure your destination. No changes are needed to your existing connectors or integrations.

Facebook Pages — Now in Beta

What's New
You can now connect Facebook Pages as a source in Daton. This connector replicates your page and post-level data directly into your data warehouse.
How It Works
Authenticate using your Facebook credentials and connect your Page(s). Once connected, the connector syncs data across the following tables:
  • pageFeed — Full activity feed for your page
  • pagePosts — Posts published on or to your page
  • pageDetails — Page-level metadata and configuration
  • pageInsights — Performance metrics at the page level
  • postDetails — Detailed metadata for individual posts
  • postInsights — Engagement and reach metrics per post
  • postComments — Comments on your page posts
  • PublishedPosts — All posts published by your page
Why This Matters
Facebook Pages is a key channel for organic content and community engagement. This connector brings all your page activity, audience engagement, and post performance data into your warehouse automatically, making it easy to build reporting, track content effectiveness, and combine Facebook data with other marketing sources.
Do You Need to Do Anything?
To get started, navigate to the Sources section in Daton, select Facebook Pages, and follow the standard integration setup. Choose the pages and tables you'd like to replicate, and configure your destination.

Amazon Ads — Report Status Visibility on UI

What's new
You can now view the status of pending Amazon Ads reports directly in the Daton UI. This new feature provides real-time visibility into which reports are still being processed by Amazon, so you always know the reason behind any data delays. Click here to learn more about Amazon Ads report statuses.
How it works
Navigate to any Amazon Ads integration table and you will see a new Report Status tab.
For each report, you can view the Report ID, the Start and End date range, and the current Report Status from Amazon.
Note: The view supports pagination with options to display 5, 10, or 20 reports at a time.
Why this matters
Previously, there was no way to know if a data delay was caused by your integration or by Amazon still processing the report. Now you have full transparency, making it easier to troubleshoot and set accurate expectations for when your data will be available.
Do you need to do anything?
No action required. The Report Status tab is now available on all Amazon Ads integration tables. Simply navigate to your integration to start using it.

EdgeMeshSuite — Now Live

What’s New
You can now connect EdgemeshSuite Analytics as a source in Daton. This connector replicates performance, attribution, and user interaction data directly into your data warehouse—no manual extraction or custom pipelines required. It includes multiple analytical reports such as attribution, performance, and dimension-based breakdowns to give a complete view of website behavior and optimization opportunities.
How It Works
Authenticate using your EdgemeshSuite credentials/API access. Once connected, the connector pulls data from various analytics reports (such as attribution and performance reports) and replicates them into your warehouse.
The connector supports multiple report-based tables that capture different aspects of website analytics, including dimension-level attribution and performance insights. Data is replicated incrementally using time-based keys (such as report date or last updated timestamp), ensuring efficient and up-to-date synchronization. Read more here.
The schema is predefined based on EdgemeshSuite report structures, allowing you to seamlessly analyze dimensions like traffic source, device, geography, and performance metrics without worrying about schema management.
Why This Matters
Website performance and attribution data are critical for understanding user experience and conversion drivers, but accessing and analyzing this data across systems can be complex.
This connector automates the pipeline and consolidates all EdgemeshSuite Analytics data into your warehouse. With structured report-level data and dimension-based attribution insights, businesses can easily analyze what drives performance, optimize site speed, and improve marketing effectiveness—without manual effort or data inconsistencies.
Do You Need to Do Anything?
To get started, navigate to the Sources section in Daton, select EdgemeshSuite Analytics, and provide your authentication details. Choose the required reports/tables and configure your destination.
No changes are required for existing connectors or integrations. Once set up, your data will start syncing automatically.

Amazon Selling Partner — Automated ASIN Selection for SQP Reports

What’s New
You can now automatically select ASINs for Search Query Performance (SQP) reports using ActiveListingsReport, reducing the need for manual file uploads.
How It Works
Choose between:
  • SQP Upload ASIN File (manual upload), or
  • ActiveListingsReport (auto-select active ASINs)
If a file is already uploaded, it will continue to take precedence.
Applies only to new data going forward (no historical backfill).
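The precedence rule above can be expressed as a tiny helper (hypothetical, purely to illustrate the documented behavior):

```python
def resolve_sqp_asins(uploaded_asins, active_listing_asins):
    # An uploaded ASIN file continues to take precedence; fall back
    # to the auto-selected active listings only when no file exists.
    if uploaded_asins:
        return uploaded_asins
    return active_listing_asins

resolve_sqp_asins([], ["B0ACTIVE01", "B0ACTIVE02"])
# -> the auto-selected active ASINs
```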
Why This Matters
  • Simplifies SQP setup
  • Reduces manual effort and errors
  • Ensures more accurate and up-to-date ASIN selection
Do You Need to Do Anything?
No action is required for existing setups.
If needed, you can switch to ActiveListingsReport from Advanced Configuration Options → Reference ASIN for SQP Reports.