Incident: Jan 15 – Jan 18, 2026

External data requests were delayed in the processing system



What Happened?

Between Thursday, January 15th and Sunday, January 18th, 2026, some customers experienced delays and failures with data ingestion from certain connected platforms, specifically Google DV360, Simplifi, The Trade Desk, Nativo, CallRail, BigQuery, Google Ad Manager, Centro Basis, Snowflake Share, and Google Sheets. The issue began after a system update intended to improve how we process incoming data files. This update unexpectedly affected several data connections, and a separate processing issue over the weekend caused additional delays before full resolution on Sunday.

We sincerely apologize for the disruption this caused to your workflows and reporting schedules.

Impact Summary

What was experienced:

  • Data from Google DV360, Simplifi, The Trade Desk, Nativo, CallRail, BigQuery, Google Ad Manager, Centro Basis, Snowflake Share, and Google Sheets connections may have failed to update or may have been significantly delayed
  • Some Google Sheets connections with duplicate column names failed to sync
  • Processing queues backed up, causing delays across affected data sources

Timeline

Thursday, January 15

A system update was deployed to improve data file processing. Shortly after, we identified that several data connections began experiencing failures due to variations in how different platforms format their data exports.

Friday, January 16

Our data processing systems encountered a networking condition that caused queued requests to stall rather than complete or time out. Our team identified the issue and applied improved timeout handling to restore normal processing flow.

Sunday, January 18 (Morning)

A separate issue caused by very large data files from DV360 created temporary processing capacity constraints. Our team implemented a new streaming approach to handle these files more efficiently.

Sunday, January 18 (Evening)

All processing backlogs cleared and system performance returned to normal. Incident fully resolved.

Root Cause

This incident had multiple contributing factors. The initial trigger was a system update designed to improve how we match incoming data columns from your connected platforms. While this change solved issues with platforms that occasionally reorder their data columns, it revealed edge cases with certain data sources—specifically, platforms that include empty column headers, empty data rows, or duplicate column names in their exports.
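
To make those edge cases concrete, here is a minimal sketch in Python of the kind of header normalization now applied during import. The function names and the sample export are illustrative, not our production code: blank headers receive placeholder names, duplicate names receive numeric suffixes, and fully empty rows are skipped.

```python
import csv
import io

def normalize_headers(raw_headers):
    """Return a cleaned header list: blanks get placeholder names and
    duplicates get numeric suffixes so every column key is unique."""
    seen = {}
    cleaned = []
    for i, name in enumerate(raw_headers):
        name = (name or "").strip() or f"column_{i + 1}"
        if name in seen:
            seen[name] += 1
            name = f"{name}_{seen[name]}"
        else:
            seen[name] = 0
        cleaned.append(name)
    return cleaned

def read_export(text):
    """Parse a CSV export, skipping rows that are entirely empty."""
    reader = csv.reader(io.StringIO(text))
    headers = normalize_headers(next(reader))
    rows = []
    for row in reader:
        if not any(cell.strip() for cell in row):
            continue  # drop fully empty data rows
        rows.append(dict(zip(headers, row)))
    return rows

# Example: an export with a blank header, a duplicate column name,
# and an empty data row
sample = "date,,clicks,clicks\n2026-01-15,x,10,12\n,,,\n"
print(read_export(sample))
```

In the sample export, the blank header becomes column_2 and the second clicks column becomes clicks_1, so every row can be keyed unambiguously instead of failing the sync.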

Additionally, on Friday our processing systems encountered a networking condition that caused data requests to stall rather than completing or timing out normally. This was resolved by implementing improved timeout handling across all our external connections.
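
As an illustration of what "improved timeout handling" means in practice, here is a minimal sketch assuming the Python requests library; the timeout values and function name are examples, not our actual configuration.

```python
import requests

# Illustrative defaults; real values depend on each provider's behavior.
CONNECT_TIMEOUT = 10   # seconds to establish the connection
READ_TIMEOUT = 300     # seconds to wait between bytes of the response

def fetch_report(url, params=None):
    """Fetch a report from an external platform with explicit timeouts,
    so a stalled connection raises instead of hanging the queue worker."""
    try:
        response = requests.get(
            url,
            params=params,
            timeout=(CONNECT_TIMEOUT, READ_TIMEOUT),
        )
        response.raise_for_status()
        return response.content
    except requests.exceptions.Timeout:
        # A stalled request now fails fast and can be retried or re-queued
        # instead of blocking the rest of the processing queue.
        raise
```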

Finally, on Sunday morning, very large data files (some exceeding 150GB) from DV360 exceeded our processing time limits and accumulated faster than our cleanup systems could handle. We resolved this by implementing a new approach that streams large files directly to cloud storage rather than processing them locally first.
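
Below is a minimal sketch of the streaming approach, assuming S3-compatible object storage via boto3; the bucket and key names are purely illustrative. The key point is that the download is piped into object storage in chunks rather than written to a worker's local disk first.

```python
import boto3
import requests

def stream_to_cloud(source_url, bucket, key):
    """Stream a large report download directly into object storage in
    chunks, so the file never has to fit on a worker's local disk."""
    s3 = boto3.client("s3")  # assumption: S3-compatible object storage
    with requests.get(source_url, stream=True, timeout=(10, 300)) as resp:
        resp.raise_for_status()
        # upload_fileobj reads from the response in parts and performs a
        # multipart upload, keeping memory and disk usage bounded.
        s3.upload_fileobj(resp.raw, bucket, key)

# Illustrative call; the bucket and key layout are not our real naming.
# stream_to_cloud(report_url, "ingest-staging", "dv360/2026-01-18/report.csv")
```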

Our Response

Our team took immediate action to minimize customer impact:

Immediate Actions (January 15-18)

  • Connection stability: Applied timeout configurations to all external data connections to prevent processing stalls
  • Large file handling: Implemented direct cloud streaming for DV360 to eliminate processing bottlenecks
  • Provider-specific fixes: Resolved data formatting issues affecting Simplifi, CallRail, and Google Sheets connections

Ongoing Improvements (January 2026)

  • Infrastructure upgrades: Extending our cloud streaming approach to additional data providers
  • Monitoring enhancements: Improving our alerting for processing queue health and disk utilization

What We're Doing to Prevent This

We've identified several improvements to prevent similar incidents and improve our response capabilities:

  • Connection timeout handling: All external data connections now have proper timeout configurations to prevent processing stalls
  • DV360 streaming: Large DV360 files now stream directly to cloud storage, eliminating local processing constraints
  • Data format handling: Improved handling of variations in data exports from Simplifi and CallRail
  • Additional monitoring and alerting: We've added monitoring and alerting for processing queue health and disk utilization so similar conditions are caught earlier (see the sketch after this list)
  • 🔄 Extended streaming architecture: Applying our cloud streaming approach to additional data providers (Status: In progress)
  • 🔄 Testing infrastructure: Enhancing our testing environment to better simulate real-world data scenarios (Status: In progress)
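
For readers curious what the added monitoring looks like, here is a minimal sketch of the kind of worker health check we mean; the thresholds, path, and function name are illustrative, not our production alerting.

```python
import shutil

# Illustrative thresholds; real alerting runs in our monitoring platform.
MAX_QUEUE_DEPTH = 10_000      # queued ingestion jobs before we alert
MAX_DISK_USED_FRACTION = 0.8  # fraction of local disk in use before we alert

def check_worker_health(queue_depth, data_path="/var/data"):
    """Return a list of alert messages for a single processing worker."""
    alerts = []
    if queue_depth > MAX_QUEUE_DEPTH:
        alerts.append(f"queue depth {queue_depth} exceeds {MAX_QUEUE_DEPTH}")
    usage = shutil.disk_usage(data_path)
    used_fraction = usage.used / usage.total
    if used_fraction > MAX_DISK_USED_FRACTION:
        alerts.append(f"disk {used_fraction:.0%} full on {data_path}")
    return alerts
```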

Our Commitment to You

We understand how critical timely and accurate reporting data is to your business. This incident reinforced the importance of building resilient systems that gracefully handle variations in data from third-party platforms. We're committed to continuing to improve our infrastructure to provide you with reliable, uninterrupted service.

We're grateful for your patience and continued trust in NinjaCat.

Questions or Concerns?

If you have any questions about this incident or continue to experience any issues with your data connections, please don't hesitate to reach out to our support team. We're here to help.

Contact Support: