If a dataset referenced in a SQL Transform gets deleted, the UI now surfaces a clear, helpful error message explaining what happened — instead of throwing a generic "unknown error". You'll now know exactly what's wrong and can take action to fix the transform config.
You now have more visibility and control over how your data syncs, at both the connector and dataset level. Whether you're managing sync schedules, adjusting how far back data is pulled, or fine-tuning chunking behavior, these controls give you the flexibility to configure data ingestion the way your accounts need it.
Here's what's new:
- Connector-level configuration — From the main connector page, you can now set the chunking size ("Max Days per Request (Chunk Size)") if needed. Guardrails are in place to prevent chunk sizes that could cause issues (minimum 5 days — contact support if you need something outside that range). See the sketch after this list for how the chunk size shapes each request.
- Dataset-level overrides — Need something different for a specific dataset? You can now override the refresh lookback window and chunking size independently. The same guardrails that apply at the connector level apply here as well.
- More flexibility in scheduling your dataset — You now have three distinct scheduling options for your dataset. You can run your dataset on the connector's existing schedule, or select "Add Schedule (Optional)" to configure an additional time. When adding a schedule, you'll have the choice to run it in addition to your connector schedule or to override it entirely — giving you full control over when your data syncs.
- Targeted manual backfills — In addition to the existing manual sync functionality, you can now kick off a backfill for a specific date range targeting one or multiple accounts, giving you more surgical control when you need to re-pull data. You'll find this option under "Manual Sync" in the hamburger menu of your dataset on the connector page.
- Improved setup workflow for custom mapping providers (Google Sheets, SQL, Email, Snowflake Share) — We've reordered the setup flow so that Date & Account Matching now comes before the Configure Columns step. This small change makes setup more intuitive and helps reduce configuration errors.
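To make the chunk-size setting concrete, here's a minimal sketch of how a sync engine might split a lookback window into per-request chunks, with the 5-day minimum enforced as a guardrail. This is illustrative only: the function name, the 7-day example value, and the enforcement details are assumptions, not the product's actual implementation.

```python
from datetime import date, timedelta

MIN_CHUNK_DAYS = 5  # guardrail: chunk sizes below this are rejected

def split_into_chunks(start: date, end: date, max_days_per_request: int):
    """Split an inclusive date range into (chunk_start, chunk_end) pairs,
    each covering at most max_days_per_request days."""
    if max_days_per_request < MIN_CHUNK_DAYS:
        raise ValueError(f"Chunk size must be at least {MIN_CHUNK_DAYS} days")
    chunks = []
    cursor = start
    while cursor <= end:
        chunk_end = min(cursor + timedelta(days=max_days_per_request - 1), end)
        chunks.append((cursor, chunk_end))
        cursor = chunk_end + timedelta(days=1)
    return chunks

# A 30-day lookback window with a 7-day chunk size -> four 7-day requests
# plus a final 2-day request.
for chunk in split_into_chunks(date(2025, 1, 1), date(2025, 1, 30), 7):
    print(chunk)
```

A dataset-level override would work the same way, just substituting its own chunk size and lookback window for the connector's values.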
We now have a powerful new internal diagnostic tool that gives visibility into up to 2 years of data health per advertiser, displayed as a color-coded visual grid. Instead of guessing what's missing or triggering a broad re-sync, you can now:
- See at a glance exactly which advertiser, network, and date has a gap
- Hover any cell to get the specific details
- Trigger a targeted backfill for just that gap, rather than reprocessing everything
This is a starting point for diagnosis, not a final verdict — a colored cell means "investigate," not automatically "broken." Gaps can be caused by auth issues, quota limits, or simply no data from the source that day. But now we have the tools to figure that out quickly and act precisely.
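As a rough mental model of what sits behind such a grid (not the tool's actual implementation), gap detection for one advertiser/network could look like the sketch below; the data shape and function name are assumptions for illustration.

```python
from datetime import date, timedelta

def find_gaps(daily_row_counts: dict, start: date, end: date):
    """Return the dates in [start, end] with no ingested rows for one
    advertiser/network — each one a candidate cell to investigate.

    daily_row_counts maps date -> number of rows ingested that day
    (an illustrative stand-in for whatever the real tool queries).
    """
    gaps = []
    cursor = start
    while cursor <= end:
        if daily_row_counts.get(cursor, 0) == 0:
            gaps.append(cursor)  # possible target for a backfill of just this date
        cursor += timedelta(days=1)
    return gaps

counts = {date(2025, 3, 1): 120, date(2025, 3, 3): 98}  # March 2 is missing
print(find_gaps(counts, date(2025, 3, 1), date(2025, 3, 3)))  # [date(2025, 3, 2)]
```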
You can now re-run exports across a date range in one shot; no more clicking through each day individually. A new Batched Run option lets you replay an export as if it were scheduled to run on each day (or month) within a range you define. Each execution simulates a scheduled run using the export's existing configuration, incrementing automatically from start to end.
There are two things you configure to make this work: an interval and a date range — and understanding the difference between them is the key to using this feature well:
Interval = how big is each slice of data? Choose from:
- Previous Day = each chunk covers "yesterday" relative to that step in the run
- 1 Day = each chunk covers exactly 1 day of data
- Previous Month = each chunk covers the previous full calendar month
Date Range = what overall period do you want to cover? Set a Range Start and Range End using the date pickers.
Put them together and the system does the rest. For example: pick 1 Day as the interval + Jan 1 – Jan 30 as the range → 30 individual export runs, one per day, kicked off automatically.
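To make the interval-versus-range distinction concrete, here's a minimal sketch (purely illustrative, not the product's code) of how the simulated run dates could be enumerated. The function name and interval labels are assumptions taken from the options above, and the month step assumes the range starts on the first of a month.

```python
from datetime import date, timedelta

def batched_run_dates(range_start: date, range_end: date, interval: str):
    """Yield the as-if-scheduled date for each simulated execution in the range."""
    cursor = range_start
    while cursor <= range_end:
        yield cursor
        if interval in ("1 Day", "Previous Day"):
            cursor += timedelta(days=1)           # step forward one day
        elif interval == "Previous Month":
            # step forward one calendar month (assumes the range starts on day 1)
            year = cursor.year + (1 if cursor.month == 12 else 0)
            month = cursor.month % 12 + 1
            cursor = cursor.replace(year=year, month=month)
        else:
            raise ValueError(f"Unhandled interval: {interval}")

# 1 Day interval over Jan 1 - Jan 30 -> 30 simulated runs, one per day
runs = list(batched_run_dates(date(2025, 1, 1), date(2025, 1, 30), "1 Day"))
print(len(runs))  # 30
```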
You can now upload any file format to an Agent conversation or Agent Knowledge. Previously, only a select list of formats was allowed; we've removed the restriction on supported file types entirely.
One caveat: image file uploads are still only supported in agent conversations, not in an Agent's Knowledge.
You can now select and manipulate multiple widgets at once in the Template Editor. Click and drag to marquee-select widgets, or hold Shift to build a selection one widget at a time. Once selected, you can move, resize, copy, duplicate, delete, and reorder your widgets as a group — saving significant time when working with complex template layouts. Undo works across the entire group action, so you can confidently experiment without worrying about tedious rollbacks. Constrained movement and proportional resizing are available by holding Shift while dragging or resizing.
We've shipped Tool Masking, a behind-the-scenes improvement that helps your Agent conversations run longer without hitting context limits as often.
When an Agent uses tools to pull in data on your behalf (for example, results from a web scraping tool or data returned by the Query Executor tool), those results can be very large and quickly eat up the conversation's limited context window. With Tool Masking, if any individual tool result exceeds a certain size, or if the conversation has used a certain threshold of the available space, those tool results are automatically compressed in the conversation. They take up far less space while remaining fully accessible if the Agent needs to reference them later, so no context is lost. This means fewer interruptions and more headroom in complex, multi-step conversations.
See the full documentation here: https://docs.ninjacat.io/docs/chatting-with-agents?isFramePreview=true#understanding-context-windows--conversation-limits
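The documentation linked above covers the user-facing behavior. As a rough mental model only, the masking decision looks something like the sketch below; the thresholds, message shapes, and names are made up for illustration and are not the platform's internals.

```python
# Illustrative thresholds; the real limits are internal to the platform.
MAX_RESULT_TOKENS = 2_000        # mask any single tool result larger than this
CONTEXT_USAGE_TRIGGER = 0.8      # or start masking once 80% of the context is used

def apply_tool_masking(messages, context_limit_tokens: int):
    """Replace oversized tool results with a short placeholder.

    Each message is a dict like {"role": "tool", "tokens": int}.
    A masked result stays retrievable if the Agent references it later.
    """
    used = sum(m["tokens"] for m in messages)
    for m in messages:
        if m.get("role") != "tool" or m.get("masked"):
            continue
        oversized = m["tokens"] > MAX_RESULT_TOKENS
        crowded = used > CONTEXT_USAGE_TRIGGER * context_limit_tokens
        if oversized or crowded:
            used -= m["tokens"] - 20   # assume the placeholder costs ~20 tokens
            m["masked"] = True         # full result remains available on demand
            m["tokens"] = 20
    return messages

# Example: a 5,000-token scrape result gets masked; small results are left alone.
convo = [
    {"role": "user", "tokens": 40},
    {"role": "tool", "tokens": 5_000},
    {"role": "tool", "tokens": 300},
]
print(apply_tool_masking(convo, context_limit_tokens=10_000))
```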
Agent Builder Bob can now create Knowledge Files and assign them to your agent automatically. So while you're building an Agent with Bob, you don't have to decide on your own what belongs in the foundational instructions versus what should be written up as a Knowledge File and assigned.
Here's how Bob thinks about it: your agent's instructions should cover the core stuff it needs every time — its role, tone, and key rules. Everything else — reference docs, FAQs, process guides — could live in Knowledge Files, so the agent can look things up when they're relevant without being overloaded upfront.
The result is a better-organized agent with leaner, better-managed context, all handled by Bob.
Bob can create, read, update, and delete Knowledge Files that he has generated; he can't touch files a user has uploaded themselves. That may come in a future iteration.
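For illustration only, here's one way to picture the split Bob aims for. The structure and field names below are hypothetical, not an actual agent schema.

```python
# Hypothetical agent definition: always-needed guidance lives in instructions,
# look-up material lives in Knowledge Files.
agent = {
    "instructions": (
        "You are a paid-media reporting assistant for Acme Agency. "
        "Be concise, cite the data source for every number, and never "
        "invent metrics that are not in the connected datasets."
    ),
    "knowledge_files": [
        {"name": "metric-definitions.md", "created_by": "bob"},    # Bob can manage these
        {"name": "monthly-report-process.md", "created_by": "bob"},
        {"name": "client-style-guide.pdf", "created_by": "user"},  # user uploads stay untouched
    ],
}
```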