For the first time, we're offering Google models as options for Agents: Gemini 3 Pro and Gemini 3 Flash.

Why these models are a big deal:

  • Massive 1M Context Window: While our current Anthropic models are great for reasoning, Gemini's 1-million-token context window is a game-changer. If you consistently run into the context window limit, try switching to a Gemini model.
  • Low Cost Profile: These models come at a significantly lower price point, especially for high-volume tasks. Gemini 3 Flash is particularly optimized for scale at just $0.50 per 1M tokens, making it our most cost-effective option for speed and scale.
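To put the Flash price point in perspective, here's a quick back-of-the-envelope cost calculation. This is just a sketch using the flat $0.50-per-1M-token figure quoted above; actual billing may distinguish input and output tokens.

```python
def token_cost_usd(tokens: int, rate_per_million: float = 0.50) -> float:
    """Estimate cost for a token count at a flat per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

# A full 1M-token context at the Gemini 3 Flash rate:
print(f"${token_cost_usd(1_000_000):.2f}")  # → $0.50

# A high-volume workload of 200 runs at 50k tokens each:
print(f"${token_cost_usd(200 * 50_000):.2f}")  # → $5.00
```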

Our Model Selection documentation has been updated with this information here: https://docs.ninjacat.io/update/docs/model-selection

You can now programmatically monitor dataset ingestion health using the NinjaCat Management API. This feature provides visibility into when dataset ingestions fail, return no data, or are delayed, helping teams identify and respond to issues before they impact reporting or users.

With the new Ingestion History / Batches endpoints, you can:

  • Query recent ingestion runs across datasets
  • Identify failed, incomplete, or in-progress ingestions
  • Review batch-level outcomes and totals
  • Drill into execution-level error details, including credential and permission issues

This capability is designed to support a variety of workflows, including scheduled health checks, custom monitoring scripts, and automated alerts in the tooling of your choice. Teams can choose how frequently to check ingestion status and which datasets are most important to monitor.

To get started, provide your Management API credentials and call the ingestion history endpoints with supported filters (for example, checking the last 24 hours for errors or no-data responses). When issues are identified, remediation—such as fixing credentials, updating permissions, or running backfills—is handled directly in the NinjaCat UI.
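As a sketch of what a scheduled health check might look like, the snippet below flags recent batches that failed or returned no data. The field names (`status`, `started_at`) and status values are illustrative assumptions, not the actual schema; in practice you would fetch the batch list from the ingestion history endpoints with your Management API credentials (see the documentation linked below) and apply filtering like this to the response.

```python
from datetime import datetime, timezone

# Hypothetical status values — consult the Management API docs for the real schema.
PROBLEM_STATUSES = {"failed", "no_data"}

def find_problem_batches(batches, since):
    """Return ingestion batches started after `since` that failed or returned no data."""
    flagged = []
    for batch in batches:
        started = datetime.fromisoformat(batch["started_at"])
        if started >= since and batch["status"] in PROBLEM_STATUSES:
            flagged.append(batch)
    return flagged

# Example payload shaped like what an ingestion-history endpoint might return.
sample = [
    {"dataset": "ads_spend", "status": "failed",    "started_at": "2025-01-15T02:00:00+00:00"},
    {"dataset": "web_leads", "status": "completed", "started_at": "2025-01-15T03:00:00+00:00"},
    {"dataset": "crm_sync",  "status": "no_data",   "started_at": "2025-01-10T02:00:00+00:00"},
]

# Check roughly the last 24 hours for errors or no-data responses.
since = datetime(2025, 1, 14, tzinfo=timezone.utc)
for batch in find_problem_batches(sample, since):
    print(f"{batch['dataset']}: {batch['status']}")
```

A check like this could run on a schedule and feed an alerting tool; remediation itself still happens in the NinjaCat UI, as noted above.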

Documentation can be found here: https://documenter.getpostman.com/view/4308783/SVSGMpwr#91ec4a09-c561-4287-bdf2-c14988c9412b

At the chat level, you can now view how much time was spent, how many tokens were used, and the cost per individual tool call and assistant message. This is the first step toward bringing usage visibility to the UI so that our users have better insight into how much their conversations with Agents are costing them.

This information can be seen in standard conversations with Agents, as well as when working with Agent Builder Bob and in the Agent Builder Chat Preview.

We have updated our Agent-Level Permissions to allow more flexibility when granting users chat and edit access to Agents.

The Access settings have been moved to a new "Settings" tab in the Agent Builder, where you can set Organization Access (Can Edit, Can Chat, or Private) and optionally give certain users a different level of access. View all the access rules and how to manage permissions here: https://docs.ninjacat.io/docs/managing-access

This gives you the flexibility to keep a perfected Agent chat-only for everyone in the org, except for the peers who should also be able to manage that Agent's settings (or vice versa).

Our Criteo integration has been upgraded to the latest API version, 2025.10, to ensure continued functionality and data availability for our customers.

We've rebuilt our ad preview system to fix the persistent issues with broken and expired Facebook ad images in your reports and dashboards.

What's New:

  • Automatic Nightly Refresh: Ad preview images update automatically, keeping them current without manual intervention
  • Force Refresh Option: When needed, you can manually refresh images:
    • Template Editor & Reports: Right-click the widget and select "Refresh Ad Preview Images"
    • Dashboard: Hover over any preview and click the 🔄 icon
  • Improved Reliability: Ad previews now render consistently in template editor, dashboards, and PDF reports
  • Expanded Support: Includes Facebook Ads, TikTok Ads, and Facebook Insights posts

We've made several enhancements to our Data Apps feature, improving load times and security:

  1. We have consolidated the way we preview and deploy Data Apps, resulting in faster load times when saving edits to your Data App. Saves that previously could take over a minute now complete in just a couple of seconds.

  2. Data Apps are now more secure: you must be a NinjaCat user to open a Data App that has not been made public (see the next point).

  3. If you want to share a Data App with a non-NinjaCat user, you can still do so, but through a more formal and secure method of sharing. The Configure tab of the Data App Builder now includes an "Enable Public URL" option. Once the Data App is saved, this generates a public URL that can be shared with others, and anyone with the link can access your Data App. You can also disable this toggle later, after which the link shows an 'app not found' message for any user who attempts to open it.

Our web-scraping tool, which can optionally be enabled for Agents to scrape external websites (powered by Firecrawl), has been updated. When an Agent uses it, the tool can now return all the formats that Firecrawl supports, if necessary (such as raw HTML and image assets).

We also now save any screenshots of the website pulled back by the tool, so the Agent can visually analyze them. This lets users work with the Agent more effectively on requests such as "tell me how to optimize our customer's website."

OpenAI has upgraded their GPT‑5 series with the release of GPT-5.2 Instant and GPT-5.2 Thinking. Both are now available as model options to choose for your Agents in the Agent Builder. To learn more, see OpenAI's announcement: https://openai.com/index/introducing-gpt-5-2/

Important note: the GPT-5 series (including 5, 5.1, and 5.2) all have larger context windows than older OpenAI models as well as our Anthropic models. The GPT-5 series supports conversations up to 400k tokens, though this can come with a slightly higher price in some cases. For a full view of our supported models, their costs, and their context window support, see our ReadMe doc: https://docs.ninjacat.io/docs/model-selection
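If you manage long-running conversations yourself, one common pattern for staying under a context window is trimming the oldest messages first. The sketch below illustrates the idea against the 400k-token figure above; the 4-characters-per-token heuristic is a rough assumption, and accurate counts require the model's actual tokenizer.

```python
def rough_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token (a common heuristic)."""
    return max(1, len(text) // 4)

def trim_to_window(messages, max_tokens=400_000):
    """Drop oldest messages until the estimated total fits the context window."""
    kept = list(messages)
    while kept and sum(rough_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # drop the oldest message first
    return kept

# Tiny demonstration with an artificially small 60-token budget:
history = ["old message " * 10] * 5 + ["latest question"]
trimmed = trim_to_window(history, max_tokens=60)
print(len(trimmed))  # → 2 (one old message plus the latest question survive)
```

Real summarization or retrieval strategies are usually better than blind truncation, but the budget check itself looks much like this.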