Troubleshooting

AI is a powerful tool, but it’s still evolving. To ensure your Agent is delivering accurate and reliable outputs, it’s important to test and refine them. NinjaCat’s advantage is that your data is already integrated into our platform, but because we leverage Large Language Models (LLMs) (primarily OpenAI), Agent performance also depends on the capabilities and reliability of the underlying models.

Data Accuracy

The best way to validate an Agent’s output is to compare it against the source data in NinjaCat’s Data Cloud. Here’s how:

  1. Use Data Explorer – Filter down to the relevant data in Data Cloud and compare it with the Agent’s response. Since this is the data the Agent has access to, the results should align.
  2. Avoid comparing Agent outputs to reports or templates – The Template Editor and Reports in NinjaCat may not be an accurate reference. If there’s a discrepancy between the Agent’s output and a report, check Data Cloud first.
  3. If Data Cloud results don’t match the report – This could indicate an ingestion issue, not an Agent issue. If you suspect a data ingestion problem, please submit a ticket to [email protected].
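The comparison in step 1 can also be done programmatically. Below is a minimal, purely illustrative sketch: it assumes you’ve exported the relevant rows from Data Explorer as a CSV and jotted down the figures the Agent quoted. The file contents, column names, and campaign names are all made-up placeholders, not part of NinjaCat’s API.

```python
# Hypothetical sketch: compare figures quoted by an Agent against a
# Data Cloud export. Column names and values are assumptions --
# adjust them to match your actual Data Explorer export.
import csv
from io import StringIO

# Stand-in for a CSV exported from Data Explorer.
data_cloud_export = StringIO(
    "campaign,clicks,impressions\n"
    "Spring Sale,120,4000\n"
    "Brand Awareness,75,5000\n"
)

# Clicks the Agent reported in its response (hypothetical values).
agent_reported = {"Spring Sale": 120, "Brand Awareness": 80}

# Collect any campaigns where the two sources disagree.
mismatches = []
for row in csv.DictReader(data_cloud_export):
    expected = int(row["clicks"])
    reported = agent_reported.get(row["campaign"])
    if reported != expected:
        mismatches.append((row["campaign"], expected, reported))

print(mismatches)
```

An empty list means the Agent’s numbers align with the source data; any entries point you to exactly which rows to investigate in Data Cloud.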

Agent Output & Formatting

Fine-tuning an Agent’s output takes time. After the initial setup, test the Agent by having a few conversations and reviewing its outputs. Ask yourself:

  • Is the Agent presenting a table with all the columns I requested?
  • Does the analysis include all expected details and recommendations?
  • Is it pulling the correct data fields (e.g., Campaign Name, CTR calculations)?
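For checks like the CTR one in the last bullet, you can recompute the metric yourself from the source clicks and impressions and compare it to the Agent’s figure. The numbers below are made up for illustration; CTR here is the standard clicks ÷ impressions ratio.

```python
# Hypothetical spot-check: recompute CTR from source data and compare
# it to the figure the Agent quoted. All numbers are placeholders.
clicks = 120
impressions = 4000
agent_ctr = 0.03  # CTR the Agent reported (hypothetical)

expected_ctr = clicks / impressions  # standard CTR formula
# Allow a tiny tolerance for floating-point rounding.
assert abs(expected_ctr - agent_ctr) < 1e-9
print(f"CTR checks out: {expected_ctr:.2%}")
```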

If something isn’t quite right, it’s completely normal to go back into the Agent Builder and:

  • Tweak the prompt → Test the response → Adjust again

This iterative process helps refine the Agent until you’re confident in its performance. With each refinement, the Agent gets better at delivering the insights you need.