Chatting With Agents
AI Agents in NinjaCat provide a powerful way to interact with your data and automate insights. However, understanding how conversations work—including context retention, tool usage, and troubleshooting outputs—is key to maximizing their effectiveness.
Conversation Space & Privacy
Each user has their own unique conversations with an Agent. You can:
- Access past conversations in Conversation History or start a new one.
- Rest assured that your chats are private—no other user can see your interactions, even with a Public Agent.
However, all users can see Triggered Outputs from an Agent, even if they didn’t originally set up the Trigger.
Having a Conversation with an Agent
Because NinjaCat’s Agents are powered by OpenAI’s Large Language Models (LLMs), they have a broad general knowledge base. For example, if you ask an Agent, “What is Denver, CO known for?” it might tell you it’s the Mile High City, famous for the Rocky Mountains, craft beer, and Red Rocks Amphitheater. However, that’s not why you’re using an Agent.
NinjaCat Agents are designed for specific marketing and advertising-related tasks—their effectiveness depends on:
- The instructions (prompt) given in the Agent Builder.
- The datasets and tools assigned to them.
How to Start a Conversation
- If the Agent is configured to do its task for all accounts or campaigns found in a dataset, you might only need to say: “Go do your thing!”
- If the prompt requires details, you may need to provide specifics: “Analyze performance for Account X in the last 30 days.”
Why Conversations Vary Between Agents
Each Agent behaves differently based on:
- How it was prompted—some Agents ask clarifying questions, while others execute immediately.
- The datasets and tools assigned to it.
- The way each user interacts with the Agent.
How Agents Use Tools During Conversations
During a conversation, the Agent will display the steps it is taking, including any tools being used. These tools are either:
- Locked & pre-enabled (required for basic functionality).
- Optional tools (enabled based on the Agent’s needs). (See more in Adding Tools.)
Viewing Tool Activity:
When a tool is in use, a blue expandable text box appears—clicking it will reveal what the Agent is processing in real time, including any code or queries it’s writing.
- String Matcher Tool: Finds the correct Account Match or Campaign Match (or other element) that the user is asking about. If it finds an exact match of that Account / Advertiser / Campaign name, it performs the task for that exact match. If it finds only partial matches, it may ask the user clarifying questions before proceeding.
- Query Executor Tool: Lets the Agent write a SQL query to pull the necessary data from the assigned dataset(s). A query may select up to 100,000 rows. This doesn’t mean the Agent can’t work with datasets larger than 100,000 rows; the result set returned by the query just must stay under that limit. Many of the datasets assigned to an Agent are very large and could pose performance risks without this safeguard.
- The rows returned by the Query Executor Tool are saved to a CSV. The AI can do a few things with the CSV: share it with the user, send it to Code Interpreter for further processing, or view its contents 50 rows at a time (see the “Context Windows” section below for more on this limitation).
- If the CSV is passed to Code Interpreter for further processing, the Agent writes code to further analyze the queried results and produce the final output for the user.
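The String Matcher behavior can be sketched roughly as follows. This is an illustrative approximation, not NinjaCat’s actual implementation; the account names and the matching logic are assumptions:

```python
# Illustrative sketch only -- not NinjaCat's actual String Matcher logic.
# An exact match wins outright; partial matches would trigger a clarifying
# question back to the user before the Agent proceeds.

def match_name(query: str, candidates: list[str]) -> dict:
    """Return an exact match if one exists, otherwise all partial matches."""
    q = query.strip().lower()
    exact = next((c for c in candidates if c.lower() == q), None)
    if exact is not None:
        return {"exact": exact, "partial": []}
    partial = [c for c in candidates if q in c.lower()]
    return {"exact": None, "partial": partial}

accounts = ["Acme Co", "Acme Co - Brand", "Globex Corp"]  # hypothetical names
print(match_name("acme co", accounts))  # exact match -> proceed with the task
print(match_name("acme", accounts))     # partial matches -> ask to clarify
```
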
⚠️ For complex use cases, you can include SQL directly in the Agent’s prompt for more consistent results.
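If you do embed SQL in the prompt, it helps to make the query explicitly bounded. Here is a hedged sketch of that kind of date-limited, row-capped query template; the table and column names (`campaign_performance`, `account_name`, and so on) are hypothetical, not NinjaCat’s actual schema:

```python
# Sketch of a bounded query template you might embed in an Agent's prompt.
# The schema (campaign_performance, account_name, ...) is hypothetical.

ROW_CAP = 100_000  # the Query Executor's result-set limit

def build_performance_query(account: str, days: int = 14) -> str:
    """Build a date-bounded, row-capped SQL query string."""
    safe_account = account.replace("'", "''")  # naive quoting, sketch only
    return (
        "SELECT campaign_name, date, impressions, clicks, spend\n"
        "FROM campaign_performance\n"
        f"WHERE account_name = '{safe_account}'\n"
        f"  AND date >= CURRENT_DATE - INTERVAL '{days}' DAY\n"
        f"LIMIT {ROW_CAP}"
    )

print(build_performance_query("Acme Co"))
```

Narrowing the date window (14 days instead of 30) is the same kind of adjustment suggested later when the 100,000-row limit is hit.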
Understanding Context Windows & Conversation Limits
Each conversation is a clean slate. Agents are not aware of context in other conversations.
Within a single conversation, context is retained—even if you return to it after a week.
What is a Context Window?
A Context Window is the amount of text the AI can process within one conversation, measured in tokens. This includes:
- The prompt
- All messages exchanged in the conversation
- Any tool-generated messages
Why does this matter?
Token limits are set by the AI model that powers the Agent, and each Large Language Model (LLM) has a different maximum token limit.
- Complex queries or long conversations may eventually exceed the limit. Every conversation needs to fit within the context window; otherwise, the AI will stop working.
- Choosing the right model for your Agent can impact how much context it can retain over time.
- That said, context windows are fairly large, and it’s good to know that restrictions are in place to help avoid exceeding the limit and ending up in a broken conversation.
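As a rough illustration of how a conversation consumes a context window, here is a sketch using the common ~4-characters-per-token heuristic. Both that heuristic and the 128,000-token limit are illustrative assumptions, not NinjaCat’s actual accounting:

```python
# Illustrative only: a crude token estimate (~4 chars per token) measured
# against an assumed 128,000-token context window.

def estimate_tokens(text: str) -> int:
    """Very rough token count; real tokenizers vary by model."""
    return max(1, len(text) // 4)

def fits_in_context(messages: list[str], limit: int = 128_000) -> bool:
    """True while the prompt, messages, and tool output still fit."""
    return sum(estimate_tokens(m) for m in messages) <= limit

short_chat = ["Analyze performance for Account X in the last 30 days."]
print(fits_in_context(short_chat))  # True: a short conversation fits easily
```
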
Managing Large Data Sets
To avoid performance issues and exceeding the Context Window, a few limitations are in place to control and manage the amount of data that passes through to an Agent.
- The Query Executor Tool retrieves up to 100,000 rows at a time. This doesn’t mean it can’t work with datasets larger than 100,000 rows; the result set returned by the query just must stay under that limit. Many of the datasets assigned to an Agent are very large and could pose performance risks without this safeguard.
- If the Query Executor Tool displays “100,000 Results” in the text of the expandable link, you may be hitting this limit, and the result set may be incomplete or inaccurate. Adjust the prompt to pull back fewer results (for example, pull results from the last 14 days instead of the last 30 days).
- In more complex cases, it's been beneficial to include the SQL directly in the prompt of the Agent so that it produces more consistent outputs.
- The results from the Query Executor Tool are saved to a CSV file. To stay within Context Window limits, the saved CSV is not loaded into the conversation. Instead, the AI can view the data in 50-row chunks, share the results with the user, or send the file to Code Interpreter for further processing.
These safeguards prevent system overloads and help maintain high-speed AI responses.
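The 50-row viewing behavior can be sketched like this: rather than loading the whole results CSV into the conversation, read it a fixed-size slice at a time. A minimal sketch, with the chunk size mirroring the limit described above:

```python
import csv
from itertools import islice

CHUNK_SIZE = 50  # mirrors the 50-row viewing limit described above

def read_in_chunks(path: str, chunk_size: int = CHUNK_SIZE):
    """Yield successive lists of up to chunk_size rows from a results CSV."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        while chunk := list(islice(reader, chunk_size)):
            yield chunk
```

A 120-row file would come back as chunks of 50, 50, and 20 rows, so no single step has to hold the entire result set.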