Frequently Asked Questions about AI Copilot
Some answers to common issues when implementing Copilot in production
Q: The copilot answers questions accurately, but the answers are cut off or incomplete. What can I do?
A: To avoid this, increase the Output Tokens limit in the “settings” section of your copilot.
NOTE: If your copilot use case requires a lot of memory/history of previous chats to continue the conversation, this can create very large context windows, which can quickly scale up your usage and increase billing.
Q: We are running a voice-based bot, and the TTS model reads out formatting syntax like asterisks, hyphens, etc. This is disruptive for the user. What can we do?
A: You can add a simple prompt instruction such as “Don't include markdown”, or, if you want to be more specific, you can write a detailed prompt like this:
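For example, a more detailed instruction might read like the sketch below; adapt the wording to your own use case:

```
Do not use markdown or any other formatting syntax in your responses.
Avoid asterisks, hyphens, underscores, bullet points, numbered lists,
headers, and code blocks. Respond in plain, natural sentences only,
because your output will be read aloud by a text-to-speech engine.
```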
Q: The copilot is running on WhatsApp for text and Twilio for phone audio. I want certain prompt instructions to apply only to Twilio, but I don't want to maintain two bots. What can I do?
A: For example, if you want Twilio to avoid markdown while keeping formatting for your WhatsApp or web bots, you can add an “if statement” to the prompt through our Jinja templating support, like this:
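A sketch of such a condition, assuming the integration platform is exposed to the template as a variable named `client` (the actual variable name may differ in your copilot; check your settings):

```
{% if client == "twilio" %}
Respond in plain text only. Do not use markdown, asterisks, hyphens,
or any other formatting syntax, because the response will be spoken aloud.
{% else %}
You may use markdown formatting such as bold text, lists, and headers.
{% endif %}
```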
NOTE: This can be added in your basic prompt instructions; there are no “special” settings for this.
Q: I have a WhatsApp/Slack copilot, but the text output shows up as raw markup. How can I fix this?
A: By incorporating Jinja templating in the prompt, you can adjust the instructions per platform. The example below assumes WhatsApp and Slack clients running alongside your web client; you can try it in your AI Copilot:
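One possible template, again assuming the platform is available to the template as a `client` variable (verify the variable name in your copilot's documentation):

```
{% if client == "whatsapp" %}
Format responses using WhatsApp syntax: *bold*, _italic_, and plain
hyphen lists. Do not use markdown headers or code blocks.
{% elif client == "slack" %}
Format responses using Slack mrkdwn: *bold*, _italic_, and bullet lists.
{% else %}
Use standard markdown formatting.
{% endif %}
```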
Here the Jinja if statement gives the prompt a condition, so the text is output in a particular format depending on the platform the bot is integrated with.
Q: How many last messages (conversation history) does the copilot accept while answering the user's next question?
A: We retain up to 50 conversations in the conversation history.
Q: Can LLM "temperature" be increased beyond 1 if I want more creative responses?
A: We host several AI models, and the available "temperature" range varies by model. Here are the temperature ranges per model:
OpenAI: 0.0 to 2.0
0.0 to 1.0
0.0 to 2.0
0.0 to 2.0
0.0 to 1.0