Chat history is silently truncated when it exceeds max_token_limit, because we use the LangChain class ConversationTokenBufferMemory for the chat_history object. The default max_token_limit in agent.yml is currently 4k, which is far too small for current LLMs. Proposed changes:
- Increase the limit to, say, 20k, and change the default llm to gpt-4o
- Add a logger if the limit is exceeded (maybe LangChain support this now?)
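A minimal, self-contained sketch of the desired logging behavior (this is an illustration, not the actual LangChain class; token counting is simplified to word counts, and the class and method names are hypothetical):

```python
import logging

logger = logging.getLogger("chat_memory")

class LoggingTokenBufferMemory:
    """Illustrative token-buffer memory that warns instead of silently truncating."""

    def __init__(self, max_token_limit: int = 20000):
        self.max_token_limit = max_token_limit
        self.messages: list[str] = []

    def _token_count(self, text: str) -> int:
        # Crude stand-in for a real tokenizer: one token per whitespace-separated word.
        return len(text.split())

    def save_context(self, message: str) -> None:
        self.messages.append(message)
        total = sum(self._token_count(m) for m in self.messages)
        dropped = 0
        # Prune oldest messages until back under the limit, mirroring the
        # FIFO pruning that ConversationTokenBufferMemory performs silently.
        while total > self.max_token_limit and len(self.messages) > 1:
            total -= self._token_count(self.messages.pop(0))
            dropped += 1
        if dropped:
            logger.warning(
                "chat_history exceeded max_token_limit=%d; dropped %d oldest message(s)",
                self.max_token_limit,
                dropped,
            )
```

With a tiny limit, adding a second message pushes the buffer over the threshold, so the oldest message is dropped and a warning is emitted rather than the history shrinking silently.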