Chat History

Currently, the Model Comparison Lab only saves chat history for the current session. If you refresh the page, the session's chat history is erased.

It is possible to store your chat interactions across sessions, and while I think this functionality would be useful, I’m not sure whether there is sufficient user interest to incur the additional data storage costs. If you would like for the app to save all chat interactions for your API key, please reach out and let me know.
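For context on why a refresh clears everything: session-only history typically lives in in-memory session state rather than in a database. Below is a minimal sketch of that pattern, assuming a Streamlit-style st.session_state store; this is an illustration, not the app's actual code.

```python
import streamlit as st

# Keep chat history only for the current browser session.
# st.session_state is reset when the page is reloaded, which is why
# a session-only history disappears on refresh.
if "chat_history" not in st.session_state:
    st.session_state["chat_history"] = []  # one dict per prompt/response interaction

def record_interaction(prompt: str, response: str, model: str) -> None:
    """Append one prompt/response pair to the in-memory session history."""
    st.session_state["chat_history"].append(
        {"prompt": prompt, "response": response, "model": model}
    )
```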

View Session Chat History

You can view your session’s chat history at the bottom of the page by clicking on the expander button that says “View/Export Model Comparison Session History”. The session chat history is displayed as a table with the following columns:

  • "timestamp": OpenAI’s system time
  • "prompt": Your prompt input
  • "response": The model’s generated response to your prompt. Although the entire response text may not be visible in the table, the complete generated response will be available for export.
  • "total_cost": this is the estimated total cost of the chat completion (prompt and response), based on the total input tokens (system message + prompt) and output tokens (tokens for the generated response), and OpenAI’s stated pricing. Generally speaking, it’s common for the estimated cost to be fractions of a penny, especially for the gpt-3.5-turbo-1106 model.
  • "model": indicates whether the response was generated by the gpt-3.5-turbo-1106 model or the gpt-4-1106-preview model
  • "user_rating": a value from the set [-1, 0, 1], representing your qualitative rating selection for the model’s generated response, where 👍 equals 1, 🤷‍♀️ equals 0, and 💩 equals -1. This value should auto-populate as soon as you make your rating selection and will be replaced by pressing any other rating button.
  • "user_comment": any comment you may have saved to the generated response.
  • "prompt_tokens": the number of tokens in the prompt message. This number is computed by OpenAI and returned in the API response.
  • "completion_tokens": the number of tokens in the model’s generated response message. This number is computed by OpenAI and returned in the API response.
  • "total_tokens": the total number of tokens in the completion, including the system message, the prompt message, and the model’s generated response message. This number is computed by OpenAI and returned in the API response.

Estimated Session API Costs

Additionally, this window reports your total API costs for the session. Unless you are making many queries and generating incredibly long responses, each prompt-response chat interaction should cost fractions of a penny. In fact, if you see that the estimated total cost for a given response from gpt-3.5-turbo-1106 is 0, that’s because it really is that cheap.
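For those curious how such an estimate can be computed, here is a minimal sketch that multiplies the token counts returned by the API by per-token prices. The price table below is an assumption to be replaced with OpenAI's current published rates; it is not the app's actual pricing logic.

```python
# Illustrative per-1K-token prices in USD; substitute OpenAI's current published rates.
ASSUMED_PRICING = {
    "gpt-3.5-turbo-1106": {"input": 0.0010, "output": 0.0020},
    "gpt-4-1106-preview": {"input": 0.0100, "output": 0.0300},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one chat completion from its token usage."""
    rates = ASSUMED_PRICING[model]
    return (prompt_tokens / 1000) * rates["input"] + (completion_tokens / 1000) * rates["output"]

# Example: a short gpt-3.5-turbo-1106 exchange costs a tiny fraction of a penny.
print(estimate_cost("gpt-3.5-turbo-1106", prompt_tokens=42, completion_tokens=65))
# -> 0.000172
```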

Export Session Chat History

You can also export your session’s chat history to a CSV file, including your prompts, responses, ratings, and comments, for your record-keeping or further analysis.
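If you want to reproduce this kind of export outside the app, a minimal sketch using pandas is shown below. The column names mirror the session-history table; the code itself is illustrative and not the app's implementation.

```python
import pandas as pd

# Illustrative export of a session-history list (one dict per interaction) to CSV.
chat_history = [
    {
        "timestamp": "2024-01-15T18:04:11+00:00",
        "prompt": "Summarize the plot of Hamlet in two sentences.",
        "response": "Hamlet, prince of Denmark, ...",
        "total_cost": 0.000172,
        "model": "gpt-3.5-turbo-1106",
        "user_rating": 1,
        "user_comment": "Concise and accurate.",
        "prompt_tokens": 42,
        "completion_tokens": 65,
        "total_tokens": 107,
    },
]

df = pd.DataFrame(chat_history)
df.to_csv("model_comparison_session_history.csv", index=False)
```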

  • By default, the app will only display and let you export your chats from the current session, so if you refresh the page, you will lose your chat history.

    • If you would like to retain an extended chat history on this app for your API key, please let me know.
  • Ratings and comments saved to a response should automatically appear in your session history. Verify that they have before exporting your session history. If your session history is not updating, you may need to refresh the page. Unfortunately, this will wipe your chat history for the session.

    • Please report this error if it occurs, as your session history may still be recoverable.

Providing Your Email Address

You do not have to provide an email address to export your session history, but it is strongly encouraged.

Should you have issues with your download, or if you would like to request a new download link after a previous link expires, providing your email address with the initial request makes it easier to verify your identity without sending your API key over unencrypted email.

It also makes it easier to notify you in the event of a data breach involving your chat history.