How to Add Evaluation Samples by Code When Creating a Run
When creating a run, you can add evaluation samples by typing or pasting code. This method is suited to adding individual samples rather than importing them in bulk, and to users who are accustomed to testing dialogue effects in a Playground.
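To illustrate, the snippet below shows what such a pasted sample might look like, using the OpenAI-style messages array that a Playground's "view code" export typically produces. This is only a sketch; the exact structure EvalsOne accepts may differ, so treat the field names as assumptions.

```json
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Summarize the following article in one sentence."}
]
```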
How to Add Samples via API?
You can also add samples to your sample set via API for evaluation purposes.
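As a rough sketch only, the example below shows how a sample could be posted over HTTP from Python. The endpoint URL, auth header, and payload fields are hypothetical placeholders rather than EvalsOne's actual API; consult the API documentation for the real paths and parameters.

```python
# Hedged sketch: add one sample to a sample set over HTTP.
# The endpoint URL, auth header, and payload fields are hypothetical
# placeholders -- replace them with the values from the EvalsOne API docs.
import requests

API_KEY = "YOUR_API_KEY"                                         # placeholder credential
ENDPOINT = "https://api.example.com/v1/sample-sets/123/samples"  # placeholder URL

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is retrieval-augmented generation?"},
    ],
    "ideal": "An approach that grounds model answers in retrieved documents.",
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```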
How to Bulk Import Variable Values by Uploading a CSV File?
First, open the page that shows a template's variable value list. Click the "Bulk Import" button, and in the pop-up window download the CSV template that corresponds to the variable list. The first row of the CSV template contains all the variable names in that list; from the second row onward, add the variable values, with each row representing one set of values.
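For example, if a template's variable list contains the variables question and context (hypothetical names used only for illustration), the filled-in CSV might look like this, with one set of variable values per row:

```csv
question,context
"What is the capital of France?","France is a country in Western Europe whose capital is Paris."
"Who wrote Hamlet?","Hamlet is a tragedy written by William Shakespeare around 1600."
```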
What is the JSONL File Format for Importing Samples?
EvalsOne also supports adding samples in bulk by uploading a JSONL file. The JSONL format is compatible with the data file format defined in OpenAI Evals (https://github.com/openai/evals/blob/main/evals/registry/data/README.md), where each line represents one sample. Below is an example:
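The lines below follow the OpenAI Evals data convention, where each line is a JSON object with an input list of chat messages and an ideal reference answer; the exact fields your metrics use may vary.

```jsonl
{"input": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2 + 2?"}], "ideal": "4"}
{"input": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Name the largest planet in the solar system."}], "ideal": "Jupiter"}
```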
How to Add Background Information in Dialogue Templates?
When performing Retrieval-Augmented Generation (RAG), we may need to add background information (context) to the dialogue, so that the large model can reference it when generating answers and so that evaluation metrics can assess the quality of those answers.
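As an illustration only, one common way to supply context is a system message that embeds a context variable, which each evaluation sample then fills in. The {{context}} and {{question}} placeholder names and syntax below are assumptions; use whatever variable convention your templates already follow.

```json
[
  {"role": "system", "content": "Answer the user's question using only the background information provided.\n\nBackground:\n{{context}}"},
  {"role": "user", "content": "{{question}}"}
]
```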