Add and use custom tools
The Tool Use (Function calling) mechanism of large language models is a powerful feature that allows the model to call external tools or APIs during a conversation to obtain information or perform specific tasks. This mechanism greatly expands the capabilities of large language models, enabling them to not only answer questions based on existing knowledge but also to obtain and process external data in real-time, providing more accurate and timely responses.
Currently, some Playgrounds support debugging tool calls, but few platforms provide a complete end-to-end implementation based on tool calls. On the ConsoleX.ai platform, users can not only easily debug Tool Calls but also extend the functionality of AI conversations based on Tool Calls, such as image generation and triggering external workflows.
On the ConsoleX.ai platform, users can easily add and debug Tool Calls. Here are the basic steps to use Tool Calls on ConsoleX:
1. Define the JSON Schema Description of the Tool
First, in the main interface of ConsoleX.ai, turn on the "Enable Tool Calls" switch in the right-side menu and click the "Manage My Tools" link to enter the tool management interface. Click the "Create New Tool" button in the upper right corner to add the tool's JSON Schema description. The schema follows the OpenAI function calling format, including the function name, description, and parameters. Refer to: https://platform.openai.com/docs/guides/function-calling.
For example, if we want to define a tool that implements the function of querying the weather, we can describe the JSON Schema like this:
```json
{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
    },
    "required": ["location"]
  }
}
```
The defined JSON Schema can be used on ConsoleX with any model that supports tool or function calls, such as OpenAI's gpt-4o and gpt-4o-mini, Anthropic's Claude 3.5 series, and Google's Gemini 1.5 Pro. If you call tools across different models in your own application, make sure the parameters you define in the JSON Schema conform to the parameter format each model supports.
When you initiate a conversation with a model that supports tool calls, turn on the "Enable Tool Calls" switch and attach the tool to the current conversation. The model will autonomously decide whether to call the function based on your question and the tool definition you provided. If a call is needed, the model's response will include the name of the function to be called and the required parameters.
For example, if your question is:
What is the weather in London?
If the function call is triggered, the large model will return the specific parameters required for the function call, for example:
```json
get_current_weather
{
  "location": "London",
  "unit": "celsius"
}
```
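When the model returns a tool call like the one above, your application is responsible for executing it. The sketch below routes a model-issued call to a local Python implementation, assuming the `location`/`unit` schema defined in step 1; the weather data is a hard-coded stand-in for a real API.

```python
import json

def get_current_weather(location: str, unit: str = "celsius") -> dict:
    # Stand-in for a real weather lookup.
    return {"location": location, "temperature": 17, "unit": unit}

# Maps tool names from the schema to local implementations.
TOOLS = {"get_current_weather": get_current_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> dict:
    """Route a model-issued tool call to the matching local function."""
    args = json.loads(arguments_json)  # models return arguments as a JSON string
    return TOOLS[name](**args)

result = dispatch_tool_call("get_current_weather", '{"location": "London, UK"}')
```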
There is also a forced tool call switch in the interface. If turned on, ConsoleX will instruct the model to call the attached tool when sending the conversation request along with the tool list. Each large model vendor's support for forced tool calls varies; refer to each vendor's official documentation for details.
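As one concrete example, OpenAI's Chat Completions API expresses a forced call through the `tool_choice` request parameter; other vendors use different fields (Anthropic, for instance, accepts a `tool_choice` object of the form `{"type": "tool", "name": "..."}`):

```json
{
  "tool_choice": {
    "type": "function",
    "function": {"name": "get_current_weather"}
  }
}
```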
2. End-to-end Function Call
At this point, you have successfully triggered a function call. You should see a yellow information prompt box in the ConsoleX interface. Clicking it will show the function name the large model wants to call and the arguments for this call in a pop-up interface.
However, if you want to further debug the complete process of the function call, you need to prepare a callable function and publish it on the internet, then let ConsoleX know the specific address and method of the function call.
To facilitate users in achieving a complete process of function calls and secondary result generation on ConsoleX, we allow users to include an object named extraInfo in the tool function definition. This object can contain a series of properties to specify the function's call address and method.
In this way, users can easily complete the full cycle of function calling and secondary result generation on ConsoleX. We currently support calling any web-based function, as well as executing Dify workflows and Make.com scenarios. ConsoleX also supports parallel function calls, allowing multiple functions to be called simultaneously and the final answer to be generated from their combined results.
For example, the following is a configuration for querying the weather that includes the function call address:
```json
{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "lat": {
        "type": "string",
        "description": "Latitude of the location"
      },
      "lon": {
        "type": "string",
        "description": "Longitude of the location"
      },
      "units": {"type": "string", "enum": ["standard", "imperial", "metric"]}
    },
    "required": ["lat", "lon"]
  },
  "extraInfo": {
    "method": "post",
    "functionUrl": <the_url_of_the_get_current_weather_function>,
    "apiKey": <the_required_apikey_of_the_function>
  }
}
```
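The function behind `functionUrl` can be served by any web framework. The sketch below is framework-agnostic, and its names (`handle_weather_request`, `EXPECTED_KEY`) are hypothetical: it checks the `Authorization: Bearer` header that ConsoleX sends when `apiKey` is set, parses the POSTed JSON arguments, and returns a JSON-serializable result.

```python
import json

# Hypothetical shared secret; would match the apiKey value in extraInfo.
EXPECTED_KEY = "my-secret-key"

def handle_weather_request(headers: dict, body: bytes):
    """Framework-agnostic handler: returns (status_code, response_dict)."""
    # ConsoleX sends "Authorization: Bearer <apiKey>" when apiKey is configured.
    auth = headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_KEY}":
        return 401, {"error": "invalid or missing API key"}

    args = json.loads(body)  # the tool-call arguments, POSTed as JSON
    # Stand-in for a real weather lookup keyed by lat/lon.
    return 200, {
        "lat": args["lat"],
        "lon": args["lon"],
        "units": args.get("units", "metric"),
        "temp": 17.24,
        "description": "cloudy",
    }

status, payload = handle_weather_request(
    {"Authorization": "Bearer my-secret-key"},
    b'{"lat": "51.5074", "lon": "-0.1278", "units": "metric"}',
)
```

Mount this logic behind the URL you pass as `functionUrl`, making sure the route accepts the HTTP method given in `extraInfo.method`.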
If your definition of the function call address and method is correct, and the function works properly, when you ask the large model the same question again:
What is the weather in London?
The large model will not only return the name and arguments of the function to be called, but will also automatically call the external function and generate the final result based on its response, for example:
The weather in London is cloudy, with a temperature of about 17.24 degrees Celsius.
This is a complete end-to-end function call process. You can also see a green information label at the bottom of the assistant message; clicking it displays all the intermediate information of the function call, including the tool-call parameters returned by the initial model request and the actual response of the function call.
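The two model round-trips behind this flow can be simulated in a few lines. Everything here is a stand-in (`fake_model`, the hard-coded weather data); it only illustrates the message flow ConsoleX automates: the first call yields a tool request, the tool's JSON result is appended to the conversation, and a second call produces the final answer.

```python
import json

def fake_model(messages):
    """Stand-in for an LLM endpoint using OpenAI-style message roles."""
    # Round trip 1: no tool result yet, so the "model" requests a tool call.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_current_weather",
                              "arguments": '{"lat": "51.5074", "lon": "-0.1278"}'}}
    # Round trip 2: a tool result is present, so produce the final answer from it.
    data = json.loads([m for m in messages if m["role"] == "tool"][-1]["content"])
    return {"content": f"The weather in London is {data['description']}, "
                       f"with a temperature of about {data['temp']} degrees Celsius."}

messages = [{"role": "user", "content": "What is the weather in London?"}]
first = fake_model(messages)                             # model asks for a tool call
tool_result = {"temp": 17.24, "description": "cloudy"}   # what functionUrl returned
messages.append({"role": "tool", "content": json.dumps(tool_result)})
final = fake_model(messages)["content"]
```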
The following is a list of properties that can be included in the extraInfo extension object:
| Property Name | Description |
|---|---|
| invoke | Whether to initiate the function call; defaults to `true`. If `false`, the other properties below do not take effect. |
| functionType | Allowed values: `web` (the function is a web-accessible URL), `dify_workflow` (a Dify workflow), or `make_workflow` (a Make.com scenario). Support for Dify and Make.com workflows is implemented through the UniWorkflow open-source project. |
| method | Valid values are `post` or `get`, specifying the HTTP method used to call the function; defaults to `post`. |
| functionUrl | The URL at which the function is reached. If `functionType` is `web`, pass the function's call URL; if `dify_workflow`, pass the Dify workflow `base_url`; if `make_workflow`, pass the Make.com scenario call URL. |
| apiKey | If set, the header `Authorization: Bearer your-function-call-api-key` is sent with each request to the function; if not set, no `Authorization` header is included. |
| process | How the function's return value is processed. Three values are currently supported: `template` (you must also define the `template` property, which contains the text template for displaying the result), `output` (the return value is rendered directly as markdown), and `formulate` (the JSON returned by the function is passed back to the large model to generate the final result). |
| template | Required when `process` is `template`. A text string that may contain placeholder variable names; if a placeholder matches a property name in the JSON returned by the function, it is replaced with that property's value. |
For example, the following is an extraInfo definition that calls an external Stable Diffusion tool and renders the result in markdown format:
```json
{
  "invoke": true,
  "functionType": "web",
  "method": "POST",
  "functionUrl": <the_url_of_the_image_generation_function>,
  "apiKey": <the_api_key_required_for_calling_the_function>,
  "process": "template",
  "template": "The following is the image generated by stable diffusion:\n![image]({imageUrl})\n"
}
```
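The `template` mechanism described above can be approximated in a few lines: each `{name}` placeholder is replaced by the matching key from the function's JSON response. This is only a sketch of the documented behavior, not ConsoleX's actual implementation.

```python
import re

def render_template(template: str, response: dict) -> str:
    """Replace {name} placeholders with matching keys from the function's
    JSON reply; placeholders with no matching key are left untouched."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: str(response.get(m.group(1), m.group(0))),
        template,
    )

rendered = render_template(
    "The following is the image generated by stable diffusion:\n![image]({imageUrl})\n",
    {"imageUrl": "https://example.com/output.png"},  # hypothetical function response
)
```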
Note: The extraInfo object and its properties are only used for debugging function calls on ConsoleX.ai; they are not recognized by large models. When defining the function schema in your own LLM application, remove the extraInfo extension object.
Parallel Function Calling
Parallel Function Calling is an advanced tool calling mechanism that allows large language models to call multiple external tools or APIs simultaneously to obtain more comprehensive information or perform multiple tasks. This mechanism greatly improves the processing efficiency and capabilities of models, enabling them to answer complex questions or complete complex tasks more quickly and comprehensively.
On the ConsoleX.ai platform, users can easily implement parallel function calling. Here are the steps to use parallel function calling on ConsoleX:
- Define multiple tools: In the tool management interface of ConsoleX, define multiple tools that you need to call in parallel, each tool should have its unique JSON Schema description.
- Enable tool calling: In the dialog settings, ensure the "Enable Tool Calling" switch is turned on, and attach the multiple tools you defined in the current dialog.
- Initiate a dialog: Propose a question that may require multiple tools to collaborate in answering. For example, "Compare the weather in London and New York today and generate a comparison chart."
- ConsoleX sends the request to a large model that supports parallel function calling; the model automatically identifies the need to call multiple tools and returns the names of all the tool functions along with the parameters required for each call.
- In the dialog interface, you can see the results of the multiple tool calls and the comprehensive answer generated from them.
The advantage of parallel function calling is that it can simultaneously obtain data from multiple information sources or perform multiple related tasks, providing a more comprehensive and efficient answer. For example, it can simultaneously query the weather in multiple cities, compare the performance of different stocks, or use multiple AI models to generate images and compare them.
Note: You can not only call multiple different functions, but also call the same function multiple times with different parameters to achieve parallel function calling.
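The execution side of parallel calling can be sketched with a thread pool: the model is assumed to have returned two calls to the same function with different arguments, as in the note above, and `get_current_weather` is a local stand-in for a real API.

```python
import json
from concurrent.futures import ThreadPoolExecutor

def get_current_weather(location: str) -> dict:
    # Hard-coded stand-in for a real weather service.
    fake_data = {"London": 17.2, "New York": 24.1}
    return {"location": location, "temp": fake_data[location]}

# Two calls to the same function with different parameters, as a model
# supporting parallel function calling might return them.
tool_calls = [
    {"name": "get_current_weather", "arguments": '{"location": "London"}'},
    {"name": "get_current_weather", "arguments": '{"location": "New York"}'},
]

# Execute the calls concurrently and collect the results in order.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(
        lambda call: get_current_weather(**json.loads(call["arguments"])),
        tool_calls,
    ))
```

The collected `results` would then be sent back to the model in a single follow-up request to generate the comprehensive answer.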
By using parallel function calling on ConsoleX, developers can better simulate complex decision-making processes, improve the efficiency and intelligence of AI applications, and provide users with richer and more valuable interactive experiences.