`website/docs/configurations/overview.md` (+10 −5)

@@ -24,22 +24,27 @@ The following table lists the parameters in the configuration file:

|`logging.log_file`| The name of the log file. |`taskweaver.log`|
|`logging.log_folder`| The folder to store the log file. |`logs`|
|`plugin.base_path`| The folder to store plugins. |`${AppBaseDir}/plugins`|
-|`planner.example_base_path`| The folder to store planner examples. |`${AppBaseDir}/planner_examples`|
+|`{RoleName}.use_example`| Whether to use the example for the role. |`true`|
+|`{RoleName}.example_base_path`| The folder to store the examples for the role. |`${AppBaseDir}/examples/{RoleName}_examples`|
+|`{RoleName}.dynamic_example_sub_path`| Whether to enable dynamic example loading based on sub-path. |`false`|
+|`{RoleName}.use_experience`| Whether to use experience summarized from the previous chat history for the role. |`false`|
+|`{RoleName}.experience_dir`| The folder to store the experience for the role. |`${AppBaseDir}/experience/`|
+|`{RoleName}.dynamic_experience_sub_path`| Whether to enable dynamic experience loading based on sub-path. |`false`|
|`planner.prompt_compression`| Whether to compress the chat history for planner. |`false`|
-|`planner.use_experience`| Whether to use experience summarized from the previous chat history in planner. |`false`|
-|`code_generator.example_base_path`| The folder to store code interpreter examples. |`${AppBaseDir}/codeinterpreter_examples`|
|`code_generator.prompt_compression`| Whether to compress the chat history for code interpreter. |`false`|
|`code_generator.enable_auto_plugin_selection`| Whether to enable auto plugin selection. |`false`|
-|`code_generator.use_experience`| Whether to use experience summarized from the previous chat history in code generator. |`false`|
|`code_generator.auto_plugin_selection_topk`| The number of auto selected plugins in each round. |`3`|
|`session.max_internal_chat_round_num`| The maximum number of internal chat rounds between Planner and Code Interpreter. |`10`|
|`session.roles`| The roles included for the conversation. |["planner", "code_interpreter"]|
|`round_compressor.rounds_to_compress`| The number of rounds to compress. |`2`|
|`round_compressor.rounds_to_retain`| The number of rounds to retain. |`3`|
-|`execution_service.kernel_mode`| The mode of the code executor, could be `local` or `container`. |`local`|
+|`execution_service.kernel_mode`| The mode of the code executor, could be `local` or `container`. |`container`|

:::tip
$\{AppBaseDir\} is the project directory.
+
+$\{RoleName\} is the name of the role, such as `planner` or `code_generator`. In the current implementation, the `code_interpreter` role has all code generation functions
+in a "sub-role" named `code_generator`. So, the configuration for the code generation part should be set to `code_generator`.
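
To see how these role-scoped keys combine in practice, here is a minimal `taskweaver_config.json` sketch. It uses only parameter names from the table above; the paths, the chosen roles, and the decision to enable experience for the planner are illustrative assumptions, not values taken from this diff:

```jsonc
{
    // enable curated examples for the code generator "sub-role"
    "code_generator.use_example": true,
    "code_generator.example_base_path": "./examples/code_generator_examples", // illustrative path
    // reuse experience summarized from earlier sessions for the planner
    "planner.use_experience": true,
    "planner.experience_dir": "./experience/", // illustrative path
    // run generated code in a container rather than the local kernel
    "execution_service.kernel_mode": "container"
}
```

Any key left unset presumably falls back to the value listed in the table.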

`website/docs/llms/aoai.md` (+26 −24)

@@ -8,40 +8,42 @@ description: Using LLMs from OpenAI/AOAI

1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and get your API key.
2. Create a new deployment of the model and get the deployment name.
3. Add the following to your `taskweaver_config.json` file:

Removed:

```jsonc showLineNumbers
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com
    "llm.api_key":"YOUR_API_KEY",
    "llm.api_type":"azure",
    "llm.auth_mode":"api-key",
    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
    "llm.response_format":"json_object"
}
```

Added:

```jsonc showLineNumbers
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com
    "llm.api_key":"YOUR_API_KEY",
    "llm.api_type":"azure",
    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
    "llm.response_format":"json_object",
    "llm.azure.api_version":"2024-06-01"
}
```

:::info
For model versions `1106` or later, `llm.response_format` can be set to `json_object`.
However, for earlier models, which do not explicitly support JSON responses, `llm.response_format` should be set to `null`.
:::
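
For instance, a sketch of the same configuration pointed at an older deployment without JSON mode; only `llm.response_format` changes, and the deployment name here is an illustrative assumption:

```jsonc
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT",
    "llm.api_key":"YOUR_API_KEY",
    "llm.api_type":"azure",
    "llm.model":"gpt-35-turbo", // an illustrative pre-1106 deployment name
    "llm.response_format":null // earlier models do not support JSON mode
}
```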

4. Start TaskWeaver and chat with TaskWeaver.
   You can refer to the [Quick Start](../quickstart.md) for more details.

## Using Entra Authentication

1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and
   [assign the proper Azure RBAC Role](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control) to your account (or service principal).
2. Create a new deployment of the model and get the deployment name.
3. Add the following to your `taskweaver_config.json` file:

Removed:

```jsonc showLineNumbers
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com
    "llm.api_type":"azure_ad",
    "llm.auth_mode":"default_azure_credential",
    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
    "llm.response_format":"json_object"
}
```

Added (the diff is truncated here):

```jsonc showLineNumbers
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com
    "llm.api_type":"azure_ad",
    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
```
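
Since the added block is cut off above, here is a sketch of what a complete Entra-based configuration plausibly looks like, assuming the remaining keys mirror the API-key example; in particular, `llm.auth_mode` and the `llm.azure.api_version` value are assumptions rather than lines visible in this diff:

```jsonc
{
    "llm.api_base":"YOUR_AOAI_ENDPOINT", // https://<my-resource>.openai.azure.com
    "llm.api_type":"azure_ad",
    "llm.auth_mode":"default_azure_credential", // assumed, carried over from the removed block
    "llm.model":"gpt-4-1106-preview", // the deployment name
    "llm.response_format":"json_object",
    "llm.azure.api_version":"2024-06-01" // assumed to match the API-key example above
}
```

With `default_azure_credential`, authentication presumably goes through Azure's `DefaultAzureCredential` chain, so the identity from `az login`, a managed identity, or environment variables is used instead of an API key.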