Commit 40f838d

liqul and Jack-Q authored
fix doc mismatching (#422)
Co-authored-by: Jack-Q <[email protected]>
1 parent 5494e49 commit 40f838d

File tree

5 files changed: +57 −45 lines


auto_eval/ds1000_scripts/README.md (+2 −2)

@@ -15,8 +15,8 @@ This directory contains the scripts used to evaluate the performance of the [DS-
    - metadata.json: the metadata of the test case.
    - prompt.txt: the composed prompt of the test case.
    - reference_code.py: the ground truth code.
-4. Copy the example files from `ds1000_scritps/planner_examples` to `project/planner_examples` directory;
-   and the example files from `ds1000_scritps/codeinterpreter_examples` to `project/codeinterpreter_examples` directory.
+4. Copy the example files from `ds1000_scritps/planner_examples` to `project/examples/planner_examples` directory;
+   and the example files from `ds1000_scritps/codeinterpreter_examples` to `project/examples/code_generator_examples` directory.
    Disable (or discard) the original example files from the project directory. See the notes below for understanding why.
 5. Once the test cases are generated, follow the instructions in `auto_eval/README.md` to evaluate the performance of the benchmark.

taskweaver/llm/openai.py (−5)

@@ -51,11 +51,6 @@ def _configure(self) -> None:
 
         # openai specific config
         self.api_version = self._get_str("api_version", "2024-06-01")
-        self.api_auth_type = self._get_enum(
-            "api_auth_type",
-            ["openai", "azure", "azure_ad"],
-            "openai",
-        )
         is_azure_ad_login = self.api_type == "azure_ad"
         self.aad_auth_mode = self._get_enum(
             "aad_auth_mode",

website/blog/experience.md (+19 −9)

@@ -98,15 +98,25 @@ def reply(self, memory: Memory, **kwargs: ...) -> Post:
 In a role that needs to set the experience subdirectory, we can get the experience subdirectory from the shared memory.
 
 ```python
-exp_sub_paths = memory.get_shared_memory_entries(
-    entry_type="experience_sub_path",
-)
-
-if exp_sub_paths:
-    exp_sub_path = exp_sub_paths[0].content
-else:
-    exp_sub_path = ""
-selected_experiences = self.role_load_experience(query=query, sub_path=exp_sub_path)
+def reply(
+    self,
+    memory: Memory,
+    post_proxy: Optional[PostEventProxy] = None,
+    prompt_log_path: Optional[str] = None,
+    **kwargs: ...,
+) -> Post:
+    ...
+    rounds = memory.get_role_rounds(
+        role=self.alias,
+        include_failure_rounds=False,
+    )
+
+    # obtain the query from the last round
+    query = rounds[-1].post_list[-1].message
+
+    # retrieve the experience based on the query
+    self.role_load_experience(query=query, memory=memory)
+    ...
 ```
 
 :::tip
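The new snippet above pulls the query from the last post of the last round before loading experience. That indexing pattern can be illustrated with stand-in `Round`/`Post` containers (hypothetical mocks for this sketch, not TaskWeaver's real memory classes):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    message: str


@dataclass
class Round:
    post_list: List[Post] = field(default_factory=list)


# rounds as something like memory.get_role_rounds(...) would return,
# ordered oldest to newest
rounds = [
    Round([Post("summarize the sales data")]),
    Round([Post("plot the monthly totals")]),
]

# the query is the message of the last post in the last round
query = rounds[-1].post_list[-1].message
print(query)  # plot the monthly totals
```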

website/docs/configurations/overview.md (+10 −5)

@@ -24,22 +24,27 @@ The following table lists the parameters in the configuration file:
 | `logging.log_file` | The name of the log file. | `taskweaver.log` |
 | `logging.log_folder` | The folder to store the log file. | `logs` |
 | `plugin.base_path` | The folder to store plugins. | `${AppBaseDir}/plugins` |
-| `planner.example_base_path` | The folder to store planner examples. | `${AppBaseDir}/planner_examples` |
+| `{RoleName}.use_example` | Whether to use the example for the role. | `true` |
+| `{RoleName}.example_base_path` | The folder to store the examples for the role. | `${AppBaseDir}/examples/{RoleName}_examples` |
+| `{RoleName}.dynamic_example_sub_path` | Whether to enable dynamic example loading based on sub-path. | `false` |
+| `{RoleName}.use_experience` | Whether to use experience summarized from the previous chat history for the role. | `false` |
+| `{RoleName}.experience_dir` | The folder to store the experience for the role. | `${AppBaseDir}/experience/` |
+| `{RoleName}.dynamic_experience_sub_path` | Whether to enable dynamic experience loading based on sub-path. | `false` |
 | `planner.prompt_compression` | Whether to compress the chat history for planner. | `false` |
-| `planner.use_experience` | Whether to use experience summarized from the previous chat history in planner. | `false` |
-| `code_generator.example_base_path` | The folder to store code interpreter examples. | `${AppBaseDir}/codeinterpreter_examples` |
 | `code_generator.prompt_compression` | Whether to compress the chat history for code interpreter. | `false` |
 | `code_generator.enable_auto_plugin_selection` | Whether to enable auto plugin selection. | `false` |
-| `code_generator.use_experience` | Whether to use experience summarized from the previous chat history in code generator. | `false` |
 | `code_generator.auto_plugin_selection_topk` | The number of auto selected plugins in each round. | `3` |
 | `session.max_internal_chat_round_num` | The maximum number of internal chat rounds between Planner and Code Interpreter. | `10` |
 | `session.roles` | The roles included for the conversation. | ["planner", "code_interpreter"] |
 | `round_compressor.rounds_to_compress` | The number of rounds to compress. | `2` |
 | `round_compressor.rounds_to_retain` | The number of rounds to retain. | `3` |
-| `execution_service.kernel_mode` | The mode of the code executor, could be `local` or `container`. | `local` |
+| `execution_service.kernel_mode` | The mode of the code executor, could be `local` or `container`. | `container` |
 
 :::tip
 $\{AppBaseDir\} is the project directory.
+
+$\{RoleName\} is the name of the role, such as `planner` or `code_generator`. In the current implementation, the `code_interpreter` role has all code generation functions
+in a "sub-role" named `code_generator`. So, the configuration for the code generation part should be set to `code_generator`.
 :::
 
 :::tip
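Under the new `{RoleName}`-scoped keys added above, a `taskweaver_config.json` fragment might look like the following (the concrete values are illustrative, not defaults mandated by the project):

```jsonc
{
  // example settings, scoped per role
  "planner.use_example": true,
  "planner.example_base_path": "${AppBaseDir}/examples/planner_examples",
  "code_generator.use_example": true,
  "code_generator.example_base_path": "${AppBaseDir}/examples/code_generator_examples",
  // experience settings, also scoped per role
  "planner.use_experience": false,
  "code_generator.dynamic_experience_sub_path": false,
  // new default after this commit
  "execution_service.kernel_mode": "container"
}
```

Note that the code-generation settings use the `code_generator` prefix even though the session role is named `code_interpreter`, per the tip above.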

website/docs/llms/aoai.md (+26 −24)

@@ -8,40 +8,42 @@ description: Using LLMs from OpenAI/AOAI
 1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and get your API key.
 2. Create a new deployment of the model and get the deployment name.
 3. Add the following to your `taskweaver_config.json` file:
-```jsonc showLineNumbers
-{
-"llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com"
-"llm.api_key":"YOUR_API_KEY",
-"llm.api_type":"azure",
-"llm.auth_mode":"api-key",
-"llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
-"llm.response_format": "json_object"
-}
-```
+    ```jsonc showLineNumbers
+    {
+    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com"
+    "llm.api_key":"YOUR_API_KEY",
+    "llm.api_type":"azure",
+    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
+    "llm.response_format": "json_object",
+    "llm.azure.api_version": "2024-06-01"
+    }
+    ```
 
-:::info
-For model versions `1106` or later, `llm.response_format` can be set to `json_object`.
-However, for the earlier models, which do not support JSON response explicitly, `llm.response_format` should be set to `null`.
-:::
+    :::info
+    For model versions `1106` or later, `llm.response_format` can be set to `json_object`.
+    However, for the earlier models, which do not support JSON response explicitly, `llm.response_format` should be set to `null`.
+    :::
 
 4. Start TaskWeaver and chat with TaskWeaver.
-You can refer to the [Quick Start](../quickstart.md) for more details.
+
+    You can refer to the [Quick Start](../quickstart.md) for more details.
 
 ## Using Entra Authentication
 
 1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and
 [assign the proper Azure RBAC Role](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control) to your account (or service principal).
 2. Create a new deployment of the model and get the deployment name.
 3. Add the following to your `taskweaver_config.json` file:
-```jsonc showLineNumbers
-{
-"llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com"
-"llm.api_type":"azure_ad",
-"llm.auth_mode":"default_azure_credential",
-"llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
-"llm.response_format": "json_object"
-}
-```
+    ```jsonc showLineNumbers
+    {
+    "llm.api_base":"YOUR_AOAI_ENDPOINT", // in the format of https://<my-resource>.openai.azure.com"
+    "llm.api_type":"azure_ad",
+    "llm.model":"gpt-4-1106-preview", // this is known as deployment_name in Azure OpenAI
+    "llm.response_format": "json_object",
+    "llm.azure_ad.api_version": "2024-06-01",
+    "llm.azure_ad.aad_auth_mode": "default_azure_credential"
+    }
+    ```
 4. Install extra dependencies:
 ```bash
 pip install azure-identity
