
[Bug] Default model list displays incorrectly #6464

Open
ShadyAfterCheer opened this issue Feb 24, 2025 · 18 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@ShadyAfterCheer
Contributor

📦 Platform

Self hosting Docker

📦 Deployment mode

client db (lobe-chat image)

📌 Version

1.63.1

💻 Operating System

macOS

🌐 Browser

Chrome

🐛 Bug Description

After deploying LobeChat with Docker, the model list displays abnormally the first time the app is opened.

Image

After clicking any model, the list displays correctly the next time it is opened.

Image

Running locally with pnpm dev does not reproduce the problem.

📷 Recurrence Steps

No response

🚦 Expected Behavior

No response

📝 Additional Information

No response

@ShadyAfterCheer ShadyAfterCheer added the 🐛 Bug Something isn't working | 缺陷 label Feb 24, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Feb 24, 2025
@lobehubbot
Member

👀 @ShadyAfterCheer

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Feb 24, 2025

Hey @ShadyAfterCheer! I'm here to help you with any bugs, questions, or contributions you might have. Let's work together to resolve this issue!

I found a similar issue that was previously discussed and resolved. The issue was related to the model list displaying abnormally upon first entry when deploying LobeChat via Docker, but not when running locally with pnpm dev. This was due to a bug in earlier versions of LobeChat, specifically version 1.51.8, which was fixed in version 1.56.2 [1][2][3].

In your case, since you are using version 1.63.1, the issue might be related to the configuration of the ENABLED_OPENAI and ENABLED_OLLAMA settings. Ensure that your DEFAULT_AGENT_CONFIG is correctly set to avoid defaulting to models like "gpt-4o" when both ENABLED_OPENAI and ENABLED_OLLAMA are set to false. You can specify the provider and model explicitly in the DEFAULT_AGENT_CONFIG environment variable using the format: DEFAULT_AGENT_CONFIG=provider=<provider>/model=<model_name> [4][5].

If the issue persists, double-check your Docker environment variables and ensure they are correctly configured and taking effect.
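To make the bot's suggestion concrete, a minimal env sketch is shown below. Note that the exact key/separator syntax is version-dependent: the bot describes a `/`-separated form, while the reporter's own config later in this thread uses `;`-separated pairs, so treat this fragment as an assumption to be checked against the LobeChat docs for your version.

```shell
# Pin the default agent to an explicit provider/model so the client does not
# fall back to gpt-4o when both OpenAI and Ollama are disabled.
# (Separator syntax is version-dependent; this mirrors the reporter's config.)
DEFAULT_AGENT_CONFIG="provider=azure;model=gpt-35-turbo"
ENABLED_OPENAI=0
ENABLED_OLLAMA=0
ENABLED_AZURE=1
```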


@ShadyAfterCheer
Contributor Author

ShadyAfterCheer commented Feb 24, 2025

P.S. After clicking a model and the list looks correct, a hard refresh of the page still shows the abnormal list. I suspect this is related to how the model list is initialized? Judging from the network requests, the list behaves correctly after serverConfig is requested a second time.


@arvinxx
Contributor

arvinxx commented Feb 24, 2025

@ShadyAfterCheer Can you confirm whether deployment with the pglite image works correctly?


@ShadyAfterCheer
Contributor Author

ShadyAfterCheer commented Feb 24, 2025

@arvinxx It should be fine. I haven't actively enabled the pglite capability on my side; is it related?

Here is my config:

FEATURE_FLAGS=-knowledge_base,-welcome_suggest,-check_updates,-language_model_settings,-webrtc_sync,-speech_to_text,-clerk_sign_up,-changelog
PLUGINS_INDEX_URL=xxx
DEFAULT_AGENT_CONFIG=chatConfig.enableHistoryCount=true;provider=azure;model=gpt-35-turbo;

ENABLED_AZURE=1
ENABLED_AZUREAI=1
ENABLED_AWS_BEDROCK=1
ENABLED_GOOGLE=1
ENABLED_OLLAMA=0
ENABLED_OPENAI=0

AZURE_API_KEY=xxx
AZURE_API_VERSION=xxx
AZURE_MODEL_LIST=-all,+xxx

GOOGLE_API_KEY=xxx
GOOGLE_MODEL_LIST=-all,+xxx



// Custom section
AZUREAI_ENDPOINT=xxx
AZUREAI_ENDPOINT_KEY=xxx
AZUREAI_MODEL_LIST=DeepSeek-R1


@coder2z

coder2z commented Feb 24, 2025

+1

I worked around it for now by forcibly changing the default values. It looks like, in a no-DB deployment, getConfig is fetched asynchronously, and the configuration variables are not refreshed after the fetch completes.
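The race suspected above can be sketched as follows. All names here are hypothetical stand-ins, not the actual LobeChat code: the point is only that a first render from built-in defaults, followed by an async config fetch that nothing re-renders on, reproduces the reported symptom.

```typescript
// Minimal sketch of the suspected race (hypothetical names): in a client-db
// deployment the server config is fetched asynchronously, so the first
// paint uses built-in defaults until the store is refreshed.
type ServerConfig = { enabledModels: string[] };

const DEFAULT_MODELS = ['gpt-4o']; // built-in fallback shown on first paint

let currentModels: string[] = DEFAULT_MODELS;

const fetchServerConfig = async (): Promise<ServerConfig> => {
  // stand-in for the real serverConfig request
  return { enabledModels: ['gpt-35-turbo'] };
};

const initConfig = async () => {
  const cfg = await fetchServerConfig();
  // If nothing re-renders after this assignment, the UI keeps showing
  // DEFAULT_MODELS — matching the symptom where a second request to
  // serverConfig "fixes" the list.
  currentModels = cfg.enabledModels;
};
```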


@ShadyAfterCheer
Contributor Author

+1

I worked around it for now by forcibly changing the default values. It looks like, in a no-DB deployment, getConfig is fetched asynchronously, and the configuration variables are not refreshed after the fetch completes.

I'd like to try that approach too, but what I'm seeing is strange: the default logic appears to decide whether to show a model via each model's enabled field, yet many models that do not have enabled set, such as ollama, still appear in my model list.
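The enabled-field filtering described above can be illustrated with a small sketch. The shapes here are hypothetical (the real LobeChat config objects carry more fields); it simply shows the behavior the reporter expected, where disabled providers such as ollama should be filtered out of the list.

```typescript
// Hypothetical shape of a provider entry in the server config.
interface ProviderConfig {
  id: string;
  enabled: boolean;
}

// Only providers whose `enabled` flag is true should reach the model list.
// The bug report suggests disabled providers leak through on first render.
const filterEnabledProviders = (providers: ProviderConfig[]): ProviderConfig[] =>
  providers.filter((p) => p.enabled);

const providers: ProviderConfig[] = [
  { id: 'azure', enabled: true },
  { id: 'ollama', enabled: false },
  { id: 'openai', enabled: false },
];

console.log(filterEnabledProviders(providers).map((p) => p.id)); // → ['azure']
```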


@coder2z

coder2z commented Feb 24, 2025

+1
I worked around it for now by forcibly changing the default values. It looks like, in a no-DB deployment, getConfig is fetched asynchronously, and the configuration variables are not refreshed after the fetch completes.

I'd like to try that approach too, but what I'm seeing is strange: the default logic appears to decide whether to show a model via each model's enabled field, yet many models that do not have enabled set, such as ollama, still appear in my model list.

export const DEFAULT_LLM_CONFIG = genUserLLMConfig({

You need to modify this place for the change to take effect.

Image

@ShadyAfterCheer
Contributor Author

+1
I worked around it for now by forcibly changing the default values. It looks like, in a no-DB deployment, getConfig is fetched asynchronously, and the configuration variables are not refreshed after the fetch completes.

I'd like to try that approach too, but what I'm seeing is strange: the default logic appears to decide whether to show a model via each model's enabled field, yet many models that do not have enabled set, such as ollama, still appear in my model list.

export const DEFAULT_LLM_CONFIG = genUserLLMConfig({

You need to modify this place for the change to take effect.

Image

Thanks for sharing, but after tracing further I found that another function is the main controller here: genServerLLMConfig.

I eventually solved my problem by changing the contents of getServerGlobalConfig.
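For readers tracing the same code path, here is a loose, hypothetical sketch of how a server-side config generator could derive per-provider enabled flags from ENABLED_* environment variables. The real genServerLLMConfig and getServerGlobalConfig in the LobeChat source differ in detail; only the function names are taken from this thread.

```typescript
// Hypothetical sketch: map ENABLED_* env vars onto per-provider enabled
// flags. An unset flag is treated the same as '0' (disabled), rather than
// falling back to a client-side default.
const genServerLLMConfig = (env: Record<string, string | undefined>) => {
  const providers = ['azure', 'azureai', 'google', 'ollama', 'openai'];
  const config: Record<string, { enabled: boolean }> = {};
  for (const id of providers) {
    config[id] = { enabled: env[`ENABLED_${id.toUpperCase()}`] === '1' };
  }
  return config;
};

const config = genServerLLMConfig({ ENABLED_AZURE: '1', ENABLED_OLLAMA: '0' });
// azure ends up enabled; ollama, openai, google, and azureai stay disabled.
```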


@coder2z

coder2z commented Mar 16, 2025

+1 The latest version still has this problem.


@coder2z

coder2z commented Mar 16, 2025

+1
I worked around it for now by forcibly changing the default values. It looks like, in a no-DB deployment, getConfig is fetched asynchronously, and the configuration variables are not refreshed after the fetch completes.

I'd like to try that approach too, but what I'm seeing is strange: the default logic appears to decide whether to show a model via each model's enabled field, yet many models that do not have enabled set, such as ollama, still appear in my model list.

export const DEFAULT_LLM_CONFIG = genUserLLMConfig({
You need to modify this place for the change to take effect.
Image

Thanks for sharing, but after tracing further I found that another function is the main controller here: genServerLLMConfig.

I eventually solved my problem by changing the contents of getServerGlobalConfig.

What exactly did you change?

Labels
🐛 Bug Something isn't working | 缺陷
Projects
Status: Roadmap - Chat 1.x
Development

No branches or pull requests

4 participants