✨ feat: add Cohere provider support #7016

Open · wants to merge 8 commits into base: main
2 changes: 2 additions & 0 deletions Dockerfile
@@ -157,6 +157,8 @@ ENV \
BAICHUAN_API_KEY="" BAICHUAN_MODEL_LIST="" \
# Cloudflare
CLOUDFLARE_API_KEY="" CLOUDFLARE_BASE_URL_OR_ACCOUNT_ID="" CLOUDFLARE_MODEL_LIST="" \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
# Fireworks AI
2 changes: 2 additions & 0 deletions Dockerfile.database
@@ -200,6 +200,8 @@ ENV \
BAICHUAN_API_KEY="" BAICHUAN_MODEL_LIST="" \
# Cloudflare
CLOUDFLARE_API_KEY="" CLOUDFLARE_BASE_URL_OR_ACCOUNT_ID="" CLOUDFLARE_MODEL_LIST="" \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
# Fireworks AI
2 changes: 2 additions & 0 deletions Dockerfile.pglite
@@ -158,6 +158,8 @@ ENV \
BAICHUAN_API_KEY="" BAICHUAN_MODEL_LIST="" \
# Cloudflare
CLOUDFLARE_API_KEY="" CLOUDFLARE_BASE_URL_OR_ACCOUNT_ID="" CLOUDFLARE_MODEL_LIST="" \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
# Fireworks AI
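The three new environment variables above follow the same pattern as the other providers: the provider is treated as enabled whenever its API key is non-empty. A minimal sketch of wiring them up at deploy time (all values below are placeholders, not real credentials):

```shell
# Hypothetical values for illustration; substitute your real key.
export COHERE_API_KEY="co-demo-key"
export COHERE_MODEL_LIST="+command-r7b-12-2024"
export COHERE_PROXY_URL=""   # leave empty to use the default endpoint

# Mirrors the ENABLED_COHERE derivation in src/config/llm.ts:
# a non-empty key means the provider is on.
if [ -n "$COHERE_API_KEY" ]; then
  echo "cohere enabled"
fi
```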
@@ -5,6 +5,7 @@ import {
Ai360ProviderCard,
AnthropicProviderCard,
BaichuanProviderCard,
CohereProviderCard,
DeepSeekProviderCard,
FireworksAIProviderCard,
GiteeAIProviderCard,
@@ -82,6 +83,7 @@ export const useProviderList = (): ProviderItem[] => {
XAIProviderCard,
JinaProviderCard,
SambaNovaProviderCard,
CohereProviderCard,
QwenProviderCard,
WenxinProviderCard,
HunyuanProviderCard,
243 changes: 243 additions & 0 deletions src/config/aiModels/cohere.ts
@@ -0,0 +1,243 @@
import { AIChatModelCard } from '@/types/aiModel';

const cohereChatModels: AIChatModelCard[] = [
{
abilities: {
functionCall: true,
},
contextWindowTokens: 256_000,
description:
'Command A is our most performant model to date, excelling at tool use, agents, retrieval augmented generation (RAG), and multilingual use cases. It has a 256K context length, requires only two GPUs to run, and delivers 150% higher throughput compared to Command R+ 08-2024.',
displayName: 'Command A 03-2025',
enabled: true,
id: 'command-a-03-2025',
maxOutput: 8000,
pricing: {
input: 2.5,
output: 10
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'command-r-plus is an alias for command-r-plus-04-2024, so if you use command-r-plus in the API, you are actually pointing to that model.',
displayName: 'Command R+',
enabled: true,
id: 'command-r-plus',
maxOutput: 4000,
pricing: {
input: 2.5,
output: 10
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'Command R+ is an instruction-following conversational model that performs language tasks at a higher quality, more reliably, and with a longer context than previous models. It is best suited for complex RAG workflows and multi-step tool use.',
displayName: 'Command R+ 04-2024',
id: 'command-r-plus-04-2024',
maxOutput: 4000,
pricing: {
input: 3,
output: 15
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'command-r is an alias for command-r-03-2024, so if you use command-r in the API, you are actually pointing to that model.',
displayName: 'Command R',
enabled: true,
id: 'command-r',
maxOutput: 4000,
pricing: {
input: 0.15,
output: 0.6
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'command-r-08-2024 is an updated version of the Command R model, released in August 2024.',
displayName: 'Command R 08-2024',
id: 'command-r-08-2024',
maxOutput: 4000,
pricing: {
input: 0.15,
output: 0.6
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'Command R is an instruction-following conversational model that performs language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows such as code generation, retrieval augmented generation (RAG), tool use, and agents.',
displayName: 'Command R 03-2024',
id: 'command-r-03-2024',
maxOutput: 4000,
pricing: {
input: 0.5,
output: 1.5
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'command-r7b-12-2024 is a small, efficient update released in December 2024. It excels at RAG, tool use, agents, and similar tasks that require complex reasoning and multi-step processing.',
displayName: 'Command R7B 12-2024',
enabled: true,
id: 'command-r7b-12-2024',
maxOutput: 4000,
pricing: {
input: 0.0375,
output: 0.15
},
type: 'chat'
},
{
contextWindowTokens: 4000,
description:
'An instruction-following conversational model that performs language tasks with high quality, more reliably, and with a longer context than our base generative models.',
displayName: 'Command',
enabled: true,
id: 'command',
maxOutput: 4000,
pricing: {
input: 1,
output: 2
},
type: 'chat'
},
{
abilities: {
functionCall: true,
},
contextWindowTokens: 128_000,
description:
'To shorten the gap between major releases, we offer nightly versions of the Command models. For the Command family, this version is called command-nightly. Note that command-nightly is the latest, most experimental, and (possibly) unstable version. Nightly releases are updated regularly and without prior notice, so they are not recommended for production use.',
displayName: 'Command Nightly',
id: 'command-nightly',
maxOutput: 4000,
pricing: {
input: 1,
output: 2
},
type: 'chat'
},
{
contextWindowTokens: 4000,
description:
'A smaller, faster version of Command; almost as capable, but a lot faster.',
displayName: 'Command Light',
enabled: true,
id: 'command-light',
maxOutput: 4000,
pricing: {
input: 0.3,
output: 0.6
},
type: 'chat'
},
{
contextWindowTokens: 4000,
description:
'To shorten the gap between major releases, we offer nightly versions of the Command models. For the command-light family, this version is called command-light-nightly. Note that command-light-nightly is the latest, most experimental, and (possibly) unstable version. Nightly releases are updated regularly and without prior notice, so they are not recommended for production use.',
displayName: 'Command Light Nightly',
id: 'command-light-nightly',
maxOutput: 4000,
pricing: {
input: 0.3,
output: 0.6
},
type: 'chat'
},
{
contextWindowTokens: 128_000,
description:
'Aya Expanse is a highly performant 32B multilingual model designed to rival monolingual performance through innovations in instruction tuning, data arbitrage, preference training, and model merging. It serves 23 languages.',
displayName: 'Aya Expanse 32B',
enabled: true,
id: 'c4ai-aya-expanse-32b',
maxOutput: 4000,
pricing: {
input: 0.5,
output: 1.5
},
type: 'chat'
},
{
contextWindowTokens: 8000,
description:
'Aya Expanse is a highly performant 8B multilingual model designed to rival monolingual performance through innovations in instruction tuning, data arbitrage, preference training, and model merging. It serves 23 languages.',
displayName: 'Aya Expanse 8B',
enabled: true,
id: 'c4ai-aya-expanse-8b',
maxOutput: 4000,
pricing: {
input: 0.5,
output: 1.5
},
type: 'chat'
},
{
abilities: {
vision: true,
},
contextWindowTokens: 16_000,
description:
'Aya Vision is a state-of-the-art multimodal model excelling at multiple critical benchmarks for language, text, and image capabilities. It serves 23 languages. This 32-billion parameter variant is focused on state-of-the-art multilingual performance.',
displayName: 'Aya Vision 32B',
enabled: true,
id: 'c4ai-aya-vision-32b',
maxOutput: 4000,
pricing: {
input: 0.5,
output: 1.5
},
type: 'chat'
},
{
abilities: {
vision: true,
},
contextWindowTokens: 16_000,
description:
'Aya Vision is a state-of-the-art multimodal model excelling at multiple critical benchmarks for language, text, and image capabilities. This 8-billion parameter variant is focused on low latency and best-in-class performance.',
displayName: 'Aya Vision 8B',
enabled: true,
id: 'c4ai-aya-vision-8b',
maxOutput: 4000,
pricing: {
input: 0.5,
output: 1.5
},
type: 'chat'
},
];

export const allModels = [...cohereChatModels];

export default allModels;
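The `enabled: true` flag on some entries above controls which models the app surfaces by default. A minimal standalone sketch of that filtering, using a reduced stand-in for `AIChatModelCard` and two entries copied from the table above:

```typescript
// Reduced stand-in for AIChatModelCard, limited to the fields used here.
interface ChatModelCard {
  contextWindowTokens: number;
  displayName: string;
  enabled?: boolean;
  id: string;
}

const models: ChatModelCard[] = [
  { contextWindowTokens: 256_000, displayName: 'Command A 03-2025', enabled: true, id: 'command-a-03-2025' },
  { contextWindowTokens: 128_000, displayName: 'Command R+ 04-2024', id: 'command-r-plus-04-2024' },
];

// Only entries with enabled: true are shown by default.
const defaultVisible = models.filter((m) => m.enabled).map((m) => m.id);
console.log(defaultVisible); // ['command-a-03-2025']
```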
3 changes: 3 additions & 0 deletions src/config/aiModels/index.ts
@@ -8,6 +8,7 @@ import { default as azureai } from './azureai';
import { default as baichuan } from './baichuan';
import { default as bedrock } from './bedrock';
import { default as cloudflare } from './cloudflare';
import { default as cohere } from './cohere';
import { default as deepseek } from './deepseek';
import { default as doubao } from './doubao';
import { default as fireworksai } from './fireworksai';
@@ -77,6 +78,7 @@ export const LOBE_DEFAULT_MODEL_LIST = buildDefaultModelList({
baichuan,
bedrock,
cloudflare,
cohere,
deepseek,
doubao,
fireworksai,
@@ -127,6 +129,7 @@ export { default as azureai } from './azureai';
export { default as baichuan } from './baichuan';
export { default as bedrock } from './bedrock';
export { default as cloudflare } from './cloudflare';
export { default as cohere } from './cohere';
export { default as deepseek } from './deepseek';
export { default as doubao } from './doubao';
export { default as fireworksai } from './fireworksai';
6 changes: 6 additions & 0 deletions src/config/llm.ts
@@ -150,6 +150,9 @@ export const getLLMConfig = () => {

ENABLED_PPIO: z.boolean(),
PPIO_API_KEY: z.string().optional(),

ENABLED_COHERE: z.boolean(),
COHERE_API_KEY: z.string().optional(),
},
runtimeEnv: {
API_KEY_SELECT_MODE: process.env.API_KEY_SELECT_MODE,
@@ -298,6 +301,9 @@ export const getLLMConfig = () => {

ENABLED_PPIO: !!process.env.PPIO_API_KEY,
PPIO_API_KEY: process.env.PPIO_API_KEY,

ENABLED_COHERE: !!process.env.COHERE_API_KEY,
COHERE_API_KEY: process.env.COHERE_API_KEY,
},
});
};
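The `!!process.env.COHERE_API_KEY` expression makes the enabled flag track key presence: any non-empty string coerces to `true`, while an empty or unset variable coerces to `false`. A quick sketch of that truthiness:

```typescript
// !!value coerces a possibly-undefined string to a boolean:
// empty string and undefined become false, anything else true.
const enabledFor = (key?: string): boolean => !!key;

console.log(enabledFor('co-abc123')); // true
console.log(enabledFor(''));          // false
console.log(enabledFor(undefined));   // false
```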
19 changes: 19 additions & 0 deletions src/config/modelProviders/cohere.ts
@@ -0,0 +1,19 @@
import { ModelProviderCard } from '@/types/llm';

const Cohere: ModelProviderCard = {
chatModels: [],
checkModel: 'command-r7b-12-2024',
description: 'Cohere brings you cutting-edge multilingual models, advanced retrieval, and an AI workspace tailored for the modern enterprise, all in one secure platform.',
id: 'cohere',
modelsUrl: 'https://docs.cohere.com/v2/docs/models',
name: 'Cohere',
settings: {
proxyUrl: {
placeholder: 'https://api.cohere.ai/compatibility/v1',
},
sdkType: 'openai',
},
url: 'https://cohere.com',
};

export default Cohere;
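Because `sdkType` is `'openai'`, requests are routed through Cohere's OpenAI-compatibility endpoint (the `proxyUrl` placeholder above). A hedged sketch of the request shape an OpenAI-style client would send; the `/chat/completions` path and payload fields follow the OpenAI convention, not anything defined in this file:

```typescript
const baseURL = 'https://api.cohere.ai/compatibility/v1';

// Build the chat-completions request an OpenAI-style SDK would send
// against Cohere's compatibility layer.
const buildChatRequest = (model: string, prompt: string) => ({
  body: {
    messages: [{ content: prompt, role: 'user' as const }],
    model,
  },
  url: `${baseURL}/chat/completions`,
});

const req = buildChatRequest('command-r7b-12-2024', 'Hello!');
console.log(req.url); // https://api.cohere.ai/compatibility/v1/chat/completions
```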
4 changes: 4 additions & 0 deletions src/config/modelProviders/index.ts
@@ -8,6 +8,7 @@ import AzureAIProvider from './azureai';
import BaichuanProvider from './baichuan';
import BedrockProvider from './bedrock';
import CloudflareProvider from './cloudflare';
import CohereProvider from './cohere';
import DeepSeekProvider from './deepseek';
import DoubaoProvider from './doubao';
import FireworksAIProvider from './fireworksai';
@@ -75,6 +76,7 @@ export const LOBE_DEFAULT_MODEL_LIST: ChatModelCard[] = [
XAIProvider.chatModels,
JinaProvider.chatModels,
SambaNovaProvider.chatModels,
CohereProvider.chatModels,
ZeroOneProvider.chatModels,
StepfunProvider.chatModels,
NovitaProvider.chatModels,
@@ -124,6 +126,7 @@ export const DEFAULT_MODEL_PROVIDER_LIST = [
XAIProvider,
JinaProvider,
SambaNovaProvider,
CohereProvider,
QwenProvider,
WenxinProvider,
TencentcloudProvider,
@@ -164,6 +167,7 @@ export { default as AzureAIProviderCard } from './azureai';
export { default as BaichuanProviderCard } from './baichuan';
export { default as BedrockProviderCard } from './bedrock';
export { default as CloudflareProviderCard } from './cloudflare';
export { default as CohereProviderCard } from './cohere';
export { default as DeepSeekProviderCard } from './deepseek';
export { default as DoubaoProviderCard } from './doubao';
export { default as FireworksAIProviderCard } from './fireworksai';