
Commit b05ab37

Merge pull request github#36552 from github/repo-sync ("Repo sync")

2 parents 79e4c6e + e4e9c77, commit b05ab37

10 files changed: +54 −29 lines

content/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat.md (+40 −3)
@@ -24,7 +24,20 @@ Changing the model that's used by {% data variables.product.prodname_copilot_cha

 ## AI models for {% data variables.product.prodname_copilot_chat_short %}

-{% data reusables.copilot.copilot-chat-models-list %}
+The following models are currently available in the immersive mode of {% data variables.product.prodname_copilot_chat_short %}:
+
+* {% data reusables.copilot.model-description-gpt-4o %}
+* {% data reusables.copilot.model-description-claude-sonnet-37 %}
+* {% data reusables.copilot.model-description-claude-sonnet-35 %}
+* {% data reusables.copilot.model-description-gemini-flash %}
+* {% data reusables.copilot.model-description-o1 %}
+* {% data reusables.copilot.model-description-o3-mini %}
+
+For more information about these models, see:
+
+* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
+* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
+* **Google's {% data variables.copilot.copilot_gemini_flash %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).

 ### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

@@ -53,7 +66,20 @@ These instructions are for {% data variables.product.prodname_copilot_short %} o

 ## AI models for {% data variables.product.prodname_copilot_chat_short %}

-{% data reusables.copilot.copilot-chat-models-list %}
+The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
+
+* {% data reusables.copilot.model-description-gpt-4o %}
+* {% data reusables.copilot.model-description-claude-sonnet-37 %}
+* {% data reusables.copilot.model-description-claude-sonnet-35 %}
+* {% data reusables.copilot.model-description-gemini-flash %}
+* {% data reusables.copilot.model-description-o1 %}
+* {% data reusables.copilot.model-description-o3-mini %}
+
+For more information about these models, see:
+
+* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
+* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
+* **Google's {% data variables.copilot.copilot_gemini_flash %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).

 ## Changing your AI model

@@ -74,7 +100,18 @@ These instructions are for {% data variables.product.prodname_vscode_shortname %

 ## AI models for {% data variables.product.prodname_copilot_chat_short %}

-{% data reusables.copilot.copilot-chat-models-list-visual-studio %}
+The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
+
+* {% data reusables.copilot.model-description-gpt-4o %}
+* {% data reusables.copilot.model-description-claude-sonnet-37 %}
+* {% data reusables.copilot.model-description-claude-sonnet-35 %}
+* {% data reusables.copilot.model-description-o1 %}
+* {% data reusables.copilot.model-description-o3-mini %}
+
+For more information about these models, see:
+
+* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
+* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).

 ## Changing the AI model for {% data variables.product.prodname_copilot_chat_short %}

content/migrations/using-github-enterprise-importer/migrating-from-azure-devops-to-github-enterprise-cloud/overview-of-a-migration-from-azure-devops-to-github-enterprise-cloud.md (+8 −2)
@@ -37,14 +37,20 @@ This guide will guide you through completing the first phase, migrating reposito

 ### How soon do we need to complete the migration?

-{% data reusables.enterprise-migration-tool.timeline-intro %}
+Determine your timeline, which will largely dictate your approach. The first step in determining your timeline is to take an inventory of what you need to migrate:

 * Number of repositories
 * Number of pull requests

+If you're migrating from Azure DevOps, we recommend the `inventory-report` command in the {% data variables.product.prodname_ado2gh_cli %}. The `inventory-report` command connects to the Azure DevOps API, then builds a simple CSV with some of the fields suggested above. To install the {% data variables.product.prodname_ado2gh_cli %} and authenticate, follow steps 1 to 3 in [AUTOTITLE](/migrations/using-github-enterprise-importer/migrating-from-azure-devops-to-github-enterprise-cloud/migrating-repositories-from-azure-devops-to-github-enterprise-cloud).
+
 Migration timing is largely based on the number of pull requests in a repository. If you want to migrate 1,000 repositories, and each repository has 100 pull requests on average, and only 50 users have contributed to the repositories, your migration will likely be very quick. If you want to migrate only 100 repositories, but the repositories each have 75,000 pull requests on average, and 5,000 users, the migration will take much longer and require much more planning and testing.

-{% data reusables.enterprise-migration-tool.timeline-tasks %}
+After you take inventory of the repositories you need to migrate, you can weigh your inventory data against your desired timeline. If your organization can withstand a higher degree of change, you might be able to migrate all your repositories at once, completing your migration efforts in a few days. However, you may have various teams that are not able to migrate at the same time. In this case, you might want to batch and stagger your migrations to fit the teams' timelines, extending your migration effort.
+
+1. Determine how many repositories and pull requests you need to migrate.
+1. To understand when teams can be ready to migrate, interview stakeholders.
+1. Fully review the rest of this guide, then decide on a migration timeline.

 ### Do we understand what will be migrated?
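The sizing reasoning in the hunk above (total pull requests dominating migration effort) can be sketched as a quick back-of-envelope calculation. This is not part of the commit, and `total_prs` is a hypothetical helper name; it only restates the example numbers from the doc text:

```python
# Rough migration sizing: pull request volume is the dominant cost driver,
# per the guidance above. Numbers come from the doc's two example scenarios.
def total_prs(repo_count: int, avg_prs_per_repo: int) -> int:
    """Return the total number of pull requests to migrate."""
    return repo_count * avg_prs_per_repo

# Scenario 1: many small repositories -> quick migration.
quick = total_prs(1_000, 100)     # 100,000 pull requests

# Scenario 2: few, very large repositories -> slow migration.
heavy = total_prs(100, 75_000)    # 7,500,000 pull requests

# Scenario 2 moves 75x more pull requests, which is why it needs far
# more planning, batching, and testing despite having fewer repositories.
print(quick, heavy, heavy // quick)  # → 100000 7500000 75
```

The comparison makes the doc's point concrete: repository count alone is a poor predictor of migration duration.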

data/reusables/copilot/copilot-chat-models-list-visual-studio.md (−10)

This file was deleted.

data/reusables/copilot/copilot-chat-models-list.md (−14)

This file was deleted.
data/reusables/copilot/model-description-claude-sonnet-35.md (+1)

@@ -0,0 +1 @@
+**{% data variables.copilot.copilot_claude_sonnet_35 %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
data/reusables/copilot/model-description-claude-sonnet-37.md (+1)

@@ -0,0 +1 @@
+**{% data variables.copilot.copilot_claude_sonnet_37 %}:** This model, like its predecessor, excels across the software development lifecycle, from initial design to bug fixes, maintenance to optimizations. It also has thinking capabilities, enabled by selecting the thinking version of the model, which can be particularly useful in agentic scenarios. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/785e231869ea8b3b/original/claude-3-7-sonnet-system-card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
data/reusables/copilot/model-description-gemini-flash.md (+1)

@@ -0,0 +1 @@
+**{% data variables.copilot.copilot_gemini_flash %}:** This model has strong coding, math, and reasoning capabilities that make it well suited to assist with software development. {% data reusables.copilot.gemini-model-info %}
data/reusables/copilot/model-description-gpt-4o.md (+1)

@@ -0,0 +1 @@
+**GPT-4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT-4o is hosted on Azure.
data/reusables/copilot/model-description-o1.md (+1)

@@ -0,0 +1 @@
+**o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT-4o model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
data/reusables/copilot/model-description-o3-mini.md (+1)

@@ -0,0 +1 @@
+**o3-mini:** This model is the next generation of reasoning models, following on from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small-context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.
