
Commit a6603ca

Update getting_started readme

1 parent 5879dac commit a6603ca
File tree

1 file changed: +10 -8 lines changed


docs/getting_started/index.md

+10 -8
@@ -15,7 +15,7 @@ Giskard provides a platform for testing all AI models, from tabular ML to LLMs.
 
 ## Giskard Library (open-source)
 
-An **open-source** library to detect hallucinations (RAGET) and security issues (scan) to turn them into test suites that you can automatically execute.
+An **open-source** library to detect hallucinations and security issues to turn them into test suites that you can automatically execute.
 
 Testing Machine Learning applications can be tedious. Since AI models depend on data, quality testing scenarios depend on
 **domain specificities** and are often **infinite**. Besides, detecting security vulnerabilities on LLM applications requires specialized knowledge that most AI teams don't possess.
@@ -38,13 +38,15 @@ such as performance bias, hallucination, prompt injection, data leakage, spuriou
 
 Get started **now** with our [quickstart notebooks](../getting_started/quickstart/index.md)! ⚡️
 
-## The Giskard Enterprise LLM Hub
 
-The LLM Hub is an enterprise solution offering a broader range of features such as a:
-- **Continuous testing**: Detect new hallucinations and security issues during production
-- **Annotation studio**: Easily write the right requirements for your LLM as a judge setup
-- **Alerting**: Get alerted with regular vulnerability reports by emails during production
-- **Red-teaming playground**: Collaboratively craft new test cases with high domain specificity
+Want to take Giskard's features to the next level? Discover the LLM Hub below!
+## LLM Evaluation Hub (for enterprises)
+
+The [LLM Hub](https://www.giskard.ai/products/llm-evaluation-hub) is an enterprise solution offering a broader range of features, such as:
+- **Continuous testing**: Detect new hallucinations and security issues during production.
+- **Annotation studio**: Easily write the right requirements for your LLM-as-a-judge setup.
+- **Alerting**: Get alerted with regular vulnerability reports by email during production.
+- **Red-teaming playground**: Collaboratively craft new test cases with high domain specificity.
 
 
-For a complete overview of LLM Hub’s features, check the documentation of the LLM Hub.
+For a complete overview of LLM Hub’s features, check the documentation of the LLM Hub or directly [contact us](https://www.giskard.ai/contact).
