Commit 9795f3b

Merge branch 'main' into feature/move-test-data-s3
2 parents: 0fbf312 + a719f65

File tree: 1 file changed, +16 −5 lines

docs/getting_started/index.md (+16 −5)
@@ -6,25 +6,23 @@ It addresses the following challenges in AI testing:
 
 - Edge cases in AI are **domain-specific** and often seemingly **infinite**.
 - The AI development process is an experimental, **trial-and-error** process where quality KPIs are multi-dimensional.
-- Generative AI introduces new **security vulnerabilities** which require constant vigilance and adversarial red-teaming.
-- AI compliance with new regulations necessitates that data scientists write **extensive documentation**.
+- Generative AI introduces new **security vulnerabilities** which require constant vigilance and continuous red-teaming.
 
 Giskard provides a platform for testing all AI models, from tabular ML to LLMs. This enables AI teams to:
 
 1. **Reduce AI risks** by enhancing the test coverage on quality & security dimensions.
 2. **Save time** by automating testing, evaluation and debugging processes.
-3. **Automate compliance** with the EU AI Act and upcoming AI regulations & standards.
 
 ## Giskard Library (open-source)
 
-An **open-source** library to scan your AI models for vulnerabilities and automatically generate test suites, aiding the quality & security evaluation of ML models and LLMs.
+An **open-source** library that detects hallucinations and security issues and turns them into test suites you can execute automatically.
 
 Testing Machine Learning applications can be tedious. Since AI models depend on data, quality testing scenarios depend on
 **domain specificities** and are often **infinite**. Besides, detecting security vulnerabilities in LLM applications requires specialized knowledge that most AI teams don't possess.
 
 To help you solve these challenges, the Giskard library helps to:
 
-- **Scan your model to find hidden vulnerabilities automatically**: The `giskard` scan automatically detects vulnerabilities
+- **Detect hallucinations and security issues automatically**: The `giskard` RAGET and scan tools automatically identify vulnerabilities
 such as performance bias, hallucination, prompt injection, data leakage, spurious correlation, overconfidence, etc.
 <br><br>
 <iframe src="https://htmlpreview.github.io/?https://gist.githubusercontent.com/AbSsEnT/a67354621807f3c3a332fca7d8b9a5c8/raw/588f027dc6b14c88c7393c50ff3086fe1122e2e9/LLM_QA_IPCC_scan_report.html" width="700" height="400"></iframe>
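
The scan-to-test-suite flow described by that bullet can be sketched as follows. This is a hedged illustration, not the documented quickstart: `giskard.Model`, `giskard.Dataset`, `giskard.scan`, and `generate_test_suite` are the library's entry points, while the `answer` stub and every other name here are hypothetical placeholders (a real setup wraps an actual LLM pipeline and, for the LLM detectors, needs an LLM API key configured).

```python
# Hedged sketch: wrap a toy model, scan it, turn findings into a test suite.
# The `answer` stub is illustrative; only the giskard.* calls reflect the library.
import pandas as pd

def answer(df: pd.DataFrame) -> list:
    # Toy "LLM": echoes each question back; swap in a real chain or agent.
    return ["You asked: " + q for q in df["question"]]

def run_scan():
    import giskard  # assumes `pip install giskard` (plus an LLM API key for LLM detectors)

    model = giskard.Model(
        model=answer,                       # any callable: DataFrame -> predictions
        model_type="text_generation",
        name="demo-qa-bot",                 # hypothetical names for illustration
        description="Toy QA bot used to demo the scan",
        feature_names=["question"],
    )
    dataset = giskard.Dataset(pd.DataFrame({"question": ["What is Giskard?"]}))

    report = giskard.scan(model, dataset)            # run the vulnerability detectors
    suite = report.generate_test_suite("qa-suite")   # findings -> executable tests
    return suite.run()                               # re-run the suite at any time

print(answer(pd.DataFrame({"question": ["What is Giskard?"]})))
# → ['You asked: What is Giskard?']
```

Keeping the model behind a plain callable means the same wrapper works whether the underlying system is a tabular classifier or a full RAG pipeline; the generated suite can then be re-executed on every new model version.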
@@ -39,3 +37,16 @@ such as performance bias, hallucination, prompt injection, data leakage, spuriou
 <img src="../assets/gh_discussion.png" width="650">
 
 Get started **now** with our [quickstart notebooks](../getting_started/quickstart/index.md)! ⚡️
+
+Want to take Giskard's features to the next level? Discover the LLM Hub below!
+
+## LLM Evaluation Hub (for enterprises)
+
+The [LLM Hub](https://www.giskard.ai/products/llm-evaluation-hub) is an enterprise solution offering a broader range of features, such as:
+
+- **Continuous testing**: Detect new hallucinations and security issues in production.
+- **Annotation studio**: Easily write the right requirements for your LLM-as-a-judge setup.
+- **Alerting**: Receive regular vulnerability reports by email while in production.
+- **Red-teaming playground**: Collaboratively craft new test cases with high domain specificity.
+
+For a complete overview of the LLM Hub's features, check the LLM Hub documentation or [contact us](https://www.giskard.ai/contact) directly.
