
rocm containers #11

Open
wants to merge 2 commits into base: dev

Conversation

@JarbasAl (Member) commented Mar 12, 2025

Summary by CodeRabbit

  • Documentation

    • Improved the argument table layout with enhanced formatting and corrected default values.
    • Updated descriptions for available speech-to-text plugins.
  • New Features

    • Introduced new container image configurations with support for both Nvidia CUDA and AMD ROCm, boosting service performance.
    • Restructured service build processes to use local directories for improved deployment consistency.
    • Added JSON-based configuration files for tuning model parameters and resource usage in speech-to-text services.
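As a rough illustration of what such a JSON configuration file might contain (the keys and values below are placeholders for illustration, not the actual contents of any conf file in this PR):

```json
{
  "stt": {
    "module": "ovos-stt-plugin-fasterwhisper",
    "ovos-stt-plugin-fasterwhisper": {
      "model": "small",
      "use_cuda": true
    }
  }
}
```

The GPU variants (gpu.conf) would enable CUDA-style settings, while the CPU variants (cpu.conf) would disable them.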

@JarbasAl JarbasAl requested a review from goldyfruit March 12, 2025 14:28

coderabbitai bot commented Mar 12, 2025

Walkthrough

This pull request updates documentation and overhauls the Docker setup for various speech-to-text services. The README table is reformatted and corrected, while new Dockerfiles and Docker Compose files are introduced or modified for ROCm and CUDA support. Multiple STT projects (FasterWhisper, MyNorthAI, Nemo, Hitz, Project AINA Whisper, and Whisper) now include tailored build arguments, metadata labels, environment variables, configuration file copies, and updated entrypoints. New JSON configuration files have also been added to define runtime settings for CPU and GPU deployments.

Changes

| File(s) | Change Summary |
| ------- | -------------- |
| README.md | Updated table formatting; corrected default value for BUILD_DATE (from unkown to unknown); replaced ovos-stt-plugin-nemo with ovos-stt-plugin-citrinet. |
| base/Dockerfile.rocm | New Dockerfile for building a ROCm-based STT base image from rocm/pytorch:latest with build arguments, labels, package installations, and user/environment setup. |
| docker-compose.cuda.yml, docker-compose.rocm.yml, docker-compose.yml | Modified Docker Compose configurations: added new volumes and services (e.g., ovos_stt_mynorthai), defined GPU resource reservations, introduced new build contexts for services (FasterWhisper, Vosk, Chromium, Citrinet), and removed the Nemo service/volumes. |
| fasterwhisper/Dockerfile.rocm, fasterwhisper/Dockerfile, fasterwhisper/Dockerfile.cuda, fasterwhisper/cpu.conf, fasterwhisper/gpu.conf | FasterWhisper project updates: changed base image to a ROCm version; updated labels and plugin installation (switched to ovos-stt-plugin-fasterwhisper); added steps to copy config files (cpu.conf and gpu.conf) with respective JSON settings for CPU and GPU. |
| mynorthai/Dockerfile, mynorthai/Dockerfile.cuda, mynorthai/Dockerfile.rocm, mynorthai/cpu.conf, mynorthai/gpu.conf | MyNorthAI project adjustments: updated base image (moved from CUDA-specific to standard for one file), fixed labeling (Portuguese language), removed one plugin, and switched the entrypoint to the Whisper engine; introduced separate Dockerfiles for CUDA and ROCm with appropriate ARGs, ENV settings, and new configuration files for CPU and GPU. |
| nemo/Dockerfile.rocm, nemo/gpu.conf | Nemo project modifications: updated the base image to a ROCm variant; refreshed metadata labels (title updated, description cleared); revised the entrypoint to use ovos-stt-plugin-nemo and added a step to copy gpu.conf with CUDA settings. |
| hitz/Dockerfile.rocm, hitz/gpu.conf | Hitz project: new Dockerfile with ROCm support including build arguments, labels, conditional package installation, ENV setup, and GPU configuration copy; a new gpu.conf sets Nemo plugin parameters with CUDA enabled. |
| project-aina-whisper/Dockerfile.cuda, project-aina-whisper/Dockerfile.rocm, project-aina-whisper/gpu.conf | Project AINA Whisper: new Dockerfiles for both CUDA and ROCm support with build arguments, labels, configuration file copy, and an entrypoint using the Whisper plugin; the new gpu.conf specifies the model projecte-aina/whisper-large-v3-ca-3catparla with CUDA. |
| whisper/Dockerfile, whisper/Dockerfile.cuda, whisper/Dockerfile.rocm, whisper/cpu.conf, whisper/gpu.conf | Whisper project: introduced new Dockerfiles for standard, CUDA, and ROCm images with build arguments, labels, config file copies (cpu.conf or gpu.conf), conditional package installations, ENV settings, entrypoints invoking the Whisper plugin, and exposure of port 8080; new configuration files define STT settings with differing CUDA usage. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer/CI
    participant DC as Docker Compose
    participant DF as Docker Build Process
    participant CNT as Container Runtime

    Dev->>DC: Trigger build/up (e.g., docker-compose up)
    DC->>DF: Initiate build using updated Dockerfiles (with ARGs, ENV, configs)
    DF->>DF: Execute steps: set base image, install packages, copy config files
    DF->>CNT: Create container with defined entrypoint (STT server with specific plugin)
    CNT-->>DC: Service starts, exposing ports and configured for GPU/CPU
```

Poem

I'm a happy hopper, full of code and cheer,
Each Dockerfile a carrot, so crisp and clear.
With ROCm and CUDA, my tray's brimming delight,
New services and configs make my future bright.
I nibble on changes, with a jovial twitch,
Celebrating every update—I’m your coding rabbit, rich!
🥕🐇 Happy hops to the new build pitch!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Free

📥 Commits

Reviewing files that changed from the base of the PR and between 3745fc2 and 978e784.

📒 Files selected for processing (27)
  • README.md (2 hunks)
  • base/Dockerfile.rocm (1 hunks)
  • docker-compose.cuda.yml (1 hunks)
  • docker-compose.rocm.yml (1 hunks)
  • docker-compose.yml (4 hunks)
  • fasterwhisper/Dockerfile (1 hunks)
  • fasterwhisper/Dockerfile.cuda (1 hunks)
  • fasterwhisper/Dockerfile.rocm (2 hunks)
  • fasterwhisper/cpu.conf (1 hunks)
  • fasterwhisper/gpu.conf (1 hunks)
  • hitz/Dockerfile.rocm (1 hunks)
  • hitz/gpu.conf (1 hunks)
  • mynorthai/Dockerfile (2 hunks)
  • mynorthai/Dockerfile.cuda (1 hunks)
  • mynorthai/Dockerfile.rocm (1 hunks)
  • mynorthai/cpu.conf (1 hunks)
  • mynorthai/gpu.conf (1 hunks)
  • nemo/Dockerfile.rocm (2 hunks)
  • nemo/gpu.conf (1 hunks)
  • project-aina-whisper/Dockerfile.cuda (1 hunks)
  • project-aina-whisper/Dockerfile.rocm (1 hunks)
  • project-aina-whisper/gpu.conf (1 hunks)
  • whisper/Dockerfile (1 hunks)
  • whisper/Dockerfile.cuda (1 hunks)
  • whisper/Dockerfile.rocm (1 hunks)
  • whisper/cpu.conf (1 hunks)
  • whisper/gpu.conf (1 hunks)



coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (12)
nemo/Dockerfile.cuda (1)

20-27: Improve Pip Install Efficiency
Consider appending the --no-cache-dir flag to your pip install commands. This prevents pip from caching intermediate files, which can effectively reduce the image size. For example:

- RUN pip3 install aiohttp \
-  && if [ "${ALPHA}" == "true" ]; then \
-  pip3 install ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo --pre; \
-  else \
-  pip3 install ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo; \
-  fi \
-  && mkdir -p ${HOME}/flagged \
-  && rm -rf ${HOME}/.cache/*
+ RUN pip3 install --no-cache-dir aiohttp \
+  && if [ "${ALPHA}" == "true" ]; then \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo --pre; \
+  else \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo; \
+  fi \
+  && mkdir -p ${HOME}/flagged \
+  && rm -rf ${HOME}/.cache/*
nemo/Dockerfile.rocm (2)

7-8: Review Label Description Consistency
While the title clearly indicates AMD ROCm support, the description still begins with “NVIDIA NeMo”. To avoid confusion, consider updating the description so it accurately reflects AMD ROCm support if that is the intent.


20-27: Optimize Pip Package Installation
Similar to the CUDA Dockerfile, using the --no-cache-dir flag for pip installs is recommended here to reduce the image footprint. For example:

- RUN pip3 install aiohttp \
-  && if [ "${ALPHA}" == "true" ]; then \
-  pip3 install ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo --pre; \
-  else \
-  pip3 install ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo; \
-  fi \
-  && mkdir -p ${HOME}/flagged \
-  && rm -rf ${HOME}/.cache/*
+ RUN pip3 install --no-cache-dir aiohttp \
+  && if [ "${ALPHA}" == "true" ]; then \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo --pre; \
+  else \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition setuptools ovos-stt-plugin-nemo; \
+  fi \
+  && mkdir -p ${HOME}/flagged \
+  && rm -rf ${HOME}/.cache/*
base/Dockerfile.rocm (1)

21-30: Efficient System Setup and Cleanup
The RUN command chains package installation, user creation, and virtual environment setup with apt-get update and cleanup commands. Consider adding the --no-install-recommends flag to the apt-get install command to limit unnecessary packages. Also, ensure that the use of ${HOME} (e.g. in the cache removal command) is consistent with the subsequent usage of /home/${USER}.

mynorthai/Dockerfile (1)

20-27: Optimize Pip Package Installation in MyNorthAI
As with the other Dockerfiles, adding the --no-cache-dir flag to pip install commands could help reduce the final image size. For example:

- RUN pip3 install aiohttp \
-  && if [ "${ALPHA}" == "true" ]; then \
-  pip3 install ovos-stt-http-server SpeechRecognition git+https://github.com/TigreGotico/ovos-stt-plugin-whisper.git git+https://github.com/TigreGotico/ovos-stt-plugin-MyNorthAI.git torch --pre; \
-  else \
-  pip3 install ovos-stt-http-server SpeechRecognition ovos-stt-plugin-MyNorthAI; \
-  fi \
-  && mkdir -p ${HOME}/flagged \
-  && rm -rf ${HOME}/.cache/*
+ RUN pip3 install --no-cache-dir aiohttp \
+  && if [ "${ALPHA}" == "true" ]; then \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition git+https://github.com/TigreGotico/ovos-stt-plugin-whisper.git git+https://github.com/TigreGotico/ovos-stt-plugin-MyNorthAI.git torch --pre; \
+  else \
+  pip3 install --no-cache-dir ovos-stt-http-server SpeechRecognition ovos-stt-plugin-MyNorthAI; \
+  fi \
+  && mkdir -p ${HOME}/flagged \
+  && rm -rf ${HOME}/.cache/*
fasterwhisper/Dockerfile.rocm (1)

20-27: Conditional Package Installation in RUN Command
The RUN block conditionally installs packages using an if construct dependent on the ALPHA argument. This conditional installation of pre-release packages versus stable packages is well implemented.
Consider verifying that the pip installs do not require additional version pinning for production stability (optional improvement).
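If pinning is adopted, it could look like the sketch below (the version specifiers are placeholders, not tested releases):

```dockerfile
# Hypothetical pinning sketch; substitute X.Y.Z with versions validated for production
RUN pip3 install --no-cache-dir \
    "ovos-stt-http-server==X.Y.Z" \
    "SpeechRecognition==X.Y.Z" \
    "ovos-stt-plugin-fasterwhisper==X.Y.Z"
```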

docker-compose.rocm.yml (2)

15-33: Volume Definitions and Naming Consistency
All volumes are defined with the local driver to facilitate caching.
Note: The volume name ovos_stt_fasterwshiper_gradio_cache (lines 19-21) appears to have a typographical error. For consistency with the service name “fasterwhisper”, consider renaming it to ovos_stt_fasterwhisper_gradio_cache.

Proposed diff:

-  ovos_stt_fasterwshiper_gradio_cache:
-    name: ovos_stt_fasterwshiper_gradio_cache
+  ovos_stt_fasterwhisper_gradio_cache:
+    name: ovos_stt_fasterwhisper_gradio_cache

88-114: Service Configuration: ovos_stt_fasterwhisper
The block for ovos_stt_fasterwhisper is overall well structured.
Note: The same potential typo in the volume name appears again on line 113 with ovos_stt_fasterwshiper_gradio_cache. Consistency across volume names is important.

Proposed diff:

-      - ovos_stt_fasterwshiper_gradio_cache:/home/${OVOS_USER}/gradio_cached_examples
+      - ovos_stt_fasterwhisper_gradio_cache:/home/${OVOS_USER}/gradio_cached_examples
docker-compose.cuda.yml (2)

15-33: Volume Definitions Consistency
The volumes are defined similarly to the ROCm file.
Note: The volume ovos_stt_fasterwshiper_gradio_cache is again present on lines 19-21; consider correcting it to ovos_stt_fasterwhisper_gradio_cache for consistency across configurations.

Proposed diff:

-  ovos_stt_fasterwshiper_gradio_cache:
-    name: ovos_stt_fasterwshiper_gradio_cache
+  ovos_stt_fasterwhisper_gradio_cache:
+    name: ovos_stt_fasterwhisper_gradio_cache

88-114: Service Configuration: ovos_stt_fasterwhisper (CUDA)
The fasterwhisper service is correctly set up.
Note: As with the ROCm file, the volume mount on line 113 still uses ovos_stt_fasterwshiper_gradio_cache. Renaming it to ovos_stt_fasterwhisper_gradio_cache would enhance overall consistency.

Proposed diff:

-      - ovos_stt_fasterwshiper_gradio_cache:/home/${OVOS_USER}/gradio_cached_examples
+      - ovos_stt_fasterwhisper_gradio_cache:/home/${OVOS_USER}/gradio_cached_examples
mynorthai/Dockerfile.rocm (1)

20-27: Conditional RUN Command for Package Installation
The RUN command employs a conditional structure to install pre-release packages when ALPHA is true, including additional dependencies (e.g., plugin installations and torch in pre-release mode). This is a thoughtful setup for environments requiring experimental features.

mynorthai/Dockerfile.cuda (1)

20-27: Conditional Package Installation in RUN Command (CUDA)
The RUN block mirrors the ROCm version’s logic by conditionally installing pre-release packages (including installation of torch in pre-release mode) when ALPHA is true. This setup is consistent and meets the requirements for enabling optional features.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3745fc2 and ca5c5d6.

📒 Files selected for processing (10)
  • README.md (1 hunks)
  • base/Dockerfile.rocm (1 hunks)
  • docker-compose.cuda.yml (1 hunks)
  • docker-compose.rocm.yml (1 hunks)
  • fasterwhisper/Dockerfile.rocm (1 hunks)
  • mynorthai/Dockerfile (1 hunks)
  • mynorthai/Dockerfile.cuda (1 hunks)
  • mynorthai/Dockerfile.rocm (1 hunks)
  • nemo/Dockerfile.cuda (1 hunks)
  • nemo/Dockerfile.rocm (1 hunks)
🔇 Additional comments (40)
nemo/Dockerfile.cuda (3)

1-2: Base Image and TAG Argument Configuration
The ARG declaration and FROM statement clearly set the base image using the default tag alpha. Verify that using the alpha tag suits your release and testing process.


26-27: Consistent Use of Home Directory
The RUN command refers to ${HOME} (e.g. ${HOME}/flagged and ${HOME}/.cache), whereas the ENV settings later hardcode /home/${USER}. Ensure that the environment variable $HOME is reliably set—or consider using /home/${USER} consistently throughout the Dockerfile.
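One way to make the two spellings agree is a sketch like the following, assuming USER is already declared as a build argument (this is not the PR's actual code):

```dockerfile
# Illustrative: set HOME explicitly so ${HOME} and /home/${USER} resolve identically
ENV HOME=/home/${USER}
RUN mkdir -p ${HOME}/flagged \
    && rm -rf ${HOME}/.cache/*
```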


29-33: Environment Setup and Entry Point
The ENV configurations for PATH and VIRTUAL_ENV, along with the ENTRYPOINT and EXPOSE directives, are well defined and clear.

README.md (2)

62-62: Enhanced Table Header Formatting
The updated header row using | ----------- improves the clarity and alignment of the table columns in the arguments section.


64-64: Correcting BUILD_DATE Default Value
The typo in the default value for BUILD_DATE has been corrected from unkown to unknown, ensuring the documentation accurately reflects the build argument's value.

nemo/Dockerfile.rocm (3)

1-2: Base Image and TAG Setup for ROCm
The ARG declaration and FROM statement properly reference the ROCm base image (smartgic/ovos-stt-server-base-rocm:${TAG}). Confirm that the default alpha tag is appropriate for your ROCm builds.


26-27: Ensure Consistent Home Directory Reference
As with the CUDA variant, verify that using ${HOME} in this section aligns with subsequent hardcoded references such as /home/${USER}.


29-33: Final Environment Configuration
The PATH and VIRTUAL_ENV variables, as well as the ENTRYPOINT and EXPOSE directives, are configured appropriately for the ROCm container.

base/Dockerfile.rocm (2)

1-13: Base Image and Metadata Labels
The Dockerfile effectively sets up the base image from rocm/dev-ubuntu-24.04:latest and uses build arguments to assign clear metadata labels. The provided information is complete and well formatted.


32-37: Final Environment Configuration
Switching to the newly created user, setting the PATH and VIRTUAL_ENV, and establishing the WORKDIR are correctly handled, ensuring a secure and predictable runtime environment.

mynorthai/Dockerfile (3)

1-2: Updated Base Image for MyNorthAI
The base image has been changed from the CUDA-enabled variant to the standard image. Please confirm that this change is intentional and that the standard base image meets your performance and compatibility requirements.


7-9: Accurate Metadata and Documentation
The label correction for the description now reads “MyNorthAI is a STT specialized with Portuguese language,” which corrects the previous spelling error. This improves the clarity of the image metadata.


29-34: Environment and Entrypoint Configuration
The ENV settings for PATH and VIRTUAL_ENV, along with the ENTRYPOINT and EXPOSE directives, are correctly configured to launch the MyNorthAI STT service.

fasterwhisper/Dockerfile.rocm (7)

1-2: Base Image and Tag Argument Usage
The use of an ARG TAG with a default of "alpha" and its application in the FROM instruction is clear and allows for flexible image versioning.


4-6: Build Metadata Arguments
Defining ARG BUILD_DATE and ARG VERSION with reasonable defaults helps embed build-time information into the image.


7-13: Image Labeling for Metadata
The LABEL directives provide comprehensive metadata (title, description, version, creation date, documentation, source, vendor) which improves traceability and image documentation.


15-17: User and Feature Toggle Arguments
Introducing ARG ALPHA (defaulting to false) and ARG USER allows conditionally installing pre-release packages and configuring user-specific paths. This approach is consistent with Docker best practices.


18-18: Shell Declaration
Switching the shell to Bash explicitly via the SHELL directive is ideal for writing complex multi-line commands and ensures consistent behavior.
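For reference, the explicit Bash declaration in a Dockerfile is:

```dockerfile
SHELL ["/bin/bash", "-c"]
```

All subsequent RUN instructions are then executed by Bash rather than the default /bin/sh.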


29-30: Environment Variables for Virtual Environment
Setting the PATH and VIRTUAL_ENV using the USER argument provides a clear way to manage the Python virtual environment. Verify that ${HOME} is configured as expected in the base image.


32-34: Entrypoint and Port Exposure
The ENTRYPOINT command correctly starts the STT server with the designated plugin, and exposing port 8080 aligns with the service requirements.

docker-compose.rocm.yml (3)

1-14: YAML Header and Reusable Configuration Definitions
Version declaration, YAML anchors for x-podman and x-logging are well structured. This reuse of configuration improves maintainability across services.
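The anchor-and-reference pattern looks like the generic sketch below (keys and values are placeholders, not the file's actual contents):

```yaml
# Illustrative only: define a reusable block once, reference it per service
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"

services:
  example_service:
    logging: *default-logging
```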


35-61: Service Configuration: ovos_stt_mynorthai
The ovos_stt_mynorthai service block correctly uses the podman anchor, sets container/hostname, restart policy, and mounts the appropriate volumes. Environment variables and port mappings are also properly configured.


62-87: Service Configuration: ovos_stt_nemo
The setup for ovos_stt_nemo is consistent with the previous service, with proper resource reservation (e.g., NVIDIA GPU) and volume mounts.

docker-compose.cuda.yml (3)

1-14: YAML Header and Anchors for CUDA Setup
The header mirrors the ROCm configuration accurately, with YAML anchors for podman and logging ensuring consistency.


35-61: Service Configuration: ovos_stt_mynorthai (CUDA)
The configuration for the MyNorthAI service under CUDA is well defined and mirrors the analogous ROCm setup (with image changes reflecting CUDA support).


62-87: Service Configuration: ovos_stt_nemo (CUDA)
The nemo service replicates the proper pattern for container configuration with GPU reservations and volume mounting.

mynorthai/Dockerfile.rocm (7)

1-2: Base Image and Tag Argument
Using the ROCm-based image from smartgic/ovos-stt-server-base-rocm with an argumentized tag allows for flexible versioning of the base image.


4-6: Build Information Arguments
Including ARG BUILD_DATE and ARG VERSION ensures that build metadata is injected into the image, aiding in traceability.


7-13: Labels for Metadata
The LABEL directives provide clear, useful metadata including a tailored title and description for the MyNorthAI image.


15-17: User and Feature Toggle Arguments
Defining ARG ALPHA and ARG USER maintains consistency with other Dockerfiles, enabling conditional dependency management and user-specific configurations.


18-18: Shell Configuration
Setting the shell to Bash is appropriate for the multi-line RUN command that follows.


29-30: Environment Variable Setup
The ENV directives for PATH and VIRTUAL_ENV construct the expected virtual environment path, ensuring that installed Python packages are accessible at runtime.


32-34: Entrypoint and Exposure
The ENTRYPOINT command correctly invokes the STT server with the MyNorthAI plugin, and exposing port 8080 makes the service available externally.

mynorthai/Dockerfile.cuda (7)

1-2: CUDA Base Image and Tag Parameterization
Using smartgic/ovos-stt-server-base-cuda with an ARG for the tag provides consistency and flexibility for CUDA-based deployments.


4-6: Metadata Arguments
The arguments for build date and version are set appropriately to allow embedding of build information.


7-13: Image Metadata Labeling
The LABEL directives correctly document the image by including title, description (tailored for MyNorthAI and CUDA support), version, creation date, documentation link, source, and vendor information.


15-17: Feature-Control and User Parameters
The use of ARG ALPHA and ARG USER helps conditionally install additional packages and set the proper user context.


18-18: Shell Directive
Specifying the shell as Bash enables robust handling of the multi-line RUN command.


29-30: Environment Variable Definitions
Setting PATH and VIRTUAL_ENV ensures that the Python virtual environment is correctly configured for runtime command resolution.


32-34: Entrypoint and Port Exposure
The ENTRYPOINT consistently starts the STT server with MyNorthAI support and EXPOSE makes port 8080 available for external access.

@JarbasAl (Member, Author) commented:

@goldyfruit I will let you take over now, but the ROCm images are all working on my home server now :)
