
kLLaMa-jvm


Simple example of using llama.cpp with kotlin (JVM)

Prerequisites 📋

  • JDK 21
  • Kotlin 2.1.0
  • CMake 3.31
  • C++ compiler (Clang++/GCC/MSVC)

Installation 🛠️

For now, the native llama.cpp library is built only for Apple Silicon (Mac M-series) processors. You can tweak the build options in

llama-library/CMakeLists.txt
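For other platforms, a minimal sketch of what such a tweak might look like (the option names follow upstream llama.cpp/ggml and are assumptions here, not options this project is known to expose):

```cmake
# Hypothetical adjustments for a non-Mac build in llama-library/CMakeLists.txt.
# Option names come from upstream llama.cpp (ggml backends).
set(GGML_METAL OFF)  # disable the Apple Metal backend
set(GGML_CUDA ON)    # enable the CUDA backend on NVIDIA hardware, if available
```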

Compile llama:

gradle :llama-library:compileNative

Example 🚀

import kotlinx.coroutines.runBlocking
import pro.tabakov.kllama.InferenceFactory

fun main() {
    System.loadLibrary("llama")

    runBlocking {
        val kLLaMa = InferenceFactory.loadModel(
            "/path/to/model.gguf", // Path to model
            0.0f, // Temperature
            0L // Context Size
        )
        println(kLLaMa.getContextSizeUsed())

        kLLaMa.ask("HI!").collect { message ->
            print(message)
        }
    }
}
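Since `ask` streams the reply token by token, the stream can also be buffered into a single string instead of printing as it arrives. A minimal sketch, assuming `ask` returns a `Flow<String>` as the example above suggests (`collectReply` is a hypothetical helper, not part of this library):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.fold
import kotlinx.coroutines.runBlocking

// Sketch: accumulate a token stream into one complete reply.
// `tokens` stands in for the Flow returned by kLLaMa.ask(...).
fun collectReply(tokens: Flow<String>): String = runBlocking {
    tokens.fold(StringBuilder()) { acc, token -> acc.append(token) }
        .toString()
}
```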

Put the path to your model in kotlin/pro/tabakov/App.kt, then run the example with Gradle:

gradle examples:kotlin-jvm-app:run
