Simple example of using llama.cpp with Kotlin (JVM).

Requirements:
- JDK 21
- Kotlin 2.1.0
- CMake 3.31
- C++ compiler (Clang++/GCC/MSVC)
For now, the native llama library is built only for Apple Silicon (Mac M-series) processors. You can tweak the build options in `llama-library/CMakeLists.txt`.
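For example, to target something other than Apple Silicon, you might adjust options along these lines (a sketch only; the actual option names depend on the llama.cpp version bundled here, so verify them against `llama-library/CMakeLists.txt`):

```cmake
# Hypothetical tweaks; check the option names against your llama.cpp checkout.
set(GGML_METAL OFF)                    # disable the Apple Metal backend
set(GGML_CUDA ON)                      # enable the NVIDIA CUDA backend instead
set(CMAKE_OSX_ARCHITECTURES "x86_64")  # build for Intel Macs rather than arm64
```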
Compile the native llama library:

```sh
gradle :llama-library:compileNative
```
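Note that `System.loadLibrary("llama")` in the example below only succeeds if the compiled library is on the JVM's `java.library.path`. The Gradle run task should take care of this; if you run the example outside Gradle, you may need to point the JVM at the build output yourself (the directory and jar name below are hypothetical, so substitute wherever `compileNative` actually places the library):

```sh
# Hypothetical invocation; replace the library directory and jar with the real ones.
java -Djava.library.path=llama-library/build/lib -jar kotlin-jvm-app.jar
```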
```kotlin
import kotlinx.coroutines.runBlocking
import pro.tabakov.kllama.InferenceFactory

fun main() {
    System.loadLibrary("llama")
    runBlocking {
        val kLLaMa = InferenceFactory.loadModel(
            "/path/to/model.gguf", // Path to model
            0.0f,                  // Temperature
            0L                     // Context size
        )
        println(kLLaMa.getContextSizeUsed())
        kLLaMa.ask("HI!").collect { message ->
            print(message)
        }
    }
}
```
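`ask` evidently returns a Kotlin `Flow` of generated tokens, so the response streams in as it is produced. If you want the whole reply as a single string instead, here is a minimal sketch (assuming `ask` returns `Flow<String>`; the prompt is arbitrary):

```kotlin
import kotlinx.coroutines.runBlocking
import pro.tabakov.kllama.InferenceFactory

// Sketch, assuming ask(...) returns a Flow<String> of generated tokens.
fun main() = runBlocking {
    System.loadLibrary("llama")
    val kLLaMa = InferenceFactory.loadModel("/path/to/model.gguf", 0.0f, 0L)
    // Collect the streamed tokens into one string before printing.
    val reply = buildString {
        kLLaMa.ask("HI!").collect { token -> append(token) }
    }
    println(reply)
}
```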
To run the example, put the path to your model in `kotlin/pro/atabakov/App.kt`, then run it with Gradle:

```sh
gradle examples:kotlin-jvm-app:run
```