macOS 11 and Windows ROCm wheels are unavailable for 0.2.22+ due to unresolved build issues in llama.cpp. ROCm builds for AMD GPUs: https ...
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.