# llama-cpp-runner
`llama-cpp-runner` is the ultimate Python library for running [llama.cpp](https://github.com/ggerganov/llama.cpp) with zero hassle. It automates downloading prebuilt binaries from the upstream repo, keeping you always **up to date** with the latest developments, all while requiring no complicated setup: everything works **out-of-the-box**.

---

## Key Features 🌟
1. **Always Up-to-Date**: Automatically fetches the latest prebuilt binaries from the upstream llama.cpp GitHub repo. No need to worry about staying current.
4. **Built-in HTTP Server**: Automatically spins up a server for chat interactions and shuts it down after an idle timeout to save resources (see the sketch after this list).
5. **Cross-Platform Support**: Works on **Windows**, **Linux**, and **macOS** with automatic detection for AVX/AVX2/AVX512/ARM architectures.
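
The idle-timeout behavior in feature 4 boils down to a reset-on-activity countdown. Below is a minimal sketch of that pattern; `IdleTimer`, `touch`, and `on_idle` are illustrative names, not the library's actual API:

```python
import threading

class IdleTimer:
    """Hypothetical helper illustrating the idle-timeout idea.
    Call touch() on every request; on_idle fires after `timeout`
    seconds of silence."""

    def __init__(self, timeout: float, on_idle):
        self.timeout = timeout
        self.on_idle = on_idle          # e.g. a callback that stops the server
        self._timer = None
        self.touch()

    def touch(self):
        # Restart the countdown whenever a request arrives.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_idle)
        self._timer.daemon = True       # don't keep the process alive
        self._timer.start()


# Stop an (imaginary) server after 300 quiet seconds.
timer = IdleTimer(timeout=300, on_idle=lambda: print("stopping idle server"))
```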
---

## Why Use `llama-cpp-runner`?
- **Streamlined Model Serving**: Effortlessly manage multiple models and serve them with an integrated HTTP server.
- **Fast Integration**: Use prebuilt binaries from upstream so you can spend more time building and less time troubleshooting.
---

## Installation 🚀
Installing `llama-cpp-runner` is quick and easy! Just use pip:
```bash
pip install llama-cpp-runner
```
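To confirm the package is installed and see which version pip resolved:

```bash
pip show llama-cpp-runner
```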
---

## Usage 📖
### Initialize the Runner
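A minimal end-to-end example: initialize the runner against a local models directory, then request a chat completion. The import path and constructor arguments shown here are assumptions; `chat_completion` accepts an OpenAI-style request dict: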
```python
from llama_cpp_runner.main import LlamaCpp  # import path assumed

# Initialize against a local directory of GGUF models ("models_dir" assumed).
llama_runner = LlamaCpp(models_dir="models")

# OpenAI-style chat request served by the managed llama.cpp server.
response = llama_runner.chat_completion({
    "model": "your-model.gguf",  # hypothetical model filename
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
})

print(response)
```
---

## How It Works 🛠️
1. Automatically detects your system architecture (e.g., AVX, AVX2, ARM) and platform (a detection sketch follows this list).
2. Downloads and extracts the prebuilt llama.cpp binaries from the official repo.
3. Spins up a lightweight HTTP server for chat interactions.
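
To make step 1 concrete, here is one way platform and CPU-feature detection could look in plain Python. The variant strings are hypothetical, and this is not the library's actual selection logic:

```python
import platform

def detect_build_variant() -> str:
    """Guess a llama.cpp prebuilt-binary variant for this machine.
    Illustrative only; the variant names are made up."""
    system = platform.system()             # "Windows", "Linux", or "Darwin"
    machine = platform.machine().lower()   # e.g. "x86_64", "arm64", "aarch64"

    if machine in ("arm64", "aarch64"):
        return f"{system.lower()}-arm64"

    # On Linux, CPU feature flags are listed in /proc/cpuinfo.
    flags = ""
    if system == "Linux":
        with open("/proc/cpuinfo") as f:
            flags = f.read()

    for feature in ("avx512", "avx2", "avx"):  # prefer the widest vector width
        if feature in flags:
            return f"{system.lower()}-x64-{feature}"
    return f"{system.lower()}-x64"

print(detect_build_variant())
```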
---

## Advantages 👍
- **Hassle-Free**: No need to compile binaries or manage system-specific dependencies.
- **Latest Features, Always**: Pick up llama.cpp’s improvements as soon as each release lands.
- **Optimized for Your System**: Automatically fetches the best binary for your architecture.
---

## Supported Platforms 🖥️
- Windows
- macOS
- Linux
---

## Contributing 💻
We’d love your contributions! Bug reports, feature requests, and pull requests are all welcome.

---

## License 📜
This library is open-source and distributed under the MIT license.