🤖 Self-hosted, community-driven, local OpenAI-compatible API. A drop-in replacement for the OpenAI API that runs LLMs on consumer-grade hardware. LocalAI is a RESTful API for running ggml-compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others! https://github.com/go-skynet/LocalAI


LocalAI

This repository has migrated to https://github.com/mudler/LocalAI

Website: https://localai.io/

Written in Go, it uses the open-source llama.cpp backend to run models locally, can be deployed with Docker, and ships with a web UI.

See examples on how to integrate LocalAI.

💻 Getting started

For a detailed step-by-step introduction, refer to the Getting Started guide.

For those in a hurry, here's a straightforward one-liner to launch a LocalAI AIO (All-in-One) image using Docker:

docker run -ti --rm --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# or, if you have an Nvidia GPU:
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12

Then visit: http://127.0.0.1:8080/swagger
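
Once the container is up, any OpenAI-style client can talk to it. Below is a minimal sketch using curl against the chat completions endpoint; it assumes the AIO image above is running on port 8080 and exposes a model under the alias gpt-4 (the AIO images pre-map common OpenAI model names; adjust the name to whatever /v1/models reports on your install):

# Send a chat completion request to the local OpenAI-compatible endpoint.
# "gpt-4" is an assumed alias configured by the AIO image, not a hosted model.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "How are you?"}]
  }'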

🚀 Features

💻 Usage

Check out the Getting started section in our documentation.
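
As a quick sanity check (assuming the container from the Getting started section above is still running), you can also ask the API which models it currently serves:

# List the models exposed by the OpenAI-compatible API
curl http://127.0.0.1:8080/v1/models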

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries:

Other: