🤖 Self-hosted, community-driven, local OpenAI-compatible API. Drop-in replacement for OpenAI running LLMs on consumer-grade hardware. LocalAI is a RESTful API to run ggml-compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others! https://github.com/go-skynet/LocalAI
Migrated to https://github.com/mudler/LocalAI
Written in Go, built on the open-source llama.cpp backend for running local models; deployable with Docker and ships with a WebUI.
See examples on how to integrate LocalAI.
For a detailed step-by-step introduction, refer to the Getting Started guide.
For those in a hurry, here's a straightforward one-liner to launch a LocalAI AIO (All-in-One) image using Docker:

```bash
docker run -ti --rm --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# or, if you have an Nvidia GPU:
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
```
Visit: http://127.0.0.1:8080/swagger
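Because the API is OpenAI-compatible, a standard chat-completion request works against the local endpoint once the container is running. Below is a minimal sketch using only the Python standard library; the model name `gpt-4` is an assumption here (the AIO images map common OpenAI model names to bundled local models):

```python
import json
import urllib.request

def build_chat_payload(prompt, model="gpt-4"):
    """Build an OpenAI-style chat completion request body.
    The model name is an assumption; adjust to whatever your LocalAI instance serves."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat_completion(prompt, base_url="http://127.0.0.1:8080", model="gpt-4"):
    """POST the payload to a running LocalAI server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The same endpoint shape means existing OpenAI client libraries can usually be pointed at LocalAI simply by overriding their base URL.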
Supported backends include:

- Text generation: llama.cpp, gpt4all.cpp, :book: and more
- Audio transcription: whisper.cpp

Check out the Getting started section in our documentation.
Resources:

- Build and deploy custom containers
- WebUIs
- Model galleries
- Other