Ollama: a model management tool for running Llama 3, Mistral, Gemma, and other large language models. https://github.com/jmorganca/ollama
Ollama gets you up and running with large language models locally.
The repository at https://github.com/jmorganca/ollama has moved to https://github.com/ollama/ollama.
# Install ollama
wget https://github.com/jmorganca/ollama/releases/download/v0.1.11/ollama-linux-amd64
chmod +x ollama-linux-amd64
# or
curl -fsSL https://ollama.com/install.sh | sh
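A quick sanity check after installing; this assumes the binary is reachable on PATH as ollama (e.g. after mv ollama-linux-amd64 /usr/local/bin/ollama for the wget route):
ollama --version # prints the client version, and the server version if one is reachable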
ollama pull llava # pull a model from the registry
ollama push llava # push a model to a registry
ollama list # list models pulled locally
ollama cp # copy a model (usage: ollama cp source target)
ollama show # show information for a model (usage: ollama show model)
ollama create # create a model from a Modelfile; see the sketch after this list
ollama rm # remove a model (usage: ollama rm model)
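A minimal Modelfile sketch; the base model, parameter value, and system prompt below are illustrative, not from these notes:
# Modelfile: start from a base model, set a sampling parameter, bake in a system prompt
FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a helpful assistant.
Then build and run it (my-llama2 is a hypothetical name):
ollama create my-llama2 -f Modelfile
ollama run my-llama2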
ollama run mistral
ollama run codellama
ollama run llama2
ollama run llama2-uncensored
ollama run llama2:13b
ollama run llama2:70b
ollama run deepseek-r1:7b
ollama run deepseek-r1:8b
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b
ollama run orca-mini
ollama run vicuna
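ollama run also accepts a one-shot prompt as an argument instead of opening an interactive session; the model name below is just an example:
ollama run llama2 "Why is the sky blue?" # prints the completion and exits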
./ollama serve # or ollama serve; these notes run it on port 8434 (Ollama's default is 11434; set OLLAMA_HOST to change the bind address)
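Once the server is running you can call its REST API directly; a minimal sketch against the 8434 port used in these notes (use 11434 if you run with defaults):
curl http://localhost:8434/api/generate -d '{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}' # one-shot completion, non-streaming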
Related GUI tools:
chatbox
chatbot-ollama
export DEFAULT_MODEL="deepseek-r1:1.5b" && export OLLAMA_HOST="http://localhost:8434" && cd /opt/chatbot-ollama && npm run dev # start the chatbot-ollama web app
open-webui
export ENABLE_OPENAI_API=false && export OLLAMA_API_BASE_URL="http://localhost:8434" && open-webui serve # start open-webui
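open-webui can also run from its Docker image; a sketch following the project's quickstart, with OLLAMA_BASE_URL pointed at the 8434 port used here (adjust to your setup):
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:8434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main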
anythingllm-server
cd /opt/anythingllm-template && yarn setup && export OLLAMA_HOST="http://localhost:8434" && env LLM_PROVIDER='ollama' OLLAMA_BASE_PATH='http://127.0.0.1:8434' OLLAMA_MODEL_PREF='deepseek-r1:1.5b' OLLAMA_MODEL_TOKEN_LIMIT=4096 EMBEDDING_ENGINE='ollama' EMBEDDING_BASE_PATH='http://127.0.0.1:8434' EMBEDDING_MODEL_PREF='bge-m3:latest' yarn dev:server # start the anythingllm server
Build from source: first install the base dependencies git, build-essential, cmake, and Go 1.21
go generate ./... # older ollama source trees generate the bundled llama.cpp bindings first
go build .
Or build inside a container using Dockerfile.build
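A sketch of the container build, assuming Dockerfile.build sits at the repo root as in older ollama checkouts (the image tag is arbitrary):
docker build -t ollama-build -f Dockerfile.build . # build the ollama binary inside a container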