Official repo for the paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models https://git.yoqi.me/microsoft/visual-chatgpt
Visual ChatGPT connects ChatGPT with a series of Visual Foundation Models to enable sending and receiving images during a conversation.
Paper: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models

Download the model files:

bash download.sh

Run:

python visual_chatgpt.py
```bash
# create a new environment
conda create -n visgpt python=3.8

# activate the new environment
conda activate visgpt

# install the basic dependencies
pip install -r requirement.txt

# download the visual foundation models
bash download.sh

# set your private OpenAI API key
export OPENAI_API_KEY={Your_Private_Openai_Key}

# create a folder to save images
mkdir ./image

# start Visual ChatGPT!
python visual_chatgpt.py
```
Here we list the GPU memory usage of each visual foundation model. To save GPU memory, you can modify self.tools to load fewer models (see the sketch after the table):
Foundation Model | Memory Usage (MB) |
---|---|
ImageEditing | 6667 |
ImageCaption | 1755 |
T2I | 6677 |
canny2image | 5540 |
line2image | 6679 |
hed2image | 6679 |
scribble2image | 6679 |
pose2image | 6681 |
BLIPVQA | 2709 |
seg2image | 5540 |
depth2image | 6677 |
normal2image | 3974 |
InstructPix2Pix | 2795 |
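For example, a setup limited to captioning and text-to-image would only need roughly 1755 + 6677 MB from the table above. Below is a minimal sketch of that idea; the class names `ImageCaptioning` and `T2I` and the tool names are assumed to match those defined in visual_chatgpt.py, so treat this as an illustration rather than the exact upstream code:

```python
from langchain.agents import Tool

# Assumed imports: these classes are expected to be defined in
# visual_chatgpt.py; adjust the names if your copy differs.
from visual_chatgpt import ImageCaptioning, T2I


class SlimConversationBot:
    """A hypothetical bot that loads only two foundation models."""

    def __init__(self):
        # Load only the models you need; every omitted model frees the
        # GPU memory listed in the table above.
        self.i2t = ImageCaptioning(device="cuda:0")
        self.t2i = T2I(device="cuda:0")

        # Register only the matching tools so the agent never tries to
        # call a model that was not loaded.
        self.tools = [
            Tool(
                name="Get Photo Description",
                func=self.i2t.inference,
                description="useful for describing the content of an image",
            ),
            Tool(
                name="Generate Image From User Input Text",
                func=self.t2i.inference,
                description="useful for generating an image from a text prompt",
            ),
        ]
```

With only these two entries, the agent can still caption and generate images while the heavier ControlNet-based tools stay unloaded.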
We appreciate the open-source contributions of the following projects: Hugging Face, LangChain, Stable Diffusion, ControlNet, InstructPix2Pix, CLIPSeg, and BLIP.