# ai

A repo for building AI tools on top of Hugging Face.

## How to do fast inference using the API

* [Use hugging face inference api](https://gradio.app/using_hugging_face_integrations/#using-hugging-face-inference-api)
* [How to use inference](https://huggingface.co/docs/huggingface_hub/how-to-inference)
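The links above cover the hosted Inference API. As a rough sketch (the endpoint URL pattern is the public one documented by Hugging Face; the helper name and the `hf_...` token below are illustrative placeholders), a request can be built with just the standard library:

```python
import json
import urllib.request

# Base URL of the hosted Hugging Face Inference API.
API_URL = "https://api-inference.huggingface.co/models/"

def build_inference_request(model_id: str, payload: dict, token: str) -> urllib.request.Request:
    """Build an authenticated POST request for the hosted Inference API.

    model_id, payload, and token are caller-supplied; token is a
    Hugging Face access token.
    """
    return urllib.request.Request(
        API_URL + model_id,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

# Actually sending it needs network access and a real token:
# with urllib.request.urlopen(
#         build_inference_request("gpt2", {"inputs": "Hello"}, "hf_...")) as resp:
#     print(json.loads(resp.read()))
```

The official `huggingface_hub` client wraps the same endpoint with retries and model-specific helpers, so prefer it for anything beyond a quick test.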

## Verify the GPU is working

* [Reference PyTorch site](https://pytorch.org/get-started/locally/)
* Run `numba -s | grep -i cuda` to check Numba's CUDA detection
* Run `python utils/verify_cuda_pytorch.py`
* Run `nvidia-smi`; it should list the GPU
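The contents of `utils/verify_cuda_pytorch.py` aren't shown in this README; a minimal sketch of such a check might look like:

```python
def verify_cuda() -> str:
    """Report whether PyTorch can see a CUDA device.

    Returns a status string instead of raising, so the check also
    runs on machines without a GPU or without PyTorch installed.
    """
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CUDA not available"

if __name__ == "__main__":
    print(verify_cuda())
```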


## Dev Container

A single Python 3 environment is enough: install Anaconda, Jupyter Notebook, TensorFlow, PyTorch, etc., along with the corresponding VS Code extensions and NVIDIA support. The container is defined in `ai\.devcontainer\Dockerfile`:

```dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
```
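The matching `devcontainer.json` isn't shown in this repo snippet; a minimal sketch (the `VARIANT` value and extension list are illustrative assumptions) could be:

```json
{
  "name": "ai",
  "build": { "dockerfile": "Dockerfile", "args": { "VARIANT": "3.10" } },
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "ms-toolsai.jupyter"]
    }
  },
  "runArgs": ["--gpus", "all"]
}
```

`"runArgs": ["--gpus", "all"]` passes the host GPUs into the container, which `nvidia-smi` needs in order to see them.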

## AI Image

AI images are usually quite large.


### Environment setup

Dependencies are usually managed with `requirements.txt` (pip), `pyproject.toml` (Poetry), or `environment.yml` (conda).
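For example, a minimal conda `environment.yml` (package names and versions here are illustrative, not taken from this repo) might look like:

```yaml
name: ai
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
  - jupyter
  - pip
  # delegate pure-pip packages to requirements.txt
  - pip:
      - -r requirements.txt
```

Recreate the environment with `conda env create -f environment.yml`.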