In the article “AI Learning Record – Deploying Models with Docker”, I began using NVIDIA Docker to deploy various models so they would stay available in the future. The first model, ChatGLM2, deployed smoothly, but Langchain-Chatchat ran into a few minor problems, and Stable Diffusion failed outright and would only run on the CPU. For now I have accepted that running in Docker means CPU only, since the installed packages also differ between environments… compared with having to research the installation each time, the CPU is still acceptable for light use.
So I wrote this separate article to record how these special cases were installed. Note that as versions evolve, some of these problems may no longer exist in newer releases, or new ones may appear; readers should keep this in mind.
Also, there is no need to install Anaconda inside Docker, because the container already isolates the environment and there is no risk of damaging a unique local setup.
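If you still want an isolated environment inside the container, Python's built-in venv module is enough; a minimal sketch (the path is illustrative):

```shell
# Create an isolated environment with Python's built-in venv module
# (no Anaconda needed inside the container); the path is illustrative.
python3 -m venv /tmp/demo-venv

# Run the environment's interpreter directly; sys.prefix points at the venv
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
```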
Langchain-Chatchat
The Langchain-Chatchat project can be installed in Docker following the normal procedure, but an error may occur when running the last command.
Removing transformer-engine solves the problem:
```shell
pip uninstall transformer_engine
```
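As written, pip will ask for confirmation, which blocks a non-interactive shell such as a Dockerfile RUN step. A hedged variant for scripted use (the `|| true` is my addition, not part of the original fix):

```shell
# -y skips pip's confirmation prompt, which would otherwise block a
# non-interactive shell such as a Dockerfile RUN step; "|| true" keeps
# the step from failing when the package is not installed.
pip uninstall -y transformer_engine || true
```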
Stable Diffusion
The steps are fairly involved; refer to the instructions below.
```shell
docker pull ubuntu:22.04
docker run -p 7860:7860 --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --name genimg -it ubuntu:22.04
cd
apt-get update
apt install wget git python3 python3-venv libgl1 libglib2.0-0 vim python3-pip
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
chmod a+x webui.sh
# Add the following line near the top of webui.sh:
#export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
# Edit webui.sh and change can_run_as_root to 1
./webui.sh --listen
```
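The two manual edits to webui.sh can also be scripted with sed; a sketch assuming the stock script from the AUTOMATIC1111 repository, which sets `can_run_as_root=0` near the top:

```shell
# Stand-in file so the sketch is self-contained; in practice webui.sh
# is the script downloaded with wget in the steps above.
[ -f webui.sh ] || printf '#!/usr/bin/env bash\ncan_run_as_root=0\n' > webui.sh

# 1) Insert the CPU-only options right after the shebang line
sed -i '2i export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"' webui.sh

# 2) Allow the script to run as root inside the container
sed -i 's/^can_run_as_root=0/can_run_as_root=1/' webui.sh
```

The `2i` address form is a GNU sed one-liner extension; on other sed implementations the insert needs the traditional backslash-newline syntax.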