AI Learning Record – Troubleshooting Issues with Installing Models Using Docker

In the article "AI Learning Record – Deploying Models with Docker", I began trying to use NVIDIA Docker to deploy different models so they would remain available in the future. The first model, ChatGLM2, deployed smoothly, but I ran into some minor problems deploying Langchain-Chatchat, and Stable Diffusion failed entirely on the GPU and could only be run on the CPU. For now I have accepted that running it in Docker means running on the CPU, since I found that the installed packages also differ each time… compared with having to research a fresh install every time, the CPU is still acceptable for light use.

So I wrote this separate article to record the installation of these special cases. Note, however, that as versions improve, some of these problems may no longer exist, and new ones may appear. Readers should keep this in mind.

Also, there is no need to install Anaconda inside Docker, because the container already provides an isolated environment, so there is no risk of damaging a shared one.


Langchain-Chatchat

The Langchain-Chatchat project can be installed in Docker following the normal procedure. When running the last command, an error may occur.

"Removing the transformer-engine package solves the problem."
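A minimal sketch of that fix, run inside the container. The package name comes from the note above; invoking it through pip is my assumption:

```shell
# Remove the conflicting package inside the container.
# Package name taken from the note above; `|| true` keeps the
# script going even if the package is already absent.
pip uninstall -y transformer-engine || true
```

After removing the package, re-run the last command of the installation procedure.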

Stable Diffusion

The steps are quite extensive; please refer to the following instructions.
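Since Stable Diffusion only ran on the CPU in Docker for me, here is a hedged sketch of forcing CPU-only mode. It assumes the AUTOMATIC1111 web UI, which reads launch flags from webui-user.sh; which UI you use may differ:

```shell
# webui-user.sh -- configuration file read by the AUTOMATIC1111
# web UI launcher (which UI is in use here is an assumption).
# --skip-torch-cuda-test : don't abort when no CUDA device is found
# --use-cpu all          : run every model component on the CPU
# --no-half              : disable half precision, which CPUs handle poorly
export COMMANDLINE_ARGS="--skip-torch-cuda-test --use-cpu all --no-half"
```

With this in place, starting the web UI as usual should fall back to CPU execution instead of failing on the missing GPU.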

