GPU Isolation with nvidia-docker 2.0

This post was last updated more than 2,550 days ago; the information in it may be out of date.

Isolation is controlled through environment variables.

NVIDIA_VISIBLE_DEVICES

Controls which GPUs the container can use:

  • 0,1,2: GPU indices; separate multiple GPUs with commas
  • all: all GPUs; this is the default in the official NVIDIA images
  • none: no GPUs, but the GPU driver libraries are still mounted inside the container
  • empty or unset: nvidia-container-runtime will have the same behavior as runc (no GPU support)
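A minimal sketch of how these values behave. The docker command in the comment is only illustrative (it needs nvidia-docker2 and a GPU host, and nvidia/cuda is just a sample image); the helper function is my own illustration, not part of the runtime:

```shell
# Real usage (requires nvidia-docker2 on a GPU host), e.g.:
#   docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,2 nvidia/cuda nvidia-smi
#
# Sketch of how the runtime interprets NVIDIA_VISIBLE_DEVICES values:
interpret_visible_devices() {
  case "$1" in
    "")   echo "unset/empty: same behavior as runc (no GPUs)" ;;
    all)  echo "expose all GPUs" ;;
    none) echo "no GPUs, but driver libraries are mounted" ;;
    *)    echo "expose GPUs: $1" ;;
  esac
}

interpret_visible_devices "0,2"   # -> expose GPUs: 0,2
```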

NVIDIA_DRIVER_CAPABILITIES

This option controls which driver libraries/binaries will be mounted inside the container.

  • compute: required for CUDA and OpenCL applications,
  • compat32: required for running 32-bit applications,
  • graphics: required for running OpenGL and Vulkan applications,
  • utility: required for using nvidia-smi and NVML,
  • video: required for using the Video Codec SDK.
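Capabilities are requested as a comma-separated list. The docker invocation in the comment is illustrative (it needs nvidia-docker2 and a GPU host); the small helper below is my own sketch that merely checks a name against the five documented capability values:

```shell
# Illustrative invocation (requires nvidia-docker2 on a GPU host):
#   docker run --runtime=nvidia \
#     -e NVIDIA_VISIBLE_DEVICES=all \
#     -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
#     nvidia/cuda nvidia-smi
#
# Check a capability name against the five documented values:
is_valid_cap() {
  case "$1" in
    compute|compat32|graphics|utility|video) return 0 ;;
    *) return 1 ;;
  esac
}

is_valid_cap utility && echo "utility: ok"   # -> utility: ok
```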

There are a few other environment variables I have not tested; see the project's GitHub page for details.

A few FAQ entries worth noting:

Is OpenGL supported?

No, OpenGL is not supported at the moment and there is no plan to support OpenGL+GLX in the near future.
OpenGL+EGL however will be supported and this issue will be updated accordingly.
If you are an NGC subscriber and require GLX for your workflow, please fill out a feature request for support consideration.

Do you support CUDA Multi Process Service (a.k.a. MPS)?

No, MPS is not supported at the moment. However, we plan on supporting this feature in the future, and this issue will be updated accordingly.

Do you support running a GPU-accelerated X server inside the container?

No, running an X server inside the container is not supported at the moment and there is no plan to support it in the near future (see also OpenGL support).

Why is nvidia-smi inside the container not listing the running processes?

nvidia-smi and NVML are not compatible with PID namespaces.
We recommend monitoring your processes on the host or inside a container using --pid=host.
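For example, a monitoring container sharing the host's PID namespace might be started like this (illustrative only; it needs nvidia-docker2 and a GPU host, and nvidia/cuda is just a sample image):

```shell
# With --pid=host the container shares the host PID namespace,
# so nvidia-smi can list the host's GPU processes:
#   docker run --runtime=nvidia --pid=host nvidia/cuda nvidia-smi
```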

Can I limit the GPU resources (e.g. bandwidth, memory, CUDA cores) taken by a container?

No. Your only option is to set the GPU clocks at a lower frequency before starting the container.

What do I have to install in my container images?

Library dependencies vary from one application to another. In order to make things easier for developers, we provide a set of official images to base your images on.

Can I use the GPU during a container build (i.e. docker build)?

Yes, as long as you configure your Docker daemon to use the nvidia runtime as the default, you will be able to have build-time GPU support. However, be aware that this can render your images non-portable (see also invalid device function).
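Setting the nvidia runtime as the default is done in the Docker daemon configuration; a sketch of /etc/docker/daemon.json as documented for nvidia-docker2 (restart the Docker daemon after editing):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```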

The official CUDA images are too big, what do I do?

The devel image tags are large since the CUDA toolkit ships with many libraries, a compiler and various command-line tools.
As a general rule of thumb, you shouldn’t ship your application with its build-time dependencies. We recommend using multi-stage builds for this purpose. Your final container image should use our runtime or base images.
As of CUDA 9.0 we now ship a base image tag which bundles the strict minimum of dependencies.
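A multi-stage build along these lines might look as follows (a sketch only; the image tags, paths, and build command are placeholders, not a working recipe):

```dockerfile
# Build stage: full CUDA toolkit, compiler, tools
FROM nvidia/cuda:9.0-devel AS build
WORKDIR /src
COPY . .
# compile your application here, e.g. with nvcc or make

# Final stage: runtime libraries only, much smaller
FROM nvidia/cuda:9.0-runtime
COPY --from=build /src/app /usr/local/bin/app
CMD ["app"]
```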

Do you support Kubernetes?

Since Kubernetes 1.8, the recommended way is to use our official device plugin. Note that this is still alpha support.
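With the device plugin installed, a pod requests GPUs through the nvidia.com/gpu extended resource. A sketch of such a pod spec (the pod name and image are examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:9.0-base
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU
```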
