NVIDIA CUDA download for Windows

Free download: NVIDIA CUDA Toolkit 10.2.89

The NVIDIA CUDA Toolkit development environment helps you build software that uses specialized computation algorithms. Programs developed with it run on CUDA technology, a parallel computing architecture whose computations can execute on NVIDIA graphics chips.

A proprietary API

The main distinction and advantage of CUDA-based software is faster task execution and more efficient use of PC resources. If you have a graphics card (or an integrated GPU) that supports CUDA, you can use it to offload the central processor: the graphics chip takes over part of the operations the CPU would otherwise execute.

Beyond that, the NVIDIA CUDA Toolkit is indispensable for developing system instructions and drivers, because it lets you correctly organize two-way CPU-to-GPU and GPU-to-video-memory access.

Working in C and C++

The compiler included in the environment handles code written in dialects of C and C++. The developer package also includes many libraries of routines for GPU-accelerated graphics and math work, along with a debugger/optimizer for the applications you write.


Pros:

• builds fast, optimized applications on CUDA technology;
• code is written in the widely used C and C++ languages;
• a large number of tools included in the package;
• accelerated data exchange between the CPU cache and GPU memory;
• hardware-level support for integer and bitwise operations.

Cons:

• no significant drawbacks.

You can download the NVIDIA CUDA Toolkit development environment for free via the link at the bottom of this article.


Accelerated Computing Tools

A suite of tools, libraries, and technologies for developing applications with breakthrough levels of performance.

Combined with the performance of GPUs, these tools help developers start immediately accelerating applications on NVIDIA’s embedded, PC, workstation, server, and cloud datacenter platforms.

CUDA Toolkit

A development environment for building GPU-accelerated applications, including libraries, debugging and optimization tools, a C/C++ compiler, and a runtime library.


HPC SDK

A comprehensive suite of C, C++, and Fortran compilers, libraries, and tools for GPU-accelerating HPC applications. Supports GPU programming with standard C++ and Fortran, OpenACC directives, and CUDA.


NVIDIA IndeX

A 3D volumetric interactive visualization SDK for visualizing and interacting with massive data sets in real time, making live modifications, and navigating to the most pertinent parts of the data to gather better insights faster.

Optimized Libraries

GPU-accelerated libraries (CUDA-X) deliver dramatically higher performance than CPU-only alternatives across a wide variety of application domains.

HPC Compilers

C++, C and Fortran compilers for NVIDIA GPUs and AMD, Intel, OpenPOWER, and Arm Server CPUs support GPU programming using ISO standard C++ and Fortran, OpenACC and CUDA.

Development Tools

A set of applications, spanning desktop and mobile targets, enabling developers to build, debug, profile, and optimize class-leading and cutting-edge software.

Data Center Tools

A collection of software tools for developers and DevOps to utilize at every step of the AI and HPC software life cycle.

GPU-Accelerated Software Hub

Quickly utilize GPU-optimized software for deep learning, machine learning, and HPC with containers, pre-trained models, model scripts, helm charts, industry-specific SDKs, and much more.


CUDA 7.0 Downloads

Please Note: There is a recommended patch for CUDA 7.0 which resolves an issue in the cuFFT library that can lead to incorrect results for certain input sizes less than or equal to 1920 in any dimension when cufftSetStream() is passed a non-blocking stream (e.g., one created using the cudaStreamNonBlocking flag of the CUDA Runtime API or the CU_STREAM_NON_BLOCKING flag of the CUDA Driver API).

Version Network Installer Local Installer
Windows 8.1 / Windows 7 / Windows Server 2012 R2 / Windows Server 2008 R2 EXE (8.0MB) EXE (939MB)
cuFFT Patch ZIP (52MB), README
Windows Getting Started Guide

Q: Where is the notebook installer?
A: Previous releases of the CUDA Toolkit had separate installation packages for notebook and desktop systems. Beginning with CUDA 7.0, these packages have been merged into a single package that is capable of installing on all supported platforms.

Q: What is the difference between the Network Installer and the Local Installer?
A: The Local Installer has all of the components embedded into it (toolkit, driver, samples). This makes the installer very large, but once downloaded, it can be installed without an internet connection. The Network Installer is a small executable that will only download the necessary components dynamically during the installation so an internet connection is required.

Q: Where do I get the GPU Deployment Kit (GDK) for Windows?
A: The installers give you an option to install the GDK. If you only want to install the GDK, then you should use the network installer, for efficiency.

Q: Where can I find old versions of the CUDA Toolkit?
A: Older versions of the toolkit can be found on the Legacy CUDA Toolkits page.

Q: Is cuDNN included as part of the CUDA Toolkit?
A: cuDNN is our library for Deep Learning frameworks, and can be downloaded separately from the cuDNN home page.

Version Network Installer Local Package Installer Runfile Installer
Fedora 21 RPM (3KB) RPM (1GB) RUN (1.1GB)
OpenSUSE 13.2 RPM (3KB) RPM (1GB) RUN (1.1GB)
OpenSUSE 13.1 RPM (3KB) RPM (1GB) RUN (1.1GB)
CentOS 7 RPM (10KB) RPM (1GB) RUN (1.1GB)
CentOS 6 RPM (18KB) RPM (1GB) RUN (1.1GB)
SLES 12 RPM (3KB) RPM (1.1GB) RUN (1.1GB)
SLES 11 (SP3) RPM (3KB) RPM (1.1GB) RUN (1.1GB)
SteamOS 1.0-beta RUN (1.1GB)
Ubuntu 14.10 DEB (3KB) DEB (1.5GB) RUN (1.1GB)
Ubuntu 14.04 * DEB (10KB) DEB (902MB) RUN (1.1GB)
Ubuntu 12.04 DEB (3KB) DEB (1.3GB) RUN (1.1GB)
GPU Deployment Kit Included in Installer Included in Installer RUN (4MB)
cuFFT Patch TAR (122MB), README
Linux Getting Started Guide

* Includes POWER8 cross-compilation tools.

Q: Where can I find the CUDA 7 Toolkit for my Jetson TK1?
A: Jetson TK1 is not supported by the CUDA 7 Toolkit. Please download the CUDA 6.5 Toolkit for Jetson TK1 instead.

Q: What is the difference between the Network Installer and the Local Installer?
A: The Local Installer has all of the components embedded into it (toolkit, driver, samples). This makes the installer very large, but once downloaded, it can be installed without an internet connection. The Network Installer is a small executable that will only download the necessary components dynamically during the installation, so an internet connection is required to use this installer.

Q: Is cuDNN included as part of the CUDA Toolkit?
A: cuDNN is our library for Deep Learning frameworks, and can be downloaded separately from the cuDNN home page.

Version Network Installer Local Package Installer Runfile Installer
Ubuntu 14.10 DEB (3KB) DEB (588MB)
Ubuntu 14.04 DEB (3KB) DEB (588MB)
GPU Deployment Kit n/a n/a RUN (1.7MB)
cuFFT Patch TAR (105MB), README
Linux Getting Started Guide

Q: What is the difference between the Network Installer and the Local Installer?
A: The Local Installer has all of the components embedded into it (toolkit, driver, samples). This makes the installer very large, but once downloaded, it can be installed without an internet connection. The Network Installer is a small executable that will only download the necessary components dynamically during the installation, so an internet connection is required to use this installer.

Q: Is cuSOLVER available for the POWER8 architecture?
A: The initial release of the CUDA 7.0 toolkit omitted the cuSOLVER library from the installer. On May 29, 2015, new CUDA 7.0 installers were posted for the POWER8 architecture that included the cuSOLVER library. If you downloaded the CUDA 7.0 toolkit for POWER8 on or earlier than this date, and you need to use cuSOLVER, you will need to download the latest installer and re-install.

Version Network Installer Local Installer
Mac OS X DMG (0.4MB) PKG (977MB)
cuFFT Patch TAR (104MB), README
Mac Getting Started Guide

Q: What is the difference between the Network Installer and the Local Installer?
A: The Local Installer has all of the components embedded into it (toolkit, driver, samples). This makes the installer very large, but once downloaded, it can be installed without an internet connection. The Network Installer is a small executable that will only download the necessary components dynamically during the installation, so an internet connection is required to use this installer.

Q: Is cuDNN included as part of the CUDA Toolkit?
A: cuDNN is our library for Deep Learning frameworks, and can be downloaded separately from the cuDNN home page.

Q: What do I do if the Network Installer fails to run with the error message "The package is damaged and can't be opened. You should eject the disk image"?
A: Check that your security preferences are set to allow apps downloaded from anywhere to run. This setting can be found under: System Preferences > Security & Privacy > General


CUDA Installation Guide for Windows

The installation instructions for the CUDA Toolkit on MS-Windows systems.

1. Introduction

CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

This guide will show you how to install and check the correct operation of the CUDA development tools.

1.1. System Requirements

The next two tables list the currently supported Windows operating systems and compilers.

Table 1. Windows Operating System Support in CUDA 11.7
Operating System Native x86_64 Cross (x86_32 on x86_64)
Windows 11 YES NO
Windows 10 YES NO
Windows Server 2022 YES NO
Windows Server 2019 YES NO
Windows Server 2016 YES NO
Table 2. Windows Compiler Support in CUDA 11.7
Compiler* IDE Native x86_64 Cross (x86_32 on x86_64)
MSVC Version 193x Visual Studio 2022 17.0 YES YES
MSVC Version 192x Visual Studio 2019 16.x YES YES
MSVC Version 191x Visual Studio 2017 15.x (RTW and all updates) YES YES

* Support for Visual Studio 2015 is deprecated in release 11.1.

x86_32 support is limited. See the x86 32-bit Support section for details.

For more information on MSVC versions and Visual Studio product versions, visit https://dev.to/yumetodo/list-of-mscver-and-mscfullver-8nd.

1.2. x86 32-bit Support

Native development using the CUDA Toolkit on x86_32 is unsupported. Deployment and execution of CUDA applications on x86_32 is still supported, but is limited to use with GeForce GPUs. To create 32-bit CUDA applications, use the cross-development capabilities of the CUDA Toolkit on x86_64.

1.3. About This Document

This document is intended for readers familiar with Microsoft Windows operating systems and the Microsoft Visual Studio environment. You do not need previous experience with CUDA or experience with parallel computation.

2. Installing CUDA Development Tools

Basic instructions can be found in the Quick Start Guide. Read on for more detailed instructions.

2.1. Verify You Have a CUDA-Capable GPU

You can verify that you have a CUDA-capable GPU through the Display Adapters section in the Windows Device Manager. Here you will find the vendor name and model of your graphics card(s). If you have an NVIDIA card that is listed in http://developer.nvidia.com/cuda-gpus, that GPU is CUDA-capable. The Release Notes for the CUDA Toolkit also contain a list of supported products.

2.2. Download the NVIDIA CUDA Toolkit

The CUDA Toolkit installs the CUDA driver and tools needed to create, build and run a CUDA application as well as libraries, header files, and other resources.

Download Verification

The download can be verified by comparing the MD5 checksum posted at https://developer.download.nvidia.com/compute/cuda/11.6.2/docs/sidebar/md5sum.txt with that of the downloaded file. If the checksums differ, the downloaded file is corrupt and needs to be downloaded again.

To calculate the MD5 checksum of the downloaded file, follow the instructions at https://support.microsoft.com/kb/889768.
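As a sketch, the same comparison can be scripted. On Windows, `certutil -hashfile <file> MD5` prints the checksum; the equivalent with md5sum (Git Bash, WSL, or Linux) looks like the following. The file name and expected hash below are placeholders standing in for the real installer and the value posted in md5sum.txt:

```shell
# Stand-in for the downloaded installer; in practice this is the CUDA .exe.
printf 'hello\n' > cuda_download.bin

# The expected value would come from NVIDIA's posted md5sum.txt;
# this hash matches the stand-in content created above.
expected="b1946ac92492d2347c6235b4d2611184"
actual=$(md5sum cuda_download.bin | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch: re-download the file"
fi
```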

2.3. Install the CUDA Software

Before installing the toolkit, you should read the Release Notes, as they provide details on installation and software functionality.

Graphical Installation

Install the CUDA Software by executing the CUDA installer and following the on-screen prompts.

Silent Installation

The installer can be executed in silent mode by executing the package with the -s flag. Additional parameters can be passed which will install specific subpackages instead of all packages. See the table below for a list of all the subpackage names.

Table 3. Possible Subpackage Names
Subpackage Name Subpackage Description
Toolkit Subpackages (defaults to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7)
cudart_11.7 CUDA Runtime libraries.
cuobjdump_11.7 Extracts information from cubin files.
cupti_11.7 The CUDA Profiling Tools Interface for creating profiling and tracing tools that target CUDA applications.
cuxxfilt_11.7 The CUDA cu++filt demangler tool.
demo_suite_11.7 Prebuilt demo applications using CUDA.
documentation_11.7 CUDA HTML and PDF documentation files including the CUDA C++ Programming Guide, CUDA C++ Best Practices Guide, CUDA library documentation, etc.
memcheck_11.7 Functional correctness checking suite.
nvcc_11.7 CUDA compiler.
nvdisasm_11.7 Extracts information from standalone cubin files.
nvml_dev_11.7 NVML development libraries and headers.
nvprof_11.7 Tool for collecting and viewing CUDA application profiling data from the command-line.
nvprune_11.7 Prunes host object files and libraries to only contain device code for the specified targets.
nvrtc_11.7 NVRTC runtime libraries.
nvtx_11.7 NVTX on Windows.
visual_profiler_11.7 Visual Profiler.
sanitizer_11.7 Compute Sanitizer API.
thrust_11.7 CUDA Thrust.
cublas_11.7 cuBLAS runtime libraries.
cufft_11.7 cuFFT runtime libraries.
curand_11.7 cuRAND runtime libraries.
cusolver_11.7 cuSOLVER runtime libraries.
cusparse_11.7 cuSPARSE runtime libraries.
npp_11.7 NPP runtime libraries.
nvjpeg_11.7 nvJPEG libraries.
nsight_compute_11.7 Nsight Compute.
nsight_nvtx_11.7 Older v1.0 version of NVTX.
nsight_systems_11.7 Nsight Systems.
nsight_vse_11.7 Installs the Nsight Visual Studio Edition plugin in all VS.
visual_studio_integration_11.7 Installs CUDA project wizard and builds customization files in VS.
occupancy_calculator_11.7 Installs the CUDA_Occupancy_Calculator.xls tool.
Driver Subpackages
Display.Driver The NVIDIA Display Driver. Required to run CUDA applications.

Use the -n option if you do not want to reboot automatically after install or uninstall, even if a reboot is required.
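As an illustration, an enterprise rollout that installs only the compiler, runtime, and Visual Studio integration silently and suppresses the reboot could be assembled like this. The installer file name below is an assumption, not a value taken from this page, and the command is echoed rather than executed:

```shell
# Hypothetical installer name; -s = silent mode, the subpackage names
# limit what gets installed, -n = do not reboot automatically.
installer="cuda_11.7.0_windows.exe"
subpackages="nvcc_11.7 cudart_11.7 visual_studio_integration_11.7"
cmd="$installer -s $subpackages -n"
echo "$cmd"
```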

Extracting and Inspecting the Files Manually

Sometimes it may be desirable to extract or inspect the installable files directly, such as in enterprise deployment, or to browse the files before installation. The full installation package can be extracted using a decompression tool which supports the LZMA compression method, such as 7-zip or WinZip.

Once extracted, the CUDA Toolkit files will be in the CUDAToolkit folder, and similarly for the CUDA Visual Studio Integration. Within each directory is a .dll and a .nvi file that can be ignored, as they are not part of the installable files.

2.3.1. Uninstalling the CUDA Software

All subpackages can be uninstalled through the Windows Control Panel by using the Programs and Features widget.

2.4. Using Conda to Install the CUDA Software

This section describes the installation and configuration of CUDA when using the Conda installer. The Conda packages are available at https://anaconda.org/nvidia.

2.4.1. Conda Overview

2.4.2. Installation

To perform a basic install of all CUDA Toolkit components using Conda, run the following command:
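The original command is omitted here; a plausible form, using the package name as published on the nvidia Anaconda channel (verify against the channel before relying on it), is:

```shell
conda install cuda -c nvidia
```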

2.4.3. Uninstallation

To uninstall the CUDA Toolkit using Conda, run the following command:
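The command itself is omitted here; assuming the package was installed under the name `cuda` as above, the removal would look like:

```shell
conda remove cuda
```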

2.4.4. Installing Previous CUDA Releases

All Conda packages released under a specific CUDA version are labeled with that release version. To install a previous version, include that label in the install command such as:
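The example command is missing from this copy; using Conda's channel/label syntax with an illustrative release version, it would take a shape like:

```shell
conda install cuda -c nvidia/label/cuda-11.3.0
```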

Some CUDA releases do not move to new versions of all installable components. When this is the case, these components will be moved to the new label, and you may need to modify the install command to include both labels such as:
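The example is missing from this copy; with two labels supplied (the versions here are illustrative), such a command could look like:

```shell
conda install cuda -c nvidia/label/cuda-11.3.0 -c nvidia/label/cuda-11.3.1
```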

This example will install all packages released as part of CUDA 11.3.0.

2.5. Use a Suitable Driver Model

On Windows 10 and later, the operating system provides two driver models under which the NVIDIA Driver may operate:

  • The WDDM driver model is used for display devices.
  • The Tesla Compute Cluster (TCC) mode of the NVIDIA Driver is available for non-display devices such as NVIDIA Tesla GPUs and the GeForce GTX Titan GPUs; it uses the Windows WDM driver model.

TCC is enabled by default on most recent NVIDIA Tesla GPUs. To check which driver mode is in use and/or to switch driver modes, use the nvidia-smi tool that is included with the NVIDIA Driver installation (see nvidia-smi -h for details).
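For instance, the query field and flag below are as documented by nvidia-smi -h; switching models requires administrator rights and a device that supports TCC:

```shell
# Show each GPU's current driver model, then request TCC on GPU 0
# (for the -dm option: 0 = WDDM, 1 = TCC).
nvidia-smi --query-gpu=name,driver_model.current --format=csv
nvidia-smi -i 0 -dm 1
```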

2.6. Verify the Installation

Before continuing, it is important to verify that the CUDA toolkit can find and communicate correctly with the CUDA-capable hardware. To do this, you need to compile and run some of the included sample programs.

2.6.1. Running the Compiled Examples

The version of the CUDA Toolkit can be checked by running nvcc -V in a Command Prompt window. You can display a Command Prompt window by going to:

Start > All Programs > Accessories > Command Prompt

CUDA Samples are located in https://github.com/nvidia/cuda-samples. To use the samples, clone the project, build the samples, and run them using the instructions on the GitHub page.

To verify a correct configuration of the hardware and software, it is highly recommended that you build and run the deviceQuery sample program. The sample can be built using the provided VS solution files in the deviceQuery folder.

This assumes that you used the default installation directory structure. If CUDA is installed and configured correctly, the output should look similar to Figure 1.

The exact appearance and the output lines might be different on your system. The important outcomes are that a device was found, that the device(s) match what is installed in your system, and that the test passed.

If a CUDA-capable device and the CUDA Driver are installed but deviceQuery reports that no CUDA-capable devices are present, ensure the device and driver are properly installed.

Running the bandwidthTest program, located in the same directory as deviceQuery above, ensures that the system and the CUDA-capable device are able to communicate correctly. The output should resemble Figure 2.

The device name (second line) and the bandwidth numbers vary from system to system. The important items are the second line, which confirms a CUDA device was found, and the second-to-last line, which confirms that all necessary tests passed.

If the tests do not pass, make sure you do have a CUDA-capable NVIDIA GPU on your system and make sure it is properly installed.

3. Pip Wheels

NVIDIA provides Python Wheels for installing CUDA through pip, primarily for using CUDA with Python. These packages are intended for runtime use and do not currently include developer tools (these can be installed separately).

Please note that with this installation method, the CUDA installation environment is managed via pip, and additional care must be taken to set up your host environment to use CUDA outside the pip environment.

4. Compiling CUDA Programs

4.1. Compiling Sample Projects

The bandwidthTest project is a good sample project to build and run. It is located in https://github.com/NVIDIA/cuda-samples/tree/master/Samples/bandwidthTest.

If you elected to use the default installation location, the output is placed in CUDA Samples\v11.7\bin\win64\Release. Build the program using the appropriate solution file and run the executable. If all works correctly, the output should be similar to Figure 2.

4.2. Sample Projects

The sample projects come in two configurations, debug and release (where release contains no debugging information), and provide project files for the different Visual Studio versions.

A few of the example projects require some additional setup.

These sample projects also make use of the $CUDA_PATH environment variable to locate where the CUDA Toolkit and the associated .props files are.

The environment variable is set automatically using the Build Customization CUDA 11.7 .props file, and is installed automatically as part of the CUDA Toolkit installation process.

Table 4. CUDA Visual Studio .props locations
Visual Studio CUDA 11.7 .props file Install Directory
Visual Studio 2015 (deprecated) C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\BuildCustomizations
Visual Studio 2017 \Common7\IDE\VC\VCTargets\BuildCustomizations
Visual Studio 2019 C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Microsoft\VC\v160\BuildCustomizations
Visual Studio 2022 C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Microsoft\VC\v170\BuildCustomizations

You can reference this CUDA 11.7 .props file when building your own CUDA applications.

4.3. Build Customizations for New Projects

When creating a new CUDA application, the Visual Studio project file must be configured to include CUDA build customizations. To accomplish this, click File > New > Project > NVIDIA > CUDA, then select a template for your CUDA Toolkit version. For example, selecting the "CUDA 11.7 Runtime" template will configure your project for use with the CUDA 11.7 Toolkit. The new project is technically a C++ project (.vcxproj) that is preconfigured to use NVIDIA's Build Customizations. All standard capabilities of Visual Studio C++ projects will be available.

4.4. Build Customizations for Existing Projects

While Option 2 will allow your project to automatically use any new CUDA Toolkit version you may install in the future, selecting the toolkit version explicitly as in Option 1 is often better in practice, because if there are new CUDA configuration options added to the build customization rules accompanying the newer toolkit, you would not see those new options using Option 2.

Files which contain CUDA code must be marked as a CUDA C/C++ file. This can be done when adding the file by right-clicking the project you wish to add the file to, selecting Add > New Item, selecting NVIDIA CUDA 11.7\Code\CUDA C/C++ File, and then selecting the file you wish to add.

5. Additional Considerations

Now that you have CUDA-capable hardware and the NVIDIA CUDA Toolkit installed, you can examine and enjoy the numerous included programs. To begin using CUDA to accelerate the performance of your own applications, consult the CUDA C Programming Guide , located in the CUDA Toolkit documentation directory.

A number of helpful development tools are included in the CUDA Toolkit or are available for download from the NVIDIA Developer Zone to assist you as you develop your CUDA programs, such as NVIDIA® Nsight™ Visual Studio Edition, NVIDIA Visual Profiler, and cuda-memcheck.

For technical support on programming questions, consult and participate in the developer forums at http://developer.nvidia.com/cuda/.



This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.


