High-performance computing in the context of CUDA


CUDA (Compute Unified Device Architecture) is a proprietary parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, significantly broadening their utility in scientific and high-performance computing. CUDA was created by Nvidia beginning in 2004 and officially released in 2007. The name was originally an acronym, but Nvidia has since dropped its common use and now rarely expands it.

CUDA is both a software layer that manages data and gives direct access to the GPU (and the CPU where necessary), and a set of APIs and libraries that enable parallel computation across a range of workloads. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
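
As a rough sketch of what this looks like in practice, the following minimal CUDA C++ program (the kernel name, array sizes, and launch configuration are illustrative choices, not something prescribed by CUDA itself) uses the runtime API to move data to the GPU, launch a kernel across many threads, and copy the result back:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of the output array.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, managed through the CUDA runtime API.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);   // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Each thread computes a single output element; the <<<blocks, threads>>> launch syntax is the CUDA language extension that the nvcc compiler lowers into runtime API calls.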

In this Dossier

High-performance computing in the context of Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
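
As one hedged illustration of dividing a large problem into smaller ones solved at the same time, the CUDA sketch below (kernel and variable names are mine) sums 2^20 numbers by giving each block a slice of the array, reducing each slice in shared memory, and combining the per-block partial sums:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Data-parallel sum: the large problem (summing n numbers) is divided into
// blocks, each block reduces its slice in shared memory simultaneously,
// and the partial results are combined with an atomic add.
__global__ void blockSum(const float* x, float* total, int n) {
    __shared__ float partial[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    partial[tid] = (i < n) ? x[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) partial[tid] += partial[tid + stride];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(total, partial[0]);
}

int main() {
    const int n = 1 << 20;
    float *x, *total;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the example short
    cudaMallocManaged(&total, sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;
    *total = 0.0f;

    blockSum<<<(n + 255) / 256, 256>>>(x, total, n);
    cudaDeviceSynchronize();
    printf("sum = %f (expected %d)\n", *total, n);

    cudaFree(x); cudaFree(total);
    return 0;
}
```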

In computer science, parallelism and concurrency are two different things: a parallel program uses multiple CPU cores, each core performing a task independently. Concurrency, on the other hand, enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. threads) without necessarily completing each one before moving on. A program can exhibit parallelism, concurrency, both, or neither.
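
In CUDA terms (a hypothetical mapping, not taken from the article), the distinction can be sketched like this: a single kernel launch is parallel, because its threads genuinely execute simultaneously across the GPU's cores, while independent work issued to separate streams is concurrent, because the program only promises that the tasks do not depend on each other and leaves the hardware free to interleave or overlap them:

```cuda
#include <cuda_runtime.h>

__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;              // many threads execute this at the same instant
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 1.0f; }

    // Parallelism: one launch whose threads run simultaneously across the GPU.
    scale<<<(n + 255) / 256, 256>>>(a, 2.0f, n);
    cudaDeviceSynchronize();

    // Concurrency: two independent tasks submitted to separate streams.
    // The GPU may overlap them or interleave them; the program only states
    // that neither depends on the other.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    scale<<<(n + 255) / 256, 256, 0, s1>>>(a, 3.0f, n);
    scale<<<(n + 255) / 256, 256, 0, s2>>>(b, 3.0f, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```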

View the full Wikipedia page for Parallel computing

High-performance computing in the context of RIKEN

Riken (Japanese: 理研; English: /ˈrɪkɛn/; stylized in all caps as RIKEN) is a national scientific research institute in Japan. Founded in 1917, it now has about 3,000 scientists on seven campuses across Japan, including the main site at Wakō, Saitama Prefecture, on the outskirts of Tokyo. Riken is a Designated National Research and Development Institute, and was formerly an Independent Administrative Institution.

Riken conducts research in various fields of science, including physics, chemistry, biology, genomics, medical science, engineering, high-performance computing and computational science, and ranging from basic research to practical applications with 485 partners worldwide. It is almost entirely funded by the Japanese government, with an annual budget of ¥100 billion (US$750 million) in FY2023.

View the full Wikipedia page for RIKEN

High-performance computing in the context of Computational scientist

A computational scientist is a person skilled in scientific computing. This person is usually a scientist, a statistician, an applied mathematician, or an engineer who applies high-performance computing, and sometimes cloud computing, to advance the state of the art in an applied discipline such as physics, chemistry, or the social sciences. Scientific computing has thus increasingly influenced areas such as economics, biology, law, and medicine. Because a computational scientist's work is generally applied to science and other disciplines, they are not necessarily trained in computer science specifically, though concepts from computer science are often used. Computational scientists are typically researchers at universities, national laboratories, or technology companies.

One task of a computational scientist is to analyze large amounts of data, often from astrophysics or related fields that generate huge volumes of it. Computational scientists frequently have to clean up and calibrate the data into a usable form before an effective analysis is possible. They are also tasked with creating artificial data through computer models and simulations.
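
To give a flavor of the "artificial data through simulations" part, here is a small, hypothetical CUDA sketch (the kernel and its parameters are mine, not from the article) that uses the cuRAND device API to generate random samples and estimate π from them:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Each thread draws `trials` random points in the unit square and counts
// how many fall inside the quarter circle; 4 * (hits / samples) estimates pi.
__global__ void monteCarloPi(unsigned long long seed, int trials, unsigned long long* hits) {
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, id, 0, &state);

    unsigned long long local = 0;
    for (int t = 0; t < trials; ++t) {
        float x = curand_uniform(&state);
        float y = curand_uniform(&state);
        if (x * x + y * y <= 1.0f) ++local;
    }
    atomicAdd(hits, local);
}

int main() {
    const int threads = 256, blocks = 64, trials = 1000;
    unsigned long long* hits;
    cudaMallocManaged(&hits, sizeof(unsigned long long));
    *hits = 0;

    monteCarloPi<<<blocks, threads>>>(1234ULL, trials, hits);
    cudaDeviceSynchronize();

    double samples = (double)threads * blocks * trials;
    printf("pi ~= %f\n", 4.0 * (double)*hits / samples);

    cudaFree(hits);
    return 0;
}
```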

View the full Wikipedia page for Computational scientist

High-performance computing in the context of Oak Ridge National Laboratory

Oak Ridge National Laboratory (ORNL) is a federally funded research and development center in Oak Ridge, Tennessee, United States. Founded in 1943, the laboratory is sponsored by the United States Department of Energy and administered by UT–Battelle, LLC.

ORNL is the largest science and energy national laboratory in the Department of Energy system by size and the third largest by annual budget. It is located in the Roane County section of Oak Ridge. Its scientific programs focus on materials, nuclear science, neutron science, energy, high-performance computing, environmental science, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and industry.

View the full Wikipedia page for Oak Ridge National Laboratory

High-performance computing in the context of Concurrency (computer science)

In computer science, concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions. Concurrency improves responsiveness, throughput, and scalability in modern computing.
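
As one concrete way this plays out on a GPU, the hedged CUDA sketch below (chunk sizes and names are illustrative) splits a buffer into chunks and issues each chunk's copy-in, kernel, and copy-out to its own stream, so transfers for one chunk can overlap with computation on another; this is concurrency used to improve throughput:

```cuda
#include <cuda_runtime.h>

__global__ void increment(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int chunks = 4, chunkN = 1 << 18;
    float* host;                       // pinned host memory allows asynchronous copies
    cudaMallocHost(&host, chunks * chunkN * sizeof(float));
    for (int i = 0; i < chunks * chunkN; ++i) host[i] = 0.0f;

    float* dev;
    cudaMalloc(&dev, chunks * chunkN * sizeof(float));

    cudaStream_t streams[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&streams[c]);

    // Each chunk's copy-in, kernel, and copy-out are ordered within its own
    // stream, but different chunks may overlap: the copies for one chunk can
    // run while another chunk's kernel is executing.
    for (int c = 0; c < chunks; ++c) {
        size_t off = (size_t)c * chunkN;
        cudaMemcpyAsync(dev + off, host + off, chunkN * sizeof(float),
                        cudaMemcpyHostToDevice, streams[c]);
        increment<<<(chunkN + 255) / 256, 256, 0, streams[c]>>>(dev + off, chunkN);
        cudaMemcpyAsync(host + off, dev + off, chunkN * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();           // wait for all streams to finish

    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(streams[c]);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```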

View the full Wikipedia page for Concurrency (computer science)

High-performance computing in the context of Nvidia

Nvidia Corporation (/ɛnˈvɪdiə/ en-VID-ee-ə) is an American technology company headquartered in Santa Clara, California. Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, it develops graphics processing units (GPUs), systems on chips (SoCs), and application programming interfaces (APIs) for data science, high-performance computing, and mobile and automotive applications. Nvidia has been described as a Big Tech company.

Originally focused on GPUs for video gaming, Nvidia broadened their use into other markets, including artificial intelligence (AI), professional visualization, and supercomputing. The company's product lines include GeForce GPUs for gaming and creative workloads, and professional GPUs for edge computing, scientific research, and industrial applications. As of the first quarter of 2025, Nvidia held a 92% share of the discrete desktop and laptop GPU market.

View the full Wikipedia page for Nvidia

High-performance computing in the context of Silicon Graphics

Silicon Graphics, Inc. (stylized as SiliconGraphics before 1999, later rebranded SGI, historically known as Silicon Graphics Computer Systems or SGCS) was an American high-performance computing manufacturer, producing computer hardware and software. It was founded in Mountain View, California, in November 1981 by James H. Clark, a computer scientist and entrepreneur perhaps best known for later founding Netscape (with Marc Andreessen). Its initial market was 3D graphics computer workstations, but its products, strategies and market positions evolved significantly over time.

Early systems were based on the Geometry Engine that Clark and Marc Hannah had developed at Stanford University, and were derived from Clark's broader background in computer graphics. The Geometry Engine was the first very-large-scale integration (VLSI) implementation of a geometry pipeline, specialized hardware that accelerated the "inner-loop" geometric computations needed to display three-dimensional images. For much of its history, the company focused on 3D imaging and was a major supplier of both hardware and software in this market.
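
The "inner-loop" computation being described is, at its core, a 4x4 transformation matrix applied to every vertex of a scene. A rough modern analogue written as a CUDA kernel (purely illustrative; this is not SGI's hardware or code) looks like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

struct Vec4 { float x, y, z, w; };

// The matrix lives in constant memory: one transform applied to every vertex.
__constant__ float M[16];

// Per-vertex multiply-add: the kind of inner-loop geometric computation
// the Geometry Engine implemented in fixed-function hardware.
__global__ void transformVertices(const Vec4* in, Vec4* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Vec4 v = in[i];
    out[i] = { M[0]*v.x  + M[1]*v.y  + M[2]*v.z  + M[3]*v.w,
               M[4]*v.x  + M[5]*v.y  + M[6]*v.z  + M[7]*v.w,
               M[8]*v.x  + M[9]*v.y  + M[10]*v.z + M[11]*v.w,
               M[12]*v.x + M[13]*v.y + M[14]*v.z + M[15]*v.w };
}

int main() {
    // A simple translation matrix (row-major) and a batch of vertices.
    float h_m[16] = {1,0,0,5,  0,1,0,0,  0,0,1,0,  0,0,0,1};
    cudaMemcpyToSymbol(M, h_m, sizeof(h_m));

    const int n = 1024;
    Vec4 *in, *out;
    cudaMallocManaged(&in, n * sizeof(Vec4));
    cudaMallocManaged(&out, n * sizeof(Vec4));
    for (int i = 0; i < n; ++i) in[i] = {1.0f, 2.0f, 3.0f, 1.0f};

    transformVertices<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("first vertex: (%g, %g, %g, %g)\n", out[0].x, out[0].y, out[0].z, out[0].w);  // (6, 2, 3, 1)

    cudaFree(in); cudaFree(out);
    return 0;
}
```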

View the full Wikipedia page for Silicon Graphics

High-performance computing in the context of AMD

Advanced Micro Devices, Inc. (AMD) is an American multinational technology company headquartered in Santa Clara, California, with significant operations in Austin, Texas. It develops central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), system-on-chips (SoCs), and high-performance computer components. AMD serves a wide range of business and consumer markets, including gaming, data centers, artificial intelligence (AI), and embedded systems.

AMD's main products include microprocessors, chipsets for motherboards, embedded processors, and graphics processors for servers, workstations, personal computers (PCs), and embedded system applications. The company has also expanded into new markets, such as data centers, gaming, and high-performance computing. AMD's processors are used in a wide range of computing devices, including PCs, servers, laptops, and gaming consoles. Initially manufacturing its own processors, the company outsourced its manufacturing after GlobalFoundries was spun off in 2009. Through its acquisition of Xilinx in 2022, AMD also offers FPGA products.

View the full Wikipedia page for AMD

High-performance computing in the context of Fortran

Fortran (/ˈfɔːrtræn/; formerly FORTRAN) is a third-generation, compiled, imperative programming language designed for numeric computation and scientific computing.

Fortran was originally developed by IBM, with a reference manual released in 1956; however, the first compilers only began to produce accurate code two years later. Fortran computer programs have been written to support scientific and engineering applications, such as numerical weather prediction, finite element analysis, computational fluid dynamics, plasma physics, geophysics, computational physics, crystallography and computational chemistry. It remains a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers.

View the full Wikipedia page for Fortran