OpenCL in the context of Field-programmable gate array


⭐ Core Definition: OpenCL

OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies a programming language (based on C99) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
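
As a rough, minimal sketch of that C99-based kernel language (the kernel name and arguments here are illustrative, not taken from any particular project), a data-parallel vector addition might be written as:

    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *c)
    {
        /* Each work-item handles one array element in parallel. */
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];
    }

The host program would build this kernel through the OpenCL APIs and enqueue it over an index space sized to the arrays, so the same source can target a CPU, GPU, DSP or FPGA device.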

OpenCL is an open standard maintained by the Khronos Group, a non-profit, open standards organisation. Conformant implementations (those that have passed the Conformance Test Suite) are available from a range of companies including AMD, Arm, Cadence, Google, Imagination, Intel, Nvidia, Qualcomm, Samsung, SPI and Verisilicon.


In this Dossier

OpenCL in the context of Graphics card

A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, or display adapter, and informally GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from an integrated graphics processor on the motherboard or the central processing unit (CPU). A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used, erroneously, to refer to the graphics card as a whole.

Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load on the CPU. Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation.
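
A minimal host-side sketch in C of how an OpenCL program might ask specifically for a GPU device (error handling trimmed; assumes an OpenCL runtime and headers are installed):

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char name[256];

        /* Take the first available platform, then request a GPU device from it. */
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            return 1;

        /* Query and print the device name, e.g. the graphics card model. */
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof name, name, NULL);
        printf("Using GPU device: %s\n", name);
        return 0;
    }

From there a context, command queue and memory buffers would be created on that device, which is how general-purpose workloads such as AI training or molecular simulation are dispatched to the card.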

View the full Wikipedia page for Graphics card

OpenCL in the context of LLVM

LLVM is a set of compiler and toolchain technologies that can be used to develop a frontend for any programming language and a backend for any instruction set architecture. LLVM is designed around a language-independent intermediate representation (IR) that serves as a portable, high-level assembly language that can be optimized with a variety of transformations over multiple passes. The name LLVM originally stood for Low Level Virtual Machine. However, the project has since expanded, and the name is no longer an acronym but an orphan initialism.

LLVM is written in C++ and is designed for compile-time, link-time, and runtime optimization. Originally implemented for C and C++, the language-agnostic design of LLVM has since spawned a wide variety of frontends: languages with compilers that use LLVM (or which do not directly use LLVM but can generate compiled programs as LLVM IR) include ActionScript, Ada, C# for .NET, Common Lisp, PicoLisp, Crystal, CUDA, D, Delphi, Dylan, Forth, Fortran, FreeBASIC, Free Pascal, Halide, Haskell, Idris, Jai (only for optimized release builds), Java bytecode, Julia, Kotlin, LabVIEW's G language, Objective-C, OpenCL, PostgreSQL's SQL and PL/pgSQL, Ruby, Rust, Scala, Standard ML, Swift, Xojo, and Zig.
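
As a hedged illustration of what "generating LLVM IR" means for such a frontend, the following C sketch uses the LLVM-C API to build and print the IR for a trivial add function (the module and function names are made up for the example):

    #include <llvm-c/Core.h>
    #include <stdio.h>

    int main(void)
    {
        /* Build a module containing: i32 add(i32 %a, i32 %b) { return a + b; } */
        LLVMModuleRef mod = LLVMModuleCreateWithName("demo");
        LLVMTypeRef i32 = LLVMInt32Type();
        LLVMTypeRef params[2] = { i32, i32 };
        LLVMValueRef fn = LLVMAddFunction(mod, "add", LLVMFunctionType(i32, params, 2, 0));

        LLVMBuilderRef builder = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(builder, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef sum = LLVMBuildAdd(builder, LLVMGetParam(fn, 0), LLVMGetParam(fn, 1), "sum");
        LLVMBuildRet(builder, sum);

        /* Print the textual IR, which LLVM's passes and backends can then lower to a target. */
        char *ir = LLVMPrintModuleToString(mod);
        printf("%s", ir);

        LLVMDisposeMessage(ir);
        LLVMDisposeBuilder(builder);
        LLVMDisposeModule(mod);
        return 0;
    }

A real language frontend, such as an OpenCL compiler, does essentially this at much larger scale before handing the IR to LLVM's optimization passes and backends.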

View the full Wikipedia page for LLVM

OpenCL in the context of Molecular modeling on GPUs

Molecular modeling on GPU is the technique of using a graphics processing unit (GPU) for molecular simulations.

In 2007, Nvidia introduced video cards that could be used not only to display graphics but also to perform scientific calculations. These cards include many arithmetic units (as of 2022, up to 18,176 in the RTX 6000 Ada) working in parallel. Before this, the computational power of video cards was used purely to accelerate graphics calculations. The new capabilities of these cards made it possible to develop parallel programs through a high-level application programming interface (API) named CUDA. This technology substantially simplified programming by enabling programs to be written in C/C++. More recently, OpenCL has allowed cross-platform GPU acceleration.
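
As a purely illustrative sketch (the kernel and parameter names, and the simple all-pairs scheme with no cutoff, are assumptions rather than code from any real simulation package), an OpenCL kernel for a Lennard-Jones force evaluation could assign one work-item per particle:

    __kernel void lj_forces(__global const float4 *pos,
                            __global float4 *force,
                            const int n,
                            const float epsilon,
                            const float sigma)
    {
        int i = get_global_id(0);
        if (i >= n) return;

        float4 fi = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            float4 d = pos[i] - pos[j];              /* .w lane is carried along but unused */
            float r2 = d.x * d.x + d.y * d.y + d.z * d.z;
            float sr6 = (sigma * sigma) / r2;
            sr6 = sr6 * sr6 * sr6;                   /* (sigma/r)^6 */
            /* Force magnitude over r, from U = 4*eps*((s/r)^12 - (s/r)^6). */
            float fmag = 24.0f * epsilon * (2.0f * sr6 * sr6 - sr6) / r2;
            fi += fmag * d;
        }
        force[i] = fi;
    }

Because each particle's loop is independent of the others, thousands of the GPU's arithmetic units can evaluate forces at once, which is the essence of why GPUs accelerate molecular simulation.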

View the full Wikipedia page for Molecular modeling on GPUs