clang  6.0.0svn
Classes | Public Types | Public Member Functions | Protected Member Functions | List of all members
clang::CodeGen::CGOpenMPRuntimeNVPTX Class Reference

#include "CGOpenMPRuntimeNVPTX.h"

Inheritance diagram for clang::CodeGen::CGOpenMPRuntimeNVPTX: (graph omitted)
Collaboration diagram for clang::CodeGen::CGOpenMPRuntimeNVPTX: (graph omitted)

Public Types

enum  ExecutionMode { Spmd, Generic, Unknown }
 Target codegen is specialized based on two programming models: the 'generic' fork-join model of OpenMP, and a more GPU-efficient 'spmd' model for constructs like 'target parallel' that support it. More...
 

Public Member Functions

 CGOpenMPRuntimeNVPTX (CodeGenModule &CGM)
 
virtual void emitProcBindClause (CodeGenFunction &CGF, OpenMPProcBindClauseKind ProcBind, SourceLocation Loc) override
 Emits a call to void __kmpc_push_proc_bind(ident_t *loc, kmp_int32 global_tid, int proc_bind) to generate code for the 'proc_bind' clause. More...
 
virtual void emitNumThreadsClause (CodeGenFunction &CGF, llvm::Value *NumThreads, SourceLocation Loc) override
 Emits a call to void __kmpc_push_num_threads(ident_t *loc, kmp_int32 global_tid, kmp_int32 num_threads) to generate code for the 'num_threads' clause. More...
 
void emitNumTeamsClause (CodeGenFunction &CGF, const Expr *NumTeams, const Expr *ThreadLimit, SourceLocation Loc) override
 This function ought to emit, in the general case, a call to the OpenMP runtime to push the number of teams. More...
 
llvm::Value * emitParallelOutlinedFunction (const OMPExecutableDirective &D, const VarDecl *ThreadIDVar, OpenMPDirectiveKind InnermostKind, const RegionCodeGenTy &CodeGen) override
 Emits inlined function for the specified OpenMP parallel directive. More...
 
llvm::Value * emitTeamsOutlinedFunction (const OMPExecutableDirective &D, const VarDecl *ThreadIDVar, OpenMPDirectiveKind InnermostKind, const RegionCodeGenTy &CodeGen) override
 Emits inlined function for the specified OpenMP teams directive. More...
 
void emitTeamsCall (CodeGenFunction &CGF, const OMPExecutableDirective &D, SourceLocation Loc, llvm::Value *OutlinedFn, ArrayRef< llvm::Value *> CapturedVars) override
 Emits code for teams call of the OutlinedFn with variables captured in a record whose address is stored in CapturedStruct. More...
 
void emitParallelCall (CodeGenFunction &CGF, SourceLocation Loc, llvm::Value *OutlinedFn, ArrayRef< llvm::Value *> CapturedVars, const Expr *IfCond) override
 Emits code for parallel or serial call of the OutlinedFn with variables captured in a record whose address is stored in CapturedStruct. More...
 
virtual void emitReduction (CodeGenFunction &CGF, SourceLocation Loc, ArrayRef< const Expr *> Privates, ArrayRef< const Expr *> LHSExprs, ArrayRef< const Expr *> RHSExprs, ArrayRef< const Expr *> ReductionOps, ReductionOptionsTy Options) override
 Emits code for the reduction clause. More...
 
llvm::Constant * createNVPTXRuntimeFunction (unsigned Function)
 Returns the specified OpenMP runtime function for the current OpenMP implementation. More...
 
const VarDecl * translateParameter (const FieldDecl *FD, const VarDecl *NativeParam) const override
 Translates the native parameter of the outlined function if this is required for the target. More...
 
Address getParameterAddress (CodeGenFunction &CGF, const VarDecl *NativeParam, const VarDecl *TargetParam) const override
 Gets the address of the native argument based on the address of the target-specific parameter. More...
 
void emitOutlinedFunctionCall (CodeGenFunction &CGF, SourceLocation Loc, llvm::Value *OutlinedFn, ArrayRef< llvm::Value *> Args=llvm::None) const override
 Emits a call to the outlined function with the provided arguments, translating these arguments to correct target-specific arguments. More...
 

Protected Member Functions

StringRef getOutlinedHelperName () const override
 Get the function name of an outlined region. More...
 
- Protected Member Functions inherited from clang::CodeGen::CGOpenMPRuntime
virtual void emitTargetOutlinedFunctionHelper (const OMPExecutableDirective &D, StringRef ParentName, llvm::Function *&OutlinedFn, llvm::Constant *&OutlinedFnID, bool IsOffloadEntry, const RegionCodeGenTy &CodeGen)
 Helper to emit outlined function for 'target' directive. More...
 
void emitOMPIfClause (CodeGenFunction &CGF, const Expr *Cond, const RegionCodeGenTy &ThenGen, const RegionCodeGenTy &ElseGen)
 Emits code for OpenMP 'if' clause using specified CodeGen function. More...
 
llvm::Value * emitUpdateLocation (CodeGenFunction &CGF, SourceLocation Loc, unsigned Flags=0)
 Emits object of ident_t type with info for source location. More...
 
llvm::Type * getIdentTyPointerTy ()
 Returns pointer to ident_t type. More...
 
llvm::Value * getThreadID (CodeGenFunction &CGF, SourceLocation Loc)
 Gets thread id value for the current thread. More...
 
void emitCall (CodeGenFunction &CGF, llvm::Value *Callee, ArrayRef< llvm::Value *> Args=llvm::None, SourceLocation Loc=SourceLocation()) const
 Emits a call to Callee with arguments Args at location Loc. More...
 

Additional Inherited Members

- Protected Attributes inherited from clang::CodeGen::CGOpenMPRuntime
CodeGenModule & CGM
 

Detailed Description

Definition at line 26 of file CGOpenMPRuntimeNVPTX.h.

Member Enumeration Documentation

◆ ExecutionMode

Target codegen is specialized based on two programming models: the 'generic' fork-join model of OpenMP, and a more GPU-efficient 'spmd' model for constructs like 'target parallel' that support it.

Enumerator
Spmd 

Single Program Multiple Data.

Generic 

Generic codegen to support fork-join model.

Unknown 

Definition at line 294 of file CGOpenMPRuntimeNVPTX.h.

Constructor & Destructor Documentation

◆ CGOpenMPRuntimeNVPTX()

CGOpenMPRuntimeNVPTX::CGOpenMPRuntimeNVPTX ( CodeGenModule &  CGM)
explicit

Member Function Documentation

◆ createNVPTXRuntimeFunction()

llvm::Constant * CGOpenMPRuntimeNVPTX::createNVPTXRuntimeFunction ( unsigned  Function)

Returns the specified OpenMP runtime function for the current OpenMP implementation.

Specialized for the NVPTX device.

Parameters
Function  OpenMP runtime function.
Returns
Specified function.

Build void __kmpc_kernel_prepare_parallel( void *outlined_function, void ***args, kmp_int32 nArgs);

Build bool __kmpc_kernel_parallel(void **outlined_function, void ***args);

Build void __kmpc_kernel_end_parallel();

Definition at line 602 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::ASTContext::BoolTy, clang::CodeGen::CodeGenTypes::ConvertType(), clang::CodeGen::CodeGenModule::CreateRuntimeFunction(), clang::CodeGen::CodeGenModule::getContext(), getExecutionModeForDirective(), clang::CodeGen::CodeGenModule::getTypes(), clang::CodeGen::CodeGenTypeCache::Int16Ty, clang::CodeGen::CodeGenTypeCache::Int32Ty, clang::CodeGen::CodeGenTypeCache::Int64Ty, clang::CodeGen::CodeGenTypeCache::Int8PtrPtrTy, clang::CodeGen::CodeGenTypeCache::Int8PtrTy, clang::None, setPropertyExecutionMode(), clang::CodeGen::CodeGenTypeCache::SizeTy, clang::CodeGen::Type, clang::prec::Unknown, clang::CodeGen::CodeGenTypeCache::VoidPtrTy, and clang::CodeGen::CodeGenTypeCache::VoidTy.

Referenced by createRuntimeShuffleFunction(), and emitParallelCall().

◆ emitNumTeamsClause()

void CGOpenMPRuntimeNVPTX::emitNumTeamsClause ( CodeGenFunction &  CGF,
const Expr *  NumTeams,
const Expr *  ThreadLimit,
SourceLocation  Loc 
)
override

This function ought to emit, in the general case, a call to the OpenMP runtime to push the number of teams and the thread limit.

Parameters
NumTeams  An integer expression of teams.
ThreadLimit  An integer expression of threads.

Definition at line 862 of file CGOpenMPRuntimeNVPTX.cpp.

Referenced by getOutlinedHelperName().

◆ emitNumThreadsClause()

void CGOpenMPRuntimeNVPTX::emitNumThreadsClause ( CodeGenFunction &  CGF,
llvm::Value *  NumThreads,
SourceLocation  Loc 
)
overridevirtual

Emits a call to void __kmpc_push_num_threads(ident_t *loc, kmp_int32 global_tid, kmp_int32 num_threads) to generate code for the 'num_threads' clause.

Parameters
NumThreads  An integer value of threads.

Definition at line 851 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::emitNumThreadsClause().

Referenced by getOutlinedHelperName().

◆ emitOutlinedFunctionCall()

void CGOpenMPRuntimeNVPTX::emitOutlinedFunctionCall ( CodeGenFunction &  CGF,
SourceLocation  Loc,
llvm::Value *  OutlinedFn,
ArrayRef< llvm::Value *>  Args = llvm::None 
) const
override

Emits a call to the outlined function with the provided arguments, translating these arguments to correct target-specific arguments.

Definition at line 2372 of file CGOpenMPRuntimeNVPTX.cpp.

Referenced by emitTeamsCall().

◆ emitParallelCall()

void CGOpenMPRuntimeNVPTX::emitParallelCall ( CodeGenFunction &  CGF,
SourceLocation  Loc,
llvm::Value *  OutlinedFn,
ArrayRef< llvm::Value *>  CapturedVars,
const Expr *  IfCond 
)
override

Emits code for parallel or serial call of the OutlinedFn with variables captured in a record whose address is stored in CapturedStruct.

Parameters
OutlinedFn  Outlined function to be run in parallel threads. Type of this function is void(*)(kmp_int32 *, kmp_int32, struct context_vars*).
CapturedVars  A pointer to the record with the references to variables used in OutlinedFn function.
IfCond  Condition in the associated 'if' clause, if it was specified, nullptr otherwise.

Definition at line 916 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CGBuilderTy::CreateBitCast(), clang::CodeGen::CGBuilderTy::CreateConstInBoundsGEP(), createNVPTXRuntimeFunction(), clang::CodeGen::CGOpenMPRuntime::emitUpdateLocation(), clang::CodeGen::Address::getPointer(), clang::ASTContext::getPointerType(), clang::CodeGen::CGOpenMPRuntime::getThreadID(), clang::CodeGen::CodeGenFunction::HaveInsertPoint(), clang::CodeGen::CodeGenTypeCache::Int8PtrTy, clang::InternalLinkage, syncCTAThreads(), and clang::ASTContext::VoidPtrTy.

Referenced by getOutlinedHelperName().

◆ emitParallelOutlinedFunction()

llvm::Value * CGOpenMPRuntimeNVPTX::emitParallelOutlinedFunction ( const OMPExecutableDirective &  D,
const VarDecl *  ThreadIDVar,
OpenMPDirectiveKind  InnermostKind,
const RegionCodeGenTy &  CodeGen 
)
override

Emits inlined function for the specified OpenMP parallel directive D.

This outlined function has type void(*)(kmp_int32 *ThreadID, kmp_int32 *BoundID, struct context_vars*).

Parameters
D  OpenMP directive.
ThreadIDVar  Variable for thread id in the current OpenMP region.
InnermostKind  Kind of innermost directive (for simple directives it is the directive itself, for combined directives it is the innermost one).
CodeGen  Code generation sequence for the D directive.

Definition at line 867 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::emitParallelOutlinedFunction().

Referenced by getOutlinedHelperName().

◆ emitProcBindClause()

void CGOpenMPRuntimeNVPTX::emitProcBindClause ( CodeGenFunction &  CGF,
OpenMPProcBindClauseKind  ProcBind,
SourceLocation  Loc 
)
overridevirtual

Emits a call to void __kmpc_push_proc_bind(ident_t *loc, kmp_int32 global_tid, int proc_bind) to generate code for the 'proc_bind' clause.

Definition at line 840 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::emitProcBindClause().

Referenced by getOutlinedHelperName().

◆ emitReduction()

void CGOpenMPRuntimeNVPTX::emitReduction ( CodeGenFunction &  CGF,
SourceLocation  Loc,
ArrayRef< const Expr *>  Privates,
ArrayRef< const Expr *>  LHSExprs,
ArrayRef< const Expr *>  RHSExprs,
ArrayRef< const Expr *>  ReductionOps,
ReductionOptionsTy  Options 
)
overridevirtual

Emits code for the reduction clause.

Design of OpenMP reductions on the GPU.

Parameters
Privates  List of private copies for original reduction arguments.
LHSExprs  List of LHS in ReductionOps reduction operations.
RHSExprs  List of RHS in ReductionOps reduction operations.
ReductionOps  List of reduction operations in form 'LHS binop RHS' or 'operator binop(LHS, RHS)'.
Options  List of options for reduction codegen: WithNowait, true if the parent directive also has a nowait clause, false otherwise; SimpleReduction, emit the reduction operation only (used for the omp simd directive on the host); ReductionKind, the kind of reduction to perform.

Consider a typical OpenMP program with one or more reduction clauses:

  float foo;
  double bar;
  #pragma omp target teams distribute parallel for \
              reduction(+:foo) reduction(*:bar)
  for (int i = 0; i < N; i++) {
    foo += A[i]; bar *= B[i];
  }

where 'foo' and 'bar' are reduced across all OpenMP threads in all teams. In our OpenMP implementation on the NVPTX device an OpenMP team is mapped to a CUDA threadblock and OpenMP threads within a team are mapped to CUDA threads within a threadblock. Our goal is to efficiently aggregate values across all OpenMP threads such that:

  • the compiler and runtime are logically concise, and
  • the reduction is performed efficiently in a hierarchical manner as follows: within OpenMP threads in the same warp, across warps in a threadblock, and finally across teams on the NVPTX device.

Introduction to Decoupling

We would like to decouple the compiler and the runtime so that the latter is ignorant of the reduction variables (number, data types) and the reduction operators. This allows a simpler interface and implementation while still attaining good performance.

Pseudocode for the aforementioned OpenMP program generated by the compiler is as follows:

  1. Create private copies of reduction variables on each OpenMP thread: 'foo_private', 'bar_private'
  2. Each OpenMP thread reduces the chunk of 'A' and 'B' assigned to it and writes the result in 'foo_private' and 'bar_private' respectively.
  3. Call the OpenMP runtime on the GPU to reduce within a team and store the result on the team master:

    __kmpc_nvptx_parallel_reduce_nowait(..., reduceData, shuffleReduceFn, interWarpCpyFn)

    where:
      struct ReduceData {
        double *foo;
        double *bar;
      } reduceData;
      reduceData.foo = &foo_private;
      reduceData.bar = &bar_private;

    'shuffleReduceFn' and 'interWarpCpyFn' are pointers to two auxiliary functions generated by the compiler that operate on variables of type 'ReduceData'. They aid the runtime in performing algorithmic steps in a data-agnostic manner.

    'shuffleReduceFn' is a pointer to a function that reduces data of type 'ReduceData' across two OpenMP threads (lanes) in the same warp. It takes the following arguments as input:

    a. a variable of type 'ReduceData' on the calling lane,
    b. its lane_id,
    c. an offset relative to the current lane_id to generate a remote_lane_id. The remote lane contains the second variable of type 'ReduceData' that is to be reduced.
    d. an algorithm version parameter determining which reduction algorithm to use.

    'shuffleReduceFn' retrieves data from the remote lane using efficient GPU shuffle intrinsics and reduces, using the algorithm specified by the 4th parameter, the two operands element-wise. The result is written to the first operand.

    Different reduction algorithms are implemented in different runtime functions, all calling 'shuffleReduceFn' to perform the essential reduction step. Therefore, based on the 4th parameter, this function behaves slightly differently to cooperate with the runtime to ensure correctness under different circumstances.

    'InterWarpCpyFn' is a pointer to a function that transfers reduced variables across warps. It tunnels, through CUDA shared memory, the thread-private data of type 'ReduceData' from lane 0 of each warp to a lane in the first warp.

  4. Call the OpenMP runtime on the GPU to reduce across teams. The last team writes the global reduced value to memory.

    ret = __kmpc_nvptx_teams_reduce_nowait(..., reduceData, shuffleReduceFn, interWarpCpyFn, scratchpadCopyFn, loadAndReduceFn)

    'scratchpadCopyFn' is a helper that stores reduced data from the team master to a scratchpad array in global memory.

    'loadAndReduceFn' is a helper that loads data from the scratchpad array and reduces it with the input operand.

    These compiler generated functions hide address calculation and alignment information from the runtime.

  5. if ret == 1: The team master of the last team stores the reduced result to the globals in memory. foo += reduceData.foo; bar *= reduceData.bar

Warp Reduction Algorithms

On the warp level, we have three algorithms implemented in the OpenMP runtime depending on the number of active lanes:

Full Warp Reduction

The reduce algorithm within a warp where all lanes are active is implemented in the runtime as follows:

full_warp_reduce(void *reduce_data,
                 kmp_ShuffleReductFctPtr ShuffleReduceFn) {
  for (int offset = WARPSIZE/2; offset > 0; offset /= 2)
    ShuffleReduceFn(reduce_data, 0, offset, 0);
}

The algorithm completes in log(2, WARPSIZE) steps.

'ShuffleReduceFn' is used here with lane_id set to 0 because it is not used; this saves instructions by not retrieving lane_id from the corresponding special registers. The 4th parameter, which represents the version of the algorithm being used, is set to 0 to signify full warp reduction.

In this version, 'ShuffleReduceFn' behaves, per element, as follows:

# reduce_elem refers to an element in the local lane's data structure
# remote_elem is retrieved from a remote lane
remote_elem = shuffle_down(reduce_elem, offset, WARPSIZE);
reduce_elem = reduce_elem REDUCE_OP remote_elem;

Contiguous Partial Warp Reduction

This reduce algorithm is used within a warp where only the first 'n' (n <= WARPSIZE) lanes are active. It is typically used when the number of OpenMP threads in a parallel region is not a multiple of WARPSIZE. The algorithm is implemented in the runtime as follows:

void contiguous_partial_reduce(void *reduce_data,
                               kmp_ShuffleReductFctPtr ShuffleReduceFn,
                               int size, int lane_id) {
  int curr_size;
  int offset;
  curr_size = size;
  offset = curr_size/2;
  while (offset > 0) {
    ShuffleReduceFn(reduce_data, lane_id, offset, 1);
    curr_size = (curr_size+1)/2;
    offset = curr_size/2;
  }
}

In this version, 'ShuffleReduceFn' behaves, per element, as follows:

remote_elem = shuffle_down(reduce_elem, offset, WARPSIZE);
if (lane_id < offset)
  reduce_elem = reduce_elem REDUCE_OP remote_elem;
else
  reduce_elem = remote_elem;

This algorithm assumes that the data to be reduced are located in a contiguous subset of lanes starting from the first. When there is an odd number of active lanes, the data in the last lane is not aggregated with any other lane's data but is instead copied over.

Dispersed Partial Warp Reduction

This algorithm is used within a warp when any discontiguous subset of lanes are active. It is used to implement the reduction operation across lanes in an OpenMP simd region or in a nested parallel region.

void dispersed_partial_reduce(void *reduce_data,
                              kmp_ShuffleReductFctPtr ShuffleReduceFn) {
  int size, remote_id;
  int logical_lane_id = number_of_active_lanes_before_me() * 2;
  do {
    remote_id = next_active_lane_id_right_after_me();
    // the above function returns 0 if no active lane
    // is present right after the current lane.
    size = number_of_active_lanes_in_this_warp();
    logical_lane_id /= 2;
    ShuffleReduceFn(reduce_data, logical_lane_id,
                    remote_id-1-threadIdx.x, 2);
  } while (logical_lane_id % 2 == 0 && size > 1);
}

There is no assumption made about the initial state of the reduction. Any number of lanes (>=1) could be active at any position. The reduction result is returned in the first active lane.

In this version, 'ShuffleReduceFn' behaves, per element, as follows:

remote_elem = shuffle_down(reduce_elem, offset, WARPSIZE);
if (lane_id % 2 == 0 && offset > 0)
  reduce_elem = reduce_elem REDUCE_OP remote_elem;
else
  reduce_elem = remote_elem;

Intra-Team Reduction

This function, as implemented in the runtime call '__kmpc_nvptx_parallel_reduce_nowait', aggregates data across OpenMP threads in a team. It first reduces within a warp using the aforementioned algorithms. We then proceed to gather all such reduced values at the first warp.

The runtime makes use of the function 'InterWarpCpyFn', which copies data from each warp master (the zeroth lane of each warp, where the warp-reduced data is held) to the zeroth warp. This step reduces (in a mathematical sense) the problem of reduction across warp masters in a block to the problem of warp reduction.

Inter-Team Reduction

Once a team has reduced its data to a single value, it is stored in a global scratchpad array. Since each team has a distinct slot, this can be done without locking.

The last team to write to the scratchpad array proceeds to reduce the scratchpad array. One or more workers in the last team use the helper 'loadAndReduceFn' to load and reduce values from the array; i.e., the k'th worker reduces every k'th element.

Finally, a call is made to '__kmpc_nvptx_parallel_reduce_nowait' to reduce across workers and compute a globally reduced value.

Definition at line 2167 of file CGOpenMPRuntimeNVPTX.cpp.

Referenced by getOutlinedHelperName().

◆ emitTeamsCall()

void CGOpenMPRuntimeNVPTX::emitTeamsCall ( CodeGenFunction &  CGF,
const OMPExecutableDirective &  D,
SourceLocation  Loc,
llvm::Value *  OutlinedFn,
ArrayRef< llvm::Value *>  CapturedVars 
)
override

Emits code for teams call of the OutlinedFn with variables captured in a record whose address is stored in CapturedStruct.

Parameters
OutlinedFn  Outlined function to be run by team masters. Type of this function is void(*)(kmp_int32 *, kmp_int32, struct context_vars*).
CapturedVars  A pointer to the record with the references to variables used in OutlinedFn function.

Definition at line 897 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenFunction::Builder, clang::CodeGen::CodeGenFunction::CreateTempAlloca(), emitOutlinedFunctionCall(), clang::CharUnits::fromQuantity(), clang::CodeGen::Address::getPointer(), clang::CodeGen::CodeGenFunction::HaveInsertPoint(), clang::CodeGen::CodeGenFunction::InitTempAlloca(), and clang::CodeGen::CodeGenTypeCache::Int32Ty.

Referenced by getOutlinedHelperName().

◆ emitTeamsOutlinedFunction()

llvm::Value * CGOpenMPRuntimeNVPTX::emitTeamsOutlinedFunction ( const OMPExecutableDirective &  D,
const VarDecl *  ThreadIDVar,
OpenMPDirectiveKind  InnermostKind,
const RegionCodeGenTy &  CodeGen 
)
override

Emits inlined function for the specified OpenMP teams directive D.

This outlined function has type void(*)(kmp_int32 *ThreadID, kmp_int32 *BoundID, struct context_vars*).

Parameters
D  OpenMP directive.
ThreadIDVar  Variable for thread id in the current OpenMP region.
InnermostKind  Kind of innermost directive (for simple directives it is the directive itself, for combined directives it is the innermost one).
CodeGen  Code generation sequence for the D directive.

Definition at line 883 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::emitTeamsOutlinedFunction().

Referenced by getOutlinedHelperName().

◆ getOutlinedHelperName()

StringRef clang::CodeGen::CGOpenMPRuntimeNVPTX::getOutlinedHelperName ( ) const
inlineoverrideprotectedvirtual

◆ getParameterAddress()

Address CGOpenMPRuntimeNVPTX::getParameterAddress ( CodeGenFunction &  CGF,
const VarDecl *  NativeParam,
const VarDecl *  TargetParam 
) const
override

◆ translateParameter()

const VarDecl * CGOpenMPRuntimeNVPTX::translateParameter ( const FieldDecl *  FD,
const VarDecl *  NativeParam 
) const
override

The documentation for this class was generated from the following files: