clang  7.0.0svn
CGOpenMPRuntimeNVPTX.cpp File Reference
#include "CGOpenMPRuntimeNVPTX.h"
#include "CodeGenFunction.h"
#include "clang/AST/DeclOpenMP.h"
#include "clang/AST/StmtOpenMP.h"
#include "clang/AST/StmtVisitor.h"
#include "llvm/ADT/SmallPtrSet.h"
Include dependency graph for CGOpenMPRuntimeNVPTX.cpp:


Classes

struct  CopyOptionsTy
 

Enumerations

enum  OpenMPRTLFunctionNVPTX
 
enum  MachineConfiguration : unsigned
 GPU Configuration: This information can be derived from CUDA registers; however, providing compile-time constants helps generate more efficient code. More...
 
enum  NamedBarrier : unsigned
 
enum  CopyAction : unsigned
 

Functions

static llvm::Value * getNVPTXWarpSize (CodeGenFunction &CGF)
 Get the GPU warp size. More...
 
static llvm::Value * getNVPTXThreadID (CodeGenFunction &CGF)
 Get the id of the current thread on the GPU. More...
 
static llvm::Value * getNVPTXWarpID (CodeGenFunction &CGF)
 Get the id of the warp in the block. More...
 
static llvm::Value * getNVPTXLaneID (CodeGenFunction &CGF)
 Get the id of the current lane in the Warp. More...
 
static llvm::Value * getNVPTXNumThreads (CodeGenFunction &CGF)
 Get the maximum number of threads in a block of the GPU. More...
 
static void getNVPTXCTABarrier (CodeGenFunction &CGF)
 Get barrier to synchronize all threads in a block. More...
 
static void getNVPTXBarrier (CodeGenFunction &CGF, int ID, llvm::Value *NumThreads)
 Get barrier #ID to synchronize selected (multiple of warp size) threads in a CTA. More...
 
static void syncCTAThreads (CodeGenFunction &CGF)
 Synchronize all GPU threads in a block. More...
 
static void syncParallelThreads (CodeGenFunction &CGF, llvm::Value *NumThreads)
 Synchronize worker threads in a parallel region. More...
 
static llvm::Value * getThreadLimit (CodeGenFunction &CGF, bool IsInSpmdExecutionMode=false)
 Get the value of the thread_limit clause in the teams directive. More...
 
static llvm::Value * getMasterThreadID (CodeGenFunction &CGF)
 Get the thread id of the OMP master thread. More...
 
static CGOpenMPRuntimeNVPTX::DataSharingMode getDataSharingMode (CodeGenModule &CGM)
 
static const Stmt * getSingleCompoundChild (const Stmt *Body)
 Checks if the Body is the CompoundStmt and returns its child statement iff there is only one. More...
 
static bool hasParallelIfNumThreadsClause (ASTContext &Ctx, const OMPExecutableDirective &D)
 Check if the parallel directive has an 'if' clause with non-constant or false condition. More...
 
static bool hasNestedSPMDDirective (ASTContext &Ctx, const OMPExecutableDirective &D)
 Check for inner (nested) SPMD construct, if any. More...
 
static bool supportsSPMDExecutionMode (ASTContext &Ctx, const OMPExecutableDirective &D)
 
static void setPropertyExecutionMode (CodeGenModule &CGM, StringRef Name, bool Mode)
 
static llvm::Value * castValueToType (CodeGenFunction &CGF, llvm::Value *Val, QualType ValTy, QualType CastTy, SourceLocation Loc)
 Cast value to the specified type. More...
 
static llvm::Value * createRuntimeShuffleFunction (CodeGenFunction &CGF, llvm::Value *Elem, QualType ElemType, llvm::Value *Offset, SourceLocation Loc)
 This function creates calls to one of two shuffle functions to copy variables between lanes in a warp. More...
 
static void emitReductionListCopy (CopyAction Action, CodeGenFunction &CGF, QualType ReductionArrayTy, ArrayRef< const Expr *> Privates, Address SrcBase, Address DestBase, CopyOptionsTy CopyOptions={nullptr, nullptr, nullptr})
 Emit instructions to copy a Reduce list, which contains partially aggregated values, in the specified direction. More...
 
static llvm::Value * emitReduceScratchpadFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, llvm::Value *ReduceFn, SourceLocation Loc)
 This function emits a helper that loads data from the scratchpad array and (optionally) reduces it with the input operand. More...
 
static llvm::Value * emitCopyToScratchpad (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, SourceLocation Loc)
 This function emits a helper that stores reduced data from the team master to a scratchpad array in global memory. More...
 
static llvm::Value * emitInterWarpCopyFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, SourceLocation Loc)
 This function emits a helper that gathers Reduce lists from the first lane of every active warp to lanes in the first warp. More...
 
static llvm::Value * emitShuffleAndReduceFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, llvm::Value *ReduceFn, SourceLocation Loc)
 Emit a helper that reduces data across two OpenMP threads (lanes) in the same warp. More...
 

Enumeration Type Documentation

◆ CopyAction

enum CopyAction : unsigned

Definition at line 2078 of file CGOpenMPRuntimeNVPTX.cpp.

◆ MachineConfiguration

enum MachineConfiguration : unsigned

GPU Configuration: This information can be derived from CUDA registers; however, providing compile-time constants helps generate more efficient code.

For all practical purposes this is fine because the configuration is the same for all known NVPTX architectures.
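For illustration, the constants this enum provides look roughly like the following minimal sketch; the enumerator names and values here are assumptions drawn from the fixed 32-lane NVPTX warp, not a copy of the actual definition referenced below.

  // Sketch of compile-time GPU configuration constants (illustrative only).
  enum MachineConfigurationSketch : unsigned {
    WarpSize = 32,             // warp size on all known NVPTX architectures
    LaneIDBits = 5,            // log2(WarpSize): bits needed to hold a lane id
    LaneIDMask = WarpSize - 1, // mask that extracts the lane id from a thread id
  };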

Definition at line 162 of file CGOpenMPRuntimeNVPTX.cpp.

◆ NamedBarrier

enum NamedBarrier : unsigned

Definition at line 173 of file CGOpenMPRuntimeNVPTX.cpp.

◆ OpenMPRTLFunctionNVPTX

Definition at line 26 of file CGOpenMPRuntimeNVPTX.cpp.

Function Documentation

◆ castValueToType()

static llvm::Value * castValueToType (CodeGenFunction &CGF, llvm::Value *Val, QualType ValTy, QualType CastTy, SourceLocation Loc)

◆ createRuntimeShuffleFunction()

static llvm::Value * createRuntimeShuffleFunction (CodeGenFunction &CGF, llvm::Value *Elem, QualType ElemType, llvm::Value *Offset, SourceLocation Loc)

◆ emitCopyToScratchpad()

static llvm::Value * emitCopyToScratchpad (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, SourceLocation Loc)

◆ emitInterWarpCopyFunction()

static llvm::Value * emitInterWarpCopyFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, SourceLocation Loc)

This function emits a helper that gathers Reduce lists from the first lane of every active warp to lanes in the first warp.

  void inter_warp_copy_func(void* reduce_data, num_warps)
    shared smem[warp_size];
    For all data entries D in reduce_data:
      If (I am the first lane in each warp)
        Copy my local D to smem[warp_id]
      sync
      if (I am the first warp)
        Copy smem[thread_id] to my local D
      sync
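A host-side C++ model of this gather may make the data flow easier to follow. This is a sketch only: the 4-warp configuration, the names, and the use of plain arrays in place of shared memory and barrier synchronization are assumptions for illustration, not the code the helper emits.

  #include <array>
  #include <cstdio>
  #include <vector>

  int main() {
    constexpr unsigned WarpSize = 32;
    constexpr unsigned NumThreads = 128;      // 4 warps, chosen for the example
    constexpr unsigned NumWarps = NumThreads / WarpSize;

    std::vector<int> ReduceData(NumThreads);  // one partial value per thread
    for (unsigned Tid = 0; Tid < NumThreads; ++Tid)
      ReduceData[Tid] = static_cast<int>(Tid);

    std::array<int, WarpSize> Smem{};         // stands in for shared memory

    // "If I am the first lane in my warp, copy my local D to smem[warp_id]."
    for (unsigned Tid = 0; Tid < NumThreads; ++Tid)
      if (Tid % WarpSize == 0)
        Smem[Tid / WarpSize] = ReduceData[Tid];
    // (a sync separates the two phases in the real helper)

    // "If I am in the first warp, copy smem[thread_id] to my local D."
    for (unsigned Tid = 0; Tid < NumWarps; ++Tid)
      ReduceData[Tid] = Smem[Tid];
    // (second sync here)

    for (unsigned Tid = 0; Tid < NumWarps; ++Tid)
      printf("lane %u of warp 0 now holds %d\n", Tid, ReduceData[Tid]);
    return 0;
  }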

Definition at line 2511 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenModule::addCompilerUsedGlobal(), clang::CodeGen::CodeGenTypes::arrangeBuiltinFunctionDeclaration(), clang::CodeGen::CodeGenFunction::Builder, clang::CodeGen::CodeGenFunction::ConvertTypeForMem(), clang::Create(), clang::CodeGen::CodeGenFunction::createBasicBlock(), clang::CodeGen::CGBuilderTy::CreateConstArrayGEP(), clang::CodeGen::CGBuilderTy::CreateElementBitCast(), clang::CodeGen::CGBuilderTy::CreatePointerBitCastOrAddrSpaceCast(), clang::CodeGen::CGBuilderTy::CreateStore(), clang::cuda_shared, clang::CodeGen::CodeGenFunction::EmitBlock(), clang::CodeGen::CodeGenFunction::EmitLoadOfScalar(), clang::CodeGen::CodeGenFunction::EmitStoreOfScalar(), clang::CodeGen::CodeGenFunction::FinishFunction(), clang::CodeGen::CodeGenFunction::GetAddrOfLocalVar(), clang::CodeGen::CodeGenModule::getContext(), clang::CodeGen::CodeGenTypes::GetFunctionType(), clang::ASTContext::getIntTypeForBitwidth(), clang::CodeGen::CodeGenModule::getModule(), getNVPTXLaneID(), getNVPTXThreadID(), getNVPTXWarpID(), getNVPTXWarpSize(), clang::CodeGen::CodeGenTypeCache::getPointerAlign(), clang::CodeGen::CodeGenTypeCache::getPointerSize(), clang::ASTContext::getTargetAddressSpace(), clang::CodeGen::Address::getType(), clang::ASTContext::getTypeAlignInChars(), clang::CodeGen::CodeGenModule::getTypes(), clang::CodeGen::CodeGenTypeCache::Int64Ty, clang::InternalLinkage, clang::ASTContext::IntTy, clang::ImplicitParamDecl::Other, clang::CodeGen::CodeGenModule::SetInternalFunctionAttributes(), clang::CodeGen::CodeGenFunction::StartFunction(), syncParallelThreads(), clang::ASTContext::VoidPtrTy, and clang::ASTContext::VoidTy.

◆ emitReduceScratchpadFunction()

static llvm::Value * emitReduceScratchpadFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, llvm::Value *ReduceFn, SourceLocation Loc)

This function emits a helper that loads data from the scratchpad array and (optionally) reduces it with the input operand.

  load_and_reduce(local, scratchpad, index, width, should_reduce)
    reduce_data remote;
    for elem in remote:
      remote.elem = Scratchpad[elem_id][index]
    if (should_reduce)
      local = local @ remote
    else
      local = remote
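Restated as a host-side C++ sketch: the '+' below stands in for whatever reduction ReduceFn performs, and the element-major scratchpad layout and function name are assumptions made for illustration.

  #include <vector>

  // Local and Remote are Reduce lists; Scratchpad[elem_id][index] holds the
  // value written for element elem_id by the team at position index.
  void loadAndReduce(std::vector<int> &Local,
                     const std::vector<std::vector<int>> &Scratchpad,
                     unsigned Index, bool ShouldReduce) {
    std::vector<int> Remote(Local.size());
    for (unsigned ElemId = 0; ElemId < Remote.size(); ++ElemId)
      Remote[ElemId] = Scratchpad[ElemId][Index];
    for (unsigned ElemId = 0; ElemId < Local.size(); ++ElemId)
      Local[ElemId] = ShouldReduce
                          ? Local[ElemId] + Remote[ElemId] // local = local @ remote
                          : Remote[ElemId];                // local = remote
  }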

Definition at line 2293 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenTypes::arrangeBuiltinFunctionDeclaration(), clang::CodeGen::CodeGenFunction::Builder, clang::CodeGen::CodeGenFunction::ConvertTypeForMem(), clang::Create(), clang::CodeGen::CodeGenFunction::createBasicBlock(), clang::CodeGen::CodeGenFunction::CreateMemTemp(), clang::CodeGen::CGBuilderTy::CreatePointerBitCastOrAddrSpaceCast(), clang::CodeGen::CodeGenFunction::EmitBlock(), clang::CodeGen::CodeGenFunction::EmitLoadOfScalar(), emitReductionListCopy(), clang::CodeGen::CodeGenFunction::FinishFunction(), clang::CodeGen::CodeGenFunction::GetAddrOfLocalVar(), clang::CodeGen::CodeGenModule::getContext(), clang::CodeGen::CodeGenTypes::GetFunctionType(), clang::ASTContext::getIntTypeForBitwidth(), clang::CodeGen::CodeGenModule::getModule(), clang::CodeGen::CodeGenModule::getOpenMPRuntime(), clang::CodeGen::CodeGenTypeCache::getPointerAlign(), clang::CodeGen::CodeGenModule::getTypes(), clang::InternalLinkage, clang::ImplicitParamDecl::Other, clang::CodeGen::CodeGenModule::SetInternalFunctionAttributes(), clang::CodeGen::CodeGenTypeCache::SizeTy, clang::CodeGen::CodeGenFunction::StartFunction(), clang::CodeGen::CodeGenTypeCache::VoidPtrTy, clang::ASTContext::VoidPtrTy, and clang::ASTContext::VoidTy.

◆ emitReductionListCopy()

static void emitReductionListCopy (CopyAction Action, CodeGenFunction &CGF, QualType ReductionArrayTy, ArrayRef< const Expr *> Privates, Address SrcBase, Address DestBase, CopyOptionsTy CopyOptions = {nullptr, nullptr, nullptr})

◆ emitShuffleAndReduceFunction()

static llvm::Value * emitShuffleAndReduceFunction (CodeGenModule &CGM, ArrayRef< const Expr *> Privates, QualType ReductionArrayTy, llvm::Value *ReduceFn, SourceLocation Loc)

Emit a helper that reduces data across two OpenMP threads (lanes) in the same warp.

It uses shuffle instructions to copy over data from a remote lane's stack. The reduction algorithm performed is specified by the fourth parameter.

Algorithm Versions.

Full Warp Reduce (argument value 0):
  This algorithm assumes that all 32 lanes are active and gathers data from these 32 lanes, producing a single resultant value.

Contiguous Partial Warp Reduce (argument value 1):
  This algorithm assumes that only a contiguous subset of lanes is active. This happens for the last warp in a parallel region when the user-specified num_threads is not an integer multiple of 32. This contiguous subset always starts with the zeroth lane.

Partial Warp Reduce (argument value 2):
  This algorithm gathers data from any number of lanes at any position.

All reduced values are stored in the lowest possible lane. The set of problems every algorithm addresses is a superset of those addressable by algorithms with a lower version number. Overhead increases as the algorithm version increases.

Terminology

Reduce element:
  Reduce element refers to the individual data field with primitive data types to be combined and reduced across threads.
Reduce list:
  Reduce list refers to a collection of local, thread-private reduce elements.
Remote Reduce list:
  Remote Reduce list refers to a collection of remote (relative to the current thread) reduce elements.

We distinguish between three states of threads that are important to the implementation of this function.

Alive threads:
  Threads in a warp executing the SIMT instruction, as distinguished from threads that are inactive due to divergent control flow.
Active threads:
  The minimal set of threads that has to be alive upon entry to this function. The computation is correct iff active threads are alive. Some threads are alive but they are not active because they do not contribute to the computation in any useful manner. Turning them off may introduce control flow overheads without any tangible benefits.
Effective threads:
  In order to comply with the argument requirements of the shuffle function, we must keep all lanes holding data alive. But at most half of them perform value aggregation; we refer to this half of threads as effective. The other half is simply handing off their data.

Procedure

Value shuffle:
  In this step active threads transfer data from higher lane positions in the warp to lower lane positions, creating Remote Reduce list.
Value aggregation:
  In this step, effective threads combine their thread-local Reduce list with Remote Reduce list and store the result in the thread-local Reduce list.
Value copy:
  In this step, we deal with the assumption made by algorithm 2 (i.e. the contiguity assumption). When we have an odd number of lanes active, say 2k+1, only k threads will be effective and therefore k new values will be produced. However, the Reduce list owned by the (2k+1)th thread is ignored in the value aggregation. Therefore we copy the Reduce list from the (2k+1)th lane to the (k+1)th lane so that the contiguity assumption still holds.
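Specialized to a '+' reduction over a power-of-two number of active lanes (algorithm version 0), the shuffle and aggregation steps look roughly like the host-side sketch below; the value-copy fix-up for odd lane counts is omitted, and a plain array stands in for the per-lane Reduce lists.

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  int main() {
    std::vector<int> Lanes = {1, 2, 3, 4, 5, 6, 7, 8}; // 8 active lanes, assumed
    for (std::size_t Offset = Lanes.size() / 2; Offset > 0; Offset /= 2)
      for (std::size_t Lane = 0; Lane < Offset; ++Lane)
        // Value shuffle: read the remote value from lane (Lane + Offset);
        // value aggregation: combine it into the local value.
        Lanes[Lane] += Lanes[Lane + Offset];
    printf("fully reduced value in lane 0: %d\n", Lanes[0]);
    return 0;
  }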

Definition at line 2761 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenTypes::arrangeBuiltinFunctionDeclaration(), clang::CodeGen::CodeGenFunction::Builder, clang::CodeGen::CodeGenFunction::ConvertTypeForMem(), clang::Create(), clang::CodeGen::CodeGenFunction::createBasicBlock(), clang::CodeGen::CodeGenFunction::CreateMemTemp(), clang::CodeGen::CGBuilderTy::CreatePointerBitCastOrAddrSpaceCast(), clang::CodeGen::CodeGenFunction::EmitBlock(), clang::CodeGen::CodeGenFunction::EmitLoadOfScalar(), emitReductionListCopy(), clang::CodeGen::CodeGenFunction::FinishFunction(), clang::CodeGen::CodeGenFunction::GetAddrOfLocalVar(), clang::CodeGen::CodeGenModule::getContext(), clang::CodeGen::CodeGenTypes::GetFunctionType(), clang::CodeGen::CodeGenModule::getModule(), clang::CodeGen::CodeGenModule::getOpenMPRuntime(), clang::CodeGen::Address::getPointer(), clang::CodeGen::CodeGenTypeCache::getPointerAlign(), clang::CodeGen::CodeGenModule::getTypes(), clang::InternalLinkage, clang::ImplicitParamDecl::Other, clang::CodeGen::CodeGenModule::SetInternalFunctionAttributes(), clang::ASTContext::ShortTy, clang::CodeGen::CodeGenFunction::StartFunction(), clang::CodeGen::CodeGenTypeCache::VoidPtrTy, clang::ASTContext::VoidPtrTy, and clang::ASTContext::VoidTy.

◆ getDataSharingMode()

static CGOpenMPRuntimeNVPTX::DataSharingMode getDataSharingMode (CodeGenModule &CGM)

◆ getMasterThreadID()

static llvm::Value * getMasterThreadID (CodeGenFunction &CGF)

Get the thread id of the OMP master thread.

The master thread id is the first thread (lane) of the last warp in the GPU block. The warp size is assumed to be some power of 2. Thread ids are 0-indexed. E.g., if NumThreads is 33, the master id is 32; if NumThreads is 64, the master id is 32; if NumThreads is 1024, the master id is 992.
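These examples follow from rounding NumThreads - 1 down to a multiple of the warp size; a plain C++ restatement of that arithmetic (a sketch assuming the 32-lane warp, with a helper name of our choosing):

  #include <cstdio>

  unsigned masterThreadID(unsigned NumThreads, unsigned WarpSize = 32) {
    return (NumThreads - 1) & ~(WarpSize - 1); // first lane of the last warp
  }

  int main() {
    for (unsigned N : {33u, 64u, 1024u})
      printf("NumThreads=%u -> master id %u\n", N, masterThreadID(N));
    return 0;
  }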

Definition at line 569 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenTypes::arrangeNullaryFunction(), clang::CodeGen::CodeGenFunction::Builder, clang::Create(), clang::CodeGen::CodeGenTypes::GetFunctionType(), clang::CodeGen::CodeGenModule::getModule(), getNVPTXNumThreads(), getNVPTXWarpSize(), clang::CodeGen::CodeGenModule::getTypes(), clang::InternalLinkage, and clang::CodeGen::CodeGenModule::SetInternalFunctionAttributes().

Referenced by clang::CodeGen::CGOpenMPRuntimeNVPTX::emitParallelCall(), and supportsSPMDExecutionMode().

◆ getNVPTXBarrier()

static void getNVPTXBarrier (CodeGenFunction &CGF, int ID, llvm::Value *NumThreads)

Get barrier #ID to synchronize selected (multiple of warp size) threads in a CTA.

Definition at line 531 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenFunction::Builder, clang::CodeGen::CodeGenFunction::CGM, clang::CodeGen::CodeGenFunction::EmitRuntimeCall(), and clang::CodeGen::CodeGenModule::getModule().

Referenced by syncParallelThreads().

◆ getNVPTXCTABarrier()

static void getNVPTXCTABarrier (CodeGenFunction &CGF)

◆ getNVPTXLaneID()

static llvm::Value * getNVPTXLaneID (CodeGenFunction &CGF)

Get the id of the current lane in the Warp.

We assume that the warp size is 32, which is always the case on the NVPTX device, to generate more efficient code.
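With the warp size fixed at 32, the lane id is simply the low five bits of the thread id; as a host-side one-liner (the helper name is ours):

  unsigned nvptxLaneID(unsigned ThreadID) { return ThreadID & 31u; } // ThreadID % 32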

Definition at line 509 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenFunction::Builder, and getNVPTXThreadID().

Referenced by emitInterWarpCopyFunction().

◆ getNVPTXNumThreads()

static llvm::Value * getNVPTXNumThreads (CodeGenFunction &CGF)

◆ getNVPTXThreadID()

static llvm::Value * getNVPTXThreadID (CodeGenFunction &CGF)

◆ getNVPTXWarpID()

static llvm::Value * getNVPTXWarpID (CodeGenFunction &CGF)

Get the id of the warp in the block.

We assume that the warp size is 32, which is always the case on the NVPTX device, to generate more efficient code.
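Correspondingly, the warp id is the thread id with the five lane-id bits shifted out; as a host-side one-liner (the helper name is ours):

  unsigned nvptxWarpID(unsigned ThreadID) { return ThreadID >> 5u; } // ThreadID / 32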

Definition at line 501 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenFunction::Builder, and getNVPTXThreadID().

Referenced by emitInterWarpCopyFunction().

◆ getNVPTXWarpSize()

static llvm::Value * getNVPTXWarpSize (CodeGenFunction &CGF)

◆ getSingleCompoundChild()

static const Stmt * getSingleCompoundChild (const Stmt *Body)

Checks if the Body is the CompoundStmt and returns its child statement iff there is only one.
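For instance, in a target region of the following hypothetical shape, the body's CompoundStmt has exactly one child, the nested teams directive, which is what hasNestedSPMDDirective() then inspects:

  void target_with_single_child(int n, int *a) {
    #pragma omp target
    {
      // The compound body contains exactly one statement, so that statement
      // (the nested directive) is returned rather than the CompoundStmt.
      #pragma omp teams distribute parallel for
      for (int i = 0; i < n; ++i) a[i] = i;
    }
  }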

Definition at line 611 of file CGOpenMPRuntimeNVPTX.cpp.

Referenced by hasNestedSPMDDirective().

◆ getThreadLimit()

static llvm::Value * getThreadLimit (CodeGenFunction &CGF, bool IsInSpmdExecutionMode = false)

Get the value of the thread_limit clause in the teams directive.

For the 'generic' execution mode, the runtime encodes thread_limit in the launch parameters, always starting thread_limit+warpSize threads per CTA. The threads in the last warp are reserved for master execution. For the 'spmd' execution mode, all threads in a CTA are part of the team.
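The two cases reduce to a small expression; a host-side sketch, assuming the 32-lane warp and using names of our choosing:

  unsigned threadLimit(unsigned NVPTXNumThreads, bool IsInSpmdExecutionMode,
                       unsigned WarpSize = 32) {
    // SPMD mode: every thread in the CTA belongs to the team.
    // Generic mode: the last warp is reserved for master execution.
    return IsInSpmdExecutionMode ? NVPTXNumThreads : NVPTXNumThreads - WarpSize;
  }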

Definition at line 553 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::CodeGen::CodeGenFunction::Builder, getNVPTXNumThreads(), and getNVPTXWarpSize().

Referenced by supportsSPMDExecutionMode().

◆ hasNestedSPMDDirective()

static bool hasNestedSPMDDirective (ASTContext &Ctx, const OMPExecutableDirective &D)

◆ hasParallelIfNumThreadsClause()

static bool hasParallelIfNumThreadsClause (ASTContext &Ctx, const OMPExecutableDirective &D)

Check if the parallel directive has an 'if' clause with non-constant or false condition.

Also, check if the number of threads is strictly specified and run those directives in non-SPMD mode.
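Two hypothetical OpenMP fragments that illustrate these conditions (the function name and loop bodies are made up for the example):

  void example(int n, int *a) {
    // Non-constant 'if' condition: the parallel region may end up serialized,
    // so SPMD code generation is not used.
    #pragma omp target teams
    #pragma omp parallel for if (n > 100)
    for (int i = 0; i < n; ++i) a[i] = i;

    // Explicit num_threads: the thread count is strictly specified, which
    // likewise forces the non-SPMD (generic) scheme.
    #pragma omp target teams
    #pragma omp parallel for num_threads(64)
    for (int i = 0; i < n; ++i) a[i] += 1;
  }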

Definition at line 621 of file CGOpenMPRuntimeNVPTX.cpp.

References clang::Expr::EvaluateAsBooleanCondition(), clang::OMPExecutableDirective::getClausesOfKind(), clang::OMPExecutableDirective::hasClausesOfKind(), and clang::OMPD_unknown.

Referenced by hasNestedSPMDDirective(), and supportsSPMDExecutionMode().

◆ setPropertyExecutionMode()

static void setPropertyExecutionMode (CodeGenModule &CGM, StringRef Name, bool Mode)

◆ supportsSPMDExecutionMode()

static bool supportsSPMDExecutionMode (ASTContext &Ctx, const OMPExecutableDirective &D)

◆ syncCTAThreads()

static void syncCTAThreads (CodeGenFunction &CGF)

Synchronize all GPU threads in a block.

Definition at line 541 of file CGOpenMPRuntimeNVPTX.cpp.

References getNVPTXCTABarrier().

Referenced by clang::CodeGen::CGOpenMPRuntimeNVPTX::emitParallelCall(), and supportsSPMDExecutionMode().

◆ syncParallelThreads()

static void syncParallelThreads (CodeGenFunction &CGF, llvm::Value *NumThreads)

Synchronize worker threads in a parallel region.

Definition at line 544 of file CGOpenMPRuntimeNVPTX.cpp.

References getNVPTXBarrier().

Referenced by emitInterWarpCopyFunction().