GEMM

namespace librapid
namespace linalg

Functions

template<typename Int, typename Alpha, typename A, typename B, typename Beta, typename C>
void gemm(bool transA, bool transB, Int m, Int n, Int k, Alpha alpha, A *a, Int lda, B *b, Int ldb, Beta beta, C *c, Int ldc, backend::CPU backend = backend::CPU())

General matrix-matrix multiplication.

Computes \( \mathbf{C} = \alpha \mathrm{OP}_A(\mathbf{A}) \mathrm{OP}_B(\mathbf{B}) + \beta \mathbf{C} \) for matrices \( \mathbf{A} \), \( \mathbf{B} \) and \( \mathbf{C} \). \( \mathrm{OP}_A \) and \( \mathrm{OP}_B \) are either the identity or the transpose operation.

Template Parameters
  • Int – Integer type for matrix dimensions

  • Alpha – Type of \( \alpha \)

  • A – Type of \( \mathbf{A} \)

  • B – Type of \( \mathbf{B} \)

  • Beta – Type of \( \beta \)

  • C – Type of \( \mathbf{C} \)

Parameters
  • transA – Whether to transpose \( \mathbf{A} \) (determines \( \mathrm{OP}_A \))

  • transB – Whether to transpose \( \mathbf{B} \) (determines \( \mathrm{OP}_B \))

  • m – Rows of \( \mathrm{OP}_A(\mathbf{A}) \) and \( \mathbf{C} \)

  • n – Columns of \( \mathrm{OP}_B(\mathbf{B}) \) and \( \mathbf{C} \)

  • k – Columns of \( \mathrm{OP}_A(\mathbf{A}) \) and rows of \( \mathrm{OP}_B(\mathbf{B}) \)

  • alpha – Scalar \( \alpha \)

  • a – Pointer to \( \mathbf{A} \)

  • lda – Leading dimension of \( \mathbf{A} \)

  • b – Pointer to \( \mathbf{B} \)

  • ldb – Leading dimension of \( \mathbf{B} \)

  • beta – Scalar \( \beta \)

  • c – Pointer to \( \mathbf{C} \)

  • ldc – Leading dimension of \( \mathbf{C} \)

  • backend – Backend to use for computation
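
The following is a minimal usage sketch for the CPU overload. It is not taken from the library's documentation: the include path and the row-major storage assumption (leading dimension = row stride) are guesses and may need to be adapted to the library's actual conventions.

   #include <librapid/librapid.hpp> // hypothetical include path
   #include <cstdint>
   #include <cstdio>
   #include <vector>

   int main() {
       const std::int64_t m = 2, n = 3, k = 4;

       // Row-major buffers (assumed layout): A is m x k, B is k x n, C is m x n.
       std::vector<double> a(m * k, 1.0);
       std::vector<double> b(k * n, 2.0);
       std::vector<double> c(m * n, 0.0);

       // C = 1.0 * A * B + 0.0 * C, with no transposition.
       // Leading dimensions are the row strides under the row-major assumption.
       librapid::linalg::gemm(false, false, m, n, k,
                              1.0, a.data(), k,
                                   b.data(), n,
                              0.0, c.data(), n,
                              librapid::backend::CPU());

       for (std::int64_t i = 0; i < m; ++i) {
           for (std::int64_t j = 0; j < n; ++j) std::printf("%6.1f ", c[i * n + j]);
           std::printf("\n");
       }
   }

With \( \alpha = 1 \) and \( \beta = 0 \), every entry of \( \mathbf{C} \) becomes 8: the dot product of a row of ones with a column of twos over \( k = 4 \) terms.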

CuBLASGemmComputeType cublasGemmComputeType(cublasDataType_t a, cublasDataType_t b, cublasDataType_t c)

Determine the cuBLAS compute type and scale type to use for a GEMM whose operands have data types a, b and c.

template<typename Int, typename Alpha, typename A, typename B, typename Beta, typename C>
void gemm(bool transA, bool transB, Int m, Int n, Int k, Alpha alpha, A *a, Int lda, B *b, Int ldb, Beta beta, C *c, Int ldc, backend::CUDA)

CUDA-backend overload of gemm(). The parameters have the same meaning as in the CPU overload above.

struct CuBLASGemmComputeType
#include <gemm.hpp>

Public Members

cublasComputeType_t computeType
cublasDataType_t scaleType
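
The sketch below illustrates one plausible promotion policy for pairing operand data types with a cuBLAS compute/scale type. It is an illustration only, not the library's actual cublasGemmComputeType() implementation; the selection rules and the helper name are assumptions.

   #include <cublas_v2.h> // provides cublasComputeType_t and cublasDataType_t

   // Mirrors the struct documented above.
   struct CuBLASGemmComputeType {
       cublasComputeType_t computeType;
       cublasDataType_t scaleType;
   };

   // Hypothetical selector: promote to the widest real type among the operands.
   // The library's cublasGemmComputeType() may support more types (complex,
   // TF32, ...) and may apply different rules.
   inline CuBLASGemmComputeType selectGemmComputeType(cublasDataType_t a,
                                                      cublasDataType_t b,
                                                      cublasDataType_t c) {
       if (a == CUDA_R_64F || b == CUDA_R_64F || c == CUDA_R_64F)
           return {CUBLAS_COMPUTE_64F, CUDA_R_64F};
       if (a == CUDA_R_32F || b == CUDA_R_32F || c == CUDA_R_32F)
           return {CUBLAS_COMPUTE_32F, CUDA_R_32F};
       return {CUBLAS_COMPUTE_16F, CUDA_R_16F}; // all-half operands
   }

The resulting computeType and scaleType pair is the kind of information cublasGemmEx() expects alongside the operand pointers and their data types, which is presumably how the CUDA overload of gemm() consumes it.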