Extension for CUDA.jl
Introduction
This extension supports the conversion of QuantumObject.data from standard dense and sparse CPU arrays to GPU (CUDA.jl) arrays.
This extension is loaded automatically when the user imports both QuantumToolbox.jl and CUDA.jl:
using QuantumToolbox
using CUDA
using CUDA.CUSPARSE
CUDA.allowscalar(false) # Avoid unexpected scalar indexing
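Before converting any data, it can be useful to verify that a CUDA-capable GPU is actually available. This check is provided by CUDA.jl itself, not by the extension:

CUDA.functional() # returns true if a usable CUDA GPU is available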
We wrap several functions from CUDA and CUDA.CUSPARSE so that they not only convert QuantumObject.data into GPU arrays, but can also change the element type and word size (32 or 64), since some GPUs perform better with 32-bit precision. The wrapped functions are listed below (where the input A is a QuantumObject):
- cu(A; word_size=64): returns a new QuantumObject with CUDA arrays and the specified word_size.
- CuArray(A): if A.data is a dense array, returns a new QuantumObject with CUDA.CuArray.
- CuArray{T}(A): if A.data is a dense array, returns a new QuantumObject with CUDA.CuArray of element type T.
- CuSparseVector(A): if A.data is a sparse vector, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseVector.
- CuSparseVector{T}(A): if A.data is a sparse vector, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseVector of element type T.
- CuSparseMatrixCSC(A): if A.data is a sparse matrix, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseMatrixCSC.
- CuSparseMatrixCSC{T}(A): if A.data is a sparse matrix, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseMatrixCSC of element type T.
- CuSparseMatrixCSR(A): if A.data is a sparse matrix, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseMatrixCSR.
- CuSparseMatrixCSR{T}(A): if A.data is a sparse matrix, returns a new QuantumObject with CUDA.CUSPARSE.CuSparseMatrixCSR of element type T.
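As a minimal sketch of the explicit constructors listed above (assuming the setup shown earlier and a CUDA-capable GPU), one can target a specific storage format and element type directly:

M = sigmax()                      # CPU sparse operator (SparseMatrixCSC)
CuSparseMatrixCSR(M)              # new QuantumObject stored in CSR format on the GPU
CuSparseMatrixCSR{ComplexF32}(M)  # same, but converted to a 32-bit element type
CuArray(fock(2, 0))               # dense Ket stored as a CuArray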
We recommend converting arrays from CPU to GPU memory with the function cu, because it accepts input QuantumObjects with different data types.
Here are some examples:
Converting dense arrays
V = fock(2, 0) # CPU dense vector
Quantum Object: type=Ket dims=[2] size=(2,)
2-element Vector{ComplexF64}:
1.0 + 0.0im
0.0 + 0.0im
cu(V)
Quantum Object: type=Ket dims=[2] size=(2,)
2-element CuArray{ComplexF64, 1, CUDA.DeviceMemory}:
1.0 + 0.0im
0.0 + 0.0im
cu(V; word_size = 32)
Quantum Object: type=Ket dims=[2] size=(2,)
2-element CuArray{ComplexF32, 1, CUDA.DeviceMemory}:
1.0 + 0.0im
0.0 + 0.0im
M = Qobj([1 2; 3 4]) # CPU dense matrix
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=false
2×2 Matrix{Int64}:
1 2
3 4
cu(M)
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=false
2×2 CuArray{Int64, 2, CUDA.DeviceMemory}:
1 2
3 4
cu(M; word_size = 32)
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=false
2×2 CuArray{Int32, 2, CUDA.DeviceMemory}:
1 2
3 4
Converting sparse arrays
V = fock(2, 0; sparse=true) # CPU sparse vector
Quantum Object: type=Ket dims=[2] size=(2,)
2-element SparseVector{ComplexF64, Int64} with 1 stored entry:
[1] = 1.0+0.0im
cu(V)
Quantum Object: type=Ket dims=[2] size=(2,)
2-element CuSparseVector{ComplexF64, Int32} with 1 stored entry:
[1] = 1.0+0.0im
cu(V; word_size = 32)
Quantum Object: type=Ket dims=[2] size=(2,)
2-element CuSparseVector{ComplexF32, Int32} with 1 stored entry:
[1] = 1.0+0.0im
M = sigmax() # CPU sparse matrix
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=true
2×2 SparseMatrixCSC{ComplexF64, Int64} with 2 stored entries:
⋅ 1.0+0.0im
1.0+0.0im ⋅
cu(M)
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=true
2×2 CuSparseMatrixCSC{ComplexF64, Int32} with 2 stored entries:
⋅ 1.0+0.0im
1.0+0.0im ⋅
cu(M; word_size = 32)
Quantum Object: type=Operator dims=[2] size=(2, 2) ishermitian=true
2×2 CuSparseMatrixCSC{ComplexF32, Int32} with 2 stored entries:
⋅ 1.0+0.0im
1.0+0.0im ⋅
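Beyond the conversions shown above, the resulting GPU-backed QuantumObjects can be combined in the usual way. A minimal sketch, assuming the standard QuantumObject algebra dispatches to the corresponding CUDA kernels:

V_gpu = cu(fock(2, 0))  # dense Ket stored as a CuArray
M_gpu = cu(sigmax())    # sparse operator stored as a CuSparseMatrixCSC
M_gpu * V_gpu           # matrix-vector product performed on the GPU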