# Extension for CUDA.jl
## Introduction
This is an extension to support the conversion of `QuantumObject.data` from standard dense and sparse CPU arrays to GPU (CUDA.jl) arrays.
This extension will be loaded automatically if the user imports both QuantumToolbox and CUDA.jl:
```julia
using QuantumToolbox
using CUDA
using CUDA.CUSPARSE
CUDA.allowscalar(false) # Avoid unexpected scalar indexing
```

We wrap several functions in `CUDA` and `CUDA.CUSPARSE` in order to not only convert `QuantumObject.data` into GPU arrays, but also change the element type and word size (32 or 64), since some GPUs perform better with 32-bit precision. The functions are listed as follows (where the input `A` is a `QuantumObject`):
- `cu(A; word_size = 64)`: return a new `QuantumObject` with `CUDA` arrays and the specified `word_size`.
- `CuArray(A)`: If `A.data` is a dense array, return a new `QuantumObject` with `CUDA.CuArray`.
- `CuArray{T}(A)`: If `A.data` is a dense array, return a new `QuantumObject` with `CUDA.CuArray` under element type `T`.
- `CuSparseVector(A)`: If `A.data` is a sparse vector, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseVector`.
- `CuSparseVector{T}(A)`: If `A.data` is a sparse vector, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseVector` under element type `T`.
- `CuSparseMatrixCSC(A)`: If `A.data` is a sparse matrix, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseMatrixCSC`.
- `CuSparseMatrixCSC{T}(A)`: If `A.data` is a sparse matrix, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseMatrixCSC` under element type `T`.
- `CuSparseMatrixCSR(A)`: If `A.data` is a sparse matrix, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseMatrixCSR`.
- `CuSparseMatrixCSR{T}(A)`: If `A.data` is a sparse matrix, return a new `QuantumObject` with `CUDA.CUSPARSE.CuSparseMatrixCSR` under element type `T`.
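The typed constructors above can also be called directly when a specific element type or storage format is wanted. A minimal sketch (this assumes a CUDA-capable GPU is available; the variable names are illustrative):

```julia
using QuantumToolbox
using CUDA
using CUDA.CUSPARSE

V = fock(2, 0)                   # CPU dense vector
V32 = CuArray{ComplexF32}(V)     # dense GPU copy with element type ComplexF32

Vs = fock(2, 0; sparse = true)   # CPU sparse vector
Vs_gpu = CuSparseVector(Vs)      # sparse GPU copy, element type unchanged
```

Each call returns a new `QuantumObject` whose `data` field lives in GPU memory.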
We recommend converting arrays from CPU to GPU memory with the function `cu`, because it accepts all the different data types of an input `QuantumObject`.
Here are some examples:
## Converting dense arrays

```julia
V = fock(2, 0) # CPU dense vector
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element Vector{ComplexF64}:
 1.0 + 0.0im
 0.0 + 0.0im
```

```julia
cu(V)
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element CuArray{ComplexF64, 1, CUDA.DeviceMemory}:
 1.0 + 0.0im
 0.0 + 0.0im
```

```julia
cu(V; word_size = 32)
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element CuArray{ComplexF32, 1, CUDA.DeviceMemory}:
 1.0 + 0.0im
 0.0 + 0.0im
```

```julia
M = Qobj([1 2; 3 4]) # CPU dense matrix
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=false
2×2 Matrix{Int64}:
 1  2
 3  4
```

```julia
cu(M)
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=false
2×2 CuArray{Int64, 2, CUDA.DeviceMemory}:
 1  2
 3  4
```

```julia
cu(M; word_size = 32)
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=false
2×2 CuArray{Int32, 2, CUDA.DeviceMemory}:
 1  2
 3  4
```

## Converting sparse arrays
```julia
V = fock(2, 0; sparse = true) # CPU sparse vector
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element SparseVector{ComplexF64, Int64} with 1 stored entry:
  [1]  =  1.0+0.0im
```

```julia
cu(V)
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element CuSparseVector{ComplexF64, Int32} with 1 stored entry:
  [1]  =  1.0+0.0im
```

```julia
cu(V; word_size = 32)
```

```
Quantum Object:   type=Ket   dims=[2]   size=(2,)
2-element CuSparseVector{ComplexF32, Int32} with 1 stored entry:
  [1]  =  1.0+0.0im
```

```julia
M = sigmax() # CPU sparse matrix
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=true
2×2 SparseMatrixCSC{ComplexF64, Int64} with 2 stored entries:
     ⋅      1.0+0.0im
 1.0+0.0im      ⋅
```

```julia
cu(M)
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=true
2×2 CuSparseMatrixCSC{ComplexF64, Int32} with 2 stored entries:
     ⋅      1.0+0.0im
 1.0+0.0im      ⋅
```

```julia
cu(M; word_size = 32)
```

```
Quantum Object:   type=Operator   dims=[2]   size=(2, 2)   ishermitian=true
2×2 CuSparseMatrixCSC{ComplexF32, Int32} with 2 stored entries:
     ⋅      1.0+0.0im
 1.0+0.0im      ⋅
```
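The sparse conversions above produce the CSC format. When a row-major layout is preferable, the CSR constructors from the function list can be used instead. A minimal sketch (this assumes a CUDA-capable GPU is available; the variable names are illustrative):

```julia
using QuantumToolbox
using CUDA
using CUDA.CUSPARSE

M = sigmax()                                # CPU sparse matrix
M_csr = CuSparseMatrixCSR(M)                # GPU sparse matrix in CSR format
M_csr32 = CuSparseMatrixCSR{ComplexF32}(M)  # same, with 32-bit complex elements
```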