Documentation for NVQC (#1412)
1tnguyen committed Mar 18, 2024
1 parent b95172e commit ca78d1a
Showing 10 changed files with 430 additions and 3 deletions.
8 changes: 7 additions & 1 deletion .github/workflows/config/md_link_check_config.json
@@ -23,6 +23,12 @@
},
{
"pattern": "openapi.html"
},
{
"pattern": "^https://www.nvidia.com/en-us/solutions/quantum-computing/cloud"
},
{
"pattern": "^https://developer.nvidia.com/quantum-cloud-early-access"
}
]
}
}
1 change: 1 addition & 0 deletions .github/workflows/config/spelling_allowlist.txt
@@ -220,6 +220,7 @@ reStructuredText
runtime
rvalue
scalability
scalable
sexualized
struct
structs
2 changes: 1 addition & 1 deletion .github/workflows/python_wheels.yml
@@ -289,7 +289,7 @@ jobs:
docker run --rm -dit --name wheel-validation-snippets wheel_validation:local
status_sum=0
for ex in `find docs/sphinx/snippets/python -name '*.py' -not -path '*/platform/*'`; do
for ex in `find docs/sphinx/snippets/python -name '*.py' -not -path '*/platform/*' -not -path '*/nvqc/*'`; do
file="${ex#docs/sphinx/snippets/python/}"
echo "__Snippet ${file}:__" >> /tmp/validation.out
(docker exec wheel-validation-snippets bash -c "python${{ inputs.python_version }} /tmp/snippets/$file" >> /tmp/validation.out) && success=true || success=false
7 changes: 6 additions & 1 deletion docs/sphinx/releases.rst
@@ -12,7 +12,12 @@ The latest version of CUDA Quantum is on the main branch of our `GitHub reposito

**0.7.0**

The 0.7.0 release adds support for using NVIDIA Quantum Cloud, giving you access to our most powerful GPU-accelerated simulators even if you don't have an NVIDIA GPU. With 0.7.0, we have furthermore greatly increased expressiveness of the Python and C++ language frontends. Check our our `documentation <https://nvidia.github.io/cuda-quantum/latest/using/quick_start.html>`__ to learn more about the new Python syntax support we have added, and `follow our blog <https://developer.nvidia.com/cuda-q>`__ to learn more about the new setup and its performance benefits.
The 0.7.0 release adds support for using :doc:`NVIDIA Quantum Cloud <using/backends/nvqc>`,
giving you access to our most powerful GPU-accelerated simulators even if you don't have an NVIDIA GPU.
With 0.7.0, we have furthermore greatly increased expressiveness of the Python and C++ language frontends.
Check out our `documentation <https://nvidia.github.io/cuda-quantum/latest/using/quick_start.html>`__
to get started with the new Python syntax support we have added, and `follow our blog <https://developer.nvidia.com/cuda-q>`__
to learn more about the new setup and its performance benefits.

- `Docker image <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda-quantum>`__
- `Python wheel <https://pypi.org/project/cuda-quantum/>`__
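
As a quick illustration of the new backend described in the release note above, here is a minimal sketch modeled on the nvqc_intro.py snippet added in this commit; it assumes valid NVQC credentials are configured in the environment as described in the NVQC documentation:

import cudaq

# Route simulation to the NVIDIA Quantum Cloud simulators instead of local hardware.
cudaq.set_target("nvqc")

# A small Bell-state kernel; the sampling job is executed remotely on NVQC.
kernel = cudaq.make_kernel()
qubits = kernel.qalloc(2)
kernel.h(qubits[0])
kernel.cx(qubits[0], qubits[1])
kernel.mz(qubits)

print(cudaq.sample(kernel))
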
29 changes: 29 additions & 0 deletions docs/sphinx/snippets/cpp/using/cudaq/nvqc/nvqc_intro.cpp
@@ -0,0 +1,29 @@
/*******************************************************************************
* Copyright (c) 2022 - 2024 NVIDIA Corporation & Affiliates. *
* All rights reserved. *
* *
* This source code and the accompanying materials are made available under *
* the terms of the Apache License 2.0 which accompanies this distribution. *
******************************************************************************/

// [Begin Documentation]
#include <cudaq.h>

// Define a simple quantum kernel to execute on NVQC.
struct ghz {
  // Maximally entangled state between 25 qubits.
  auto operator()() __qpu__ {
    constexpr int NUM_QUBITS = 25;
    cudaq::qvector q(NUM_QUBITS);
    h(q[0]);
    for (int i = 0; i < NUM_QUBITS - 1; i++) {
      x<cudaq::ctrl>(q[i], q[i + 1]);
    }
    auto result = mz(q);
  }
};

int main() {
  auto counts = cudaq::sample(ghz{});
  counts.dump();
}
51 changes: 51 additions & 0 deletions docs/sphinx/snippets/cpp/using/cudaq/nvqc/nvqc_mqpu.cpp
@@ -0,0 +1,51 @@
/*******************************************************************************
* Copyright (c) 2022 - 2024 NVIDIA Corporation & Affiliates. *
* All rights reserved. *
* *
* This source code and the accompanying materials are made available under *
* the terms of the Apache License 2.0 which accompanies this distribution. *
******************************************************************************/

// [Begin Documentation]
#include <cudaq.h>
#include <cudaq/algorithm.h>
#include <cudaq/gradients.h>
#include <cudaq/optimizers.h>
#include <iostream>

int main() {
  using namespace cudaq::spin;
  cudaq::spin_op h = 5.907 - 2.1433 * x(0) * x(1) - 2.1433 * y(0) * y(1) +
                     .21829 * z(0) - 6.125 * z(1);

  auto [ansatz, theta] = cudaq::make_kernel<double>();
  auto q = ansatz.qalloc();
  auto r = ansatz.qalloc();
  ansatz.x(q);
  ansatz.ry(theta, r);
  ansatz.x<cudaq::ctrl>(r, q);

  // Run VQE with a gradient-based optimizer.
  // Delegate cost function and gradient computation across different NVQC-based
  // QPUs.
  // Note: this needs to be compiled with `--nvqc-nqpus 3` to create 3 virtual
  // QPUs.
  cudaq::optimizers::lbfgs optimizer;
  auto [opt_val, opt_params] = optimizer.optimize(
      /*dim=*/1, /*opt_function*/ [&](const std::vector<double> &params,
                                      std::vector<double> &grads) {
        // Queue asynchronous jobs to do energy evaluations across multiple QPUs
        auto energy_future =
            cudaq::observe_async(/*qpu_id=*/0, ansatz, h, params[0]);
        const double paramShift = M_PI_2;
        auto plus_future = cudaq::observe_async(/*qpu_id=*/1, ansatz, h,
                                                params[0] + paramShift);
        auto minus_future = cudaq::observe_async(/*qpu_id=*/2, ansatz, h,
                                                 params[0] - paramShift);
        grads[0] = (plus_future.get().expectation() -
                    minus_future.get().expectation()) /
                   2.0;
        return energy_future.get().expectation();
      });
  std::cout << "Minimum energy = " << opt_val << " (expected -1.74886).\n";
}
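
The gradient computed inside the optimizer callback above (and in the Python version that follows) is a symmetric parameter-shift estimate with a shift of pi/2, evaluated on two additional virtual QPUs while a third evaluates the energy itself. Written out as a formula (not part of the committed snippet, shown here for clarity):

\frac{\partial \langle H \rangle}{\partial \theta} \approx \frac{\langle H \rangle(\theta + \pi/2) - \langle H \rangle(\theta - \pi/2)}{2}

The Python variant below achieves the same three-way job distribution with cudaq.set_target("nvqc", nqpus=3) instead of the `--nvqc-nqpus 3` compile flag.
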
23 changes: 23 additions & 0 deletions docs/sphinx/snippets/python/using/cudaq/nvqc/nvqc_intro.py
@@ -0,0 +1,23 @@
# ============================================================================ #
# Copyright (c) 2022 - 2024 NVIDIA Corporation & Affiliates. #
# All rights reserved. #
# #
# This source code and the accompanying materials are made available under #
# the terms of the Apache License 2.0 which accompanies this distribution. #
# ============================================================================ #
# [Begin Documentation]
import cudaq

cudaq.set_target("nvqc")
num_qubits = 25
# Define a simple quantum kernel to execute on NVQC.
kernel = cudaq.make_kernel()
qubits = kernel.qalloc(num_qubits)
# Maximally entangled state between 25 qubits.
kernel.h(qubits[0])
for i in range(num_qubits - 1):
    kernel.cx(qubits[i], qubits[i + 1])
kernel.mz(qubits)

counts = cudaq.sample(kernel)
print(counts)
54 changes: 54 additions & 0 deletions docs/sphinx/snippets/python/using/cudaq/nvqc/nvqc_mqpu.py
@@ -0,0 +1,54 @@
# ============================================================================ #
# Copyright (c) 2022 - 2024 NVIDIA Corporation & Affiliates. #
# All rights reserved. #
# #
# This source code and the accompanying materials are made available under #
# the terms of the Apache License 2.0 which accompanies this distribution. #
# ============================================================================ #
# [Begin Documentation]
import cudaq
from cudaq import spin
import math

# Use NVQC with 3 virtual QPUs
cudaq.set_target("nvqc", nqpus=3)

print("Number of QPUs:", cudaq.get_target().num_qpus())
# Create the parameterized ansatz
kernel, theta = cudaq.make_kernel(float)
qreg = kernel.qalloc(2)
kernel.x(qreg[0])
kernel.ry(theta, qreg[1])
kernel.cx(qreg[1], qreg[0])

# Define its spin Hamiltonian.
hamiltonian = (5.907 - 2.1433 * spin.x(0) * spin.x(1) -
               2.1433 * spin.y(0) * spin.y(1) + 0.21829 * spin.z(0) -
               6.125 * spin.z(1))


def opt_gradient(parameter_vector):
    # Evaluate energy and gradient on different remote QPUs
    # (i.e., concurrent job submissions to NVQC)
    energy_future = cudaq.observe_async(kernel,
                                        hamiltonian,
                                        parameter_vector[0],
                                        qpu_id=0)
    plus_future = cudaq.observe_async(kernel,
                                      hamiltonian,
                                      parameter_vector[0] + 0.5 * math.pi,
                                      qpu_id=1)
    minus_future = cudaq.observe_async(kernel,
                                       hamiltonian,
                                       parameter_vector[0] - 0.5 * math.pi,
                                       qpu_id=2)
    return (energy_future.get().expectation(), [
        (plus_future.get().expectation() - minus_future.get().expectation()) /
        2.0
    ])


optimizer = cudaq.optimizers.LBFGS()
optimal_value, optimal_parameters = optimizer.optimize(1, opt_gradient)
print("Ground state energy =", optimal_value)
print("Optimal parameters =", optimal_parameters)
2 changes: 2 additions & 0 deletions docs/sphinx/using/backends/backends.rst
@@ -7,6 +7,7 @@ CUDA Quantum Backends

Simulation <simulators.rst>
Quantum Hardware <hardware.rst>
NVIDIA Quantum Cloud <nvqc.rst>
Multi-Processor Platforms <platform.rst>

**The following is a comprehensive list of the available targets in CUDA Quantum:**
@@ -18,6 +19,7 @@ CUDA Quantum Backends
* :ref:`nvidia-fp64 <nvidia-fp64-backend>`
* :ref:`nvidia-mqpu <nvidia-mgpu-backend>`
* :ref:`nvidia-mqpu-fp64 <nvidia-mgpu-backend>`
* :doc:`nvqc <nvqc>`
* :ref:`oqc <oqc-backend>`
* :ref:`qpp-cpu <qpp-cpu-backend>`
* :ref:`quantinuum <quantinuum-backend>`