Spellcheck Done [Unitary Hack 2024] (#12501)
* spell check iter1

* spell check iter 2

* Fix fmt

* Update qiskit/_numpy_compat.py

* Update qiskit/synthesis/evolution/product_formula.py

* Update qiskit/synthesis/evolution/product_formula.py

* Update releasenotes/notes/0.13/qinfo-states-7f67e2432cf0c12c.yaml

* undo some corrections

---------

Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
Co-authored-by: Luciano Bello <bel@zurich.ibm.com>
Co-authored-by: Julien Gacon <jules.gacon@googlemail.com>
4 people committed Jun 19, 2024
1 parent 53667d1 commit 0f51357
Showing 223 changed files with 370 additions and 369 deletions.
2 changes: 1 addition & 1 deletion .binder/postBuild
@@ -7,7 +7,7 @@
# - pylatexenc: for MPL drawer
# - pillow: for image comparison
# - appmode: jupyter extension for executing the notebook
# - seaborn: visualisation pacakge required for some graphs
# - seaborn: visualization pacakge required for some graphs
pip install matplotlib pylatexenc pillow appmode seaborn
pip install .

2 changes: 1 addition & 1 deletion .github/workflows/backport.yml
@@ -1,6 +1,6 @@
name: Backport metadata

# Mergify manages the opening of the backport PR, this workflow is just to extend its behaviour to
# Mergify manages the opening of the backport PR, this workflow is just to extend its behavior to
# do useful things like copying across the tagged labels and milestone from the base PR.

on:
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -532,7 +532,7 @@ we used in our CI systems more closely.

### Snapshot Testing for Visualizations

If you are working on code that makes changes to any matplotlib visualisations
If you are working on code that makes changes to any matplotlib visualizations
you will need to check that your changes don't break any snapshot tests, and add
new tests where necessary. You can do this as follows:

@@ -543,7 +543,7 @@ the snapshot tests (note this may take some time to finish loading).
3. Each test result provides a set of 3 images (left: reference image, middle: your test result, right: differences). In the list of tests the passed tests are collapsed and failed tests are expanded. If a test fails, you will see a situation like this:

<img width="995" alt="Screenshot_2021-03-26_at_14 13 54" src="https://user-images.githubusercontent.com/23662430/112663508-d363e800-8e50-11eb-9478-6d665d0ff086.png">
4. Fix any broken tests. Working on code for one aspect of the visualisations
4. Fix any broken tests. Working on code for one aspect of the visualizations
can sometimes result in minor changes elsewhere to spacing etc. In these cases
you just need to update the reference images as follows:
- download the mismatched images (link at top of Jupyter Notebook output)
8 changes: 4 additions & 4 deletions crates/README.md
@@ -29,11 +29,11 @@ This would be a particular problem for defining the circuit object and using it

## Developer notes

### Beware of initialisation order
### Beware of initialization order

The Qiskit C extension `qiskit._accelerate` needs to be initialised in a single go.
It is the lowest part of the Python package stack, so it cannot rely on importing other parts of the Python library at initialisation time (except for exceptions through PyO3's `import_exception!` mechanism).
This is because, unlike pure-Python modules, the initialisation of `_accelerate` cannot be done partially, and many components of Qiskit import their accelerators from `_accelerate`.
The Qiskit C extension `qiskit._accelerate` needs to be initialized in a single go.
It is the lowest part of the Python package stack, so it cannot rely on importing other parts of the Python library at initialization time (except for exceptions through PyO3's `import_exception!` mechanism).
This is because, unlike pure-Python modules, the initialization of `_accelerate` cannot be done partially, and many components of Qiskit import their accelerators from `_accelerate`.

In general, this should not be too onerous a requirement, but if you violate it, you might see Rust panics on import, and PyO3 should wrap that up into an exception.
You might be able to track down the Rust source of the import cycle by running the import with the environment variable `RUST_BACKTRACE=full`.
8 changes: 4 additions & 4 deletions crates/accelerate/src/pauli_exp_val.rs
@@ -32,7 +32,7 @@ pub fn fast_sum_with_simd<S: Simd>(simd: S, values: &[f64]) -> f64 {
sum + tail.iter().sum::<f64>()
}

/// Compute the pauli expectatation value of a statevector without x
/// Compute the pauli expectation value of a statevector without x
#[pyfunction]
#[pyo3(text_signature = "(data, num_qubits, z_mask, /)")]
pub fn expval_pauli_no_x(
@@ -63,7 +63,7 @@ pub fn expval_pauli_no_x(
}
}

/// Compute the pauli expectatation value of a statevector with x
/// Compute the pauli expectation value of a statevector with x
#[pyfunction]
#[pyo3(text_signature = "(data, num_qubits, z_mask, x_mask, phase, x_max, /)")]
pub fn expval_pauli_with_x(
@@ -121,7 +121,7 @@ pub fn expval_pauli_with_x(
}
}

/// Compute the pauli expectatation value of a density matrix without x
/// Compute the pauli expectation value of a density matrix without x
#[pyfunction]
#[pyo3(text_signature = "(data, num_qubits, z_mask, /)")]
pub fn density_expval_pauli_no_x(
@@ -153,7 +153,7 @@ pub fn density_expval_pauli_no_x(
}
}

/// Compute the pauli expectatation value of a density matrix with x
/// Compute the pauli expectation value of a density matrix with x
#[pyfunction]
#[pyo3(text_signature = "(data, num_qubits, z_mask, x_mask, phase, x_max, /)")]
pub fn density_expval_pauli_with_x(
2 changes: 1 addition & 1 deletion crates/accelerate/src/sabre/sabre_dag.rs
@@ -27,7 +27,7 @@ pub struct DAGNode {
}

/// A DAG representation of the logical circuit to be routed. This represents the same dataflow
/// dependences as the Python-space [DAGCircuit], but without any information about _what_ the
/// dependencies as the Python-space [DAGCircuit], but without any information about _what_ the
/// operations being performed are. Note that all the qubit references here are to "virtual"
/// qubits, that is, the qubits are those specified by the user. This DAG does not need to be
/// full-width on the hardware.
8 changes: 4 additions & 4 deletions crates/accelerate/src/sparse_pauli_op.rs
@@ -421,7 +421,7 @@ fn decompose_dense_inner(
) {
if num_qubits == 0 {
// It would be safe to `return` here, but if it's unreachable then LLVM is allowed to
// optimise out this branch entirely in release mode, which is good for a ~2% speedup.
// optimize out this branch entirely in release mode, which is good for a ~2% speedup.
unreachable!("should not call this with an empty operator")
}
// Base recursion case.
@@ -529,7 +529,7 @@ fn to_matrix_dense_inner(paulis: &MatrixCompressedPaulis, parallel: bool) -> Vec
out
};
let write_row = |(i_row, row): (usize, &mut [Complex64])| {
// Doing the initialisation here means that when we're in parallel contexts, we do the
// Doing the initialization here means that when we're in parallel contexts, we do the
// zeroing across the whole threadpool. This also seems to give a speed-up in serial
// contexts, but I don't understand that. ---Jake
row.fill(Complex64::new(0.0, 0.0));
@@ -721,7 +721,7 @@ macro_rules! impl_to_matrix_sparse {

// The parallel overhead from splitting a subtask is fairly high (allocating and
// potentially growing a couple of vecs), so we're trading off some of Rayon's ability
// to keep threads busy by subdivision with minimising overhead; we're setting the
// to keep threads busy by subdivision with minimizing overhead; we're setting the
// chunk size such that the iterator will have as many elements as there are threads.
let num_threads = rayon::current_num_threads();
let chunk_size = (side + num_threads - 1) / num_threads;
@@ -738,7 +738,7 @@
// Since we compressed the Paulis by summing equal elements, we're
// lower-bounded on the number of elements per row by this value, up to
// cancellations. This should be a reasonable trade-off between sometimes
// expandin the vector and overallocation.
// expanding the vector and overallocation.
let mut values =
Vec::<Complex64>::with_capacity(chunk_size * (num_ops + 1) / 2);
let mut indices = Vec::<$int_ty>::with_capacity(chunk_size * (num_ops + 1) / 2);
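As an aside on the hunk content itself: the chunk-size line in the `impl_to_matrix_sparse` hunk above, `(side + num_threads - 1) / num_threads`, is a standard ceiling division. A minimal standalone sketch (the `chunk_size` helper name is illustrative, not Qiskit's):

```rust
// Ceiling division: split `side` rows into at most `num_threads` chunks of
// (nearly) equal size, so the parallel iterator yields roughly one chunk per
// thread and the per-chunk allocation overhead is paid as few times as possible.
fn chunk_size(side: usize, num_threads: usize) -> usize {
    (side + num_threads - 1) / num_threads
}

fn main() {
    // 10 rows over 4 threads: chunks of 3, with the last chunk holding the remainder.
    assert_eq!(chunk_size(10, 4), 3);
    // An exact split needs no rounding up.
    assert_eq!(chunk_size(8, 4), 2);
    println!("ok");
}
```

Rounding up rather than down guarantees the chunks always cover all `side` rows, so no remainder rows are left without a chunk.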
4 changes: 2 additions & 2 deletions crates/accelerate/src/two_qubit_decompose.rs
@@ -293,7 +293,7 @@ fn __num_basis_gates(basis_b: f64, basis_fidelity: f64, unitary: MatRef<c64>) ->
c64::new(4.0 * c.cos(), 0.0),
c64::new(4.0, 0.0),
];
// The originial Python had `np.argmax`, which returns the lowest index in case two or more
// The original Python had `np.argmax`, which returns the lowest index in case two or more
// values have a common maximum value.
// `max_by` and `min_by` return the highest and lowest indices respectively, in case of ties.
// So to reproduce `np.argmax`, we use `min_by` and switch the order of the
@@ -587,7 +587,7 @@ impl TwoQubitWeylDecomposition {
// M2 is a symmetric complex matrix. We need to decompose it as M2 = P D P^T where
// P ∈ SO(4), D is diagonal with unit-magnitude elements.
//
// We can't use raw `eig` directly because it isn't guaranteed to give us real or othogonal
// We can't use raw `eig` directly because it isn't guaranteed to give us real or orthogonal
// eigenvectors. Instead, since `M2` is complex-symmetric,
// M2 = A + iB
// for real-symmetric `A` and `B`, and as
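The tie-breaking note in the `__num_basis_gates` hunk above can be illustrated in isolation. This sketch (the `argmax_np` helper is an assumed name, not Qiskit's actual code) reproduces NumPy's lowest-index `argmax` by using `min_by` with the comparison order switched, since Rust's `min_by` keeps the *first* of equal elements while `max_by` keeps the last:

```rust
// Reproduce `np.argmax` tie-breaking (lowest index wins) with Rust iterators.
fn argmax_np(values: &[f64]) -> Option<usize> {
    values
        .iter()
        .enumerate()
        // Flip the comparison (`b` vs `a`) and use `min_by`: the minimum under
        // the reversed ordering is the maximum value, and ties resolve to the
        // lowest index, matching NumPy.
        .min_by(|(_, a), (_, b)| b.partial_cmp(a).unwrap())
        .map(|(i, _)| i)
}

fn main() {
    // Two equal maxima at indices 1 and 3; NumPy's argmax would return 1.
    assert_eq!(argmax_np(&[0.0, 4.0, 2.0, 4.0]), Some(1));
    assert_eq!(argmax_np(&[]), None);
    println!("ok");
}
```

Using `max_by` directly would silently pick index 3 in the example above, which is the subtle behavioral difference the original comment warns about.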
6 changes: 3 additions & 3 deletions crates/qasm2/src/expr.rs
@@ -104,7 +104,7 @@ impl From<TokenType> for Op {
}
}

/// An atom of the operator-precendence expression parsing. This is a stripped-down version of the
/// An atom of the operator-precedence expression parsing. This is a stripped-down version of the
/// [Token] and [TokenType] used in the main parser. We can use a data enum here because we do not
/// need all the expressive flexibility in expecting and accepting many different token types as
/// we do in the main parser; it does not significantly harm legibility to simply do
@@ -233,7 +233,7 @@ fn binary_power(op: Op) -> (u8, u8) {
/// A subparser used to do the operator-precedence part of the parsing for individual parameter
/// expressions. The main parser creates a new instance of this struct for each expression it
/// expects, and the instance lives only as long as is required to parse that expression, because
/// it takes temporary resposibility for the [TokenStream] that backs the main parser.
/// it takes temporary responsibility for the [TokenStream] that backs the main parser.
pub struct ExprParser<'a> {
pub tokens: &'a mut Vec<TokenStream>,
pub context: &'a mut TokenContext,
@@ -504,7 +504,7 @@ impl<'a> ExprParser<'a> {
// This deliberately parses an _integer_ token as a float, since all OpenQASM 2.0
// integers can be interpreted as floats, and doing that allows us to gracefully handle
// cases where a huge float would overflow a `usize`. Never mind that in such a case,
// there's almost certainly precision loss from the floating-point representating
// there's almost certainly precision loss from the floating-point representing
// having insufficient mantissa digits to faithfully represent the angle mod 2pi;
// that's not our fault in the parser.
TokenType::Real | TokenType::Integer => Ok(Some(Atom::Const(token.real(self.context)))),
6 changes: 3 additions & 3 deletions crates/qasm2/src/lex.rs
@@ -21,7 +21,7 @@
//! keyword; the spec technically says that any real number is valid, but in reality that leads to
//! weirdness like `200.0e-2` being a valid version specifier. We do things with a custom
//! context-dependent match after seeing an `OPENQASM` token, to avoid clashes with the general
//! real-number tokenisation.
//! real-number tokenization.

use hashbrown::HashMap;
use pyo3::prelude::PyResult;
@@ -30,7 +30,7 @@ use std::path::Path;

use crate::error::{message_generic, Position, QASM2ParseError};

/// Tokenised version information data. This is more structured than the real number suggested by
/// Tokenized version information data. This is more structured than the real number suggested by
/// the specification.
#[derive(Clone, Debug)]
pub struct Version {
@@ -353,7 +353,7 @@ impl TokenStream {
line_buffer: Vec::with_capacity(80),
done: false,
// The first line is numbered "1", and the first column is "0". The counts are
// initialised like this so the first call to `next_byte` can easily detect that it
// initialized like this so the first call to `next_byte` can easily detect that it
// needs to extract the next line.
line: 0,
col: 0,
2 changes: 1 addition & 1 deletion crates/qasm2/src/parse.rs
@@ -1630,7 +1630,7 @@ impl State {

/// Update the parser state with the definition of a particular gate. This does not emit any
/// bytecode because not all gate definitions need something passing to Python. For example,
/// the Python parser initialises its state including the built-in gates `U` and `CX`, and
/// the Python parser initializes its state including the built-in gates `U` and `CX`, and
/// handles the `qelib1.inc` include specially as well.
fn define_gate(
&mut self,
4 changes: 2 additions & 2 deletions crates/qasm3/src/build.rs
@@ -69,7 +69,7 @@ impl BuilderState {
Err(QASM3ImporterError::new_err("cannot handle consts"))
} else if decl.initializer().is_some() {
Err(QASM3ImporterError::new_err(
"cannot handle initialised bits",
"cannot handle initialized bits",
))
} else {
self.add_clbit(py, name_id.clone())
@@ -80,7 +80,7 @@
Err(QASM3ImporterError::new_err("cannot handle consts"))
} else if decl.initializer().is_some() {
Err(QASM3ImporterError::new_err(
"cannot handle initialised registers",
"cannot handle initialized registers",
))
} else {
match dims {
2 changes: 1 addition & 1 deletion crates/qasm3/src/circuit.rs
@@ -281,7 +281,7 @@ impl PyCircuitModule {
/// Circuit construction context object to provide an easier Rust-space interface for us to
/// construct the Python :class:`.QuantumCircuit`. The idea of doing this from Rust space like
/// this is that we might steadily be able to move more and more of it into being native Rust as
/// the Rust-space APIs around the internal circuit data stabilise.
/// the Rust-space APIs around the internal circuit data stabilize.
pub struct PyCircuit(Py<PyAny>);

impl PyCircuit {
2 changes: 1 addition & 1 deletion crates/qasm3/src/expr.rs
@@ -71,7 +71,7 @@ fn eval_const_int(_py: Python, _ast_symbols: &SymbolTable, expr: &asg::TExpr) ->
match expr.expression() {
asg::Expr::Literal(asg::Literal::Int(lit)) => Ok(*lit.value() as isize),
expr => Err(QASM3ImporterError::new_err(format!(
"unhandled expression type for constant-integer evaluatation: {:?}",
"unhandled expression type for constant-integer evaluation: {:?}",
expr
))),
}
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -114,9 +114,9 @@
autosummary_generate = True
autosummary_generate_overwrite = False

# The pulse library contains some names that differ only in capitalisation, during the changeover
# The pulse library contains some names that differ only in capitalization, during the changeover
# surrounding SymbolPulse. Since these resolve to autosummary filenames that also differ only in
# capitalisation, this causes problems when the documentation is built on an OS/filesystem that is
# capitalization, this causes problems when the documentation is built on an OS/filesystem that is
# enforcing case-insensitive semantics. This setting defines some custom names to prevent the clash
# from happening.
autosummary_filename_map = {
2 changes: 1 addition & 1 deletion qiskit/_numpy_compat.py
@@ -10,7 +10,7 @@
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.

"""Compatiblity helpers for the Numpy 1.x to 2.0 transition."""
"""Compatibility helpers for the Numpy 1.x to 2.0 transition."""

import re
import typing
2 changes: 1 addition & 1 deletion qiskit/circuit/__init__.py
@@ -270,7 +270,7 @@
* :class:`ContinueLoopOp`, to move immediately to the next iteration of the containing loop
* :class:`ForLoopOp`, to loop over a fixed range of values
* :class:`IfElseOp`, to conditionally enter one of two subcircuits
* :class:`SwitchCaseOp`, to conditionally enter one of many subcicuits
* :class:`SwitchCaseOp`, to conditionally enter one of many subcircuits
* :class:`WhileLoopOp`, to repeat a subcircuit until a condition is falsified.
:ref:`Circuits can include classical expressions that are evaluated in real time
10 changes: 5 additions & 5 deletions qiskit/circuit/_classical_resource_map.py
@@ -31,7 +31,7 @@ class VariableMapper(expr.ExprVisitor[expr.Expr]):
call its :meth:`map_condition`, :meth:`map_target` or :meth:`map_expr` methods as appropriate,
which will return the new object that should be used.
If an ``add_register`` callable is given to the initialiser, the mapper will use it to attempt
If an ``add_register`` callable is given to the initializer, the mapper will use it to attempt
to add new aliasing registers to the outer circuit object, if there is not already a suitable
register for the mapping available in the circuit. If this parameter is not given, a
``ValueError`` will be raised instead. The given ``add_register`` callable may choose to raise
@@ -73,12 +73,12 @@ def _map_register(self, theirs: ClassicalRegister) -> ClassicalRegister:

def map_condition(self, condition, /, *, allow_reorder=False):
"""Map the given ``condition`` so that it only references variables in the destination
circuit (as given to this class on initialisation).
circuit (as given to this class on initialization).
If ``allow_reorder`` is ``True``, then when a legacy condition (the two-tuple form) is made
on a register that has a counterpart in the destination with all the same (mapped) bits but
in a different order, then that register will be used and the value suitably modified to
make the equality condition work. This is maintaining legacy (tested) behaviour of
make the equality condition work. This is maintaining legacy (tested) behavior of
:meth:`.DAGCircuit.compose`; nowhere else does this, and in general this would require *far*
more complex classical rewriting than Terra needs to worry about in the full expression era.
"""
@@ -91,7 +91,7 @@ def map_condition(self, condition, /, *, allow_reorder=False):
return (self.bit_map[target], value)
if not allow_reorder:
return (self._map_register(target), value)
# This is maintaining the legacy behaviour of `DAGCircuit.compose`. We don't attempt to
# This is maintaining the legacy behavior of `DAGCircuit.compose`. We don't attempt to
# speed-up this lookup with a cache, since that would just make the more standard cases more
# annoying to deal with.
mapped_bits_order = [self.bit_map[bit] for bit in target]
@@ -114,7 +114,7 @@

def map_target(self, target, /):
"""Map the real-time variables in a ``target`` of a :class:`.SwitchCaseOp` to the new
circuit, as defined in the ``circuit`` argument of the initialiser of this class."""
circuit, as defined in the ``circuit`` argument of the initializer of this class."""
if isinstance(target, Clbit):
return self.bit_map[target]
if isinstance(target, ClassicalRegister):