Commit

Develop (#76)
* Fixed CPU Reference Bug (#58)

* Fixed CPU Reference Bug

* Ran black

* Ran black

* Renyi entropy observable implemented and tested over small chains

* Rename EntanglementEntropy to RenyiEntropy, and run black

* Ee observable (#69)

This fixes a formatter error in CI. Remember to trim your trailing whitespace, guys!

* Improve test coverage, rename some base classes (Wavefunction, etc.) (#71)

* add docstring

* add test for observables

* fix flake8 check

* run black

* add license header

* add Dan's patch

* run black

* add tests for observables

* add some new tests

* run black

* renames

* correct name conventions in examples, tests

* renaming

* fixed all name changes

* fix docs

* fix ci

* fix ci

* fixing NaN in KL

* improve speed (#75)

* improve speed

* rm another einsum, no more einsum pls

* run black

* add tests for observables (#74)

* add tests for observables

* fix pauli z

* fix error

* Fix missing file error
emerali authored and Roger-luo committed Dec 17, 2018
1 parent b137a2d commit 6335289
Showing 50 changed files with 484 additions and 251 deletions.
2 changes: 1 addition & 1 deletion docs/callbacks.rst
@@ -3,7 +3,7 @@
Callbacks
=========

.. autoclass:: qucumber.callbacks.Callback
.. autoclass:: qucumber.callbacks.CallbackBase
:members:

.. autoclass:: qucumber.callbacks.LambdaCallback
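For context on the Callback → CallbackBase rename documented above, here is a minimal sketch of a user-defined callback. The hook name `on_epoch_end` and its `(nn_state, epoch)` signature are assumptions carried over from the pre-rename callback API; this diff only confirms the new class name.

```python
from qucumber.callbacks import CallbackBase  # renamed from Callback in this commit


class PrintEpoch(CallbackBase):
    """Toy callback that reports training progress."""

    def on_epoch_end(self, nn_state, epoch):
        # Hook name and signature assumed from the pre-rename Callback API.
        print(f"finished epoch {epoch}")
```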
4 changes: 2 additions & 2 deletions docs/index.rst
@@ -15,8 +15,8 @@ Welcome to QuCumber's documentation!
:caption: Tutorials

tutorial
_examples/Tutorial1_TrainPosRealWavefunction/tutorial_quantum_ising.ipynb
_examples//Tutorial2_TrainComplexWavefunction/tutorial_qubits.ipynb
_examples/Tutorial1_TrainPosRealWaveFunction/tutorial_quantum_ising.ipynb
_examples//Tutorial2_TrainComplexWaveFunction/tutorial_qubits.ipynb
_examples/Tutorial3_DataGeneration_CalculateObservables/tutorial_sampling_observables.ipynb
_examples/Tutorial4_MonitoringObservables/tutorial_monitor_observables.ipynb

12 changes: 6 additions & 6 deletions docs/quantum_states.rst
@@ -4,27 +4,27 @@ Quantum States
==============================


Positive Wavefunction
Positive WaveFunction
------------------------------

.. autoclass:: qucumber.nn_states.PositiveWavefunction
.. autoclass:: qucumber.nn_states.PositiveWaveFunction
:members:
:inherited-members:
:show-inheritance:

Complex Wavefunction
Complex WaveFunction
------------------------------

.. autoclass:: qucumber.nn_states.ComplexWavefunction
.. autoclass:: qucumber.nn_states.ComplexWaveFunction
:members:
:inherited-members:
:show-inheritance:

Abstract Wavefunction
Abstract WaveFunction
------------------------------

.. note:: |AbstractClassNote|

.. autoclass:: qucumber.nn_states.Wavefunction
.. autoclass:: qucumber.nn_states.WaveFunctionBase
:members:
:show-inheritance:
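Since downstream scripts import these classes by name, a quick before/after sketch of the rename documented above may help; only the class names shown in the diff are used, and the inheritance relation is implied by the :show-inheritance: directives.

```python
# Before this commit:
# from qucumber.nn_states import PositiveWavefunction, ComplexWavefunction, Wavefunction

# After this commit:
from qucumber.nn_states import PositiveWaveFunction, ComplexWaveFunction, WaveFunctionBase

# Both concrete network states derive from the abstract base documented above.
assert issubclass(PositiveWaveFunction, WaveFunctionBase)
assert issubclass(ComplexWaveFunction, WaveFunctionBase)
```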
4 changes: 2 additions & 2 deletions docs/tutorial.rst
@@ -5,11 +5,11 @@ Download the tutorials
Once you have installed QuCumber, we recommend going through our tutorial that
is divided into two parts.

#. Training a Wavefunction to reconstruct a positive-real wavefunction (i.e.
#. Training a wave function to reconstruct a positive-real wave function (i.e.
no phase) from a transverse-field Ising model (TFIM) and then generating new
data.

#. Training an Wavefunction to reconstruct a complex wavefunction (i.e. with a
#. Training a wave function to reconstruct a complex wave function (i.e. with a
phase) from a simple two qubit random state and then generating new data.

We have made interactive python notebooks that can be downloaded (along with
@@ -39,7 +39,7 @@
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from qucumber.nn_states import PositiveWavefunction\n",
"from qucumber.nn_states import PositiveWaveFunction\n",
"from qucumber.callbacks import MetricEvaluator\n",
"\n",
"import qucumber.utils.training_statistics as ts\n",
@@ -50,9 +50,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The Python class *PositiveWavefunction* contains generic properties of a RBM meant to reconstruct a positive-real wavefunction, the most notable one being the gradient function required for stochastic gradient descent.\n",
"The Python class *PositiveWaveFunction* contains generic properties of a RBM meant to reconstruct a positive-real wavefunction, the most notable one being the gradient function required for stochastic gradient descent.\n",
"\n",
"To instantiate a *PositiveWavefunction* object, one needs to specify the number of visible and hidden units in the RBM. The number of visible units, *num_visible*, is given by the size of the physical system, i.e. the number of spins or qubits (10 in this case), while the number of hidden units, *num_hidden*, can be varied to change the expressiveness of the neural network.\n",
"To instantiate a *PositiveWaveFunction* object, one needs to specify the number of visible and hidden units in the RBM. The number of visible units, *num_visible*, is given by the size of the physical system, i.e. the number of spins or qubits (10 in this case), while the number of hidden units, *num_hidden*, can be varied to change the expressiveness of the neural network.\n",
"\n",
"**Note:** The optimal *num_hidden* : *num_visible* ratio will depend on the system. For the TFIM, having this ratio be equal to 1 leads to good results with reasonable computational effort.\n",
"\n",
@@ -83,7 +83,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As previously mentioned, to instantiate a *PositiveWavefunction* object, one needs to specify the number of visible and hidden units in the RBM. These two quantities equal will be kept equal."
"As previously mentioned, to instantiate a *PositiveWaveFunction* object, one needs to specify the number of visible and hidden units in the RBM. These two quantities equal will be kept equal."
]
},
{
Expand All @@ -95,15 +95,15 @@
"nv = train_data.shape[-1]\n",
"nh = nv\n",
"\n",
"nn_state = PositiveWavefunction(num_visible=nv, num_hidden=nh)\n",
"# nn_state = PositiveWavefunction(num_visible=nv, num_hidden=nh, gpu = False)"
"nn_state = PositiveWaveFunction(num_visible=nv, num_hidden=nh)\n",
"# nn_state = PositiveWaveFunction(num_visible=nv, num_hidden=nh, gpu = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, qucumber will attempt to run on a GPU if one is available (if one is not available, qucumber will default to CPU). If one wishes to run qucumber on a CPU, add the flag \"gpu = False\" in the *PositiveWavefunction* object instantiation (i.e. uncomment the line above). \n",
"By default, qucumber will attempt to run on a GPU if one is available (if one is not available, qucumber will default to CPU). If one wishes to run qucumber on a CPU, add the flag \"gpu = False\" in the *PositiveWaveFunction* object instantiation (i.e. uncomment the line above). \n",
"\n",
"Now the hyperparameters of the training process can be specified.\n",
"\n",
@@ -149,7 +149,7 @@
"\n",
"Although the fidelity and KL divergence are excellent training evaluators, they are not practical to calculate in most cases; the user may not have access to the target wavefunction of the system, nor may generating the hilbert space of the system be computationally feasible. However, evaluating the training in real time is extremely convenient. \n",
"\n",
"Any custom function that the user would like to use to evaluate the training can be given to the *MetricEvaluator*, thus avoiding having to calculate fidelity and/or KL divergence. Any custom function given to *MetricEvaluator* must take the neural-network state (in this case, the *PositiveWavefunction* object) and keyword arguments. As an example, the function to be passed to the *MetricEvaluator* will be the fifth coefficient of the reconstructed wavefunction multiplied by a parameter, *A*."
"Any custom function that the user would like to use to evaluate the training can be given to the *MetricEvaluator*, thus avoiding having to calculate fidelity and/or KL divergence. Any custom function given to *MetricEvaluator* must take the neural-network state (in this case, the *PositiveWaveFunction* object) and keyword arguments. As an example, the function to be passed to the *MetricEvaluator* will be the fifth coefficient of the reconstructed wavefunction multiplied by a parameter, *A*."
]
},
{
@@ -190,7 +190,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now the training can begin. The *PositiveWavefunction* object has a property called *fit* which takes care of this. *MetricEvaluator* must be passed to the *fit* function in a list (*callbacks*)."
"Now the training can begin. The *PositiveWaveFunction* object has a property called *fit* which takes care of this. *MetricEvaluator* must be passed to the *fit* function in a list (*callbacks*)."
]
},
{
@@ -406,7 +406,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
}
},
"nbformat": 4,
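A condensed, hedged sketch of the workflow the notebook above walks through: load the TFIM data, build a PositiveWaveFunction, and train it with a MetricEvaluator callback. The `data.load_data` helper, the file name, the MetricEvaluator arguments, the `rbm_am.weights` attribute, and the `fit` keyword names are assumptions for illustration; only the class name, the custom-metric signature, and the callbacks-list usage come from the notebook text.

```python
import qucumber.utils.data as data
from qucumber.callbacks import MetricEvaluator
from qucumber.nn_states import PositiveWaveFunction

# Training samples for the 10-site TFIM (file name assumed).
train_data = data.load_data("tfim1d_data.txt")[0]

nv = train_data.shape[-1]  # one visible unit per spin
nn_state = PositiveWaveFunction(num_visible=nv, num_hidden=nv, gpu=False)


# Any custom metric just needs to accept (nn_state, **kwargs); this one tracks the
# mean weight magnitude of the amplitude RBM (attribute name assumed).
def mean_weight(nn_state, **kwargs):
    return nn_state.rbm_am.weights.abs().mean().item()


callbacks = [MetricEvaluator(10, {"mean_weight": mean_weight}, verbose=True)]

nn_state.fit(
    train_data,
    epochs=500, pos_batch_size=100, neg_batch_size=100, k=10, lr=0.01,
    callbacks=callbacks,
)
```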
@@ -43,7 +43,7 @@
"import torch\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from qucumber.nn_states import ComplexWavefunction\n",
"from qucumber.nn_states import ComplexWaveFunction\n",
"\n",
"from qucumber.callbacks import MetricEvaluator\n",
"\n",
@@ -58,13 +58,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The Python class *ComplexWavefunction* contains generic properties of a RBM meant to reconstruct a complex wavefunction, the most notable one being the gradient function required for stochastic gradient descent.\n",
"The Python class *ComplexWaveFunction* contains generic properties of a RBM meant to reconstruct a complex wavefunction, the most notable one being the gradient function required for stochastic gradient descent.\n",
"\n",
"To instantiate a *ComplexWavefunction* object, one needs to specify the number of visible and hidden units in the RBM. The number of visible units, *num_visible*, is given by the size of the physical system, i.e. the number of spins or qubits (2 in this case), while the number of hidden units, *num_hidden*, can be varied to change the expressiveness of the neural network.\n",
"To instantiate a *ComplexWaveFunction* object, one needs to specify the number of visible and hidden units in the RBM. The number of visible units, *num_visible*, is given by the size of the physical system, i.e. the number of spins or qubits (2 in this case), while the number of hidden units, *num_hidden*, can be varied to change the expressiveness of the neural network.\n",
"\n",
"**Note:** The optimal *num_hidden* : *num_visible* ratio will depend on the system. For the two-qubit wavefunction described above, good results are yielded when this ratio is 1.\n",
"\n",
"On top of needing the number of visible and hidden units, a *ComplexWavefunction* object requires the user to input a dictionary containing the unitary operators (2x2) that will be used to rotate the qubits in and out of the computational basis, Z, during the training process. The *unitaries* utility will take care of creating this dictionary.\n",
"On top of needing the number of visible and hidden units, a *ComplexWaveFunction* object requires the user to input a dictionary containing the unitary operators (2x2) that will be used to rotate the qubits in and out of the computational basis, Z, during the training process. The *unitaries* utility will take care of creating this dictionary.\n",
"\n",
"The *MetricEvaluator* class and *training_statistics* utility are built-in amenities that will allow the user to evaluate the training in real time. \n",
"\n",
@@ -97,7 +97,7 @@
"source": [
"The file *qubits_bases.txt* contains every unique basis in the *qubits_train_bases.txt* file. Calculation of the full KL divergence in every basis requires the user to specify each unique basis.\n",
"\n",
"As previouosly mentioned, a *ComplexWavefunction* object requires a dictionary that contains the unitariy operators that will be used to rotate the qubits in and out of the computational basis, Z, during the training process. In the case of the provided dataset, the unitaries required are the well-known $H$, and $K$ gates. The dictionary needed can be created with the following command."
"As previouosly mentioned, a *ComplexWaveFunction* object requires a dictionary that contains the unitariy operators that will be used to rotate the qubits in and out of the computational basis, Z, during the training process. In the case of the provided dataset, the unitaries required are the well-known $H$, and $K$ gates. The dictionary needed can be created with the following command."
]
},
{
@@ -130,17 +130,17 @@
"nv = train_samples.shape[-1]\n",
"nh = nv\n",
"\n",
"nn_state = ComplexWavefunction(\n",
"nn_state = ComplexWaveFunction(\n",
" num_visible=nv, num_hidden=nh, unitary_dict=unitary_dict, gpu=False\n",
")\n",
"# nn_state = ComplexWavefunction(num_visible=nv, num_hidden=nh, unitary_dict=unitary_dict)"
"# nn_state = ComplexWaveFunction(num_visible=nv, num_hidden=nh, unitary_dict=unitary_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, qucumber will attempt to run on a GPU if one is available (if one is not available, qucumber will default to CPU). If one wishes to run qucumber on a CPU, add the flag \"gpu = False\" in the *ComplexWavefunction* object instantiation. Uncomment the line above to run this tutorial on a GPU.\n",
"By default, qucumber will attempt to run on a GPU if one is available (if one is not available, qucumber will default to CPU). If one wishes to run qucumber on a CPU, add the flag \"gpu = False\" in the *ComplexWaveFunction* object instantiation. Uncomment the line above to run this tutorial on a GPU.\n",
"\n",
"Now the hyperparameters of the training process can be specified. \n",
"\n",
@@ -186,7 +186,7 @@
"\n",
"Although the fidelity and KL divergence are excellent training evaluators, they are not practical to calculate in most cases; the user may not have access to the target wavefunction of the system, nor may generating the hilbert space of the system be computationally feasible. However, evaluating the training in real time is extremely convenient. \n",
"\n",
"Any custom function that the user would like to use to evaluate the training can be given to the *MetricEvaluator*, thus avoiding having to calculate fidelity and/or KL divergence. As an example, functions that calculate the the norm of each of the reconstructed wavefunction's coefficients are presented. Any custom function given to *MetricEvaluator* must take the neural-network state (in this case, the *ComplexWavefunction* object) and keyword arguments. Although the given example requires the hilbert space to be computed, the scope of the *MetricEvaluator*'s ability to be able to handle any function should still be evident."
"Any custom function that the user would like to use to evaluate the training can be given to the *MetricEvaluator*, thus avoiding having to calculate fidelity and/or KL divergence. As an example, functions that calculate the the norm of each of the reconstructed wavefunction's coefficients are presented. Any custom function given to *MetricEvaluator* must take the neural-network state (in this case, the *ComplexWaveFunction* object) and keyword arguments. Although the given example requires the hilbert space to be computed, the scope of the *MetricEvaluator*'s ability to be able to handle any function should still be evident."
]
},
{
@@ -278,7 +278,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now the training can begin. The *ComplexWavefunction* object has a property called *fit* which takes care of this."
"Now the training can begin. The *ComplexWaveFunction* object has a property called *fit* which takes care of this."
]
},
{
@@ -446,7 +446,7 @@
"source": [
"It should be noted that one could have just ran *nn_state.fit(train_samples)* and just used the default hyperparameters and no training evaluators.\n",
"\n",
"At the end of the training process, the network parameters (the weights, visible biases and hidden biases) are stored in the *ComplexWavefunction* object. One can save them to a pickle file, which will be called *saved_params.pt*, with the following command."
"At the end of the training process, the network parameters (the weights, visible biases and hidden biases) are stored in the *ComplexWaveFunction* object. One can save them to a pickle file, which will be called *saved_params.pt*, with the following command."
]
},
{
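A short sketch of the complex-wavefunction setup described above. The `data.load_data` and `unitaries.create_dict` helper names, the file names, and the `fit` keyword names (notably `input_bases`) are assumptions for illustration; the ComplexWaveFunction constructor call mirrors the one shown in the diff.

```python
import qucumber.utils.data as data
import qucumber.utils.unitaries as unitaries
from qucumber.nn_states import ComplexWaveFunction

# Samples, target state, and the measurement basis of each sample (file names assumed).
train_samples, true_psi, train_bases, bases = data.load_data(
    "qubits_train.txt", "qubits_psi.txt", "qubits_train_bases.txt", "qubits_bases.txt"
)

# Dictionary of 2x2 rotation unitaries (the H and K gates); helper name assumed.
unitary_dict = unitaries.create_dict()

nv = train_samples.shape[-1]
nn_state = ComplexWaveFunction(
    num_visible=nv, num_hidden=nv, unitary_dict=unitary_dict, gpu=False
)

# The basis of every training sample is needed so gradients can be rotated back
# into the computational basis (keyword name assumed).
nn_state.fit(train_samples, input_bases=train_bases, epochs=100, k=10, lr=0.05)
```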
@@ -55,7 +55,7 @@ def apply(self, nn_state, samples):
:type samples: torch.Tensor
"""
samples = to_pm1(samples)
log_psis = -nn_state.rbm_am.effective_energy(to_01(samples)).div(2.)
log_psis = -nn_state.rbm_am.effective_energy(to_01(samples)).div(2.0)

shape = log_psis.shape + (samples.shape[-1],)
log_flipped_psis = torch.zeros(
@@ -66,7 +66,7 @@ def apply(self, nn_state, samples):
self._flip_spin(i, samples) # flip the spin at site i
log_flipped_psis[:, i] = -nn_state.rbm_am.effective_energy(
to_01(samples)
).div(2.)
).div(2.0)
self._flip_spin(i, samples) # flip it back

log_flipped_psis = torch.logsumexp(log_flipped_psis, 1, keepdim=True).squeeze()
@@ -77,7 +77,7 @@ def apply(self, nn_state, samples):
# convert to ratio of probabilities
transverse_field_terms = log_flipped_psis.sub(log_psis).exp()

energy = transverse_field_terms.mul(self.h).add(interaction_terms).mul(-1.)
energy = transverse_field_terms.mul(self.h).add(interaction_terms).mul(-1.0)

return energy.div(samples.shape[-1])

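Restating what the `apply` method above computes (the interaction term is evaluated in a collapsed part of the diff, but for the TFIM it is the nearest-neighbour $\sigma^z$ sum): writing $\log\psi(\sigma) = -E_{\text{eff}}(\sigma)/2$ for the RBM amplitude, the per-site local energy estimated from a sample $\sigma$ is

$$
E_{\mathrm{loc}}(\sigma) = -\frac{1}{N}\left[\sum_{i}\sigma^{z}_{i}\sigma^{z}_{i+1} \;+\; h\sum_{i}\frac{\psi(\sigma^{(i)})}{\psi(\sigma)}\right],
$$

where $\sigma^{(i)}$ denotes $\sigma$ with the spin at site $i$ flipped. The `torch.logsumexp` call accumulates $\log\sum_i\psi(\sigma^{(i)})$ in log space, and the subsequent `sub(log_psis).exp()` turns it into the amplitude ratio, which keeps the estimate numerically stable.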
@@ -7,7 +7,7 @@
"# Sampling and calculating observables\n",
"## Generate new samples\n",
"\n",
"Firstly, to generate meaningful data, an RBM needs to be trained. Please refer to the tutorials 1 and 2 on training an RBM if how to train an RBM using qucumber is unclear. An RBM with a positive-real wavefunction describing a transverse-field Ising model (TFIM) with 10 sites has already been trained in the first tutorial, with the parameters of the machine saved here as *saved_params.pt*. The *autoload* function can be employed here to instantiate the corresponding *PositiveWavefunction* object from the saved RBM parameters."
"Firstly, to generate meaningful data, an RBM needs to be trained. Please refer to the tutorials 1 and 2 on training an RBM if how to train an RBM using qucumber is unclear. An RBM with a positive-real wavefunction describing a transverse-field Ising model (TFIM) with 10 sites has already been trained in the first tutorial, with the parameters of the machine saved here as *saved_params.pt*. The *autoload* function can be employed here to instantiate the corresponding *PositiveWaveFunction* object from the saved RBM parameters."
]
},
{
@@ -19,21 +19,21 @@
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from qucumber.nn_states import PositiveWavefunction\n",
"from qucumber.nn_states import PositiveWaveFunction\n",
"\n",
"from qucumber.observables import Observable\n",
"\n",
"import quantum_ising_chain\n",
"from quantum_ising_chain import TFIMChainEnergy\n",
"\n",
"nn_state = PositiveWavefunction.autoload(\"saved_params.pt\")"
"nn_state = PositiveWaveFunction.autoload(\"saved_params.pt\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A *PositiveWavefunction* object has a property called *sample* that takes in the following arguments.\n",
"A *PositiveWaveFunction* object has a property called *sample* that takes in the following arguments.\n",
"\n",
"1. **k**: the number of Gibbs steps to perform to generate the new samples\n",
"2. **num_samples**: the number of new data points to be generated"
@@ -116,7 +116,7 @@
"source": [
"The exact value for the magnetization is 0.5610. \n",
"\n",
"The magnetization and the newly-generated samples can also be saved to a pickle file along with the RBM parameters in the *PositiveWavefunction* object."
"The magnetization and the newly-generated samples can also be saved to a pickle file along with the RBM parameters in the *PositiveWaveFunction* object."
]
},
{
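A compact sketch of the sampling workflow the notebook above describes: reload the trained RBM and draw new configurations. The `sample` argument names `k` and `num_samples` come from the notebook text; the keyword-style call and the hand-rolled magnetization estimate are assumptions for illustration.

```python
from qucumber.nn_states import PositiveWaveFunction

# Reload the RBM trained in Tutorial 1 from its pickled parameters.
nn_state = PositiveWaveFunction.autoload("saved_params.pt")

# k Gibbs steps per chain, num_samples independent configurations.
new_samples = nn_state.sample(k=100, num_samples=10000)

# Samples are 0/1 occupation values; map to +/-1 spins and average |m| over samples.
sigma_z = 1.0 - 2.0 * new_samples.double()
magnetization = sigma_z.mean(dim=1).abs().mean().item()
print(magnetization)  # should land near the exact value 0.5610 quoted above
```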
