
Error while running the Classical Pipeline example given in docs/examples #7

Closed
srinjoyganguly opened this issue Dec 9, 2021 · 8 comments

Comments

@srinjoyganguly

Hi @dimkart, I hope you are doing well.

I am trying to run the code given here on my Google Colab account: https://github.com/CQCL/lambeq/blob/main/docs/examples/classical_pipeline.ipynb

I am installing lambeq directly on Colab, and it is picking up the latest version of DisCoPy.

But I am continuously getting the error below. I have pasted the full stack trace here:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-11-84634b74856a> in <module>()
     39 dev_cost_fn, dev_costs, dev_accs = make_cost_fn(dev_pred_fn, dev_labels)
     40 
---> 41 result = train(train_cost_fn, x0, niter=20, callback=dev_cost_fn, optimizer_fn=torch.optim.AdamW, lr=0.1)

10 frames
<ipython-input-11-84634b74856a> in train(func, x0, niter, callback, optimizer_fn, lr)
      3     optimizer = optimizer_fn(x, lr=lr)
      4     for _ in range(niter):
----> 5         loss = func(x)
      6 
      7         optimizer.zero_grad()

<ipython-input-11-84634b74856a> in cost_fn(params, **kwargs)
     16 def make_cost_fn(pred_fn, labels):
     17     def cost_fn(params, **kwargs):
---> 18         predictions = pred_fn(params)
     19 
     20         logits = predictions[:, 1] - predictions[:, 0]

<ipython-input-10-dbb8534e3157> in predict(params)
      1 def make_pred_fn(circuits):
      2     def predict(params):
----> 3         return torch.stack([c.lambdify(*parameters)(*params).eval(contractor=tn.contractors.auto).array for c in circuits])
      4     return predict
      5 

<ipython-input-10-dbb8534e3157> in <listcomp>(.0)
      1 def make_pred_fn(circuits):
      2     def predict(params):
----> 3         return torch.stack([c.lambdify(*parameters)(*params).eval(contractor=tn.contractors.auto).array for c in circuits])
      4     return predict
      5 

/usr/local/lib/python3.7/dist-packages/discopy/tensor.py in eval(self, contractor)
    448         if contractor is None:
    449             return Functor(ob=lambda x: x, ar=lambda f: f.array)(self)
--> 450         array = contractor(*self.to_tn()).tensor
    451         return Tensor(self.dom, self.cod, array)
    452 

/usr/local/lib/python3.7/dist-packages/tensornetwork/contractors/opt_einsum_paths/path_contractors.py in auto(nodes, output_edge_order, memory_limit, ignore_edge_order)
    262         output_edge_order=output_edge_order,
    263         nbranch=1,
--> 264         ignore_edge_order=ignore_edge_order)
    265   return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)
    266 

/usr/local/lib/python3.7/dist-packages/tensornetwork/contractors/opt_einsum_paths/path_contractors.py in branch(nodes, output_edge_order, memory_limit, nbranch, ignore_edge_order)
    160   alg = functools.partial(
    161       opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)
--> 162   return base(nodes, alg, output_edge_order, ignore_edge_order)
    163 
    164 

/usr/local/lib/python3.7/dist-packages/tensornetwork/contractors/opt_einsum_paths/path_contractors.py in base(nodes, algorithm, output_edge_order, ignore_edge_order)
     86   path, nodes = utils.get_path(nodes_set, algorithm)
     87   for a, b in path:
---> 88     new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)
     89     nodes.append(new_node)
     90     nodes = utils.multi_remove(nodes, [a, b])

/usr/local/lib/python3.7/dist-packages/tensornetwork/network_components.py in contract_between(node1, node2, name, allow_outer_product, output_edge_order, axis_names)
   2083     axes1 = [axes1[i] for i in ind_sort]
   2084     axes2 = [axes2[i] for i in ind_sort]
-> 2085     new_tensor = backend.tensordot(node1.tensor, node2.tensor, [axes1, axes2])
   2086     new_node = Node(tensor=new_tensor, name=name, backend=backend)
   2087     # node1 and node2 get new edges in _remove_edges

/usr/local/lib/python3.7/dist-packages/tensornetwork/backends/pytorch/pytorch_backend.py in tensordot(self, a, b, axes)
     44   def tensordot(self, a: Tensor, b: Tensor,
     45                 axes: Union[int, Sequence[Sequence[int]]]) -> Tensor:
---> 46     return torchlib.tensordot(a, b, dims=axes)
     47 
     48   def reshape(self, tensor: Tensor, shape: Tensor) -> Tensor:

/usr/local/lib/python3.7/dist-packages/torch/functional.py in tensordot(a, b, dims, out)
   1032 
   1033     if out is None:
-> 1034         return _VF.tensordot(a, b, dims_a, dims_b)  # type: ignore[attr-defined]
   1035     else:
   1036         return _VF.tensordot(a, b, dims_a, dims_b, out=out)  # type: ignore[attr-defined]

RuntimeError: expected scalar type Float but found Double

I was able to successfully carry out experiments using the Quantum Pipeline code on Google Colab and did not face any issues, but for this one I keep getting the error above. I have tried to fix it by converting variables or some function outputs with float(), but I was unable to resolve it.

Can you please help me fix this issue?

Thank you so much!
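For context, the RuntimeError comes down to a float32 ("Float") / float64 ("Double") mismatch between the tensors reaching torch.tensordot during contraction. A minimal sketch that reproduces the same kind of error outside the pipeline (the shapes and values here are illustrative only, not taken from the notebook):

import torch

# One operand is float64 ("Double"), e.g. coming from a NumPy-backed array,
# while the other uses torch's default float32 ("Float").
a = torch.ones(2, 2, dtype=torch.float64)
b = torch.ones(2, 2)  # default dtype is float32
torch.tensordot(a, b, dims=1)  # raises a RuntimeError about mismatched Float/Double scalar types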

@ianyfan
Collaborator

ianyfan commented Dec 9, 2021

Try setting torch.set_default_tensor_type(torch.FloatTensor) or torch.set_default_tensor_type(torch.cuda.FloatTensor) at the start of your script?
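A quick sketch of what this suggestion does (assuming nothing later in the script overrides the default): tensors created afterwards without an explicit dtype pick up the default type.

import torch

# After setting the default tensor type, new tensors created without an explicit
# dtype are float32 (and are placed on the GPU if the CUDA variant is used).
torch.set_default_tensor_type(torch.FloatTensor)  # or torch.cuda.FloatTensor
x = torch.zeros(3)
print(x.dtype)  # torch.float32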

@dimkart
Contributor

dimkart commented Dec 9, 2021

Hello @srinjoyganguly and thanks for reporting this. @ianyfan has already responded with what we think might be the problem; let us know if this works for you.

@srinjoyganguly
Author

Hi @ianyfan, thank you so much for the help! I tried adding the lines you gave at the start of my script, but the same error persists. I realized that the line torch.set_default_tensor_type(torch.cuda.FloatTensor) was already in my code when I got the error, and I have also tried torch.set_default_tensor_type(torch.FloatTensor), but the error is still the same. Is there any other way?

@dimkart you are very welcome! Thanks a lot!

@dimkart
Contributor

dimkart commented Dec 10, 2021

Hi @srinjoyganguly, can you send us your notebook/code at lambeq-support@cambridgequantum.com (or post it here), so we can have a better look? It's really difficult to say what is going on from the error message alone.

@ianyfan
Collaborator

ianyfan commented Dec 10, 2021

Changing the line to

torch.set_default_tensor_type(torch.cuda.DoubleTensor)

should allow the notebook to run, though we are working on a proper fix in the underlying code.
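A sketch of how the workaround might sit at the top of the notebook (assuming it is placed in the first cell, before any tensors or parameters are created):

import torch

# Workaround sketch: make torch create float64 ("Double") tensors by default, so the
# trainable parameters match the float64 tensors appearing elsewhere in the contraction
# (the presumed source of the Float/Double mismatch).
torch.set_default_tensor_type(torch.cuda.DoubleTensor)  # or torch.DoubleTensor without a GPU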

@srinjoyganguly
Author

Indeed @ianyfan, the code works with DoubleTensor! Thank you so much for the help! I look forward to the proper fix in the underlying code. Thanks a lot @ianyfan and @dimkart!

@dimkart
Contributor

dimkart commented Dec 10, 2021

Thanks for pointing this out @srinjoyganguly. We have updated the notebook to use a double tensor until we fix this in DisCoPy. This issue will now be closed.

@dimkart dimkart closed this as completed Dec 10, 2021
@srinjoyganguly
Author

You are very welcome @dimkart! I am glad, thanks so much!
