
Working version of example_encode on Mac M1 #4

Closed
erlebach opened this issue Oct 31, 2022 · 2 comments

@erlebach

Hi,

Your code has several issues that prevent it from running on a Mac M1 (Big Sur, Julia 1.8.2). Below is a version that runs correctly.
Issues:

  1. PyPlot cannot be installed. Use Plots.heatmap to plot an array.
  2. I added "../src" to LOAD_PATH to make the example work.
  3. In case you have a dark background, change the line and text color of YaoPlots.plot:
CircuitStyles.textcolor[]="yellow"
CircuitStyles.linecolor[]="yellow"
Cheers,
Gordon

Working code

using Flux
using Yao, Zygote, YaoPlots #, CuYao
using Yao.EasyBuild
using LinearAlgebra, Statistics, Random, StatsBase, ArgParse, Distributions
# Issue with Mac M1. I might need the M1 version of Anaconda
#using PyPlot
using Printf, BenchmarkTools, MAT, Plots
using Flux: batch

using YaoPlots: plot  # For some reason plot is not loaded. 

push!(LOAD_PATH, "../src")   
using Quantum_Neural_Network_Classifiers: ent_cx, params_layer, acc_loss_evaluation

# import the FashionMNIST data
vars = matread("../dataset/FashionMNIST_1_2_wk.mat")
x_train = vars["x_train"]
y_train = vars["y_train"]
x_test = vars["x_test"]
y_test = vars["y_test"]

num_train = 1000
num_test = 200
x_train = x_train[:,1:num_train]
y_train = y_train[1:num_train,:]
x_test = x_test[:,1:num_test]
y_test = y_test[1:num_test,:];

i = 13
a = real(vars["x_train"][1:256,i])
c1 = reshape(a,(16,16))
i = 6
a = real(vars["x_train"][1:256,i])
c2 = reshape(a,(16,16))
# matshow(hcat(c1,c2)) # T-shirt and ankle boot
heatmap(hcat(c1,c2))

num_qubit = 8    # number of qubits
depth = 10       # number of parameterized composite_blocks
batch_size = 100 # batch size
lr = 0.01        # learning rate
niters = 100;     # number of iterations
optim = Flux.ADAM(lr); # Adam optimizer

# index of qubit that will be measured
pos_ = 8;       
op0 = put(num_qubit, pos_=>0.5*(I2+Z))
op1 = put(num_qubit, pos_=>0.5*(I2-Z));
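# Note: op0 = 0.5*(I2+Z) and op1 = 0.5*(I2-Z) are the projectors |0><0| and |1><1|
# on the measured qubit; their expectation values are the two readout probabilities.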

# if GPU resources are available, you can make modifications including 
# replacing  "|> cpu" by "|> cu"
x_train_yao = zero_state(num_qubit,nbatch=num_train)
x_train_yao.state = x_train;
cu_x_train_yao = copy(x_train_yao) |> cpu;

x_test_yao = zero_state(num_qubit,nbatch=num_test)
x_test_yao.state  = x_test;
cu_x_test_yao = copy(x_test_yao) |> cpu;
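
# A minimal sketch (untested here) of the GPU variant described above, using CuYao:
#   using CuYao
#   cu_x_train_yao = copy(x_train_yao) |> cu
#   cu_x_test_yao  = copy(x_test_yao)  |> cu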

# define the QNN circuit, some functions have been defined before
ent_layer(nbit::Int64) = ent_cx(nbit)
parameterized_layer(nbit::Int64) = params_layer(nbit)
composite_block(nbit::Int64) = chain(nbit, parameterized_layer(nbit::Int64), ent_layer(nbit::Int64))
circuit = chain(composite_block(num_qubit) for _ in 1:depth)
# assign random initial parameters to the circuit

CircuitStyles.textcolor[]="yellow"
CircuitStyles.linecolor[]="yellow"

dispatch!(circuit, :random)
params = parameters(circuit);
YaoPlots.plot(circuit) # plotting a deep circuit can take a long time

# record the training history
loss_train_history = Float64[]
acc_train_history = Float64[]
loss_test_history = Float64[]
acc_test_history = Float64[];

for k in 0:niters
    # calculate the accuracy & loss for the training & test set
    train_acc, train_loss = acc_loss_evaluation(circuit,cu_x_train_yao,y_train,num_train, pos_)
    test_acc, test_loss = acc_loss_evaluation(circuit,cu_x_test_yao,y_test,num_test, pos_)
    push!(loss_train_history, train_loss)
    push!(loss_test_history, test_loss)
    push!(acc_train_history, train_acc)
    push!(acc_test_history, test_acc)
    if k % 5 == 0
        @printf("\nStep=%d, loss=%.3f, acc=%.3f, test_loss=%.3f,test_acc=%.3f\n",k,train_loss,train_acc,test_loss,test_acc)
    end
    
    # at each training epoch, randomly choose a batch of samples from the training set
    batch_index = randperm(size(x_train)[2])[1:batch_size]
    x_batch = x_train[:,batch_index]
    y_batch = y_train[batch_index,:];
    # prepare these samples into quantum states
    x_batch_1 = copy(x_batch)
    x_batch_yao = zero_state(num_qubit,nbatch=batch_size)
    x_batch_yao.state = x_batch_1;
    cu_x_batch_yao = copy(x_batch_yao) |> cpu;
    batc = [zero_state(num_qubit) for i in 1:batch_size]
    for i in 1:batch_size
        batc[i].state = x_batch_1[:,i:i]
    end
    
    # for all samples in the batch, repeatedly measure the qubit at position pos_
    # in the computational basis
    q_ = zeros(batch_size,2);
    # run the circuit on the current batch (not cu_x_train_yao, so that q_ matches batc/Arr below)
    res = copy(cu_x_batch_yao) |> circuit
    for i=1:batch_size
        rdm = density_matrix(viewbatch(res, i), (pos_,))
        q_[i,:] = Yao.probs(rdm)
    end
    
    # calculate the gradients w.r.t. the cross-entropy loss function
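    # For one-hot labels y and predicted probabilities q, the cross-entropy loss
    # L = -sum_j y_j*log(q_j) has gradient dL/dtheta = -sum_j y_j*(1/q_j)*dq_j/dtheta.
    # Arr holds dq_1/dtheta = d<op0>/dtheta per sample; since q_2 = 1 - q_1,
    # dq_2/dtheta = -Arr, which is why C = [Arr, -Arr] below.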
    Arr = Array{Float64}(zeros(batch_size,nparameters(circuit)))
    for i in 1:batch_size
        Arr[i,:] = expect'(op0, copy(batc[i])=>circuit)[2]
    end
    C = [Arr, -Arr]
    grads = collect(mean([-sum([y_batch[i,j]*((1 ./ q_)[i,j])*batch(C)[i,:,j] for j in 1:2]) for i=1:batch_size]))
    
    # update the parameters
    updates = Flux.Optimise.update!(optim, params, grads);
    dispatch!(circuit, updates) 
end

Plots.plot(acc_train_history,label="accuracy_train",legend = :bottomright)
Plots.plot!(acc_test_history,label="accuracy_test",legend = :bottomright)
# Plots.savefig("acc.pdf")

Plots.plot(loss_train_history,label="loss_train")
Plots.plot!(loss_test_history,label="loss_test")
# Plots.savefig("loss.pdf")

res = copy(cu_x_train_yao) |> circuit
q_ = zeros(num_train,2);
for i=1:num_train
    q_[i,:] = Yao.probs(density_matrix(viewbatch(res, i), (pos_,)))
end
class1x = Int64[]
class2x = Int64[]
class1y = Float64[]
class2y = Float64[]
for i in 1:num_train
    if y_train[i,1] == 1.0
        push!(class1x,i)
        push!(class1y,q_[i,1])
    else
        push!(class2x,i)
        push!(class2y,q_[i,1])
    end
end
# predicted value (expectation value)
# lower loss leads to larger separation between the two classes of data points
Plots.plot(class1x, class1y, seriestype = :scatter)
Plots.plot!(class2x, class2y, seriestype = :scatter)
@LWKJJONAK (Owner)

Hi Prof. Erlebacher:

Thank you very much for your comment! Here is my point-by-point reply:

PyPlot cannot be installed. Use Plots.heatmap to plot an array:
Reply: Yes, the Python package Matplotlib needs to be installed before installing PyPlot; I forgot to mention this and will point it out in the revised Readme.md.
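
For reference, here is a minimal sketch of one way to set this up (it assumes PyCall's standard ENV["PYTHON"] mechanism; with PYTHON left empty, PyCall uses its own Conda.jl Python, and PyPlot then installs Matplotlib automatically):

# point PyCall at a Conda-managed Python so Matplotlib can be auto-installed
ENV["PYTHON"] = ""
using Pkg
Pkg.build("PyCall")   # rebuild PyCall against the Conda Python
Pkg.add("PyPlot")
using PyPlot          # installs Matplotlib via Conda.jl if it is missing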

I added "../src" to LOAD_PATH to make the example work:
Reply: I tried the code on three devices: Linux, Mac Intel, and Mac M1, and it works well on all of them.
First, I used these four commands to build the working environment:

$ git clone https://github.com/LWKJJONAK/Quantum_Neural_Network_Classifiers
$ cd Quantum_Neural_Network_Classifiers
$ julia --project=amplitude_encode -e "using Pkg; Pkg.instantiate()"
$ julia --project=block_encode -e "using Pkg; Pkg.instantiate()"

Then I can run all the tutorial code in the .ipynb files (e.g. https://github.com/LWKJJONAK/Quantum_Neural_Network_Classifiers/blob/main/amplitude_encode/an_example_code_for_the_whole_training_procedure.ipynb). I am not sure what exactly the problem is (did you follow the four commands above to install the packages? On our Mac M1 device, it worked without adding "../src" to LOAD_PATH). We also wrote "using Quantum_Neural_Network_Classifiers: ent_cx, params_layer, acc_loss_evaluation" into a .jl file and ran it successfully with "julia --project=amplitude_encode a.jl".
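
For concreteness, the .jl file mentioned above can be as small as the following sketch (the file name a.jl is the one used in the command line above):

# a.jl: check that the package resolves under the project environment
using Quantum_Neural_Network_Classifiers: ent_cx, params_layer, acc_loss_evaluation
println("Quantum_Neural_Network_Classifiers loaded OK")

$ julia --project=amplitude_encode a.jl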

In case you have a dark background, change the line and text color of YaoPlots.plot:

CircuitStyles.textcolor[]="yellow"
CircuitStyles.linecolor[]="yellow"

Reply: Thank you for this suggestion; I will include these lines in the revised Readme.md.

@erlebach (Author)

erlebach commented Nov 1, 2022

Thanks for the detailed reply. I admit I did not read the README.md file carefully. I am at fault. Sorry about that.
Gordon.

erlebach closed this as completed on Nov 1, 2022