Conversation

@shravanngoswamii
Member Author

I do not know why the preview deployment is failing; it's working fine when I open a PR in my fork:
preview: https://shravanngoswamii.github.io/docs/pr-previews/9/tutorials/03-bayesian-neural-network/
pr: shravanngoswamii#9

@yebai
Member

yebai commented May 28, 2024

@shravanngoswamii It looks like a permissions issue. If so, you can always create new branches within this repo rather than working from your fork.

@shravanngoswamii
Member Author

shravanngoswamii commented May 28, 2024

@shravanngoswamii It looks like a permissions issue. If so, you can always create new branches within this repo rather than working from your fork.

Okay, let me look into this issue too; if someone else opens a PR in the future, this should work fine!

edit: fixed

@shravanngoswamii shravanngoswamii linked an issue May 28, 2024 that may be closed by this pull request
@yebai
Member

yebai commented May 28, 2024

Also, @shravanngoswamii, can you see whether we can improve the visualisation of the results? Here is a nice example for inspiration: https://playground.tensorflow.org/

@YongchaoHuang
Contributor

All the code works well on my side.
Maybe we can reduce the trajectory length N = 5000 (Line 211) to N = 500 to reduce runtime (see the sketch just below).
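
A minimal sketch of that change, assuming the sampling call keeps the form it has elsewhere in this PR (bayes_nn, xs, ts, and AutoReverseDiff are from the tutorial):

# Reduce the number of NUTS samples from 5000 to 500 to cut the runtime.
N = 500
ch = sample(bayes_nn(reduce(hcat, xs), ts), NUTS(; adtype=AutoReverseDiff()), N);
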
Also, it is safe to modify nn_predict as:

# Posterior-mean prediction: average the network output over the posterior samples,
# thinning by a factor of 10 (nn_forward and mean come from the tutorial).
function nn_predict(x, θ, num)
    num = min(num, size(θ, 1))  # make sure num does not exceed the number of samples
    return mean([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
end

Further, the following code snippet lets the user pick a sample from a drop-down menu and plots the corresponding decision boundary (using PlotlyJS.jl):

using PlotlyJS

# Thin the chain: keep every 10th posterior sample as a parameter vector.
mcmc_samples = [θ[k, :, 1] for k in 1:10:200]

# Decision surface for one parameter sample (nn_forward, x1_range, and x2_range
# are defined in the tutorial).
function decision_boundary_z(θ_sample)
    return [first(nn_forward([x1, x2], θ_sample)) for x1 in x1_range, x2 in x2_range]
end

# Precompute the surfaces so each dropdown button only has to swap the contour's z data.
Z_all = [decision_boundary_z(s) for s in mcmc_samples]

# Initial plot: the decision boundary for the first retained sample.
initial_plot = PlotlyJS.contour(x=x1_range, y=x2_range, z=Z_all[1], colorscale="Viridis")

# Dropdown menu for selecting different MCMC samples; each button restyles the
# contour trace with that sample's decision surface.
dropdown_buttons = [attr(
    label="Sample $i",
    method="restyle",
    args=[attr(z=[Z_all[i]])]
) for i in eachindex(mcmc_samples)]

layout = Layout(
    title="Decision boundary based on chosen sample",
    updatemenus=[attr(
        buttons=dropdown_buttons,
        direction="down"
    )]
)

fig = Plot([initial_plot], layout)
display(fig)

Not sure if this is what you wanted @yebai.

@avik-pal

Code: shravanngoswamii/docs@d3c0e0d/tutorials/03-bayesian-neural-network/index.qmd#L209C1-L213C4

@avik-pal do you know why we have this ReverseDiff-related warning?

For the most part, that should be harmless; it is just telling you the layer got an Array{<:TrackedReal} instead of TrackedArray. The former tends to perform badly for neural networks, so we automatically convert that to a TrackedArray.
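
As a toy illustration of the two representations (not Lux's internals; f and g are made-up functions):

using ReverseDiff

f(x) = sum(abs2, x)

# Fast path: ReverseDiff wraps the whole input once as a single TrackedArray.
ReverseDiff.gradient(f, rand(3))

# Iterating a tracked input yields TrackedReal scalars, so the collected result is an
# Array{<:TrackedReal} -- the slower representation the warning refers to.
g(x) = sum(abs2, [xi for xi in x])
ReverseDiff.gradient(g, rand(3))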

shravanngoswamii and others added 3 commits May 29, 2024 11:49
* testing preview workflow

* testing preview workflow 2

* testing preview workflow 3

* testing preview workflow 4

* testing preview workflow 5

* testing preview workflow 6

* testing preview workflow 6

* testing preview workflow 6

* new preview test

* new preview test 1

* final workflow

* final workflow

---------

Co-authored-by: Shravan Goswami <shravanngoswamii.com>
@github-actions
Contributor

Preview the changes: https://turinglang.org/docs/pr-previews/462

@yebai
Member

yebai commented May 29, 2024

Further, the following code snippet lets the user pick a sample from a drop-down menu and plots the corresponding decision boundary (using PlotlyJS.jl):

Let's keep things simple in this PR. I am mostly referring to improving the plotting and visualisation of results.

@shravanngoswamii
Member Author

shravanngoswamii commented May 29, 2024

Let's keep things simple in this PR. I am mostly referring to improving the plotting and visualisation of results.

I will look into adding interactivity to the plots! Let's keep this feature for another day!

@shravanngoswamii
Member Author

@yebai This PR is good to go once this issue is resolved:

Code:

# Perform inference.
N = 500
ch = sample(bayes_nn(reduce(hcat, xs), ts), NUTS(; adtype=AutoReverseDiff()), N);

Warning

┌ Warning: Lux.apply(m::Lux.AbstractExplicitLayer, x::AbstractArray{<:ReverseDiff.TrackedReal}, ps, st) input was corrected to Lux.apply(m::Lux.AbstractExplicitLayer, x::ReverseDiff.TrackedArray}, ps, st).
│ 
│ 1. If this was not the desired behavior overload the dispatch on `m`.
│ 
│ 2. This might have performance implications. Check which layer was causing this problem using `Lux.Experimental.@debug_mode`.
└ @ LuxReverseDiffExt ~/.julia/packages/Lux/PsbZF/ext/LuxReverseDiffExt.jl:25
┌ Info: Found initial step size
└   ϵ = 0.8

Preview: https://turinglang.org/docs/pr-previews/462/tutorials/03-bayesian-neural-network/

@patrickm663

It would be a great help if someone could look into this error; I do not know how it works! Code: https://github.com/shravanngoswamii/docs/blob/d3c0e0de7930bc862ec355edd614788f1079b040/tutorials/03-bayesian-neural-network/index.qmd#L209C1-L213C4

Not sure if this helps things, but I found switching to Tracker.jl doesn't give any warnings. Results are fairly similar and Tracker.jl ends up being relatively fast (~85 sec to run 1,000 samples).
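
For reference, a minimal sketch of that switch, assuming the same sampling call as above (AutoTracker is the ADTypes backend selector for Tracker.jl, used in place of AutoReverseDiff):

using Tracker  # load the Tracker.jl AD backend

# Same model and data as above; only the AD backend passed to NUTS changes.
ch = sample(bayes_nn(reduce(hcat, xs), ts), NUTS(; adtype=AutoTracker()), N);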

@yebai yebai closed this May 29, 2024
@yebai yebai reopened this May 29, 2024
@yebai yebai merged commit 4ef520d into TuringLang:master May 29, 2024
github-actions bot added a commit that referenced this pull request May 29, 2024
@shravanngoswamii shravanngoswamii deleted the bayes-nn branch May 30, 2024 01:12

Development

Successfully merging this pull request may close these issues.

Converting Bayes NN example to Lux
