
Fork implementing multi-region spatial control #46

Open
genekogan opened this issue Oct 31, 2019 · 34 comments

Comments

@genekogan

First of all, this is a great repo! It seems a bit faster and more memory-efficient than the original Lua-based neural-style.

I've made a fork of this repo that tries to add masked style transfer as described by Gatys, going off the gist you wrote for the Lua version.

I've almost got it working, but my implementation suffers from two bugs. The first is that, when testing with two style images and segmentations, it only seems to get gradients for the first mask, not the second.

So for example, the following command:

python neural_style.py -backend cudnn -style_image examples/inputs/cubist.jpg,examples/inputs/starry_night.jpg -style_seg examples/segments/cubist.png,examples/segments/starry_night.png -content_seg examples/segments/monalisa.png -color_codes white,black

produces the following output:

out1

where the first style (cubist) and its corresponding segmentation get good gradients and render correctly within the provided mask, but the second mask (starry night) receives little or no gradient signal.

By simply swapping the order of the style images, as in:

python neural_style.py -backend cudnn -style_image examples/inputs/starry_night.jpg,examples/inputs/cubist.jpg -style_seg examples/segments/starry_night.png,examples/segments/cubist.png -content_seg examples/segments/monalisa.png -color_codes white,black

I get the opposite effect: only the starry night style is applied, and the cubist style is missing from its mask.

out2

I have been trying to debug this, checking the masks, and everything looks right to me, but I can't figure out the problem. My code is almost a PyTorch mirror of your gist, which does appear to work fine. I'm not sure if there's some typo I'm missing or something deeper.

Additionally, calling loss.backward() without retain_graph=True produces a runtime error (RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.), which makes me think I set up the graph wrong.

If you are able to see what I'm doing wrong so that we can fix it, I'd love to see this implemented in PyTorch. I think it would be a really nice addition to the repo.

@ProGamerGov
Owner

Nice work on translating my old code!

I think loss.backward() shouldn't require retain_graph=True, since we don't need to keep the intermediate values, and saving them also wastes GPU memory. I'm not sure what is causing the other issue, where only the first style image works.

@genekogan
Author

Yeah, I agree. I think there might be an issue with how I set up the MaskedStyleLoss layer; maybe the graph gets detached somewhere. Perhaps that's also related to why the second style doesn't get picked up. The code is almost identical to yours, with just the diffs from your gist against the old Lua code applied on top.

@ProGamerGov
Owner

ProGamerGov commented Nov 1, 2019

PyTorch differs from Torch7 in a few ways, both because of autograd and because of Python itself.

I suspect that the issue lies with something in the MaskedStyleLoss function. This example may help figure out what is causing the issue: https://discuss.pytorch.org/t/runtimeerror-trying-to-backward-through-the-graph-a-second-time-but-the-buffers-have-already-been-freed-specify-retain-graph-true-when-calling-backward-the-first-time/6795/29

@genekogan
Author

OK, I fixed the problem with the graph; I just had to detach the masked Gram matrix before operating on it.

But the problem with the second style having no effect persists. One clue is that there seems to be a large difference in magnitude between the MaskedStyleLoss values for the two styles. I will keep investigating.
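For anyone hitting the same retain_graph error: the fix is just to detach the target Gram matrix when it is captured. A minimal sketch of the capture/loss split, patterned after neural-style-pt's StyleLoss (the mask handling, normalization, and names are illustrative, not the exact code in my fork):

    import torch
    import torch.nn as nn

    class MaskedStyleLoss(nn.Module):
        def __init__(self, strength, mask):
            super(MaskedStyleLoss, self).__init__()
            self.strength = strength
            self.register_buffer('mask', mask)  # (1, 1, H, W), resized to this layer
            self.crit = nn.MSELoss()
            self.mode = 'None'
            self.loss = 0

        def gram(self, x):
            b, c, h, w = x.size()
            f = x.view(c, h * w)
            return torch.mm(f, f.t()) / (c * h * w)

        def forward(self, input):
            masked = input * self.mask
            if self.mode == 'capture':
                # detach so the capture pass is dropped from the autograd graph;
                # keeping it attached is what forced retain_graph=True
                self.target = self.gram(masked).detach()
            elif self.mode == 'loss':
                self.loss = self.strength * self.crit(self.gram(masked), self.target)
            return input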

@genekogan
Author

I've fixed the bug with the second style and now everything works properly! See result.

out

I need to add a bit of documentation to the README, and I can also send you a PR here if you'd like. To my eye it still needs some work: in the paper the authors describe some nuances to generating the masks beyond simple bilinear scaling. I am also trying to figure out how to make continuous (non-discrete) masks for transitioning between styles, but that isn't as straightforward as I thought it would be!
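For reference, the simple bilinear scaling amounts to resizing the content mask to each style layer's spatial resolution; a minimal sketch (the layer sizes here are illustrative):

    import torch
    import torch.nn.functional as F

    # one-channel float mask in [0, 1] for a single color code, at image resolution
    content_mask = torch.rand(512, 512)

    layer_sizes = {'relu1_1': (512, 512), 'relu2_1': (256, 256), 'relu3_1': (128, 128)}
    layer_masks = {
        name: F.interpolate(content_mask[None, None], size=size,
                            mode='bilinear', align_corners=False)
        for name, size in layer_sizes.items()
    }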

@ProGamerGov
Owner

@genekogan Looks good! I'm not sure about the licensing implications of code translated from the Lua segmentation code. It could conflict with the neural-style-pt license, so it may be better to list it on the wiki, the way the original was linked from the neural-style wiki.

I also wonder whether we can simplify the code and improve how it looks and works. Python is a lot more expressive than Lua and opens up possibilities for improvement.

@genekogan
Author

Sure, I am fine with listing it on the wiki instead.

Yeah, I'd definitely like to improve the code. One thing I'm currently struggling with is blending or transitioning between masks by making them continuous instead of discrete. I've implemented this in a separate branch but it produces poor results in the boundary areas. I wrote about this in more detail in this issue. I'd be curious if you have any ideas.
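For context, the continuous-mask idea is basically blurring the binary mask so regions ramp into each other instead of switching hard; a rough sketch (not the exact code in my branch, and the kernel size is arbitrary):

    import torch
    import torch.nn.functional as F

    def soften_mask(mask, kernel_size=31):
        # mask: (1, 1, H, W) binary float tensor; a box blur turns the hard
        # region boundary into a gradual transition between the two styles
        kernel = torch.ones(1, 1, kernel_size, kernel_size) / (kernel_size ** 2)
        return F.conv2d(mask, kernel, padding=kernel_size // 2).clamp(0, 1)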

@ProGamerGov
Owner

@genekogan Are you making sure that the TV weight is set to 0 in your experiments?

@genekogan
Author

Yes, setting tv_weight to 0 has not really helped. I also just started a new branch which replaces gram loss with @pierre-wilmot's histogram loss as described here. I'm getting interesting results with it, but the big gap in the middle remains. I'm pretty stumped. I might start trying more hacky approaches.

@ProGamerGov
Owner

ProGamerGov commented Dec 4, 2019

@genekogan I was actually recently looking into histogram loss myself, after seeing the results from: https://arxiv.org/pdf/1701.08893.pdf. It was used in deep-painterly-harmonization, and it seems like a better idea than performing histogram matching before/after style transfer. deep-painterly-harmonization seems to implement the histogram loss as a type of layer alongside content and style loss layers.

I'm not sure what's causing the gap in the middle; I haven't come across that issue myself before, so I have no idea what could be going wrong in your code.

@ProGamerGov
Owner

ProGamerGov commented Dec 4, 2019

Also, on a bit of an unrelated note, have you tried to get gradient normalization from the original Lua/Torch7 code working in PyTorch? I did figure out that it's more like gradient scaling: #26 (comment), but I'm beginning to think that it's not possible in PyTorch without a ton of hacky workarounds.

@genekogan
Author

The histogram approach is getting interesting aesthetic results, and seems to work well in combination with normal Gram losses. Pierre also optimizes with Adam instead of L-BFGS; Adam didn't work well in the original neural-style, but maybe it could if the hyper-parameters are fine-tuned just right.

Yeah, I'm stumped on the gray region. I don't think there's a bug in the code... I think maybe it's just the expected behavior when you try to spatially mix gradients. I'm still researching alternatives.

I have not tried implementing normalized gradients. My recollection from the original neural-style was that it did not produce dramatic differences, but maybe I am not aware of cases where it might be useful?

@ProGamerGov
Owner

ProGamerGov commented Dec 5, 2019

@genekogan I do recall seeing some gradient issues when using masks with very small or thin regions surrounded by other regions. Maybe something like that is being exaggerated by your code?

Gradient normalization in neural-style worked extremely well with higher content and style weight values (e.g. jcjohnson/neural-style#240 (comment), though I'd suggest values closer to a content weight of 50-500 and a style weight of 4000-8000). I've also seen users on Reddit talking about how it made heavily stylized faces look better.

@ProGamerGov
Owner

Torch7 had really bad default parameter values for the Adam optimizer, which is why neural-style had a parameter for the learning rate. PyTorch's Adam optimizer seems to use better defaults, though I haven't played around with different values (they are really similar, if not identical, to the ones I used in modified neural-style versions).
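For reference, the switch itself is one line in PyTorch, and the defaults (lr=1e-3, betas=(0.9, 0.999), eps=1e-8) are what I mean by "better"; the explicit learning rate below is only illustrative:

    import torch

    img = torch.randn(1, 3, 512, 512, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=1e-1)  # lr exposed as a parameter, as in neural-style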

Do you think the histogram results work better as their own separate layers, or as part of the style layers (like in your code, I think)?

@genekogan
Author

Yes, I have it in the same layers, which is how Pierre did it. I don't know of any reason why it would do better in separate layers. I do need to find better values for the strength coefficients, as the histogram loss at its current values overwhelms the other loss terms. Pierre wrote in his paper that the best results come from using both histogram and Gram loss together.

@ProGamerGov
Owner

ProGamerGov commented Dec 8, 2019

I implemented histogram loss as its own layer type alongside the content and style layers. The code can be found here: https://gist.github.com/ProGamerGov/30e95ac9ff42f3e09288ae07dc012a76

Histogram loss example output on the left, control test (no histogram loss) on the right:

There are more examples in the comments of the gist.

@genekogan
Author

Super nice, I commented further in the gist.

@genekogan
Author

Another note about the transitional blending problem. I e-mailed Leon Gatys about it, and he suggested that, since covariance loss seems to reduce the smudging effect more than Gram loss does, I try covariance loss on the lower layers (where the differences between the styles are greatest) and Gram loss on the higher layers to preserve better style reconstruction. Going to try that next.

@ProGamerGov
Owner

@genekogan I replied to your comment in the gist regarding weights.

Someone also already implemented covariance loss in neural-style-pt here: #11, so that should help with the covariance loss part of your plan.

@ProGamerGov
Owner

ProGamerGov commented Dec 10, 2019

It looks like there may be an issue with larger image sizes when testing the histogram layers. I don't know enough about C++ to decode the error.

Running optimization with L-BFGS
Traceback (most recent call last):
  File "neural_style_hist_loss.py", line 538, in <module>
    main()
  File "neural_style_hist_loss.py", line 289, in main
    optimizer.step(feval)
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/lbfgs.py", line 307, in step
    orig_loss = closure()
  File "neural_style_hist_loss.py", line 267, in feval
    net(img)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "neural_style_hist_loss.py", line 531, in forward
    target = self.calcHist(input[0], self.target_hist, self.target_min, self.target_max)
  File "neural_style_hist_loss.py", line 518, in calcHist
    cpp.matchHistogram(res, target.clone())
RuntimeError: n cannot be greater than 2^24+1 for Float type. (check_supported_max_int_with_precision at /pytorch/aten/src/ATen/native/TensorFactories.h:78)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fc56a16b813 in /usr/local/lib/python3.5/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1bb1638 (0x7fc56c377638 in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #2: at::native::randperm_out_cpu(at::Tensor&, long, at::Generator*) + 0x3c (0x7fc56c36fd0c in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x1d9e3e4 (0x7fc56c5643e4 in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #4: at::native::randperm(long, at::Generator*, c10::TensorOptions const&) + 0xab (0x7fc56c36c5eb in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #5: at::native::randperm(long, c10::TensorOptions const&) + 0xe (0x7fc56c36c6ee in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x1ecce9b (0x7fc56c692e9b in /usr/local/lib/python3.5/dist-packages/torch/lib/libtorch.so)
frame #7: at::Tensor at::ATenOpTable::callUnboxed<at::Tensor, long, c10::TensorOptions const&>(long, c10::TensorOptions const&) const + 0xb6 (0x7fc565ecd1d4 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #8: <unknown function> + 0x82f69 (0x7fc565ebff69 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #9: torch::randperm(long, c10::TensorOptions const&)::{lambda()#1}::operator()() const + 0x97 (0x7fc565ec8b81 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #10: torch::randperm(long, c10::TensorOptions const&) + 0x192 (0x7fc565ec8d5c in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #11: matchHistogram(at::Tensor&, at::Tensor&) + 0x10a (0x7fc565ec0696 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #12: <unknown function> + 0x7e653 (0x7fc565ebb653 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #13: <unknown function> + 0x7b692 (0x7fc565eb8692 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #14: <unknown function> + 0x77343 (0x7fc565eb4343 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #15: <unknown function> + 0x77533 (0x7fc565eb4533 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
frame #16: <unknown function> + 0x6a4a1 (0x7fc565ea74a1 in /tmp/torch_extensions/histogram_cpp/histogram_cpp.so)
<omitting python frames>
frame #21: python3() [0x4ebe37]
frame #25: python3() [0x4ebd23]
frame #27: python3() [0x4fb9ce]
frame #29: python3() [0x574b36]
frame #33: python3() [0x4ebe37]
frame #37: python3() [0x4ebd23]
frame #39: python3() [0x4fb9ce]
frame #41: python3() [0x574b36]
frame #44: python3() [0x5406df]
frame #46: python3() [0x5406df]
frame #48: python3() [0x5406df]
frame #50: python3() [0x540199]
frame #52: python3() [0x60c272]
frame #57: __libc_start_main + 0xf0 (0x7fc5c1e76830 in /lib/x86_64-linux-gnu/libc.so.6)

@genekogan
Author

It looks like maybe it originates in randomIndices[featureMaps.numel()] = torch::randperm(featureMaps.numel()).to(at::kLong).cuda();
Maybe the problem is that featureMaps.numel() exceeds the largest integer a float can represent?
I'm not sure of an easy workaround, but a less easy one would be to downsample the feature maps before they go into the histogram layer. They probably don't need to be that high-resolution to get an accurate histogram loss, and it would probably speed things up too, since the histogram loss seems a lot slower than Gram loss.
Another idea would be to simply remove histogram loss from the first style layer and only keep it in the later layers, where the feature maps are smaller.
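A minimal sketch of that downsampling idea (the element cap comes from the randperm error above; the halving strategy itself is just illustrative):

    import torch.nn.functional as F

    MAX_ELEMENTS = 2 ** 24  # limit hit by the Float randperm call above

    def shrink_for_histogram(feature_maps):
        # feature_maps: (1, C, H, W); halve the spatial size until numel() fits
        while feature_maps.numel() > MAX_ELEMENTS:
            feature_maps = F.interpolate(feature_maps, scale_factor=0.5,
                                         mode='bilinear', align_corners=False)
        return feature_maps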

@genekogan
Author

I just updated my histogram_loss branch with your Histogram loss module, and made it support masking in the same way the Style Loss does. I also added a covariance loss option for the normal StyleLoss module. So far with limited tests, histogram loss does not seem to do much to fix the blending problem. I'm going to do some tests combining the style loss parameters and see if I can either improve that issue somehow or at least get the general style transfer to look nicer.

@ProGamerGov
Owner

@genekogan Has covariance loss made any difference with the lower layers?

For the histogram size problem, we could potentially try to recreate the matchHistogram() function in PyTorch, though torch.histc() is not differentiable so we likely can't use that.

It also looks like Pierre downscales tensors before running them through the matchHistogram() function: https://github.com/pierre-wilmot/NeuralTextureSynthesis/blob/master/main.py#L179-L186

            model.setStyle(torch.nn.functional.interpolate(style, scale_factor = 1.0/4))
            result = torch.nn.functional.interpolate(result, scale_factor = 2)
            model.setStyle(torch.nn.functional.interpolate(style, scale_factor = 1.0/2))
            result = torch.nn.functional.interpolate(result, scale_factor = 2)
            model.setStyle(torch.nn.functional.interpolate(style, scale_factor = 1))
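Going back to the torch.histc() point: one possible differentiable stand-in is to match activations by rank instead of by binned counts, i.e. sort both sets of activations, pair values of the same rank, and penalize the distance to that remapped target. This is a substitute for the CUDA matchHistogram() remapping, not a copy of it; a rough sketch, assuming both feature maps have the same number of elements per channel:

    import torch
    import torch.nn.functional as F

    def histogram_match_loss(input_feat, target_feat):
        # input_feat, target_feat: (C, N) flattened activations with equal N
        order = input_feat.argsort(dim=1)
        matched = torch.empty_like(input_feat)
        # place the target's k-th smallest value where the input's k-th smallest sits
        matched.scatter_(1, order, target_feat.sort(dim=1).values)
        # pull each input activation toward the target value of the same rank
        return F.mse_loss(input_feat, matched.detach())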

@genekogan
Author

I think in that block he is actually just doing a three-part multiscale generation: capture style at 1/4 scale, generate, upsample the result 2x, then capture style at 1/2 scale, generate on top of that, upsample 2x, capture style at full scale, and generate one last time at the full resolution (4x the starting scale).
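In code, that coarse-to-fine loop would look roughly like this (stylize_fn is a hypothetical stand-in for one full optimization run at the current size, not a function in this repo):

    import torch.nn.functional as F

    def multiscale_stylize(content, style, stylize_fn, scales=(0.25, 0.5, 1.0)):
        # start from the content image at 1/4 scale, restyle, upsample 2x,
        # and repeat until the final pass runs at full resolution
        result = F.interpolate(content, scale_factor=scales[0], mode='bilinear',
                               align_corners=False)
        for scale in scales:
            scaled_style = F.interpolate(style, scale_factor=scale, mode='bilinear',
                                         align_corners=False)
            result = stylize_fn(result, scaled_style)
            if scale != scales[-1]:
                result = F.interpolate(result, scale_factor=2, mode='bilinear',
                                       align_corners=False)
        return result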

@ProGamerGov
Owner

ProGamerGov commented Dec 11, 2019

@genekogan I think you're right.

As for the n cannot be greater than 2^24+1 for Float type error, I think it's because of a limitation with float numbers themselves:

a and b are equal to each other according to Python:

a = 2.0e24+5
b = 2.0e24+1

The largest value representable by an n bit integer is (2^n)-1. As noted above, a float has 24 bits of precision in the significand which would seem to imply that 2^24 wouldn't fit.

However.

Powers of 2 within the range of the exponent are exactly representable as 1.0×2^n, so 2^24 can fit and consequently the first unrepresentable integer for float is (2^24)+1. As noted above. Again.

Source: https://stackoverflow.com/questions/3793838/which-is-the-first-integer-that-an-ieee-754-float-is-incapable-of-representing-e

Edit: This is exactly what is happening to us.
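A quick way to confirm this with float32 tensors:

    import torch

    x = torch.tensor(2 ** 24, dtype=torch.float32)
    print(x + 1 == x)  # tensor(True): 2^24 + 1 rounds back down to 2^24
    print(x - 1 == x)  # tensor(False): 2^24 - 1 is still exactly representable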

Resizing the height and width of tensors for the histogram loss layer did not seem to resolve the issue.

@ProGamerGov
Owner

ProGamerGov commented Dec 13, 2019

@genekogan I translated my linear-color-transfer.py to PyTorch: https://gist.github.com/ProGamerGov/684c0953395e66db6ac5fe09d6723a5b

The code expects both inputs to have the same size, and it does not convert the BGR images to RGB or un-normalize them (though neither of those seems to influence the output). Hopefully we can use it to create some sort of histogram matching loss function and replace the buggy CUDA code?

@ProGamerGov
Owner

ProGamerGov commented Dec 13, 2019

I got linear-color-transfer fully working inside neural-style-pt: https://gist.github.com/ProGamerGov/923b1679b243911e71f9bef4a4bda65a

The histogram class is used to perform the standard histogram matching that's normally done via linear-color-transfer, and it's also used by the histogram loss function.

The histogram loss doesn't work well yet: with -hist_mode pca it quickly becomes nan, breaking down around the 70th iteration, while -hist_mode chol seems to work. I'm not sure exactly why this is happening, as it almost looks like it's working for the first 30 iterations. I think NumPy is used through PyTorch on a few lines, and those NumPy operations run on the CPU?

The chol mode can't seem to handle relu1_1. It also seems like the lower histogram matching loss layers are important for reducing the smudged-out gray areas.

@ProGamerGov
Owner

Using these histogram parameters:

-hist_mode chol -hist_weight 4000 -hist_layers relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 -hist_image examples/inputs/seated-nude.jpg -hist_target content

Histogram loss & histogram matching preprocessing example output on the left, control test (no histogram loss) & histogram matching preprocessing on the right:

And the histogram loss output without histogram matching preprocessing:

@ProGamerGov
Owner

ProGamerGov commented Dec 16, 2019

So, oddly enough this code replicates the results from using the CUDA histogram matching code extremely well:

    def double_mean(self, tensor):
        # (1, C, H, W) -> (W, H, C), then average over the two spatial dims
        # to get a per-channel mean of shape (C,)
        tensor = tensor.squeeze(0).permute(2, 1, 0)
        return tensor.mean(0).mean(0)

    def forward(self, input):
        if self.mode == 'captureS':
            self.target = self.double_mean(input.detach())
        elif self.mode == 'loss':
            input_dmean = self.double_mean(input.detach())
            self.loss = 0.01 * self.strength * self.crit(input_dmean, self.target)
        return input

Used with: -hist_weight 40000 -hist_layers relu1_1,relu2_1,relu3_1,relu4_1,relu4_2,relu5_1

With relu1_1 on the left, without relu1_1 on the right:

@ProGamerGov
Owner

ProGamerGov commented Dec 17, 2019

So, MSELoss() is implemented as: ((input-target)**2).mean(). When I combine MSELoss() with mean(0).mean(0) for content and style loss, I get what looks like DeepDream hallucinations.

Both images had .permute(2, 1, 0) applied before I took the means. The image on the left uses MSELoss(input.mean(0).mean(0), target.mean(0).mean(0)), while the image on the right adds MSELoss(input.mean(1).mean(0), target.mean(1).mean(0)) on top of the first:

I previously used this code in neural_style_deepdream.py to implement simultaneous style transfer and DeepDream:

-input.mean() * self.strength

So, it looks like the code in my comment above is essentially a DeepDream layer (not a histogram matching loss layer), and those DeepDream hallucinations provide detail for the style transfer process to latch onto in bland regions like the sky in the example input image.
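In other words, the layer reduces to something like the sketch below (same capture/loss idiom as the other layers; the class name is just illustrative):

    import torch
    import torch.nn as nn

    class DreamLoss(nn.Module):
        # rewarding a high mean activation (by minimizing its negative) is the
        # basic DeepDream objective described above
        def __init__(self, strength):
            super(DreamLoss, self).__init__()
            self.strength = strength
            self.loss = 0

        def forward(self, input):
            self.loss = -input.mean() * self.strength
            return input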

@genekogan I wonder if this could be used as a possible solution to your blending problem?

@genekogan
Author

@ProGamerGov your results look amazing! But I'm unable to reproduce it...

Using your code directly from https://gist.github.com/ProGamerGov/923b1679b243911e71f9bef4a4bda65a, with:

python neural_style_hist.py -backend cudnn -hist_mode chol -hist_weight 4000 -hist_layers relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 -hist_image examples/inputs/seated-nude.jpg -hist_target content

I get some numerical instability: RuntimeError: cholesky_cuda: U(140,140) is zero, singular U. Seems like this is a known issue. Changing from chol to pca, I also get nan after a bit more than 100 iterations.

When I switch over to the double_mean code in HistLoss and use:

python neural_style_hist.py -backend cudnn -hist_weight 40000 -hist_layers relu1_1,relu2_1,relu3_1,relu4_1,relu4_2,relu5_1 

It works, but I only see:

out3

Not sure what I might be doing wrong. It shows the histogram losses alright:

Iteration 1000 / 1000
  Content 1 loss: 460392.1875
  Style 1 loss: 229.07461547851562
  Style 2 loss: 2093.78125
  Style 3 loss: 2881.075927734375
  Style 4 loss: 110088.375
  Style 5 loss: 410.4393005371094
  Histogram 1 loss: 2500.414794921875
  Histogram 2 loss: 7188.25
  Histogram 3 loss: 31537.544921875
  Histogram 4 loss: 673509.0
  Histogram 5 loss: 859414.625
  Histogram 6 loss: 107440.390625
  Total loss: 2263659.0

I'm not sure whether I duplicated your code incorrectly, but it nevertheless looks really promising.

Regarding the deepdream idea, I'll have to try that separately to see if it solves the problem. Additionally I'd love to integrate this into my masked style transfer to see how the histogram loss works in tandem with that, once I can replicate the results you are getting.

@ProGamerGov
Owner

ProGamerGov commented Dec 19, 2019

@genekogan I only had Cholesky errors with relu1_1, and didn't experience any other "is zero, singular U" errors, but I can reproduce your error message with -init random. Using -init image appears to correct the error, so I think some parameters may contribute to the instability.

In addition to the histogram parameters, I've been mostly using these other parameters as a default:

-style_image examples/inputs/seated-nude.jpg -style_weight 4000 -normalize_weights -tv_weight 0 -init image -seed 876 -backend cudnn  

Any other parameters are just the neural-style-pt default values.

As for the DeepDream/Mean code, I accidentally included input.detach() in the loss mode in my above code, and that seems to mess up the results. This should fix it (by using clone() instead of detach()):

    def forward(self, input):
        if self.mode == 'captureS':
            self.target = self.double_mean(input.detach())
        elif self.mode == 'loss':
            # clone() keeps the input in the autograd graph, unlike detach(),
            # so gradients can flow back from the loss
            self.loss = 0.01 * self.strength * self.crit(self.double_mean(input.clone()), self.target)
        return input

I made a neural-style-pt gist that implements the mean loss layer type: https://gist.github.com/ProGamerGov/9c2aa72f21f0f22c64d0a6ee7294cf3c

@ProGamerGov
Owner

ProGamerGov commented Dec 24, 2019

On a bit of an unrelated note, I tried to implement a tiled gradient calculation in an attempt to lower GPU usage.

On the left is without trying to hide the tile borders, and on the right is the result of randomly shifting the image before tiling:

If the tile coordinates don't cover the entire image, you get this effect:

Sadly, it doesn't seem like my code reduces memory usage yet. I was loosely following the official TensorFlow DeepDream guide and implemented a few DeepDream-related functions for things like rolling/jitter and resizing.

The code can be found here: https://gist.github.com/ProGamerGov/e64fcb309274c2946f5a9a679ed45669/ae37552a77c4b67c0eb021a6d7237868ecb464e4

Somehow VaKonS was able to modify Justin Johnson's neural-style in a way that uses tiling to create larger images.

Edit:

I think overlapping tiles is a better solution than random shifting.

nn.fold() and nn.unfold() don't hide the tile edges effectively.
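For the overlapping tiles, the fiddly part is computing tile start positions so that neighbours share a blendable strip and the last tile stays flush with the image edge; a minimal sketch along one dimension (tile size and overlap values are illustrative):

    def tile_starts(length, tile_size=256, overlap=64):
        # start offsets so adjacent tiles overlap by `overlap` pixels
        step = tile_size - overlap
        starts = list(range(0, max(length - tile_size, 0) + 1, step))
        if starts[-1] + tile_size < length:
            starts.append(length - tile_size)  # keep the final tile inside the image
        return starts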

@ProGamerGov
Owner

The CUDA histogram matching code now works with images larger than 512px: https://gist.github.com/ProGamerGov/30e95ac9ff42f3e09288ae07dc012a76, now that the bug has been fixed: pierre-wilmot/NeuralTextureSynthesis#1

I have also managed to implement tiling that is similar to VaKonS' neural-style modifications: https://gist.github.com/ProGamerGov/e64fcb309274c2946f5a9a679ed45669, though it currently doesn't work with more than 2x2 tiles.

I have also constructed a standalone DeepDream project using neural-style-pt as a base: https://gist.github.com/ProGamerGov/a416cc21a9ce454fdc160ad846410237
