
Inconsistent surfaces caused by WallOverlapComputation being used in random directions #1266

Closed
Dawoodoz opened this issue Jun 6, 2020 · 1 comment

Comments

@Dawoodoz

Dawoodoz commented Jun 6, 2020

Problem
In src/wallOverlap.h: "When producing gcode, the first line crossing the overlap area is laid down normally and the second line is reduced by the overlap amount." This is the problem: the slightest change can alter the loop's direction and therefore change which line comes first.

Because line thickness is rounded coarsely over whole lines, always giving full thickness to whichever line comes first causes deep dents and high bumps of up to 40 micrometers on all of my prints. The error is clearly visible on every slice I have ever done in Cura; you cannot miss it.
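
A minimal sketch of that order dependence, with hypothetical names and a simplified flow reduction (this is not the actual WallOverlapComputation code): whichever line the planner happens to visit first keeps full flow, so reversing the visiting order flips which surface ends up thin.

    #include <iostream>

    struct Line
    {
        double width; // nominal line width in mm
        double flow;  // extrusion multiplier, 1.0 = full flow
    };

    // Hypothetical compensation rule mirroring the comment quoted above:
    // the first line is laid down normally, the second loses the overlap.
    void compensateOverlap(Line& first, Line& second, double overlap)
    {
        (void)first; // the first line is left untouched
        second.flow -= overlap / second.width;
    }

    int main()
    {
        const double overlap = 0.1; // 0.1 mm overlap between two 0.4 mm walls
        Line a{0.4, 1.0}, b{0.4, 1.0};
        compensateOverlap(a, b, overlap); // a keeps 1.0, b drops to 0.75

        Line c{0.4, 1.0}, d{0.4, 1.0};
        compensateOverlap(d, c, overlap); // visiting order reversed: d keeps 1.0, c drops

        std::cout << a.flow << " " << b.flow << "  vs  " << c.flow << " " << d.flow << "\n"; // 1 0.75  vs  0.75 1
    }

The numbers themselves do not matter; the point is that the same geometry produces a different flow distribution depending on which line is visited first.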

Solution

  • Group lines into arcs separated by sharp corners and crossings. This ensures that the decisions below are applied consistently, although path planning will no longer be straightforward.
  • Always let outer borders take priority over inner borders, which is consistent for whole loops. For the rest, prefer convex arcs, which are more visible, over concave arcs, which are less visible.
  • Store the final decision about which arcs get full thickness, so that the next layer can make more consistent decisions. The decisions can be drawn to a 2D raster layer where a separable 3x3 box convolution approximates a Gaussian blur over multiple layers; decay can be implemented with a (0.25, 0.25, 0.25) kernel whose sum is less than one (see the sketch after this list).
  • The next layer then samples a number of points along each arc and averages their priorities to disambiguate.
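
A minimal sketch of the proposed priority raster, assuming a plain row-major float grid and hypothetical names (none of this is existing CuraEngine code). Each layer would splat its full-thickness decisions into the raster; blurAndDecay applies the separable (0.25, 0.25, 0.25) kernel horizontally and then vertically, spreading the hints spatially while decaying them over successive layers because the kernel sums to 0.75 rather than 1.

    #include <cstddef>
    #include <vector>

    struct PriorityRaster
    {
        std::size_t width, height;
        std::vector<float> cells; // row-major, one priority value per cell

        PriorityRaster(std::size_t w, std::size_t h) : width(w), height(h), cells(w * h, 0.0f) {}

        float at(std::size_t x, std::size_t y) const { return cells[y * width + x]; }
        void set(std::size_t x, std::size_t y, float v) { cells[y * width + x] = v; }

        // Separable box convolution with the (0.25, 0.25, 0.25) kernel.
        // Repeated application over consecutive layers approximates a Gaussian
        // blur, and since the kernel sums to 0.75 < 1, old hints fade out.
        void blurAndDecay()
        {
            const float k = 0.25f;
            std::vector<float> tmp(cells.size(), 0.0f);
            for (std::size_t y = 0; y < height; ++y) // horizontal pass
            {
                for (std::size_t x = 0; x < width; ++x)
                {
                    float sum = k * at(x, y);
                    if (x > 0) sum += k * at(x - 1, y);
                    if (x + 1 < width) sum += k * at(x + 1, y);
                    tmp[y * width + x] = sum;
                }
            }
            for (std::size_t y = 0; y < height; ++y) // vertical pass on the horizontal result
            {
                for (std::size_t x = 0; x < width; ++x)
                {
                    float sum = k * tmp[y * width + x];
                    if (y > 0) sum += k * tmp[(y - 1) * width + x];
                    if (y + 1 < height) sum += k * tmp[(y + 1) * width + x];
                    set(x, y, sum);
                }
            }
        }
    };

The next layer would then sample this raster at a few points along each arc and average the values to break ties, as described in the last bullet.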

This should remove most of the inconsistent artifacts along smooth surfaces. I can see that the code is not optimized, so performance should not be an obstacle; the Cura engine is only utilizing 0.5% of the CPU's full power.

My opinionated rant about why you are having these problems to begin with
As a former firmware developer in safety-critical robotic vision, I would strongly suggest that you all stop using the horribly obsolete C++ standard collections. Their slow and outdated design is optimized for computers from the 1990s, and their poor usability is a thorn in the side of any serious algorithm developer. I can see many algorithms in the Cura Engine being held back by for-each loops as the go-to choice, when more advanced iteration patterns would easily unlock more features. I now understand why Cura is 200 times slower than it should be; Cura could be running in real time! Don't keep using something just because it's C++ tradition. Profile against hand-written assembly and build the hardware abstractions you need to achieve the same performance with higher safety and readability.

Application version
Cura 4.5.0

Platform
Manjaro Linux (should not matter for the WallOverlapComputation class)

@Ghostkeeper
Collaborator

Ghostkeeper commented Jun 10, 2020

"The Cura engine is only utilizing 0.5% of the CPU's full power."

How did you measure this? If you claim that CuraEngine could be made 200 times as fast, we'd love to see a pull request. Join in on the fun! Even a partial improvement is an improvement, so you don't have to work through the whole code base at once. I have to disagree with your claim that the C++ standard collections are optimised for computers from the 1990s. GCC, for example, includes auto-vectorisation for SIMD, which was very rare in home computers until the latter half of the 2000s. We've also done profile-guided optimisation on CuraEngine on computers from 2015-2018 and found that the default GCC optimisations are optimal within our measurement errors.
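
For reference, this is the kind of tight loop over a std::vector that GCC's auto-vectoriser turns into SIMD code at -O3 (verifiable with -fopt-info-vec-optimized); it is only an illustration, not CuraEngine code.

    #include <cstddef>
    #include <vector>

    // Scale every coordinate in place. Because std::vector stores its elements
    // contiguously, GCC can emit SIMD instructions for this loop at -O3.
    void scaleAll(std::vector<float>& coords, float factor)
    {
        for (std::size_t i = 0; i < coords.size(); ++i)
        {
            coords[i] *= factor;
        }
    }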

As for your actual suggestion though, we are currently working on a replacement of that system over here: #1210. This solution uses a more consistent approach where both walls share the disparity rather than just one of them, which avoids the ambiguity of deciding what constitutes a convex or concave polyline. It also allows us to process layers in parallel, so we can use more than one core of the CPU without locking. Your solution would prevent that, because the proposed convolution spans multiple layers to hint other layers as to which walls should reduce flow.
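
A minimal sketch of the shared-disparity idea, using hypothetical names; the actual work in #1210 operates on variable-width toolpaths and is far more involved. Splitting the reduction between both lines makes the result independent of traversal order, and because no information crosses layer boundaries, layers can be planned in parallel.

    struct WallLine
    {
        double width; // nominal line width in mm
        double flow;  // extrusion multiplier, 1.0 = full flow
    };

    // Split the overlap reduction evenly between both lines instead of taking
    // it all from whichever line happens to be printed second.
    void compensateOverlapShared(WallLine& a, WallLine& b, double overlap)
    {
        a.flow -= 0.5 * overlap / a.width;
        b.flow -= 0.5 * overlap / b.width;
    }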

We will go with the ongoing work on merging libArachne over your proposal, and since the two are mutually exclusive I'll close this feature request. If you have suggestions on how we can improve performance, please write them up in a separate report or a pull request.
