small corners don't print smoothly (raspberry pi) #450
You are printing from the Pi, right? You might simply crash against its …
It might be better than just stating that the RasPi might be too slow; maybe OctoPrint can detect that it is falling behind and report a warning or such.
Suggestions on how to do this (without making the whole processing even …
First suggestion: you could, while analysing the gcode, try to detect perimeter movements, e.g. by speed, and check if there are many "small" movements after each other. Just one suggestion; I believe if I think about it further I'll find a few more ways to check that.
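One rough way to do that, shown below as a hypothetical sketch rather than anything from OctoPrint's codebase: walk the file, track the XY position, and flag long runs of consecutive moves shorter than some threshold. It assumes absolute positioning (G90) and space-delimited gcode words; the `max_len` and `min_run` defaults are made up for illustration.

```python
import math
import re

COORD = re.compile(r"([XY])(-?\d+\.?\d*)")

def short_move_runs(path, max_len=0.5, min_run=20):
    """Report runs of consecutive short XY moves in a gcode file.

    Assumes absolute positioning (G90) and space-delimited words; max_len
    is in mm. Returns (line number where the run ended, run length) tuples.
    """
    runs, run, last = [], 0, {}
    with open(path) as gcode:
        for lineno, raw in enumerate(gcode, 1):
            line = raw.split(";", 1)[0].strip()
            if not (line.startswith("G0 ") or line.startswith("G1 ")):
                continue
            cur = dict(last)
            cur.update((axis, float(val)) for axis, val in COORD.findall(line))
            if len(last) == 2 and len(cur) == 2:
                dist = math.hypot(cur["X"] - last["X"], cur["Y"] - last["Y"])
                if 0 < dist < max_len:
                    run += 1
                else:
                    if run >= min_run:
                        runs.append((lineno, run))
                    run = 0
            last = cur
    if run >= min_run:
        runs.append((lineno, run))
    return runs
```

A long run of sub-0.5 mm moves is exactly the kind of dense curve that, as later comments show, can outrun the serial link.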
If the errors are happening because the processor gets too loaded, maybe just having a graph showing the processor utilization would help?
Hello, I had an idea about speeding up the whole analysis process and even detecting small movements... I don't know if it's a good idea, so I'm posting it before doing any programming. Idea: e.g. I want to upload file XY, I drop it onto the browser, and a JavaScript gcode analyser processes the whole file, sending all necessary information with the file (as a comment or JSON data); OctoPrint wouldn't need to do anything anymore.
The problem I have with this approach is that it's moving too much of the core work over to the client. This is critical for two reasons.
IMHO the gcode analysis should be the server's responsibility (and right now it doesn't add much anyways besides some stats, which is why it's done asynchronously and should never be done when printing), as should be anything to do with file processing. Recognizing small movements might be possible on the fly during printing, so we could display a warning in such a case. What I want to prevent though is adding even more detection code and stuff in the send loop that will put even more strain on the poor CPU and make serial communication even more fiddly (which it already is anyways thanks to the protocol), which is why I'm quite reluctant to add large "am I slow now?" detection mechanisms in there. This is also the reason for my backtalk above ;)

I'm not even sure if a software solution is the best approach here to be honest (due to the serial bottleneck and also the fact that, as said above, anything done in software will only put more strain on the system). A small part of me is rather thinking in the direction of emulated SD card attachments connected via hi-speed USB to the host. In any case, at the current state it's a non-trivial issue.
Any way to compress gcode before sending it over the wire?
Does this problem still occur in OctoPrint 1.2.x? I backported a lot of the fixes I did on the commRefactoring branch (which sadly proved, after a lot of work on it, to be unviable), so the problem shouldn't exist as much anymore (although the serial line will still always be a bottleneck the smaller the segments get; there's nothing I can do about that).
I have been bitten by this as well. I made a small gcode file containing only a few G0 commands and tried to have OctoPrint (1.2.9) send it to Marlin over RasPi serial (the one on the GPIO) at 250000 baud. I then hooked up an LSA to RX and TX and measured the time between "ok" and the next "G0" command: approx 10 ms. First place to look could be disabling terminal logging during prints (I tried with a filter, but it did not help much).
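For a purely software-side estimate of the same gap, the timestamps in OctoPrint's serial.log can be compared. A minimal sketch, assuming the usual `TIMESTAMP - Send: ...` / `TIMESTAMP - Recv: ok` line format, and keeping in mind that enabling serial logging itself adds latency (as noted further down the thread), so this is only a coarse cross-check next to a logic analyser:

```python
from datetime import datetime
import sys

FMT = "%Y-%m-%d %H:%M:%S,%f"   # assumed serial.log timestamp format

def ok_to_send_latencies(path):
    """Yield the delay (in ms) between each received 'ok' and the next
    'Send:' line in a serial.log. Log timestamps only have millisecond
    resolution, so treat the result as a rough estimate."""
    last_ok = None
    with open(path) as log:
        for line in log:
            try:
                stamp, message = line.split(" - ", 1)
                when = datetime.strptime(stamp.strip(), FMT)
            except ValueError:
                continue          # not a timestamped log line
            if message.startswith("Recv: ok"):
                last_ok = when
            elif message.startswith("Send:") and last_ok is not None:
                yield (when - last_ok).total_seconds() * 1000.0
                last_ok = None

if __name__ == "__main__":
    delays = list(ok_to_send_latencies(sys.argv[1]))
    if delays:
        print(f"avg {sum(delays)/len(delays):.1f} ms, max {max(delays):.1f} ms")
```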
The firmware should buffer up G0 commands, so unless you are trying to print more than 100 line segments per second you shouldn't notice. What speed are you printing at, and how short are your line segments during a curve?
The test data mentioned was crafted only to track down where the bottlenecks are; no real printing going on here. But if I make small enough curve segments it will exhaust the buffer in the firmware. I'm pointing out that the send loop isn't particularly fast, which I think some of the comments above confirm as well.
Well, yes, it will always be possible to exhaust the buffer with a finite comms speed, but is 10 ms a problem in a practical sense? It doesn't seem to affect my prints, but then I don't print segments shorter than my filament width.
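To put the 10 ms in perspective, a quick back-of-the-envelope sketch (the numbers are illustrative, taken from this thread):

```python
# Rough throughput estimate: with ~10 ms of round-trip overhead per command,
# the host can push at most ~100 commands per second regardless of baud rate.
overhead_s = 0.010                 # measured ok -> next command gap
max_commands_per_s = 1.0 / overhead_s

# At a given print speed, segments shorter than speed / commands-per-second
# arrive faster than they can be sent, and the firmware's planner drains.
print_speed_mm_s = 50.0
min_safe_segment_mm = print_speed_mm_s / max_commands_per_s
print(f"{max_commands_per_s:.0f} cmds/s -> segments below "
      f"{min_safe_segment_mm:.2f} mm at {print_speed_mm_s:.0f} mm/s "
      f"risk starving the buffer")
```

A faster link or lower host-side latency moves that threshold down accordingly.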
Perhaps I should explain why I came across this in the first place. I was wondering why upload to SD card takes so long. And here I more or less constantly see a 6.6 ms delay from the previous "ok" till the next line starts transmitting.
So I thought I would investigate this, but I can't get upload to SD to work at all with Version: 1.2.9 (master branch). The log just says … Does this work for anybody else?
There is a bug with SD uploads in 1.2.9 that I already have fixed on the maintenance branch, but I got sick before I could release that and since then it has been on hold. See #1224
OK, I downgraded to 1.2.8 using your instructions. The sudo service octoprint restart did not find a service, but a manual reboot worked. The delay from "ok" to the next line is 16 ms on my RPi B. Marlin only takes about 0.68 ms to process the line and reply with "ok". The line takes 5.3 ms to send, so it could potentially run a lot faster.
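Breaking those numbers down (a sketch using only the figures quoted above) shows where the time goes:

```python
# Where the measured 16 ms per command goes (numbers from the comment above):
total_ms    = 16.0    # ok -> next complete line, measured on the wire
firmware_ms = 0.68    # Marlin processing the line and replying "ok"
transmit_ms = 5.3     # time to clock the line out over serial
host_ms     = total_ms - firmware_ms - transmit_ms
print(f"host-side overhead: {host_ms:.1f} ms "
      f"({host_ms / total_ms:.0%} of the cycle)")   # ~10 ms, ~63%
```

Roughly ten of the sixteen milliseconds are unaccounted-for host-side time, which is why eliminating that overhead would make the transfer nearly three times faster.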
I agree, there's something up here. @MortenGuldager also pinged me on another channel and suggested he'd open a new ticket for that. He observed a significant performance drop between 1.2.2 and 1.2.9, so it definitely looks like there was some issue introduced somewhere between those versions. For this a new ticket indeed might be better (specific issue introduced through a code change somewhere). It's actually perfect timing now since 1.2.10 has gotten delayed anyhow; this way it could contain another valuable fix if we figure this out. I have to admit I'm unsure how to best measure this stuff (not sure my cheap logic analyser is up to it actually) and I'm still not fit again, so any help from you two in that matter is welcome to get this ironed out as fast as possible.
I measured it with an extremely cheap Chinese clone of a Saleae logic analyser. Although I do have much better equipment, that was the easiest to attach to the machine in my garage that uses ttyAMA0 at 115200, rather than USB at 250000. A long time ago I found that faster for SD upload, but I can't remember if that was an early OctoPrint or Pronterface. The next step I will try is 1.2.2. To get further it might be necessary to use a Python profiler, or I could add code to pulse GPIO lines at specific points in the code, as I have six more channels on this device.
My fault, perhaps not the latest OctoPi. The "fast" one reports Version: 1.1.1-30-g4fede5a (master branch).
That is bad then, because 1.1.1 contained bugs in the comm layer that could basically ruin prints under the right conditions (race conditions in resend handling and overrunning the firmware's receive buffer) that the big rework in 1.2.x solved. So no going back to the old version without basically breaking everything left and right again for a lot of people. There are still points in the code where I could imagine there are optimization possibilities though.
Well, let's leave it as it is here. I will look into other ways of speeding up uploads and eventually make a feature request for faster serial communication. Sorry for the fuss I stirred up.
I found I had the serial log enabled. Turning that off reduces the latency to just over 9 ms, so similar to @MortenGuldager's finding. If the latency was zero it would go nearly three times faster.
The SD upload via serial is more like a hobbled wheel anyhow; even if you blast at full serial rate, a large file still takes ages. I've recently been experimenting with FlashAir (an SD card with built-in Wi-Fi) and that looks like a way less annoying approach. I'll still take another look at the current code once I'm back on my feet, though; your mention of a Saleae clone reminded me that I have a ScanaPlus that might be able to help after all.
Or to use arc commands instead of line segments for everything, something which slicers sadly still fail to do for some reason (and which has caused support for …
I see some of you are using Prusa firmware. It does not print arcs well currently if the radius is very small (less than 2 mm), but I'm working on a patch; I just need to finish up testing. FYI, while modifying the G2/G3 commands, I discovered some performance issues inserting items into the planner. The G2/G3 code is highly optimized, but it almost doesn't matter compared to the overhead of adding the segments to the planner. I guess that makes sense since a whole lot of things are going on in there. I don't really understand that code yet, but I'm wondering if there is a way to take advantage of the fact that G2/G3 only produces relatively small segments to increase planner performance? Anyway, I believe the simplest way to increase performance from OctoPrint's perspective is to send fewer lines of gcode, and as @foosel mentions, G2/G3 generally takes care of this. It may take a while for the firmware to catch up though.
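For readers who haven't looked at firmware internals: a G2/G3 arc is chopped back into short chords inside the firmware (governed by settings such as MM_PER_ARC_SEGMENT), so the planner still sees small segments, but only a single command had to cross the serial link. A rough sketch of that interpolation in Python, not Marlin's actual code:

```python
import math

def interpolate_arc(start, end, center_offset, clockwise, mm_per_segment=1.0):
    """Approximate a G2/G3 arc with straight chords, roughly what firmware
    does internally. start/end are (x, y); center_offset is the (I, J)
    offset from the start point to the arc centre."""
    cx = start[0] + center_offset[0]
    cy = start[1] + center_offset[1]
    r = math.hypot(start[0] - cx, start[1] - cy)
    a0 = math.atan2(start[1] - cy, start[0] - cx)
    a1 = math.atan2(end[1] - cy, end[0] - cx)
    sweep = a1 - a0
    if clockwise and sweep >= 0:          # G2: sweep must be negative
        sweep -= 2 * math.pi
    elif not clockwise and sweep <= 0:    # G3: sweep must be positive
        sweep += 2 * math.pi
    steps = max(1, int(abs(sweep) * r / mm_per_segment))
    return [(cx + r * math.cos(a0 + sweep * i / steps),
             cy + r * math.sin(a0 + sweep * i / steps))
            for i in range(1, steps + 1)]
```

For example, a counter-clockwise quarter circle of radius 10 mm (`interpolate_arc((10, 0), (0, 10), (-10, 0), clockwise=False)`) comes back as about fifteen ~1 mm chords, which is roughly what the planner has to digest either way.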
In an ideal world, yes. Similar to the difficulty of attempting to rasterize bitmap images efficiently, the trouble is that curve-fitting to a polygonal 3D object is not a trivial task, is somewhat computationally intensive, and generally adds a bit of uncertainty (at least to humans who don't understand the concept of "resolution" :) ). One approach that has been tried is 2D curve fitting each layer individually, and that has had some marginal success (you can try one such approach here, from 2016). That project IS interesting, Gina! Thanks for calling attention to it. But the reality is that G2/G3 was originally envisioned as a means of exporting existing CAM paths directly to gcode. It was never intended for taking tessellated objects and trying to glean the mathematical arcs that could make them up. G2/G3 support is being dropped mostly by projects that have moved away from CNC control and are focused solely on FDM printing, where there is no strong use model for arc support (other than to get around the serial limitation of processing gcode ping-pong style, of course!)
@fiveangle, I reviewed the source of the project you mentioned in detail AFTER I finished my algorithm (I hadn't heard about it until someone else pointed it out to me). It can convert arcs in some limited cases, but not everywhere you would expect. Try dropping a Benchy in there and compare it to the results you'll find in the thread @foosel linked to (one of the first replies). The creator was definitely on the right track, but I had some additional tools at my disposal from my last project (Octolapse) that made it much easier to detect and convert many more segments into arcs, all while maintaining a toolpath accuracy that is user controlled. The default settings guarantee that all tool paths will be within ±0.025 mm; you can increase the accuracy at the cost of reducing the number of arcs generated. Additionally, it doesn't take much time to process even a very large file. It could be sped up considerably if it were integrated with a slicer so that layers could be done in parallel (the current process is linear).

G2/G3 support is not being dropped. In fact it's being improved (see Marlin 2's newer implementation). Nearly all firmware supports it unless it is disabled to save a small bit of memory. The main issue is that hardly any (maybe no?) slicers support G2/G3 generation. I think it's one of those things; I will say that the actual implementations are quite varied. I'm attempting to create a patch for Prusa's fork of Marlin right now so that it can draw more accurate arcs. They are using very old code for G2/G3 segment interpolation; it has some problems currently drawing arcs of a small radius, and there were performance issues when increasing the accuracy via the MM_PER_ARC_SEGMENT setting. I've made some improvements since my last post there that allow one to enforce a minimum arc segment length.
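To illustrate only the tolerance idea mentioned above (this is not ArcWelder's actual algorithm): fit a circle through the first, middle, and last point of a candidate run of segments, and accept the arc only if every intermediate point stays within the configured deviation, e.g. ±0.025 mm. A minimal sketch:

```python
import math

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three points, or None if
    the points are (nearly) collinear."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def fits_arc(points, tolerance=0.025):
    """True if all points lie within `tolerance` mm of one circular arc."""
    if len(points) < 3:
        return False
    fit = circle_through(points[0], points[len(points) // 2], points[-1])
    if fit is None:
        return False
    (cx, cy), r = fit
    return all(abs(math.hypot(x - cx, y - cy) - r) <= tolerance
               for x, y in points)
```

Runs of G1 moves that pass such a check can be replaced by a single G2/G3 command; everything else is passed through unchanged.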
It was just a proof of concept a guy threw together as a school project, I think. I'm not promoting it at all (and don't use it either).
Marlin is very much focused on continuing to provide CNC-based capabilities, so no, arc support will not be in jeopardy of being dropped by Marlin in the near term, if ever. I don't think Gina was suggesting so for Marlin specifically, but perhaps for other FDM printer firmware? (I don't think the proprietary Lerdge FW supports G2/G3.) However, ARC_SUPPORT is absolutely one of the first things to go when users are attempting to configure Marlin for PROGMEM-limited boards. I challenge you to find a CR-10 (Melzi) running Marlin 2.x with it enabled. From: https://github.com/MarlinFirmware/Marlin/blob/0518dec60d0931745efa2812fa388f33d68cfa29/Marlin/Configuration_adv.h#L1632
Is it, though? If the slicer outputs gcode from the tessellated model, it does so in true form to the model and can do so without any curve-fitting trickery. If there were no limitations in gcode string processing (such as when running a 32-bit board and reading directly from an on-board SPI-connected SD card) then there would be no benefit to curve-fitting. It's only because of the serial limitations and ping-pong command processing of gcode (where XON/XOFF is not implemented and functioning on both ends) that arc support, which lets a large number of gcode lines be represented by a very small number of arc commands, makes any sense at all for FDM. That said, I ❤️ your ingenuity in basically creating a gcode-specific version of a fuzzy compression algorithm as a means to band-aid this data throughput problem.
I suspect this will be turtles all the way down... and in the end, tackling the original gcode processing throughput issue might be more fruitful. But again, I love the fact that your idea tries to alleviate the problem today, and in a cool and ingenious way 👍
Yes, that's exactly what I was saying. I didn't mean to imply that you were promoting it; I just wanted to point out that it yields very different results. The author did a good job making something that would run in a browser, and our methods were somewhat similar!
I already know of people who have CR-10s running Marlin 2 with arc support enabled (check that thread Gina linked to), though I'm not sure about the Melzi board (never heard of it until now). Users may be trained to disable this feature when trying to squeeze out a bit more memory, but the official firmware has it enabled (I know it's 1.x), so vastly more CR-10 users have arc support enabled than don't. Also, if arc support were something people used, it would not be cut out so easily. Additionally, users who are capable of compiling their own firmware usually don't have much trouble re-compiling it if they want to change something. Lastly, it is enabled in stock Marlin, so most likely this will be the typical setting.
Well, there is a lot of fitting trickery going on in the slicer right now, it just doesn't involve G2/G3 as of yet. Also, I've been hearing that KISSlicer will be adding G2/G3 support soon, which may add perceived (if not real) value and may push other slicers to add that feature as well. I want to make it clear that I think you make very good points about whether or not to use G2/G3, and whether it's even important, especially when talking about more modern boards. I believe the importance is relative to the use case. You have suggested that the use case may vanish with time, and while I definitely believe that is likely at some point, I think it may be useful to consider that as hardware/printing capabilities advance, resolution and speed will increase, which will yield more peak segments/gcodes per second, which will require more bandwidth and processing power. More segments would also result in a proportional increase in gcode size, and the more segments there are, the better compression you'll get from using G2/G3. We tend to always push the hardware that is available to the limit, no matter what the capabilities are. I think it's quite likely that some new bottleneck will be hit with the next generation of more powerful boards; perhaps G2/G3 will help with that, perhaps not. I hope I'm not sounding combative here, but I really enjoy a good discussion with someone who can hold their own and who raises good and thoughtful points. Plus it makes me think a bit more about the future of FDM, which is very important to me when thinking about future projects. Thank you for your participation, your thoughts, and your compliments! Please feel free to contact me if you have other ideas. I feel like I'm starting to derail this thread, so I'm going to stop doing that :)
Yes, you're right that this discussion is starting to commandeer this thread (although it's already a bit unwieldy after so many years ;) ). We can drop this discussion here, but I wanted to point out that the default Marlin CR-10 configuration does not have ARC_SUPPORT due to the 128k PROGMEM size constraint on the Melzi board it uses (similar to Printrboard and a handful of others): https://github.com/MarlinFirmware/Configurations/blob/c1b2dcd74721c777e96780be752a9e8a8ef4cac8/config/examples/Creality/CR-10/Configuration_adv.h#L1651 But yes, recompiling is always a possibility (if one can find the extra space from somewhere). Unfortunately, it's a juggling act and I really hate to see people start ditching things like protection from long extrusion or cold extrusion prevention, but such is life! :)
I enjoyed reading the smart people discuss and highlight nuances. Really, I'm glad I'm on the thread, because I can sorta keep up with ideas and development.
I use a slicer that doesn't output ridiculously short segments and I model with OpenSCAD with $fs set to half my extrusion width, and I never have any problems with serial printing in OctoPrint. There really isn't any point in using segments that are tiny compared to the line width; filament smooths out the corners.
I did see one warning somewhere in the Prusa firmware or their gcode manual or somewhere: that the arc commands ignore the bed level mesh. I don't know if that's a valid reason, but a possible one? What's odd is that S3D apparently used to support arcs and it disappeared?

At the Marlin level, after looking at the debug output during a stutter, the planner occasionally gets drained. I don't think anyone is surprised by that? On my Prusa firmware I can't increase the buffer; I think I'm out of memory (it fails during boot). The serial queue is allowed to have four 96-byte commands buffered, but only ever buffers one because OctoPrint waits for the ack before sending the next one. I'm not sure why the buffer is 4 x 96 instead of just 1 x 96 unless clients aren't expected to wait for acks.

I wonder if some kind of windowing of the protocol might help with the stalls. Maybe testable by setting the ok timeout near 0 in the OctoPrint code and turning on simulated oks? At most that only puts 3 more commands into Marlin to be added to the planner queue. Adding more serial buffers costs about 96 bytes per buffer. I haven't counted the size of the planning buffer, but there are a lot of fields in it, and you can only increase it by doubling its depth (which is something else that could be looked at: not requiring the planning buffer length to be a power of 2. It's done for some indexing optimization that might be unnecessary.)

(For now, I'm just migrating to a Toshiba FlashAir Wi-Fi SD and printing from the SD card in OctoPrint...)
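To make the windowing idea concrete, here is a conceptual sketch of a sender that keeps up to `window` unacknowledged lines in flight instead of strict ping-pong. It is not how OctoPrint's comm layer works, and it deliberately ignores checksums, line numbers, and resend handling, which is exactly the hard part in practice:

```python
import queue
import threading

class WindowedSender:
    """Conceptual sliding-window gcode sender: keep up to `window` lines
    in flight instead of waiting for an ok after every single line."""

    def __init__(self, serial_port, window=4):
        self.serial = serial_port          # anything with write()/readline()
        self.window = window
        self.in_flight = threading.Semaphore(window)
        self.lines = queue.Queue()
        threading.Thread(target=self._reader, daemon=True).start()
        threading.Thread(target=self._writer, daemon=True).start()

    def send(self, line):
        self.lines.put(line)

    def _writer(self):
        while True:
            line = self.lines.get()
            self.in_flight.acquire()       # blocks once `window` lines are unacked
            self.serial.write((line.strip() + "\n").encode("ascii"))

    def _reader(self):
        while True:
            response = self.serial.readline().decode("ascii", "ignore").strip()
            if response.startswith("ok"):
                self.in_flight.release()   # one slot of the window opens again
```

With `window=4` this maps onto the four 96-byte serial buffers mentioned above; whether the extra in-flight commands actually help depends on how robustly resends can be handled.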
All you need is a slicer or gcode filter that combines short segments into longer ones. That is what Skeinforge must do, and it always works for me. Segments shorter than half your filament width don't add any more detail to your print as they can't be seen.
@rrauenza I've asked about this in the Klipper forum, but have not received an official response about the mesh offsets. I did get this link in a comment though: GCodeArcOptimiser

@nophead you mean nozzle/line width, I think. I can definitely see details that are 0.8 mm (half my filament diameter) in size. I cannot, however, print reliably at less than 50% of my nozzle width.

I gave up on Marlin months ago when sensorless homing broke in 2.x and moved to Klipper. Can't say I regret the choice. I increased the buffer size to 10 and I got a preview-accurate Benchy in 47 min with a slicer setting of 140 mm/s and acceleration of 1500, no blobs. Buffer size isn't a panacea, but it sure helps for moderately fast printing. I haven't tried the Thingiverse vase mode pyramid again since the last config change in Klipper. My first attempt on the stock buffer size "failed" after the 2nd "layer" started: I was down to 25% of the 30 mm/s slicer speed and still getting stutters, so I stopped the print. That thing has an insane amount of small gcode moves.

Here's an explanation of the data density that I was given in the Klipper GitHub. It applies to Marlin since Octo has to move plain-text ASCII across the USB, and to the internal virtual port that Klipper uses. Klipper sends a compressed protocol over USB to the board, so that's unlikely to create a bottleneck. @foosel, what is the highest baud rate that can be set on a virtual port: /tmp/printer?
Not details less than half the extrusion width, but noticeable bumps in curves that have segments that size. I use 0.5 mm extrusion width and 0.25 mm minimum segments. I get smooth curves because the filament acts like a French curve and smooths out tiny segments. I print at 50 mm/s, so I only ever get a maximum of 200 segments per second, which doesn't exceed the comms bandwidth, so I never get zits on my prints.
@ProfEngr For reference, https://help.prusa3d.com/en/article/prusa-specific-g-codes_112173
Actually, MBL is performed for every segment. I verified this for a PR I'm hoping will be approved. The documentation is apparently out of date.
Reducing maximum resolution in the slicer (Cura 4.6) from 0.05 mm to 0.5 mm greatly reduced stuttering when printing a 14 mm diameter cylinder in vase mode (lightsaber segment, thing:3606120), but it was still problematic when opening the OctoPrint web interface or during any other slight additional activity. Finally, printing from the SD card alone worked perfectly.
Hi all, I've skimmed through this thread, and this one. I have two Raspberry Pi Zero Ws in two of my Prusa MK3Ss, both running the same exact gcode for a batch print we're doing. Both are running OctoPi, latest version, all updates done. One was freshly installed, almost no plugins, printing perfectly, visually identical to SD. One was printing extremely poorly, like in the link above, with very noticeable stuttering in circles and zig-zag moves. I noticed the system load average was about 3. This second Pi had about 10 plugins installed, so I rebooted in safe mode; problem solved, perfect print, no stuttering. I then rebooted into normal mode, disabled a few plugins while leaving the others enabled, and still no problem. Now my "vanilla" install on the first Pi, which is still printing great, has a load average of 0.7 over 15 min, while the other one, with several plugins still enabled, has a 1.04 load average.

I read about the G2/G3 commands, the stuttering-reducing plugin, the new Marlin release 2.0.6, and more, but I still have a question: is there a way, beyond visual inspection, to know if system load on the Pis is causing slowed-down gcode execution during a print? I'd like to have a way to tell even if moves are only being slowed by 1%, therefore not producing visible artifacts, because that way I could rest assured I have not enabled too many plugins, without having to rely on visual inspection of every piece. Is a load average over 15 min higher than 1 already a warning sign? Is there a log signaling slowed-down serial command transfer? Thanks to all.
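As a quick first check on the load-average question: load average only signals saturation relative to the number of cores, so the same 1.04 means something quite different on a single-core Pi Zero W than on a quad-core Pi. A minimal check, as a sketch:

```python
import os

# 1-, 5- and 15-minute load averages; a sustained 15-minute value above the
# number of CPU cores means the Pi cannot keep up with everything asked of it.
load1, load5, load15 = os.getloadavg()
cores = os.cpu_count() or 1
print(f"15 min load {load15:.2f} on {cores} core(s) -> "
      f"{'saturated' if load15 > cores else 'headroom left'}")
```

Note that this only tells you the CPU side of the story; as the reply below points out, serial-induced stuttering can happen with almost no CPU load at all.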
@nordurljosahvida, most of the lagging is due to serial communication delays, which won't cause much/any CPU load. If your Pi is under too much load, that can cause stuttering, but what I'm saying is that low load doesn't mean no stuttering. The only real way to check is to do a side-by-side SD vs. serial comparison. However, I believe there is a plugin that will show the min/max/average gcodes per second. That is a useful metric, but there is no way I'm aware of to prove there was serial-induced stuttering within OctoPrint or any other serial printing method. I would love to see a plugin like this, but I'm not sure it's possible. That being said, a Pi Zero is likely to cause stuttering, especially with high numbers of gcodes per second. I would recommend you find a small test print with lots of curves (maybe a 10-15 min print) and try a side-by-side: one printed with OctoPrint, one from SD, and one printed using ArcWelder-converted gcode from OctoPrint, to see if you notice any improvements in quality or print time.
@FormerLurker thank you so much for the insight. So there's no real way to tell whether there was stuttering, whether induced by serial lag or system load, right? I suppose checking that the 24 h print was executed to the exact minute of the expected print time could help confirm there was [most likely] no stuttering? I'll try running those tests.
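One crude way to act on the "executed to the exact minute" idea: estimate the ideal move time from the gcode itself and compare it with the wall-clock duration OctoPrint reports. The sketch below ignores acceleration, retractions, and heat-up, so it is only a lower bound useful for spotting large gaps, and it assumes absolute, space-delimited G0/G1 moves with F in mm/min:

```python
import math
import re

WORD = re.compile(r"([XYZEF])(-?\d+\.?\d*)")

def ideal_move_time(path):
    """Lower-bound print time in seconds: segment length / feedrate,
    ignoring acceleration, retraction limits and heating."""
    x = y = None
    feed = 1800.0                    # mm/min fallback until an F word is seen
    total = 0.0
    with open(path) as gcode:
        for raw in gcode:
            line = raw.split(";", 1)[0].strip()
            if not (line.startswith("G0 ") or line.startswith("G1 ")):
                continue
            words = dict(WORD.findall(line))
            if "F" in words:
                feed = float(words["F"])
            nx = float(words.get("X", x if x is not None else 0.0))
            ny = float(words.get("Y", y if y is not None else 0.0))
            if x is not None and y is not None:
                total += math.hypot(nx - x, ny - y) / (feed / 60.0)
            x, y = nx, ny
    return total
```

If the actual print takes substantially longer than this estimate plus a sensible allowance for acceleration, something was stalling the command stream.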
Only if some method for profiling OctoPrint were implemented would there be any way to determine, based on timing, whether there were any aberrations during the print. And here we are assuming timing as the detection mechanism, but that would need to be established. There are profiling methods people have made for Python, such as dtrace: https://github.com/paulross/dtrace-py but mostly for 3.x and not 2.x. Regardless, there are likely easier ways for Gina to implement such a detection mechanism in a debug-only fashion, but when I say "easier" I mean compared to "who knows?" 😄
I've been thinking about how to diagnose the issue, and I figured that a mechanism that allows the printer to inform the host if the planner and command buffers underrun would be key to detecting whether the issue occurs, rather than judging print artifacts. I've opened a PR for Marlin (MarlinFirmware/Marlin#19674) that implements … I enabled it to report at 2 s intervals, and printed an identical model (half-size 3DBenchy) with the same settings in both Cura 4.6.2 and 4.7.1, as I know 4.7.1 produces dense gcode that causes print quality issues. I stripped extrusion commands from the gcode in order to not waste filament when I ran tests, but I don't think there would be a significant difference in the latencies and underruns observed. I redirected the … More information and methodology can be found in this post. We can use this to detect and inform the user when planner buffer underruns occur, and it can serve as a base for understanding whether attempts at improving the problem are working, which should help!
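For anyone wanting to post-process such a redirected log afterwards, a minimal sketch that tallies report lines; the regular expression and the field meanings below are placeholders, not the PR's actual output format, and would have to be adapted accordingly:

```python
import re
import sys

# Placeholder pattern: adapt it to the actual report format the firmware
# prints; the two captured fields (queued planner moves and queued commands)
# are assumptions made purely for illustration.
REPORT = re.compile(r"planner[^\d]*(\d+)\D+command[^\d]*(\d+)", re.IGNORECASE)

samples = underruns = 0
with open(sys.argv[1]) as log:
    for line in log:
        match = REPORT.search(line)
        if not match:
            continue
        samples += 1
        planner_queued, commands_queued = map(int, match.groups())
        if planner_queued == 0:      # planner ran dry between two reports
            underruns += 1

print(f"{underruns} underrun report(s) in {samples} samples")
```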
I've been trying to figure out the "random pausing every few minutes for a second or so" with my freshly loaded OctoPi and Marlin 2.0.9.3 (LPC1769) connected via 115200 baud serial (not USB). What ended up fixing it for me was compiling Marlin with SERIAL_OVERRUN_PROTECTION commented out. I don't use Pronterface, so I don't need that feature. No more random pausing!
I was also able to fix stuttering with a shielded serial cable at 1000000 baud (just on serial port 3; the others are 115200). Default buffer settings except for TX_BUFFER_SIZE 32 for ADVANCED_OK.
Hi,
I have had a couple of gear prints that get bad print results when using OctoPrint. The same gcode in Repetier via a PC works perfectly, without any small stops or hesitations in the movement.
This is one of the gears in thing:243278

Example images of the full print from PC/Repetier vs. the stopped print from OctoPrint.

A video of the problem. Long straights or circles work without any problems, but a few seconds in, when it prints the "gears", it starts to stop mid-movement.
https://www.youtube.com/watch?v=tbUntyq7djY
The log from the heart gear print
https://gist.github.com/LangBalthazar/11037683
The gcode from the heart gear print:
https://gist.github.com/LangBalthazar/11037928
I am running Branch: devel, Commit: bf9d5ef
I have done an update and upgrade (so pyserial should be 2.7 and work at 250000 baud; is that correct, or could this be my problem?). My printer is a Velleman K8200 / 3drag.
Any ideas on why this happens when printing from octoprint?
Best regards
Balthazar Lang