
indigo/ros-opencog fails to build because CMake is out of date #167

Closed
mwigzell opened this issue Jan 22, 2022 · 42 comments

Comments

@mwigzell
Contributor

Hey guys, I don't yet understand how I might upgrade the version of CMake used. If I run the build.sh I see this error:

CMake Error at CMakeLists.txt:19 (CMAKE_MINIMUM_REQUIRED):
CMake 3.0 or higher is required. You are running version 2.8.12.2

@mwigzell
Contributor Author

OK, so investigation of the ros-base/Dockerfile shows it is FROM ubuntu:14.04, and further down, in ros-opencog/Dockerfile, this apt-get:
RUN apt-get -y install gcc g++ cmake binutils-dev libiberty-dev \
    libboost-dev libboost-date-time-dev libboost-filesystem-dev \
    libboost-program-options-dev libboost-regex-dev \
    libboost-serialization-dev libboost-system-dev libboost-thread-dev \
    cxxtest

is what brings in "cmake". So the issue is that the surrounding packages have moved on, but Ubuntu 14.04's repositories only provide CMake 2.8.12.2, which is too old.
How to proceed?

@mwigzell
Contributor Author

I have the impression that the easiest way forward would be to bump the Ubuntu version. Perhaps a better fix would be to use an Arch container; at least then this kind of issue wouldn't happen. But ultimately, if the APIs used deprecate certain features, then the opencog stuff would break. Either way is not going to work forever, I guess. It's disappointing that the docker mechanism did not yield a more robust snapshot of the basic dependencies needed to run.

@mwigzell
Contributor Author

OK, so I tried to use Ubuntu's "xenial" release, but no joy: ROS indigo needs "trusty". It would appear I need to "manually" install a newer cmake than the one shipped with "trusty": the cogutil CMake configuration requires CMake 3.0 or higher, which is not in "trusty".
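Something along these lines in the ros-opencog Dockerfile might do it (an untested sketch; the exact version and URL are illustrative and would need checking):

# Untested sketch: install a prebuilt CMake release alongside trusty's 2.8 package.
RUN apt-get -y install wget && \
    wget -q https://cmake.org/files/v3.16/cmake-3.16.9-Linux-x86_64.tar.gz && \
    tar -xzf cmake-3.16.9-Linux-x86_64.tar.gz -C /opt && \
    ln -s /opt/cmake-3.16.9-Linux-x86_64/bin/cmake /usr/local/bin/cmake

Since /usr/local/bin normally precedes /usr/bin on the PATH, the newer cmake should shadow the packaged one.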

@linas
Member

linas commented Jan 23, 2022

Hi Mark.

Yes, your intuitions are correct. So:

  • The base should be updated to run xenial ... but, as you note:
  • indigo is a specific release of ROS, and I guess that maybe trusty is the last release that was supported by indigo.
  • Thus: two changes are needed: we need an indigo-base that uses trusty, and we need a current-base that is 20.04 or similar, plus a port of the indigo containers to the latest ROS version, which seems to be Noetic Ninjemys.

You are bumping into the issue that no one is using these containers any more, and so they are not being maintained. I do think they are notable, and worth maintaining, as they are not a bad place to start for building things .... just that ... no one is doing anything with them at this time.

If you submit pull requests to modernize things, I'll merge them.

@mwigzell
Contributor Author

mwigzell commented Jan 23, 2022 via email

@linas
Member

linas commented Jan 24, 2022

Is it worth it to continue?

Maybe

What are people working on then?

Ben Goertzel is working on singularity.net, an effort to build an AI tool platform for the enterprise. Kind-of an open-source version of the assorted AI tools that Microsoft, IBM and Amazon offer. All that within a crypto marketplace (so you can buy/sell AI services). He is also funding some basic research, but it's unclear to me what is being done there.

David Hanson continues to make Hanson Robotics a profitable enterprise, and that means taking advice from those people who insist that the only good source code is closed-source. People such as VC's and other investors who think they can profit from selling the company (and the closed-source in it) to someone bigger. So he's not using this framework any longer. Might still be using ROS, but I don't know.

One of the more active opencog projects is "rocca" - "Rational OpenCog-Controlled Agent" which uses minecraft for embodiment. Nil Geiswiller is leading that project, several people are helping.

I'm doing basic research on symbolic learning at https://github.com/opencog/learn I'm getting good (great) results, but since there is no whiz-bang demo to impress the crowd, no one notices or cares. I'm unable to get funding or a spot in the lime-light, so mostly, I'm busy getting poor while leading an obscure life. I've concluded it is better to do something useful and good for no money, rather than something pointless and depressing for a salary. So I'm part of the Great Resignation I guess. It's not an economically sound decision, but the choices are bleak, either way.

I think resurrecting the ROS interfaces is a worthwhile task. You should do it if at all interested. However, after this, the question arises: "great, now what?" because you'll have a body, but no brain. You'll have basic sensory input (vision, sound) basic output (a good-quality face animation) but... then what? Well, wiring up some kind of simulacrum is hard. Well, not that hard - the game-AI people have lots of NPC's and Eva in her heyday was a good working NPC. Converting this puppet show into something that learns and reasons is ... well, like I said, I'm doing basic research on learning and reasoning.

Is there an alternative to Eva in a docker container?

The Eva code base always ran just fine outside of containers (although it has bit-rotted; I can provide some advice on resuscitating it, if you are interested).

understood that there is a networking/port issue between docker and ROS

Docker was/is quirky. There was a work-around/fix for that. Personally, I use LXC for everything.

@mwigzell
Contributor Author

mwigzell commented Jan 24, 2022 via email

@mwigzell
Contributor Author

mwigzell commented Jan 24, 2022 via email

@linas
Member

linas commented Jan 24, 2022

org.slf4j

This is a logging-for-java package, I forget the full name. It might even be the one making the news headlines a few weeks ago. All of the scripts and makefiles and whatnot should already have the classpaths set up, so not sure what the root cause of the error is.

The relex README.md says:

@linas
Member

linas commented Jan 24, 2022

For me, it is all about trying to understand what you brains cooked up and
are cooking up.

Plot reveal: turns out the architecture is follow-your-nose fairly straight-forward stuff.

I like the ideas in "atomspace" so far. I have no idea whether it is theoretically a viable way of representing AGI state,

It is. To demystify things a bit: super-popular these days is neural-nets & deep learning, and if you look at what is actually being done, you find that the neural nets generate giant vectors and matrices of floating-point numbers, so if you want to "store that knowledge", you just have to store a bunch of floats, which is not theoretically challenging (there's a grab-bag of practical details, like disk layout, memory access, performance, etc. Standard engineering issues.)

There's also the general realization/observation that humans work "symbolically", with concepts like "cat" "dog" "chair" "table" so you have to be able to work with and store relationships like that too. A prehistoric way of storing this is with a "relational DB" aka SQL. In the early medieval times, Martin Luther nailed a tract about NoSQL to the church doors of Oracle and IBM. Fast-forward a few centuries and one has denominations of all kinds of "graph databases". Which, oddly, use a modified SQL as their query language. How weird is that?

The AtomSpace is a kind-of graph-database, which can store large blobs of floating-point numbers, and also graphs, and as a bonus has lots of other nice features, including a query language that is more powerful than conventional SQL (which is possible because the joins are done under the covers, and so vast amounts of complexity are hidden.) So, yes, the AtomSpace & Atomese are the generally-correct direction for storing data that comes in a variety of formats.

I don't have a vision of how AGI might be achieved. I hope you brains do.

Yes, I do. One step at a time. At the lowest level layers, take a look at the "learn" project. I'm excited cause I think I can unify vision, sound, language and symbolic processing all in one, while still having many/most of the deep-learning concepts embedded/preserved (although from an unusual angle.)

Me, I'm coming to the end of my salaried career soon.

It's hard/impossible to "build software correctly" until you have experience of doing it wrong a bunch of times. And that only comes after a long career. So it's very valuable experience. And often underestimated. So I'd kind-of prefer that to grad students who get the concepts but are terrible programmers.

AGI ... It probably needs to be "fast trained" to get it up to scratch once its been created.

If only. First, let's ponder some historical examples. The theory for neural nets can be written down in half a page or a page, and yet a fast, usable implementation with a nice API needs zillions of lines of code running on GPUs, and scaling it to the cloud needs even more. It's a hundred-billion-dollar industry by now. Before that, say ... SQL -- the theory for SQL is more like 20 pages, it's pretty complex, but 20 pages is tiny compared to the lines of code and the money involved. Another example: compilers. Yikes! Compiler theory is more like 50 or 100 pages minimum, but we do have excellent compilers.

AGI theory is at a minimum as complicated as all those things put together. Even when there is a clear vision, it's a lot of work to write the code, debug it, run it, crunch the data, figure out why the data crunched wrong, rinse and repeat. It's not like it's going to just magically work the minute you're done coding. There's a vast amount of bring-up and experimentation.

So wouldn't it be more like a sort of Borg mind?

Oh, you mean facebook and tiktok? Where everyone has a cell-phone bolted to their skulls, wired directly to their hippocampus? Yes, AGI will be connected to that. We already have proto-AGI working at that scale; it's called "algorithmic propaganda", and there were some unhappy Congressional investigations and disturbing Zuckerberg memes surrounding it. The future will be turbulent.

@mwigzell
Contributor Author

lol, thanks for all that! It does give me perspective.
I will check out that "learn" project when I get a moment.

@mwigzell
Contributor Author

Hi Linas, well I have built ros-opencog and ros-blender. But I have trouble running blender:
root@011adca7c645:/catkin_ws# blender
Error! Unsupported graphics card or driver.
A graphics card and driver with support for OpenGL 3.3 or higher is required.

I already avoided an initial error from LibGL by setting LIBGL_ALWAYS_INDIRECT=1.
The real issue seems to be that the docker container probably needs a modern graphics driver.
It was trying to use "swrast", but the export above quelled that.
(My graphics card supports OpenGL 4.6 so I'm not worried about the graphics card.)
Any ideas?

@mwigzell
Contributor Author

Hmm, I was thinking wrongly: inside the container we just need to run the X client. So it should connect via the local sockets to the X server. I tried this approach with good ol' "xeyes" and it works! Was fun to see that again. So now I'm trying to recall what would prevent "blender" from working, and "glxgears" too. They must be built with something that is side-stepping the old-style X.
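For anyone following along, the xeyes test amounts to something like this (a sketch, not the exact commands I ran; the host may also need xhost +local: to allow the connection):

# Share the host's X socket with the container so plain X clients can display.
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ubuntu:14.04 bash
# then, inside the container:
apt-get update && apt-get -y install x11-apps && xeyes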

@mwigzell
Contributor Author

I should note that without the above-mentioned LIBGL_ALWAYS_INDIRECT, running blender looks like this:
root@041bb1e0e052:/catkin_ws# blender
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Error! Unsupported graphics card or driver.
A graphics card and driver with support for OpenGL 3.3 or higher is required.

@mwigzell
Contributor Author

So I get it. Blender doesn't support network rendering anymore. It would need wrapping, which is complicated. Sigh.

@linas
Member

linas commented Jan 27, 2022

In order to get good rendering performance (at least, back in the good old days) OpenGL always did direct rendering, talking straight to the graphics card, bypassing X11.

I don't really recall how we did blender in the docker container, but I have a very vague memory that it was running in direct rendering mode. I could be wrong, but the reason I seem to remember this is because there were some special docker flags that had to be declared to get this to work .... although those docker flags should be in the github scripts or readmes, so it should not be totally mysterious.

Now, I'm 99.9% certain that docker supports direct rendering, because the deep learning neural net industry needs to talk directly to the graphics card, to get to the GPU's to send in the CUDA code that they need to run. It would not make sense for docker to tell the neural net guys "sorry no can do".

Search for "docker direct rendering". There might be some fiddly bits that have to be set just so in some arcane config file.
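Something of this general shape is usually what people end up with (untested here; the image name is made up):

# Rough sketch: share the X socket and expose the GPU's DRI device for direct rendering.
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/dri:/dev/dri \
    some-blender-image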

@mwigzell
Contributor Author

mwigzell commented Jan 28, 2022 via email

@linas
Member

linas commented Jan 28, 2022

Blender-2.79 should be fine. We're not authoring, so the latest and greatest is not needed.

I don't recall any need for any png files, except in the documentation. What is Eva-1.png needed for?

The only "old code" that I recall was some ROS-to-opencog shims (written in python) that someone moved from its original git repo to another git repo. I think I updated the README in the original repo to indicate the new location. In my memory, this would be the biggest "old code" issue, and would be fixed by finding the new repo, and rewiring things up to there.

@linas
Member

linas commented Jan 28, 2022

The missing "old-code" repo was called "ros-behavior-scripting" -- the git repo still exists I think but has been gutted. I mention this because it or its replacement will be needed, and I did not see it in the docker files as I scrolled through them.

@mwigzell
Contributor Author

mwigzell commented Jan 28, 2022 via email

@linas
Member

linas commented Jan 28, 2022

OK, well, this brings us to the wild frontier.

These packages

ros-noetic-openni-camera
ros-noetic-mjpeg-server

along with pi_vision (which itself seems to be stale, deprecated if not outright obsolete) were used to build a face detector. In short, Eva could see one (or more, or zero) faces in front of her, and direct her gaze at what she was seeing. If there were two faces, she would alternate attention between them; if there were none, she'd pout unhappily and eventually fall asleep. There was another plug-in (lost to the winds) that could recognize certain specific people, so that she could engage them by name. This whole pipeline was rather shaky: the face detection in pi_vision never worked all that well -- it failed in bad lighting, or if you turned and showed a quarter face, or cocked your head. Standing in front of a bright window ruined everything. Basically, it worked only in soft office lighting, and nowhere else. Trying to resuscitate this old code is not a worthy task. Replacing it wholesale with something modern and much more flexible and accurate would be the right thing to do. You wanted to learn about the architecture of the system: well, there you go!

dynamixel-msgs

Dynamixel is a specific brand of motor controller. Insofar as there are no motors involved here, this was sucked in as a dependency only because someone was too lazy to stub out some code somewhere. Ditch that dependency, and if it causes problems somewhere, stub out those parts. If you see the words "motor safety", this was about making sure the plastic skin of the robot head would not tear, by making sure the motors did not move past those fixed points. This subsystem can be scrapped, as it is particular to a specific (and presumably obsolete) model.

because the opencog ones were not found.

Well, that's ... odd! I'll look into that.

@linas
Member

linas commented Jan 28, 2022

One more comment:

Replacing the vision system whole-sale with something modern

You have four choices here:

  1. Build a new face-tracking system, much like the old one. If properly calibrated for camera angle and distance-to-monitor, the blender face will turn to look at you in a rather realistic fashion: it really does look like she's looking at you from the monitor. (Her eyes will even do a depth-of-field thing, instead of staring off at infinity. That is, she'll focus to the right depth, too.)

  2. integrate the above with Zoom or something like that: That way, anyone can zoom-chat with her. I always wanted to do this, but no one else seemed to be into it all that much.

  3. Build a vision system that can process much more than just faces, but also see other things. For the robot, this would have been crowds and trade-show-floor chaos. But since you won't be dragging your desktop webcam into the great outdoors .... this doesn't make much sense.

  4. Do basic research into vision. I'm trying to do that at https://github.com/opencog/learn which would integrate vision, sound and text (and other senses), but it's nothing more than a sketch at this time. But I am serious about working on this.

@mwigzell
Contributor Author

mwigzell commented Jan 28, 2022 via email

@linas
Member

linas commented Jan 28, 2022

The opencog repos are still there, but they were out of date. I synced them up. It would be best if these were used.

@linas
Member

linas commented Jan 28, 2022

eva-owyl

No, that code is just plain dead. It was ported to opencog, long ago. A part of it resides in

https://github.com/opencog/ros-behavior-scripting

which I guess is just the opencog <--> ROS shims. The actual behavior scripts are in

https://github.com/opencog/opencog/tree/master/opencog/eva

Stuff you need to know:

  1. this should be split out to its own git repo, but that would be kind-of hard, just right now.

  2. The behaviors are written in "raw Atomese". Now, raw Atomese is terribly low-level; it's not human-friendly, it's more like programming in assembly code. (It's meant to be like that: other algos actually need this low-level interface; it's just that, for human programmers, it's tough.)

  3. To partly alleviate the above, something called "ghost" was created. It is modeled on ChatScript (and is faux-compatible with it); ChatScript is a chatbot system. Ghost was supposed to include directives for moving the face and arms (and generic robotic stuff), but I don't think that was ever done. ☹️

  4. There's a git repo called "ghost-loving-ai" that has some dialogs to allow Eva to lead a meditation session. It was also supposed to visually mirror the face that the robot was seeing (so if you frown, so does Eva, and so on.) I've been led to believe it worked, but never saw a working demo myself, so I dunno.

So again, this is the wild frontier. You can pick up pieces-parts and wire them up and try to get them to do stuff.

Some comments about a long-term vision (and about "opencog" vs "atomspace".)

  • The atomspace is a kind-of (hyper-)graph database plus a low-level "programming language" called "Atomese". As such it is meant to be ideal for storing memories (aka "knowledge"), and also for integrating run-time actions and perceptions. Atomese algos do the "thinking" and store the "knowledge" in the atomspace. This is why you will see a recurring push to put everything on the AtomSpace: it's the central location where everything is easily integrated.

  • OpenCog is a grab-bag of different pieces-parts, a collection of tinker toys, most of which sit on top of the atomspace. It used to be one giant git repo, but has since been split into many parts. What remains in the opencog repo is a grab-bag of natural-language stuff, including ghost, and the Eva behavior trees, and assorted other random stuff. It needs to be split up some more.

@linas
Member

linas commented Jan 28, 2022

Anyway, since the pi_vision repo has now ripped out the openni and the ros-mjpeg packages, maybe it is not so hard to get running again. It might be the easiest path to getting a working demo where the face tracking works.

@mwigzell
Contributor Author

mwigzell commented Jan 28, 2022 via email

@linas
Member

linas commented Jan 28, 2022

vytasrgl is Vytas Krischiunas; he's the one who set up all this stuff originally. (Except for the blender model; that was created by an artist who knew how to sculpt faces in Blender.) So we forked him rather than vice versa.

The blend file you found is old and out of date. Don't use it, and it does not belong in that repo, anyway. The blender API repo contains ONLY the python code that acts as an API to blender; it does NOT contain the blender model itself, that's in the other blender repo. The idea was to be modular, with the API distinct from specific blender models (as there were several/many of these.) The best and greatest Eva model should be called something like Eva2.blend or similar; it was touched up by mthielen (Mark Thielen), who fixed assorted lighting bugs, camera bugs, etc. but also altered some of the skin tone textures in maybe unfortunate ways. I don't recall. There were some less-than-ideal versions in there. Also, some of them worked for some versions of blender, but not others. The blender folks weren't exactly binary-compatible with their own file format. All of these files should be in the opencog repos, somewhere. If they are not showing up, I can do some archeology and find them.

I'm a bit overwhelmed,

Yeah. Sorry. At least now you understand why the docker files were bit-rotted. The architecture is complex, and the sand kept shifting under our collective feet.

@linas
Member

linas commented Jan 28, 2022

Oh, and warning: Some of the blender files have different "bones" in them, or at least, different names for different bones, and as a result, the python API does not know how to find them. Which means that some of the animations might not work with some of the files, until you hand-edit either the blender model, or the python code to use the correct names and send the correct messages. Some messages in different models take more or fewer parameters, e.g. how fast she should blink.

Testing is "easy" -- you try each of the named animations one at a time -- smile, blink, surprised, lookat xyz, gazeat xyz (lookat turns her neck, gazeat moves only the eyes) -- and you see which ones failed to work. To actually fix whatever didn't work, you have two choices: (1) find a different blend file that works, or (2) prowl around in the guts of the blender model, reading the python code, and seeing which strings fail to match up. It's "not that hard" but would take a few days of exploring the zillions of menus that blender has, to find the menu that is hiding the animations. And then figuring out how to dink with it. I'm pretty sure that the Eva2.blend file was fully compatible with the python API wrappers, i.e. everything worked.

Also, some of the models might be missing lip animations. The good models have about 7 or 9 different animations for different sounds: m (lips pursed), p (plosive), o (big open round mouth), k (a smiling-like mouth), etc. These can be mapped to various phonemes, which are generated by at least one of the open-source text-to-speech libraries (I think maybe CMU or flite or MaryTTS, I forget which)

Sorry to flood you with all these details: I mean mostly to provide a flavor of what worked, and what can be made to work without writing new code, just by re-attaching whatever got detached. I'm hoping this serves as a motivator, not a demotivator.

@linas
Member

linas commented Jan 28, 2022

port mapping issue between Docker and ROS fixed?

There was an adequate work-around. The extended belly-aching was an attempt to shame the docker developers into fixing a docker bug, but they were pretty shameless and wandered off in some wacky direction. Corporate brain-freeze. Which is one reason why I use LXC these days, and why docker has fallen on hard times: their vision did not align with the vision of their users. (ROS was not entirely blameless; they hadn't quite thought things through, at that point. I presume that the latest and greatest ROS has all the latest and best networking, but I dunno.)

@mwigzell
Contributor Author

mwigzell commented Jan 29, 2022 via email

@linas
Member

linas commented Feb 1, 2022

Eva.blend

If it works, it works! I looked around; I have various copies, and I now believe the stuff in git is the stuff that was the best & finest.

LXC

So, LXC, and its extensions LXD, LXE are container systems (so, like docker) but also very different from docker, in philosophy. Some of the differences are:

  • There is no install/build script, like in docker. (You can use conventional bash scripts for that, if that is what you want)
  • You can stop running containers, and when you restart them, they resume where you left off (without wiping out your data). You can also migrate existing containers to machines with more RAM and CPU (or less, as needed). (Whereas in docker, you always restart with a fresh, brand-new image.)

So it's the same base technology (linux containers) but a different vision for how they're used & deployed.

BTW, LXD and LXE make more sense if you have a cloud of hundreds or thousands of machines, I guess. For a single-user scenario like mine, LXC is simpler.
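Roughly, the day-to-day flow is like this (container name and distro release are just examples, not anything from our setup):

# Sketch of the LXC lifecycle described above; names are illustrative only.
lxc-create -n ros-dev -t download -- -d ubuntu -r focal -a amd64
lxc-start -n ros-dev
# ... do work inside it ...
lxc-stop -n ros-dev    # the filesystem state is kept; lxc-start later resumes where you left off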

@mwigzell
Contributor Author

mwigzell commented Feb 2, 2022 via email

@linas
Member

linas commented Feb 2, 2022

LXC container bridged to host

Don't know what that means. I usually just ssh into the container. Under ssh, it behaves just like anything else you'd log into (more or less). If I'm desperate, then I use lxc-attach, but here you have to be careful, because your bash environment might get used in the container, and your current env is probably wrong for the container. In that case, lxc-attach --clear-env is safest. ... but mostly, just use ssh.
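In commands, the two paths look something like this (container name, user and address are made up):

lxc-ls --fancy                       # shows each container's state and IP address
ssh someuser@10.0.3.123              # ssh to the container's IP (address is illustrative)
lxc-attach -n ros-dev --clear-env    # or attach directly, with a clean environment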

@mwigzell
Contributor Author

mwigzell commented Feb 2, 2022 via email

@linas
Member

linas commented Feb 2, 2022

Bridge

Oh, OK. Yes, my containers have IP addrs; yes, they come from DHCP. I don't recall configuring this, it "just worked". Yes, they have full network access. Never used snap inside of them, and it's hard to imagine why that wouldn't work.

Why not just get everything running in the LXC and ship it like that?

Sure, one could do that. The only problems with this are:

  • People are unfamiliar with LXC or don't want to mess with it. The usual politics of technology selection
  • The container itself will be a gigabyte in size or so. Much larger than a few dozen kbytes of docker scripts.
  • Minor risk that the container comes with some exploit installed.

Python

Yes, python is a pain in the neck. Avoid python2 at all costs, if at all possible. Just don't install it. Back in the day, ROS was on python2, and blender was on python3. Now, I assume ROS is fully on python3 these days? Are you saying that parts of ROS don't work with python3.8 (but do work with python3.5 or earlier)? The Eva subsystem does not use more than half-a-dozen or so ROS bits-n-pieces, although the way it installs, everything gets sucked in. Yes, catkin is a pain in the neck, and I spent many a fine day trying to figure out why it was going haywire for me. That was then; I would like to think it's all better now.

In opencog-land, I've mostly avoided the pain of using python by using scheme/guile instead. The only problem there is that most people don't want to learn scheme.

Large complex systems made out of many parts are inherently ... fragile and tough to debug. If there's a better way, I don't know what it is.

@linas
Member

linas commented Feb 2, 2022

This bug: https://bugs.archlinux.org/task/73591

The Invalid cross-device link error is, from what I can tell, due to making a hard link (not a soft link) across different file systems. As you may recall, soft links are made with ln -s and can point anywhere at all, even at non-existing files. By contrast, plain ln creates another directory entry pointing at the same inode, and because it refers to the inode directly, it must necessarily be in the same file system. Basically, hard links are just two different names for the same file, while soft links are pointers.

If you try to ln between two different file systems, you get the invalid cross-device link error.

Please excuse me if you know this stuff (not everyone does). If you do ls -la, the number in the second column is the number of hard links (directory entries) to the actual file. So try this for fun and games:

touch a
ln a b
ln b c
ls -la
echo "ring around the rosie pocket full of posie" > a
ls -la
cat c
rm a
ls -la

and you'll see the fun.
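And to see the cross-device failure itself, assuming /tmp and your home directory sit on different file systems (a sketch; on many machines they share one file system, so it won't reproduce):

touch /tmp/a
ln /tmp/a ~/a      # fails with "Invalid cross-device link" when the file systems differ
ln -s /tmp/a ~/a   # a soft link across file systems is fine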

I cannot tell from your error messages what the two different file systems are. I think it's totally bizarre (and feels wayy wrong) to see ./usr/ in the paths, with a dot in front of the leading slash. That is surely not right???

Also, please set TERM to something reasonable, like TERM=xterm-256color or something like that.

@linas
Member

linas commented Feb 2, 2022

OH ... I see ... seems to be related to using overlayfs ... Yes, it is very tempting to use LXC with overlayfs or other overlay-style fs's. The instructions might even recommend this! Don't do it. In practice, it is not worth it.
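In practice that means sticking with the plain directory backing store when creating containers; a sketch, with an illustrative name:

lxc-create -n ros-dev -B dir -t download -- -d ubuntu -r focal -a amd64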

I see it's here too: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836211 and as you noted, there is a docker variant of it at docker/for-linux#480

@mwigzell
Contributor Author

mwigzell commented Feb 2, 2022 via email

@mwigzell
Contributor Author

mwigzell commented Feb 2, 2022 via email

@mwigzell
Contributor Author

mwigzell commented Feb 2, 2022 via email

@mwigzell
Contributor Author

This issue was fixed; however, there are more fixes needed. Closing.
