
Consider offering an expanded image #36

Closed
KyleAure opened this issue Oct 27, 2021 · 18 comments
Assignees
Labels
question I have a question that I would like to ask

Comments

@KyleAure

Currently, at container startup the database goes through a decompression phase at:

if [ -f "${ORACLE_BASE}"/"${ORACLE_SID}".zip ]; then
  echo "CONTAINER: uncompressing database data files, please wait..."
  EXTRACT_START_TMS=$(date '+%s')
  unzip "${ORACLE_BASE}"/"${ORACLE_SID}".zip -d "${ORACLE_BASE}"/oradata/ 1> /dev/null
  EXTRACT_END_TMS=$(date '+%s')
  EXTRACT_DURATION=$(( EXTRACT_END_TMS - EXTRACT_START_TMS ))
  echo "CONTAINER: done uncompressing database data files, duration: ${EXTRACT_DURATION} seconds."
  rm "${ORACLE_BASE}"/"${ORACLE_SID}".zip
fi;

I have noticed that this decompression phase takes anywhere from 15 seconds to a minute, depending on the machine the container is started on.
I, and other developers who use these containers for testing, would likely accept the trade-off of a bigger image to reclaim that time at container startup.

@gvenzl gvenzl self-assigned this Oct 29, 2021
@gvenzl gvenzl added the question I have a question that I would like to ask label Oct 29, 2021
@gvenzl

gvenzl commented Nov 27, 2021

Hey @KyleAure, that is a very good comment and a long-standing discussion between users of database images. Some people want the smallest possible image while others want the fastest possible start-up time (which I was always a fan of myself).
I think the discussion comes down to CI/CD environments where the image has to be pulled often between runs, and test environments where the image is pulled once but many containers started up between the test runs.
However, the popularity and demand for the slim images kind of proves that people seem to care more about image size than startup time. Of course, they would want to have the best of both worlds, really.

There is actually a simple trick to get rid of that uncompression time, which I promised @Sanne to document properly (#53): use a volume between runs, if possible, of course. If the database data files are already present inside the container, the container will not extract them again:

# If database does not yet exist, create directory structure
if [ -z "${DATABASE_ALREADY_EXISTS:-}" ]; then
  echo "CONTAINER: first database startup, initializing..."
  create_dbconfig
# Otherwise check that symlinks are in place
else
  echo "CONTAINER: database already initialized."
  remove_config_files
  sym_link_dbconfig
fi;

However, one would have to have such a volume ready to mount, which often can't be done or only with some extra steps.
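For illustration, a minimal sketch of that volume approach (the volume name, container name, and password are placeholders, and /opt/oracle/oradata is an assumed value for the in-container ${ORACLE_BASE}/oradata path; the script is a dry-run by default, echoing the commands instead of executing them, so set RUNTIME=docker or RUNTIME=podman to run it for real):

```shell
#!/bin/sh
set -eu
# Dry-run by default: commands are echoed instead of executed.
RUNTIME="${RUNTIME:-echo}"

# Create a named volume once.
$RUNTIME volume create oracle-data

# First run: the data files are unzipped into the volume.
$RUNTIME run -d --name db -p 1521:1521 -e ORACLE_PASSWORD=test \
  -v oracle-data:/opt/oracle/oradata gvenzl/oracle-xe:21-slim

# Any later run that mounts the same volume finds the files already
# extracted and skips the unzip step.
```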

What I would really be interested to know is whether this would work for your test environment as well, or whether that's not good enough?

@KyleAure

KyleAure commented Dec 3, 2021

I think having a mountable database would be a good solution for some users. But you are correct that it won't be an option for most, depending on the CI/CD system they are using. In our case it might be possible, but it would greatly increase the chances of one of our test suites affecting the next if they don't properly clean up the database before they finish.

Honestly, I think both images would be valuable in their own right. If we look at the image sizes from the past:
Old Method: 10.7GB
18-full: 6.38GB
18-slim: 2GB

I created a Docker image that already has the database installed, using the 18-slim image as the base image, and the size is 4.13 GB. To me this sounds like the happy medium when it comes to size and performance.

I'd be willing to help contribute this to the repo if you think it's a good idea.

@KyleAure KyleAure changed the title Consider offering an expanded images Consider offering an expanded image Dec 3, 2021
@Sanne

Sanne commented Dec 3, 2021

which is to use a volume between runs, if possible

Would that imply that state is persisted? We mostly use these for integration tests: it's actually important that the DB is in pristine state at each start.

But I second @KyleAure 's request. It's of course nice to have small images, but bootstrap times are more important in such contexts.

@KyleAure

KyleAure commented Dec 3, 2021

@Sanne yes, volumes persist data between container runs. I am unsure whether there is a way to mount data into a container (besides a straight copy) without persisting it afterwards.

@astb01

astb01 commented Dec 21, 2021

I may be going in the wrong direction, but for what it's worth, I used Testcontainers and an embedded Oracle XE image whilst integration-testing my code. It did take up to 30 seconds at one point, but I guess that's manageable in a CI/CD setup.

@mvorisek

Yes, for CI the fastest startup is wanted.

Docker images are stored uncompressed locally but are compressed for download. So as long as the download size does not grow significantly, the benefit would be that the decompression is done only once, by Docker itself, which is usually faster because it is optimized for maximum speed; any subsequent startup of an already downloaded and decompressed image will be fast.

@gvenzl

gvenzl commented Aug 21, 2022

Hi @KyleAure, @Sanne, et al,

I finally got around to this and did a couple of tests yesterday, thanks to @mvorisek bringing this up in another conversation over at #124.

As a test source, I took the 21-slim image. This image with an expanded, ready-to-go database inside it grows from 2.08 GB to 4.5 GB:

REPOSITORY                TAG               IMAGE ID      CREATED            SIZE
localhost/gvenzl/test     21-slim           dea8eb736df2  21 minutes ago     2.08 GB
localhost/gvenzl/test     21-slim-expanded  4172bb3d65eb  About an hour ago  4.5 GB

I think there are no big surprises here as that size is roughly what Kyle pointed out above as well.

The compressed sizes of the layers are likewise within the expected range. The expanded image compresses to almost the same size as the non-expanded image, which should be the case given that the compression method of the database data files is similar:

  • 21-slim compressed layer size: 862.7 MB
  • 21-slim-expanded compressed layer size: 893.5 MB
[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 6724b10d2fdd [============>-------------------------] 296.3MiB / 862.7MiB


[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 [====>---------------------------------] 125.0MiB / 893.5MiB

Additionally, the pull tests show that, as expected, the ~15-20 second uncompression time moves into the pull phase, which now has to decompress the additional database files:

  • 21-slim times:
    • 1m 7s
    • 1m 11s
    • 1m 6s
  • 21-slim-expanded times:
    • 1m 28s
    • 1m 26s
    • 1m 32s
21-slim:

[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 6724b10d2fdd [============>-------------------------] 296.3MiB / 862.7MiB
Copying config dea8eb736d done
Writing manifest to image destination
Storing signatures
dea8eb736df2c1134ffb7512a00bfc19e44d33c583973bcb30554c865124025d

real	1m7.066s
user	0m55.922s
sys	0m9.498s

[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 6724b10d2fdd done
Copying config dea8eb736d done
Writing manifest to image destination
Storing signatures
dea8eb736df2c1134ffb7512a00bfc19e44d33c583973bcb30554c865124025d

real	1m11.481s
user	0m55.197s
sys	0m9.099s

[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 6724b10d2fdd done
Copying config dea8eb736d done
Writing manifest to image destination
Storing signatures
dea8eb736df2c1134ffb7512a00bfc19e44d33c583973bcb30554c865124025d

real	1m6.019s
user	0m55.747s
sys	0m8.824s


21-slim-expanded:

[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 [====>---------------------------------] 125.0MiB / 893.5MiB
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m28.604s
user	1m41.860s
sys	0m14.921s


[gvenzl@localhost oci-oracle-xe]$ podman rmi -f dea8eb736df2 4172bb3d65eb
Untagged: docker.io/gvenzl/test:21-slim-expanded
Deleted: 4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983
Error: dea8eb736df2: image not known
[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 done
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m26.727s
user	1m41.024s
sys	0m14.650s


[gvenzl@localhost oci-oracle-xe]$ podman rmi -f dea8eb736df2 4172bb3d65eb
Untagged: docker.io/gvenzl/test:21-slim-expanded
Deleted: 4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983
Error: dea8eb736df2: image not known
[gvenzl@localhost oci-oracle-xe]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 done
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m32.739s
user	1m47.881s
sys	0m15.778s

The important point here is that from a timing perspective nothing will change for someone who is pulling an image and starting a container once, as the uncompression phase just moves from the container startup to the image pull operation.
However, one would assume that for CI/CD jobs, the image will only be pulled one time in the beginning and many containers started throughout the jobs, hence the overhead of uncompression will only occur once.
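That pull-once / run-many pattern can be sketched as a rough CI script (the suite names and password are placeholders; it is a dry-run by default, echoing the commands, so set RUNTIME=podman or RUNTIME=docker to run it for real):

```shell
#!/bin/sh
set -eu
# Dry-run by default: commands are echoed instead of executed.
RUNTIME="${RUNTIME:-echo}"
IMAGE="${IMAGE:-gvenzl/test:21-slim-expanded}"

# Paid once per job: the pull (and, with the expanded image,
# the decompression of the database files).
$RUNTIME pull "$IMAGE"

# Paid per suite: only the now-fast container startup.
for suite in suite-a suite-b suite-c; do
  $RUNTIME run --rm -d --name "$suite" -e ORACLE_PASSWORD=test "$IMAGE"
  # ... run the test suite against this container here ...
  $RUNTIME rm -f "$suite"
done
```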

If you would like to test the image, you can currently get it via docker run gvenzl/test:21-slim-expanded

@mvorisek
Thank you very much for looking into this feature request. The -expanded variant seems like a very good solution: it allows faster boot times while still keeping the smaller image for storage-sensitive environments.

Yes, the uncompression time is moved into the pull phase, my times:

  • stable slim: 18 sec pull, 28 sec boot time
  • testing expanded slim: 30 sec pull, 14 sec boot time

Moved, of course, only once in the expanded case; after that, the boot time is about twice as fast.

@Sanne

Sanne commented Aug 22, 2022

Awesome, thanks a lot!
Since in our use case we download such images at most once a day but then boot them many times, the download times are not too interesting for us; they are also mostly limited by bandwidth, and one can easily add a caching layer if that is a constraint.

For bootstrap times the new images definitely help:

My experiments:

podman run --rm=true --name=HibernateTestingOracle -p 1521:1521 -e ORACLE_PASSWORD=hibernate_orm_test gvenzl/test:21-slim-expanded

Connected in ~5 seconds.

Previously:

podman run --rm=true --name=HibernateTestingOracle -p 1521:1521 -e ORACLE_PASSWORD=hibernate_orm_test gvenzl/oracle-xe:21-slim

Connected in ~16 seconds.

That's very nice!
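For reference, the "connected in ~N seconds" numbers above can be measured with a small polling loop that waits until the listener port accepts TCP connections (a sketch; it assumes bash is available for the /dev/tcp pseudo-device and that the port is published, e.g. with -p 1521:1521):

```shell
#!/bin/sh
# Poll until host:port accepts a TCP connection, then report elapsed time.
wait_for_port() {
  host=$1; port=$2; timeout=${3:-120}
  start=$(date '+%s')
  # bash's /dev/tcp pseudo-device attempts a TCP connect on open.
  while ! bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; do
    elapsed=$(( $(date '+%s') - start ))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 1
  done
  echo "ready after $(( $(date '+%s') - start ))s"
}

# Example, right after starting the container:
# wait_for_port 127.0.0.1 1521 120
```

Note that "port accepts connections" is an approximation of "database ready to connect", so the numbers may differ slightly from a real client connect.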

@gvenzl

gvenzl commented Aug 22, 2022

Great, thanks both! I will wrap up these images then (including doc, etc.) and add them to the repo. Stay tuned!

@Sanne

Sanne commented Aug 22, 2022

Awesome - as soon as they're available I'll switch all Quarkus users to these 👍

gvenzl added a commit that referenced this issue Aug 27, 2022
Signed-off-by: gvenzl <gerald.venzl@gmail.com>
gvenzl added a commit that referenced this issue Aug 27, 2022
Signed-off-by: gvenzl <gerald.venzl@gmail.com>
@gvenzl

gvenzl commented Aug 28, 2022

Alright, so I got to take a look into this and there are now *-faststart images available on the registry for all flavors of these images, for example gvenzl/oracle-xe:slim-faststart.

These faststart images have a new third layer added in which the uncompressed database files are sitting.
For the 21-slim-faststart, that's an extra 454 MB compressed in that layer when pulling them:

[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f [===========================>----------] 632.6MiB / 862.0MiB
Copying blob f7026eee0f80 [======================>---------------] 270.6MiB / 454.0MiB

The nice thing about this approach is that the other two layers can be reused from the non-faststart version. For example, if someone has already the 21-slim image downloaded, the first two layers will be reused and only the extra 454 MB are pulled:

[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f skipped: already exists
Copying blob f7026eee0f80 [==============================>-------] 374.7MiB / 454.0MiB

Doing some quick tests on my end with a relatively fast internet connection, it takes about 50 seconds to pull the faststart image when the non-faststart equivalent is already present:

[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f skipped: already exists
Copying blob f7026eee0f80 done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	0m50.848s
user	0m55.215s
sys	0m8.010s
[gvenzl@localhost test]$ podman rmi 4172bb3d65eb
Error: 4172bb3d65eb: image not known
[gvenzl@localhost test]$ podman rmi 0ee4bf01376a
Untagged: docker.io/gvenzl/test:21-slim-faststart
Deleted: 0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f skipped: already exists
Copying blob f7026eee0f80 done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	0m48.322s
user	0m54.892s
sys	0m8.292s
[gvenzl@localhost test]$ podman rmi 4172bb3d65eb
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f skipped: already exists
Copying blob f7026eee0f80 done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	0m48.993s
user	0m56.230s
sys	0m8.491s

Comparatively, if no images are yet present on the system, it takes me about 1min 30sec to pull the image, which is about the same as it was in the test from last week above:

[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob f7026eee0f80 done
Copying blob 50b5143a8e9f done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	1m26.772s
user	1m46.901s
sys	0m16.653s
[gvenzl@localhost test]$ podman rmi 0ee4bf01376a
Untagged: docker.io/gvenzl/test:21-slim-faststart
Deleted: 0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob f7026eee0f80 done
Copying blob 50b5143a8e9f done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	1m23.901s
user	1m47.527s
sys	0m15.984s
[gvenzl@localhost test]$ podman rmi 0ee4bf01376a
Untagged: docker.io/gvenzl/test:21-slim-faststart
Deleted: 0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-faststart
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-faststart...
Getting image source signatures
Copying blob f4069ceb7689 skipped: already exists
Copying blob 50b5143a8e9f done
Copying blob f7026eee0f80 done
Copying config 0ee4bf0137 done
Writing manifest to image destination
Storing signatures
0ee4bf01376a8bec0dd08bc4ded900d6f3149de24d8452356cab51edadab197b

real	1m22.764s
user	1m48.417s
sys	0m16.017s
[gvenzl@localhost test]$

The reason is that the container runtime pulls the layers in parallel anyway, so there is no sequential time build-up: by the time the larger layer has been pulled, the smaller one has also already been downloaded. However, this is, of course, highly dependent on one's internet connection and host machine processing power. Here again are the pull times from last week's test with the two-layer image:

[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 [===========>--------------------------] 279.8MiB / 893.5MiB



[gvenzl@localhost test]$ podman rmi 4172bb3d65eb
Untagged: docker.io/gvenzl/test:21-slim-expanded
Deleted: 4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 done
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m23.045s
user	1m33.020s
sys	0m14.142s
[gvenzl@localhost test]$ podman rmi 4172bb3d65eb
Untagged: docker.io/gvenzl/test:21-slim-expanded
Deleted: 4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 done
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m22.849s
user	1m35.890s
sys	0m14.133s
[gvenzl@localhost test]$ podman rmi 4172bb3d65eb
Untagged: docker.io/gvenzl/test:21-slim-expanded
Deleted: 4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983
[gvenzl@localhost test]$ time podman pull gvenzl/test:21-slim-expanded
Resolving "gvenzl/test" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/test:21-slim-expanded...
Getting image source signatures
Copying blob 903907d5bec2 skipped: already exists
Copying blob 8754234225f6 done
Copying config 4172bb3d65 done
Writing manifest to image destination
Storing signatures
4172bb3d65ebdeb9895e6becb1a54f3502898b708d5f34c4b9fc3e80d7e6c983

real	1m28.595s
user	1m33.865s
sys	0m13.650s
[gvenzl@localhost test]$

Another benefit of sharing the first two layers is that if the faststart image flavor is already on the system, pulling the non-faststart image becomes a no-op:

[gvenzl@localhost tests]$ time podman pull gvenzl/oracle-xe:21-slim-faststart
Resolving "gvenzl/oracle-xe" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/oracle-xe:21-slim-faststart...
Getting image source signatures
Copying blob c2d136b4d88e skipped: already exists
Copying blob 0ae00596400b done
Copying blob d45a25137769 done
Copying config 59f39a0630 done
Writing manifest to image destination
Storing signatures
59f39a0630e2135ed3f7f87a39e759f164bb029e7b7bf7272f412b23b42e3327

real	1m28.106s
user	1m47.055s
sys	0m17.877s
[gvenzl@localhost tests]$ time podman pull gvenzl/oracle-xe:21-slim
Resolving "gvenzl/oracle-xe" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/gvenzl/oracle-xe:21-slim...
Getting image source signatures
Copying blob 0ae00596400b skipped: already exists
Copying blob c2d136b4d88e [--------------------------------------] 0.0b / 0.0b
Copying config 1fbee71dc3 done
Writing manifest to image destination
Storing signatures
1fbee71dc3570a12285c88e56f15a1ce9ea0f5237e85c0b97515ec934b817c2e

real	0m1.596s
user	0m0.114s
sys	0m0.064s

I hope this is what you were looking for.

gvenzl added a commit that referenced this issue Aug 28, 2022
* Introduce faststart images (ER #36)

Signed-off-by: gvenzl <gerald.venzl@gmail.com>

* Provide images on GHCR (ER #131)

Signed-off-by: gvenzl <gerald.venzl@gmail.com>
@KyleAure

This looks really great, @gvenzl! Thank you so much for looking into this!

@gvenzl

gvenzl commented Aug 30, 2022

Of course, thanks for bringing it up and, of course, for using these images!

@Sanne

Sanne commented Aug 30, 2022

Awesome, many thanks @gvenzl !
Switching Quarkus to use these now.

@gvenzl

gvenzl commented Aug 30, 2022

Great, thank you! I'm curious, do you have some timings that you could share of how long tests took before and after?

@Sanne

Sanne commented Aug 30, 2022

It would vary significantly, but, for example, in Quarkus "core" there are at least three different Maven modules whose integration tests need to run against an Oracle RDBMS on each CI run, and we prefer not to reuse the instance across modules. So right there we'll save about 1m per CI build. That's very welcome news, as CI build times are a problem - even worse for people testing locally.

An end user's workflow - assuming they use Oracle - implies starting one of these each time a dev-mode session is initiated, and at least once (depending on the project structure) for each build.

On my particular machine I'm saving ~18 seconds each time there's such a need; I would imagine the savings are larger for other users.

But more than the exact number of seconds, I'd say it helps with the feeling of smoothness and brings confidence to people using it - it's just much nicer to work with :) A quick start has a psychological impact, making it feel lightweight rather than "bloated", even if there's of course no correlation with its actual memory consumption and runtime efficiency.

For reference, I really like the PostgreSQL container image; it starts in about a second which makes it such a pleasure to work with. Mentioning it in case you get bored :-D

@gvenzl

gvenzl commented Sep 2, 2022

Thanks @Sanne, appreciate the feedback and fully understand.

Regarding

A quick start has a psychological impact, making it feel lightweight rather than "bloated", even if there's of course no correlation with its actual memory consumption and runtime efficiency.

I will send all the people complaining that the image is too bloated because "it's too big" your way from now on! ;)

This issue was closed.