
Conversation

@fulminemizzega
Contributor

This commit fixes issue #528 by adding default values to the layers and outputformat parameters. This change aligns the behavior with podman-remote.

@inknos, take a look; I've also added a test for this PR.
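
For context, a hypothetical usage sketch of the behavior after the fix (the socket path and the layers kwarg are assumptions, not taken from the diff):

# Illustrative only: with the fix, the layers/outputformat query
# parameters get explicit defaults, so a plain build caches layers the
# same way podman-remote does.
from podman import PodmanClient

with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
    image, logs = client.images.build(path="./app")
    # Caching can still be disabled by passing the parameter explicitly.
    image, logs = client.images.build(path="./app", layers=False)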

@inknos
Contributor

inknos commented Jun 23, 2025

/packit retest-failed

@inknos self-requested a review June 23, 2025 11:31
Contributor

@inknos left a comment

Thanks @fulminemizzega for this PR. Please address my comments and we'll move forward :) Also, don't hesitate to ask questions or raise concerns.

Comment on lines 207 to 208
def default(value, def_value):
    return def_value if value is None else value
Contributor

Having a function for this is overkill; the defaults can be supplied directly when reading kwargs.

Consider using kwargs.get(key, default), like here.
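
A minimal sketch of the suggested pattern (the key name and default are illustrative):

# dict.get already provides the fallback, so no helper function is needed.
layers = kwargs.get("layers", True)
# Note: the fallback applies only when the key is absent, not when the
# caller explicitly passes layers=None.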

Contributor Author

I did not know of this. Makes more sense

Contributor Author

This should be ok now. I'm a bit on the fence about this: I started thinking about using an enum type for the outputformat parameter, so I played with the code in another branch of my fork, removed all the kwargs handling, and moved all the keys to function parameters with type annotations and default values. Why do so many functions in this project use kwargs and a _render_params function? Is this something that could be useful, or is it better if I leave it alone?
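
For illustration, what the enum idea might look like (purely exploratory; the values are the standard OCI and Docker manifest media types, not code from the branch):

from enum import Enum


class OutputFormat(str, Enum):
    """Manifest media types for the outputformat parameter."""
    OCI = "application/vnd.oci.image.manifest.v1+json"
    DOCKER = "application/vnd.docker.distribution.manifest.v2+json"


def build(outputformat: OutputFormat = OutputFormat.OCI) -> dict:
    # str-backed enum members serialize directly as query-string values.
    return {"outputformat": outputformat.value}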

Contributor

Why do so many functions in this project use kwargs

Historical reasons probably?

@jwhonce might give us a good answer

Member

@fulminemizzega kwargs was a good escape hatch for podman-py to ease the migration for scripts from docker-py. It allowed developers access to podman features without having to port their whole scripts. From there I suspect there are areas where new development followed the old form even if it didn't make as much sense.

The flexibility of args/kwargs has always been very Pythonic and pre-dates type hinting.

Contributor Author

I see, thanks for your answer. Would there be any value in a PR moving most of the kwargs to typed parameters, at least for images.build? I've tried to do it, but my changes break existing code (I made all the strings that are used for paths become pathlib.Path, so the argument has to be Path("something") instead of just "something").
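
One way the conversion could avoid breaking existing callers might be to accept both forms (an illustrative sketch, not code from the fork):

import os
from pathlib import Path


def build(path: str | os.PathLike) -> Path:
    # Path() accepts plain strings as well as Path objects, so existing
    # callers passing "something" keep working unchanged.
    return Path(path)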

        self.assertIsNotNone(image)
        self.assertIsNotNone(image.id)

    def test_build_cache(self):
Contributor

More tests are probably needed to ensure that the defaults are passed, and that images.build also accepts the other parameters.

Contributor Author

There are defaults like those above, and then there is dockerfile that is generated randomly... maybe this is something that should be done in a unit test?

Contributor

There are defaults like those above, and then there is dockerfile that is generated randomly... maybe this is something that should be done in a unit test?

I think you can do it all through integration tests by inspecting the requests that are passed. If you want, I can write some pseudo-code to help you understand how I would design it.

Contributor Author

Yes please, I'm not familiar with this kind of stuff… how would I inspect what is requested? I've seen how it is mocked in unit tests; is it related?

Contributor

Yes please, I'm not familiar with this kind of stuff… how would I inspect what is requested? I've seen how it is mocked in unit tests; is it related?

Actually my bad, I was looking into many things at the same time and got confused. You are correct: unit tests are the place where you check your function. Also yes, you said it right, you need to mock a request and test that your image build passes the parameters correctly in the default/non-default cases.
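
A self-contained sketch of that kind of test (the endpoint, parameter names, and defaults below are placeholders, not podman-py internals):

import requests
import requests_mock


def build(session, layers=True,
          outputformat="application/vnd.oci.image.manifest.v1+json"):
    # Stand-in for a build call that forwards its defaults as query params.
    params = {"layers": str(layers).lower(), "outputformat": outputformat}
    return session.post("http://podman.invalid/libpod/build", params=params)


with requests_mock.Mocker() as m:
    adapter = m.post("http://podman.invalid/libpod/build", json={})
    build(requests.Session())
    qs = adapter.last_request.qs  # parsed query string of the recorded request
    assert qs["layers"] == ["true"]  # default: caching enabled
    assert "outputformat" in qs      # default format was forwarded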

Contributor Author

More tests are probably needed to ensure that the defaults are passed, and that images.build also accepts the other parameters.

About the other parameters: you mean that, besides the new defaults, there should be a test that uses all of them and inspects the request? In that case, there is another issue. While experimenting with the parameter-to-argument conversion (see above), I discovered that _render_params parses a "remote" key. It is not documented, and it cannot be used alone, because _render_params requires either "path" or "fileobj" (both of which are meaningless when remote is used). Other than that, the build function is able to handle it, because body is initialized to None, and when all the if/elif branches fail it stays None, which is right for remote. But if dockerfile is None, it is generated randomly in _render_params, and that leads to yet another issue: "If the URI points to a tarball and the dockerfile parameter is also specified, there must be a file with the corresponding path inside the tarball" (https://docs.podman.io/en/latest/_static/api.html#tag/images/operation/ImageBuildLibpod).
I understand this is going quite a bit beyond the scope of the "cache" issue; I apologize in advance.
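
For reference, a rough reconstruction of the body-selection behavior described above (the helper and keys are illustrative, not the actual podman-py source):

import io
import tarfile


def make_context_tar(path):
    # Hypothetical helper: pack the build-context directory as a tarball.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(path, arcname=".")
    buf.seek(0)
    return buf


def select_body(**kwargs):
    body = None
    if "path" in kwargs:
        body = make_context_tar(kwargs["path"])
    elif "fileobj" in kwargs:
        body = kwargs["fileobj"]
    # With only "remote" given, both branches are skipped: body stays None
    # and the daemon fetches the build context from the URI itself.
    return body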

Contributor Author

I've added a unit test that checks the default parameters for images.build; with it, I also corrected the mock URL in two other test cases.

Member

@Honny1 left a comment

Thanks, LGTM. Just one nonblocking nit.


    def test_build_cache(self):
        """Check that building twice the same image uses caching"""
        buffer = io.StringIO("""FROM quay.io/libpod/alpine_labels:latest\nLABEL test=value""")
Member

You can use FROM scratch so you don't rely on an external image. (This is nonblocking)

Suggested change
- buffer = io.StringIO("""FROM quay.io/libpod/alpine_labels:latest\nLABEL test=value""")
+ buffer = io.StringIO("""FROM scratch\nLABEL test=value""")

Contributor Author

I've followed your advice and also applied it in test_build.

@Honny1
Member

Honny1 commented Sep 24, 2025

/packit retest-failed

@inknos
Contributor

inknos commented Sep 25, 2025

/packit build

@Honny1
Member

Honny1 commented Nov 24, 2025

/packit retest-failed

@Honny1
Member

Honny1 commented Nov 24, 2025

It seems like there is an issue with the tests. Any ideas? @inknos @fulminemizzega

=================================== FAILURES ===================================
____________________ ImagesIntegrationTest.test_image_crud _____________________

self = <podman.tests.integration.test_images.ImagesIntegrationTest testMethod=test_image_crud>

    def test_image_crud(self):
        """Test Image CRUD.
    
        Notes:
            Written to maximize reuse of pulled image.
        """
    
        with self.subTest("Pull Alpine Image"):
            image = self.client.images.pull("quay.io/libpod/alpine", tag="latest")
            self.assertIsInstance(image, Image)
            self.assertIn("quay.io/libpod/alpine:latest", image.tags)
            self.assertTrue(self.client.images.exists(image.id))
    
        with self.subTest("Inspect Alpine Image"):
            image = self.client.images.get("quay.io/libpod/alpine")
            self.assertIsInstance(image, Image)
            self.assertIn("quay.io/libpod/alpine:latest", image.tags)
    
        with self.subTest("Retrieve Image history"):
            ids = [i["Id"] for i in image.history()]
>           self.assertIn(image.id, ids)
E           AssertionError: '961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4' not found in ['f88416a00aabae4c9520ee61e1692b50faf551e0d08e2196b736a9e0e63ad060', '<missing>']

podman/tests/integration/test_images.py:69: AssertionError

@fulminemizzega
Contributor Author

It seems like there is an issue with the tests. Any ideas? @inknos @fulminemizzega

For this sub-case I still do not have any idea. I have had it working on 2 different systems, after resetting my podman environment (with podman system reset, since they've deprecated the BoltDB database without any migration tool). Before resetting it I also had this test failing, and I've just checked: it would fail again now. I really have no idea why the libpod/alpine image's history does not contain the id of the image itself.
Just to be sure, I've run podman system reset again, and now after a pull I get:

# podman history libpod/alpine
ID            CREATED      CREATED BY                                     SIZE        COMMENT
961769676411  6 years ago  /bin/sh -c #(nop)  CMD ["/bin/sh"]             0B
<missing>     6 years ago  /bin/sh -c #(nop) ADD file:fe64057fbb83dcc...  5.84MB

which is ok.
On another system it is:

$ podman history libpod/alpine
ID            CREATED      CREATED BY                                     SIZE        COMMENT
d054a073bda4  6 years ago  /bin/sh -c #(nop)  CMD ["/bin/sh"]             0B
<missing>     6 years ago  /bin/sh -c #(nop) ADD file:fe64057fbb83dcc...  5.84MB

There is another failing test where I have a theory, and it is related to caching: sub-test "Delete unused Images" in test_images.py:test_image_crud (line 112) will fail because test_containers.py:test_container_commit leaves behind an image built from libpod/alpine. When the sub-test tries to delete all the unused images, it fails because the libpod/alpine image is in use by the new localhost/busybox.local:unittest.
I did not catch this because I do not run all the tests on my dev machine, since doing so destroys all running containers... but the image history test case is a mystery.

@fulminemizzega
Contributor Author

Disregard what I wrote yesterday. The issue is related to caching and an intermediate image left over by test_containers.py:test_container_rm_anonymous_volume. This, plus some other steps performed in test_images.py, is enough to explain everything, I think. I want to reproduce the steps with just podman and see if the results are the same.

@fulminemizzega
Contributor Author

I have written most of the details in the last commit; I do not know whether the way I solved it is reasonable. I think the cleanup should happen even if test_containers.py:test_container_rm_anonymous_volume fails; otherwise, if that test fails and leaves behind the built image, the same tests in test_images.py will fail. On the other hand, the tested behavior (podman deleting anonymous volumes) is not really a concern of podman-py...
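
One common way to guarantee that cleanup runs even when the test body fails is unittest's addCleanup; a sketch, where the client attribute and image tag are assumptions:

import unittest
from unittest import mock


class ContainerRmVolumeTest(unittest.TestCase):
    def setUp(self):
        self.client = mock.MagicMock()  # stands in for the Podman client

    def test_container_rm_anonymous_volume(self):
        image_tag = "localhost/unittest/anonvol:latest"  # hypothetical tag
        # ... build the image from the Containerfile here ...
        # Registered cleanups fire even if a later assertion raises, so the
        # intermediate image cannot leak into test_images.py.
        self.addCleanup(self.client.images.remove, image_tag, force=True)
        # ... rest of the test ...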

@Honny1
Member

Honny1 commented Nov 27, 2025

/packit rebuild-failed

Member

@Honny1 left a comment

Thanks for the comprehensive investigation. LGTM

@Honny1
Member

Honny1 commented Nov 27, 2025

PTAL @inknos

@fulminemizzega
Contributor Author

I've investigated the failing pre-commit check a bit: the tmt lint step reports an error. I can reproduce it with Python 3.14 and tmt 1.39; it works fine with Python 3.13. If that is acceptable, updating tmt to the latest version in .pre-commit-config.yaml (I tested 1.62.1 with Python 3.14.0) fixes it.

@Honny1
Member

Honny1 commented Dec 1, 2025

It seems this needs an update. I will do the update.

@Honny1
Member

Honny1 commented Dec 1, 2025

PR with bump: #606

@Honny1
Member

Honny1 commented Dec 2, 2025

It should be fixed. Can you rebase onto main?

@fulminemizzega
Contributor Author

It should be fixed. Can you rebase onto main?

Yes. Can I also use the bot commands to restart tests, or do those require more privileges than just being the author of a PR in this repo?

Member

@Honny1 left a comment

LGTM, thanks.

@inknos, can you do the final review?

@inknos
Contributor

inknos commented Dec 4, 2025

I would prefer the commits to be squashed together, but we don't have that in the contributing doc (I'll make a change for the future), so it's not a blocker.

Otherwise it's clean, LGTM

@openshift-ci
Contributor

openshift-ci bot commented Dec 4, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fulminemizzega, Honny1, inknos

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label Dec 4, 2025
@fulminemizzega
Contributor Author

I would prefer the commits to be squashed together, but we don't have that in the contributing doc (I'll make a change for the future), so it's not a blocker.

Otherwise it's clean, LGTM

I can squash them. Could you give me a rough guideline on how you would like them to be?
I think doing a single commit would look a bit weird: some things belong together (the first 3 commits, with the actual fix, the docs, and the kwargs.get usage), others less so, like the fix in d79786a or the last one that touches the container-running integration test. Maybe I can squash everything into 3 commits: one for code and docs, one for tests, and another for the "ancillary" changes that happened while touching this PR? Or instead squash code, docs and tests into one and the other things into another commit? What should the commit message be?

@inknos
Contributor

inknos commented Dec 4, 2025

My personal take is that no commit should break the code, so no partial changes per commit. For example, here you import random but do not use it in the same commit. I know it's a minor thing, but changes should be grouped logically. I think it would be better to have at least the bug fix in one commit and the small fixes in a separate one.

For the commit message, this is a perfectly valid description, and it references the issue too. Other pieces of info that you left in the commit bodies are very useful, so feel free to concatenate the commit bodies as you squash them.

Finally, if you end up in a rebase/squash nightmare, I am fine with merging it as is or as a single commit :)

@Honny1
Member

Honny1 commented Dec 5, 2025

self = <podman.tests.integration.test_containers.ContainersIntegrationTest testMethod=test_container_rm_anonymous_volume>

        def test_container_rm_anonymous_volume(self):
            with self.subTest("Check anonymous volume is removed"):
                container_file = """
    FROM alpine
    VOLUME myvol
    ENV foo=bar
    """
                tmp_file = tempfile.mkstemp()
>               file = open(tmp_file, 'w')
E               TypeError: expected str, bytes or os.PathLike object, not tuple

podman/tests/integration/test_containers.py:193: TypeError
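
The TypeError above comes from tempfile.mkstemp() returning an (fd, path) tuple rather than a path; a minimal corrected sketch:

import os
import tempfile

fd, tmp_path = tempfile.mkstemp()
with os.fdopen(fd, "w") as file:  # reuse the already-open descriptor
    file.write("FROM alpine\nVOLUME myvol\nENV foo=bar\n")
os.unlink(tmp_path)  # remove the temporary file afterwards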

@fulminemizzega
Contributor Author


The ugliest commit ever created, ignore all of this. Should never have existed.

This commit fixes issue containers#528 by adding a default value to parameters
layers and outputformat. This change aligns the behavior with
podman-remote. Function documentation is updated accordingly.
A new integration test (test_images.py:test_build_cache) checks that
caching is enabled and also that disabling it produces a different
output.

Signed-off-by: Federico Rizzo <fulminemizzega@yahoo.it>

This commit also fixes a wrong mock URL in the same unit test.

Signed-off-by: Federico Rizzo <fulminemizzega@yahoo.it>

- remove image left by integration test
  test_containers.py:test_container_rm_anonymous_volume: when caching is
  enabled, an intermediary image generated by the build function call
  (corresponding to the layer created by "VOLUME myvol", see container_file
  in the first lines of function test_container_rm_anonymous_volume)
  breaks test_images.py:test_image_crud sub-test "Delete Image", where
  the same base image (quay.io/libpod/alpine:latest) is supposed to be
  removed, but is instead untagged, as it is in "use" by the other
  layers.
  This also breaks sub-test "Delete unused Images" and, on successive
  runs, "Retrieve Image history".
- avoid using external images in test_images.py:test_build
- fix deprecation and unused variable warnings

Signed-off-by: Federico Rizzo <fulminemizzega@yahoo.it>
@fulminemizzega
Contributor Author

I have sorted out the mess a bit: everything is now split into 3 commits. Let me know if this is reasonable or if it should be squashed more, or differently. Half of the tests failed to provision; I do not know if this is my fault or something else's.

@Honny1
Member

Honny1 commented Dec 8, 2025

/packit retest-failed

@fulminemizzega
Contributor Author

/packit retest-failed

Can you give it another try? There is still one left.

@inknos
Contributor

inknos commented Dec 8, 2025

/lgtm

@openshift-ci bot added the lgtm label Dec 8, 2025
@openshift-merge-bot bot merged commit c99ecbd into containers:main Dec 8, 2025
22 checks passed
