
Conversation


@aniket-sahu-ibmx aniket-sahu-ibmx commented Dec 16, 2025

When the script tries to label a newly created partition, it can fail with an error saying the partition is not available, because there is a slight delay between defining the partition and it appearing in the block device list. This PR fixes that by retrying the labeling call with a wait of up to 10 seconds.

Summary by CodeRabbit

  • Tests
    • Improved disk label creation during pool setup: the test now retries label application with a timed wait and feedback, reducing race-condition flakiness and improving error recovery during setup.



coderabbitai bot commented Dec 16, 2025

Walkthrough

The change modifies disk label creation during pool pre-setup in virttest/utils_test/libvirt.py. The direct call to mk_label(device_name, disk_label) is wrapped in a nested function, mk_label_wait, which catches exceptions and returns False on failure; that wrapper is then polled via utils_misc.wait_for for up to 10 seconds, emitting a retry message, until the label is created. This shifts error handling from immediate failure to a retry-on-failure approach.
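
For reference, the retry pattern described above looks roughly like the fragment below. This is a sketch reconstructed from the walkthrough and the diffs further down in this review, not the exact merged hunk; mk_label, device_name, disk_label, and utils_misc are assumed to be available in the surrounding pool pre-setup code in libvirt.py.

    def mk_label_wait():
        try:
            mk_label(device_name, disk_label)
        except Exception:  # the PR uses "as e"; Ruff flags it as unused below
            # The device node may not be visible in the block device
            # list yet; report failure so wait_for polls again.
            return False
        return True

    # Poll for up to 10 seconds instead of failing on the first attempt.
    utils_misc.wait_for(mk_label_wait, 10,
                        text="Label not created as device is "
                             "unavailable. Retrying...")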

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • Verify the retry logic correctly waits up to 10 seconds with appropriate timing intervals
  • Confirm exception handling in mk_label_wait captures all relevant failure scenarios
  • Review the specific retry message content and logging behavior
  • Ensure the nested function scope and variable references are correct

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title clearly and specifically describes the main change: adding wait time for the mk_label check to handle partition initialization delays.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2872a37 and e943377.

📒 Files selected for processing (1)
  • virttest/utils_test/libvirt.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • virttest/utils_test/libvirt.py


@aniket-sahu-ibmx changed the title from "Add wait time for mk_label check" to "libvirt.py: Add wait time for mk_label check" on Dec 16, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
virttest/utils_test/libvirt.py (2)

1071-1076: Refine exception handling for better debugging.

The nested function catches a bare Exception, which is overly broad and could mask unexpected errors. Additionally, the exception variable e is assigned but never used, making debugging more difficult.

Consider these improvements:

 def mk_label_wait():
     try:
         mk_label(device_name, disk_label)
-    except Exception as e:
+    except process.CmdError as e:
+        LOG.debug("mk_label failed: %s. Retrying...", e)
         return False
     return True

If you need to catch multiple exception types, use a tuple:

except (process.CmdError, OSError) as e:

The mk_label function (line 833) calls process.run(), which raises process.CmdError on failure, so catching that specific exception type would be more appropriate.


1077-1078: Check the return value from wait_for to handle timeout scenarios.

The wait_for function returns None if the operation times out, but this return value is not being checked. This could lead to the pool setup continuing even when the disk label wasn't successfully created, potentially causing failures later in the process.

Apply this diff to handle the timeout case:

-            utils_misc.wait_for(mk_label_wait, 10, text="Label not created \
-                                as device is unavailable. Retrying...")
+            result = utils_misc.wait_for(
+                mk_label_wait, 10,
+                text="Label not created as device is unavailable. Retrying..."
+            )
+            if not result:
+                raise exceptions.TestError(
+                    "Failed to create label on %s after 10 seconds" % device_name
+                )

This ensures that if the device doesn't become available within the timeout period, the test fails with a clear error message rather than continuing with an unlabeled disk.
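
Taken together, the two suggestions amount to something like the sketch below. This is illustrative only: it assumes process, LOG, exceptions, utils_misc, mk_label, device_name, and disk_label are already available in libvirt.py, as the snippets above imply.

    def mk_label_wait():
        try:
            mk_label(device_name, disk_label)
        except process.CmdError as e:
            # Log the parted failure and let wait_for retry.
            LOG.debug("mk_label failed: %s. Retrying...", e)
            return False
        return True

    result = utils_misc.wait_for(
        mk_label_wait, 10,
        text="Label not created as device is unavailable. Retrying...")
    if not result:
        # wait_for returns None on timeout; fail loudly rather than
        # continuing pool setup with an unlabeled disk.
        raise exceptions.TestError(
            "Failed to create label on %s after 10 seconds" % device_name)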

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c3225a and 2872a37.

📒 Files selected for processing (1)
  • virttest/utils_test/libvirt.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
virttest/utils_test/libvirt.py (1)
virttest/utils_misc.py (2)
  • wait_for (557-592)
  • wait_for (4186-4234)
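
For context, wait_for (listed above) is the polling helper the fix relies on: it calls the supplied function repeatedly until it returns a truthy value or the timeout expires, returning that value or None on timeout, as the comment above notes. A self-contained illustration of those semantics, not the actual utils_misc implementation, might look like:

    import time

    def wait_for(func, timeout, step=1.0, text=None):
        """Illustrative stand-in: poll func() until truthy or timeout."""
        end = time.monotonic() + timeout
        while time.monotonic() < end:
            if text:
                print(text)
            result = func()
            if result:
                return result
            time.sleep(step)
        return None

    # Example: wait up to 10 seconds for a flaky check to succeed.
    print(wait_for(lambda: True, 10, text="Retrying..."))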
🪛 Ruff (0.14.8)
virttest/utils_test/libvirt.py

1074-1074: Do not catch blind exception: Exception

(BLE001)


1074-1074: Local variable e is assigned to but never used

Remove assignment to unused variable e

(F841)

@aniket-sahu-ibmx (Author) commented

PR notes
Job log before changes:

01:19:56 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
01:19:56 INFO    : Check for environment
01:19:57 INFO    : Creating temporary mux dir
01:19:57 INFO    :
01:19:57 INFO    : Running Guest Tests Suite backuprestore
01:19:57 INFO    : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/stuff/tests/data/avocado-vt/backends/libvirt/cfg/backuprestore.cfg                 --force-job-id f4082c7b900e58fb9cdce11a9a3ff89f5c28b892                 --job-results-dir /home/stuff/tests/results  --vt-only-filter                                             "virtio_scsi virtio_net qcow2 Fedora.43.ppc64le"
No python imaging library installed. Screendump and Windows guest BSOD detection are disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. PPM image conversion to JPEG disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. Screendump and Windows guest BSOD detection are disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. PPM image conversion to JPEG disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
JOB ID     : f4082c7b900e58fb9cdce11a9a3ff89f5c28b892
JOB LOG    : /home/stuff/tests/results/job-2025-12-16T01.19-f4082c7/job.log
 (01/11) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
 (01/11) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (42.19 s)
 (02/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_internal: STARTED
 (02/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_internal: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (42.33 s)
 (03/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_no: STARTED
 (03/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_no: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (40.59 s)
 (04/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_none: STARTED
 (04/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_none: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (40.68 s)
 (05/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: STARTED
 (05/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (45.10 s)
 (06/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: STARTED
 (06/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (40.44 s)
 (07/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: STARTED
 (07/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (40.61 s)
 (08/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: STARTED
 (08/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (43.63 s)
 (09/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: STARTED
 (09/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (41.96 s)
 (10/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: STARTED
 (10/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: ERROR: Command 'parted -s /dev/sdl mklabel msdos' failed.\nstdout: b''\nstderr: b'Error: Error opening /dev/sdl: No such device or address\n'\nadditional_info: None (41.78 s)
 (11/11) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
 (11/11) io-github-autotest-libvirt.remove_guest.without_disk: PASS (5.17 s)
RESULTS    : PASS 2 | ERROR 9 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /home/stuff/tests/results/job-2025-12-16T01.19-f4082c7/results.html
JOB TIME   : 470.61 s

Test summary:
02-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_internal: ERROR
03-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_no: ERROR
04-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_none: ERROR
05-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: ERROR
06-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: ERROR
07-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: ERROR
08-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: ERROR
09-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: ERROR
10-type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: ERROR
01:27:50 INFO    :
01:27:50 INFO    : Summary of test results can be found below:
TestSuite                                                                                TestRun    Summary

guest_backuprestore                                                                      Run        Successfully executed
/home/stuff/tests/results/job-2025-12-16T01.19-f4082c7/job.log
| PASS 2 || CANCEL 0 || ERRORS 9 || FAILURES 0 || SKIP 0 || WARN 0 || INTERRUPT 0 |
01:27:50 INFO    : Removing temporary mux dir

Job log after changes:

01:55:03 WARNING : Overriding user setting and enabling kvm bootstrap as guest tests are requested
01:55:04 INFO    : Check for environment
01:55:05 INFO    : Creating temporary mux dir
01:55:05 INFO    :
01:55:05 INFO    : Running Guest Tests Suite backuprestore
01:55:05 INFO    : Running: /usr/local/bin/avocado run --vt-type libvirt --vt-config /home/stuff/tests/data/avocado-vt/backends/libvirt/cfg/backuprestore.cfg                 --force-job-id f9ae483b135d67bbc3916567c872d577c96c9c70                 --job-results-dir /home/stuff/tests/results  --vt-only-filter                                             "virtio_scsi virtio_net qcow2 Fedora.43.ppc64le"
No python imaging library installed. Screendump and Windows guest BSOD detection are disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. PPM image conversion to JPEG disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. Screendump and Windows guest BSOD detection are disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
No python imaging library installed. PPM image conversion to JPEG disabled. In order to enable it, please install python-imaging or the equivalent for your distro.
JOB ID     : f9ae483b135d67bbc3916567c872d577c96c9c70
JOB LOG    : /home/stuff/tests/results/job-2025-12-16T01.55-f9ae483/job.log
 (01/11) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: STARTED
 (01/11) io-github-autotest-qemu.unattended_install.import.import.default_install.aio_native: PASS (47.27 s)
 (02/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_internal: STARTED
 (02/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_internal: PASS (106.38 s)
 (03/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_no: STARTED
 (03/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_no: PASS (44.58 s)
 (04/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_none: STARTED
 (04/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_from_xml.disk_internal.memory_none: PASS (101.38 s)
 (05/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: STARTED
 (05/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: PASS (100.83 s)
 (06/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: STARTED
 (06/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: PASS (104.56 s)
 (07/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: STARTED
 (07/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.delete_test.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: PASS (101.96 s)
 (08/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: STARTED
 (08/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.current: PASS (99.09 s)
 (09/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: STARTED
 (09/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.no_current: PASS (105.25 s)
 (10/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: STARTED
 (10/11) type_specific.io-github-autotest-libvirt.virsh.snapshot_disk.no_delete.positive_test.pool_vol.disk_pool.attach_img_qcow2.v_qcow2.snapshot_default.revert_paused: PASS (103.34 s)
 (11/11) io-github-autotest-libvirt.remove_guest.without_disk: STARTED
 (11/11) io-github-autotest-libvirt.remove_guest.without_disk: PASS (5.30 s)
RESULTS    : PASS 11 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /home/stuff/tests/results/job-2025-12-16T01.55-f9ae483/results.html
JOB TIME   : 967.89 s
02:11:15 INFO    :
02:11:15 INFO    : Summary of test results can be found below:
TestSuite                                                                                 TestRun    Summary

guest_backuprestore                                                                       Run        Successfully executed
/home/stuff/tests/results/job-2025-12-16T01.55-f9ae483/job.log
| PASS 11 || CANCEL 0 || ERRORS 0 || FAILURES 0 || SKIP 0 || WARN 0 || INTERRUPT 0 |
02:11:15 INFO    : Removing temporary mux dir

When the script tries to label a newly created partition,
it might fail saying the partition is not available, as there
is a slight delay between defining the partition and it being
available in the block devices list. This PR helps to fix that
by adding a wait statement with a 10 second timeout.

Signed-off-by: Aniket Sahu <asahu1x@linux.ibm.com>
@aniket-sahu-ibmx force-pushed the add-delay-for-label-creation branch from 2872a37 to e943377 on December 16, 2025 08:49

@misanjumn misanjumn left a comment


LGTM
