
Conversation

@harvey0100
Contributor

@harvey0100 harvey0100 commented Oct 21, 2025

Migration of Wait Module - New Functional Tests

Reference: #44
Assisted-By: Cursor Claude-4-Sonnet

Summary by CodeRabbit

  • New Features

    • Added a wait utility to poll a callable until it returns a truthy value, with configurable timeout, initial delay, retry interval, argument forwarding, and optional debug logging.
  • Documentation

    • Added developer docs describing the new wait utility.
  • Tests

    • Added unit and functional tests covering timing, argument passing, edge cases, error propagation, and real I/O scenarios.
  • Chores

    • Added module metadata and test helpers for temporary directory management.


@harvey0100 harvey0100 requested a review from richtja October 21, 2025 13:03
@harvey0100 harvey0100 self-assigned this Oct 21, 2025
@mr-avocado mr-avocado bot moved this to Review Requested in Default project Oct 21, 2025
@coderabbitai

coderabbitai bot commented Oct 21, 2025

Walkthrough

Adds a new module autils.devel.wait providing wait_for(func, timeout, first=0.0, step=1.0, text=None, args=None, kwargs=None), which polls a callable until it returns a truthy value or the timeout expires, measured with time.monotonic(). It supports an initial delay before the first attempt and a configurable retry interval, defaults args/kwargs to empty collections, logs via a module-level LOG when text is provided, and propagates exceptions from the callable. The change also adds unit and functional tests, a docs entry (docs/source/utils.rst), a metadata descriptor (metadata/devel/wait.yml), and test helpers in tests/utils.py (a tempfile import, TestCaseTmpDir, and an improved skip message).
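
As a quick orientation, here is a minimal usage sketch based on the signature described above; the import path follows the module name and the marker-file path is purely illustrative:

import os

from autils.devel import wait

# Poll os.path.exists("/tmp/marker") every 0.2s for at most 5s, after an
# initial 0.5s delay; wait_for returns the first truthy result, or None on timeout.
result = wait.wait_for(
    os.path.exists,
    timeout=5.0,
    first=0.5,
    step=0.2,
    text="waiting for marker file",
    args=["/tmp/marker"],
)
if result is None:
    print("timed out waiting for /tmp/marker")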

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Areas to focus review on:

  • autils/devel/wait.py: timeout calculation using time.monotonic(), handling of the first initial delay, retry loop and sleep (step) behavior, defaulting of args/kwargs, return semantics (first truthy result vs None), and propagation of exceptions; verify LOG usage when text is provided.
  • tests/unit/modules/devel/wait.py: timing-sensitive assertions, breadth of edge-case coverage, and logging checks.
  • tests/functional/modules/devel/wait.py: potential flakiness due to background thread and filesystem timing; ensure synchronization and cleanup.
  • tests/utils.py: TestCaseTmpDir lifecycle (setUp/tearDown) and updated skipUnlessPathExists skip-reason formatting.
  • metadata/devel/wait.yml and docs/source/utils.rst: ensure descriptors and docs accurately reflect the API and test paths.

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 76.92%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'Wait Migration to AAutils' clearly reflects the main change: migrating a wait module into the aautils framework, which is evident from the new wait.py module, tests, documentation, and metadata additions.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 82f4d9b and 55ada6f.

📒 Files selected for processing (5)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/functional/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (8-97)
tests/unit/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (8-97)
🪛 Ruff (0.14.1)
tests/functional/modules/devel/wait.py

92-92: Consider moving this statement to an else block

(TRY300)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: modules-tests
  • GitHub Check: api-stability-tests
🔇 Additional comments (2)
docs/source/utils.rst (1)

37-40: LGTM! Documentation structure is appropriate.

The Wait subsection is properly placed under the Devel section and follows the existing documentation pattern.

autils/devel/wait.py (1)

7-8: LGTM! The pylint disable is appropriate.

The function has many parameters by design, making it flexible for various use cases. The disable comment is justified.

Comment on lines +79 to +103
    args = args or []
    kwargs = kwargs or {}

    start_time = time.monotonic()
    end_time = start_time + timeout

    time.sleep(first)

    while time.monotonic() < end_time:
        if text:
            LOG.debug("%s (%.9f secs)", text, (time.monotonic() - start_time))

        output = func(*args, **kwargs)
        if output:
            return output

        time.sleep(step)


⚠️ Potential issue | 🟠 Major

Critical: Zero/negative timeout prevents any function execution.

With timeout=0 or negative values, the loop condition time.monotonic() < end_time at Line 87 is immediately false, so func is never called even once. This behavior may surprise users who expect at least one attempt, especially if the condition could succeed immediately.

Consider calling func at least once before checking the timeout, or explicitly document this behavior more prominently in the docstring (currently buried in the :param timeout: description).

Example fix to ensure at least one attempt:

-    while time.monotonic() < end_time:
+    while True:
         if text:
             LOG.debug("%s (%.9f secs)", text, (time.monotonic() - start_time))
 
         output = func(*args, **kwargs)
         if output:
             return output
+        
+        if time.monotonic() >= end_time:
+            break
 
         time.sleep(step)

Alternatively, if the current behavior is intentional, add a prominent note at the start of the docstring:

     """Wait until a function returns a truthy value or timeout expires.
+    
+    Note: If timeout is zero or negative, the function will not be called at all.

Comment on lines +10 to +13
supported_platforms:
- CentOS Stream 9
- Fedora 36
- Fedora 37

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify that the supported platform versions are current.

Fedora 36 and Fedora 37 are end-of-life as of October 2025. Please update to currently supported Fedora versions (e.g., Fedora 40, 41) if this module is intended for active use.


🌐 Web query:

What are the currently supported Fedora versions as of October 2025?

💡 Result:

As of October 21, 2025 the supported Fedora releases are Fedora 41 and Fedora 42 (Fedora 42 is the current release). [1][2]

Sources:
[1] Fedora Project — End of Life Releases / Unsupported list.
[2] Fedora Project — Release life cycle / current release information.


Update supported Fedora versions to currently maintained releases.

As of October 2025, the supported Fedora releases are Fedora 41 and Fedora 42. Fedora 36 and Fedora 37 are end-of-life and no longer supported. Update the file to use Fedora 41 and Fedora 42 instead.

🤖 Prompt for AI Agents
In metadata/devel/wait.yml around lines 10 to 13, the supported_platforms list
contains outdated Fedora versions (Fedora 36 and 37); replace them with the
currently maintained releases (Fedora 41 and Fedora 42) so the list reads:
CentOS Stream 9, Fedora 41, Fedora 42. Ensure indentation and YAML list
formatting are preserved.

Comment on lines 82 to 98
    def test_network_port_becomes_available(self):
        """Test waiting for a network port to become available."""
        import socket

        def is_port_free(port):
            """Check if a port is free."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                sock.bind(("127.0.0.1", port))
                sock.close()
                return True
            except OSError:
                return False

        # Find a port that's likely free
        test_port = 54321

        # Initially the port should be free
        if not is_port_free(test_port):
            self.skipTest(f"Port {test_port} is not available for testing")

        result = wait.wait_for(lambda: is_port_free(test_port), timeout=1.0, step=0.1)
        self.assertTrue(result)

⚠️ Potential issue | 🟡 Minor

Test doesn't match its description.

The test name test_network_port_becomes_available suggests waiting for a port to transition from unavailable to available, but the test only checks if an already-free port is detected as free. Unlike other functional tests in this file, there's no background thread simulating the port becoming available after a delay.

Consider either:

  1. Renaming to test_network_port_is_available, or
  2. Simulating the actual scenario with a background thread that temporarily holds the port and then releases it

Example of improved test matching the description:

def test_network_port_becomes_available(self):
    """Test waiting for a network port to become available."""
    import socket
    import threading
    
    test_port = 54321
    
    def hold_port_then_release():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind(("127.0.0.1", test_port))
            time.sleep(0.3)  # Hold port briefly
        finally:
            sock.close()
    
    def is_port_free(port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            sock.close()
            return True
        except OSError:
            return False
    
    thread = threading.Thread(target=hold_port_then_release)
    thread.start()
    time.sleep(0.1)  # Ensure port is held
    
    result = wait.wait_for(lambda: is_port_free(test_port), timeout=2.0, step=0.1)
    thread.join()
    self.assertTrue(result)
🧰 Tools
🪛 Ruff (0.14.1)

92-92: Consider moving this statement to an else block

(TRY300)

Comment on lines +208 to +217
    def test_lambda_function(self):
        """Test wait_for works with lambda functions."""
        counter = {"value": 0}

        def increment():
            counter["value"] += 1
            return counter["value"] >= 2

        result = wait.wait_for(increment, timeout=5, step=0.1)
        self.assertTrue(result)

⚠️ Potential issue | 🟡 Minor

Docstring doesn't match implementation.

The docstring says "Test wait_for works with lambda functions" but the test uses a regular named function increment, not a lambda.

-    def test_lambda_function(self):
-        """Test wait_for works with lambda functions."""
+    def test_closure_function(self):
+        """Test wait_for works with closure functions."""
         counter = {"value": 0}

Or actually use a lambda:

     def test_lambda_function(self):
         """Test wait_for works with lambda functions."""
         counter = {"value": 0}
-
-        def increment():
-            counter["value"] += 1
-            return counter["value"] >= 2
-
-        result = wait.wait_for(increment, timeout=5, step=0.1)
+        
+        result = wait.wait_for(
+            lambda: (counter.__setitem__("value", counter["value"] + 1), counter["value"] >= 2)[1],
+            timeout=5, step=0.1
+        )
         self.assertTrue(result)
🤖 Prompt for AI Agents
In tests/unit/modules/devel/wait.py around lines 208 to 217 the test docstring
claims it's testing a lambda but the test uses a named function; either update
the docstring to accurately describe that it uses a regular function (e.g.,
"Test wait_for works with callables / regular functions") or change the
implementation to use a lambda (e.g., replace the named increment function with
a lambda that mutates the counter and returns the condition) so docstring and
implementation match.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/functional/modules/devel/wait.py (1)

77-98: Past review concern remains unaddressed.

The previous review correctly identified that this test doesn't match its description. The test name suggests waiting for a port to transition from unavailable to available, but it only verifies that an already-free port is detected as free—there's no background thread simulating the port becoming available.

Additionally, the static analysis hint suggests a minor style improvement at line 86.

Apply this diff to address both the semantic issue and the style hint:

     def test_network_port_becomes_available(self):
-        """Test waiting for a network port to become available."""
+        """Test waiting for a network port to become available after being held."""
+        import threading
 
         def is_port_free(port):
             """Check if a port is free."""
             sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
             try:
                 sock.bind(("127.0.0.1", port))
-                sock.close()
-                return True
             except OSError:
                 return False
+            else:
+                sock.close()
+                return True
 
-        # Find a port that's likely free
         test_port = 54321
 
-        # Initially the port should be free
-        if not is_port_free(test_port):
-            self.skipTest(f"Port {test_port} is not available for testing")
+        def hold_port_then_release():
+            """Hold the port briefly, then release it."""
+            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+            try:
+                sock.bind(("127.0.0.1", test_port))
+                time.sleep(0.3)
+            finally:
+                sock.close()
+        
+        # Start thread that holds port then releases it
+        thread = threading.Thread(target=hold_port_then_release)
+        thread.start()
+        time.sleep(0.1)  # Ensure port is held before we start waiting
 
-        result = wait.wait_for(lambda: is_port_free(test_port), timeout=1.0, step=0.1)
+        result = wait.wait_for(lambda: is_port_free(test_port), timeout=2.0, step=0.1)
+        thread.join()
         self.assertTrue(result)
🧹 Nitpick comments (2)
tests/functional/modules/devel/wait.py (2)

292-309: Thread safety relies on CPython's GIL.

These tests use shared mutable structures (shared_list, shared_dict) without explicit synchronization. While operations like list.append() and dict __setitem__ are atomic in CPython due to the GIL, this isn't guaranteed by the Python language specification.

Consider adding explicit locks for clarity and portability, similar to test_counter_reaches_value:

def test_list_accumulation(self):
    """Test waiting for a list to accumulate enough items."""
    shared_list = []
    lock = threading.Lock()

    def add_items_slowly():
        for i in range(10):
            time.sleep(0.05)
            with lock:
                shared_list.append(i)

    thread = threading.Thread(target=add_items_slowly)
    thread.start()

    def check_list_size():
        with lock:
            return len(shared_list) >= 5

    result = wait.wait_for(check_list_size, timeout=2.0, step=0.05)

    thread.join()
    self.assertTrue(result)
    with lock:
        self.assertGreaterEqual(len(shared_list), 5)

Also applies to: 311-332


220-235: Use time.monotonic() for elapsed time measurement.

The test measures elapsed time with time.time(), which can be affected by system clock adjustments. Since wait_for internally uses time.monotonic(), this test should too for consistency and reliability.

Apply this diff:

     def test_timeout_in_real_scenario(self):
         """Test that timeout works correctly in real scenario."""
         tmpdir = tempfile.mkdtemp()
         filepath = os.path.join(tmpdir, "nonexistent.txt")
 
         # Wait for a file that will never be created
-        start = time.time()
+        start = time.monotonic()
         result = wait.wait_for(lambda: os.path.exists(filepath), timeout=0.5, step=0.1)
-        elapsed = time.time() - start
+        elapsed = time.monotonic() - start
 
         self.assertIsNone(result)
         self.assertGreaterEqual(elapsed, 0.5)
         self.assertLess(elapsed, 0.7)
 
         # Cleanup
         os.rmdir(tmpdir)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 55ada6f and d2c347b.

📒 Files selected for processing (5)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • metadata/devel/wait.yml
  • tests/unit/modules/devel/wait.py
  • docs/source/utils.rst
  • autils/devel/wait.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/functional/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (8-97)
🪛 Ruff (0.14.1)
tests/functional/modules/devel/wait.py

86-86: Consider moving this statement to an else block

(TRY300)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: api-stability-tests
  • GitHub Check: modules-tests
🔇 Additional comments (2)
tests/functional/modules/devel/wait.py (2)

370-381: LGTM! Good stress test.

This test effectively validates that wait_for stops immediately when the condition becomes true, without overshooting. The rapid polling interval (1ms) stresses the mechanism while the assertion on the exact counter value confirms precise termination.
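
For readers without the diff at hand, the pattern being praised is roughly the following sketch; the class name, method name, and exact call threshold are assumptions, not the PR's code:

import unittest

from autils.devel import wait


class WaitStressSketch(unittest.TestCase):
    def test_stops_exactly_at_condition(self):
        """Rapid polling should stop on the very call that becomes truthy."""
        calls = {"count": 0}

        def hit_target():
            calls["count"] += 1
            return calls["count"] >= 50

        result = wait.wait_for(hit_target, timeout=5.0, step=0.001)
        self.assertTrue(result)
        # No overshoot: the counter stopped exactly at the threshold.
        self.assertEqual(calls["count"], 50)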


14-44: Excellent test coverage for real-world scenarios.

These functional tests comprehensively exercise wait_for across diverse scenarios: filesystem operations, process simulation, environment variables, and data structure conditions. The use of background threads to simulate asynchronous state changes accurately reflects real-world usage patterns.

Also applies to: 46-75, 100-128, 130-150, 152-184, 186-218, 237-257, 259-290, 334-368

Comment on lines 14 to 44
    def test_file_appears(self):
        """Test waiting for a file to appear in filesystem."""
        # Create a temporary directory
        tmpdir = tempfile.mkdtemp()
        filepath = os.path.join(tmpdir, "test_file.txt")

        # Create file after a delay
        def create_file_delayed():
            time.sleep(0.3)
            with open(filepath, "w", encoding="utf-8") as f:
                f.write("test content")

        # Start file creation in background
        thread = threading.Thread(target=create_file_delayed)
        thread.start()

        # Wait for file to exist
        result = wait.wait_for(
            lambda: os.path.exists(filepath),
            timeout=2.0,
            step=0.1,
            text="Waiting for file to appear",
        )

        thread.join()
        self.assertTrue(result)
        self.assertTrue(os.path.exists(filepath))

        # Cleanup
        os.remove(filepath)
        os.rmdir(tmpdir)

🛠️ Refactor suggestion | 🟠 Major

Use addCleanup to ensure resource cleanup on test failure.

Multiple tests create temporary files and directories with cleanup code after assertions. If assertions fail, cleanup won't execute, leaving resources in /tmp.

Use self.addCleanup() to register cleanup functions that run even when tests fail:

 def test_file_appears(self):
     """Test waiting for a file to appear in filesystem."""
-    # Create a temporary directory
     tmpdir = tempfile.mkdtemp()
+    self.addCleanup(os.rmdir, tmpdir)
     filepath = os.path.join(tmpdir, "test_file.txt")
+    self.addCleanup(lambda: os.path.exists(filepath) and os.remove(filepath))
 
     # Create file after a delay
     def create_file_delayed():
         time.sleep(0.3)
         with open(filepath, "w", encoding="utf-8") as f:
             f.write("test content")
 
     # Start file creation in background
     thread = threading.Thread(target=create_file_delayed)
     thread.start()
 
     # Wait for file to exist
     result = wait.wait_for(
         lambda: os.path.exists(filepath),
         timeout=2.0,
         step=0.1,
         text="Waiting for file to appear",
     )
 
     thread.join()
     self.assertTrue(result)
     self.assertTrue(os.path.exists(filepath))
-
-    # Cleanup
-    os.remove(filepath)
-    os.rmdir(tmpdir)

Apply the same pattern to other tests with temporary resources: test_file_size_reaches_threshold, test_process_completion_simulation, test_environment_variable_appears, test_multiple_conditions_combined, test_timeout_in_real_scenario, test_directory_creation, test_file_content_matches, and test_with_complex_condition_function.

Also applies to: 46-75, 100-128, 130-150, 186-218, 220-235, 237-257, 259-290, 334-368

🤖 Prompt for AI Agents
tests/functional/modules/devel/wait.py lines 14-44: the test creates a temp
directory, starts a background thread and performs assertions but performs
cleanup only after assertions, so failures leak temp files/threads; replace the
post-assert manual cleanup with self.addCleanup registrations: call
self.addCleanup(os.remove, filepath) and self.addCleanup(os.rmdir, tmpdir) (or
wrapper functions that check existence), and register thread.join via
self.addCleanup(thread.join) immediately after starting the thread so cleanup
always runs; apply the same addCleanup pattern to the other tests listed (lines
46-75, 100-128, 130-150, 186-218, 220-235, 237-257, 259-290, 334-368) to ensure
temporary files/directories and background threads are always cleaned up on
failure.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (4)
metadata/devel/wait.yml (1)

10-13: Update supported Fedora versions to currently maintained releases.

Fedora 36 and Fedora 37 are end-of-life as of October 2025. Update to currently supported Fedora versions (Fedora 41 and Fedora 42).

autils/devel/wait.py (1)

86-104: Zero/negative timeout prevents function execution.

With timeout=0 or negative values, the while loop condition at Line 94 is immediately false, so func is never called. This behavior may surprise users expecting at least one attempt. A past review comment has already flagged this issue.

tests/unit/modules/devel/wait.py (2)

106-116: Misleading comment in test.

Line 115 states "Should have values left since it stopped early" but Line 116 asserts len(values) == 0. All values are consumed before finding the truthy value, so the comment is inaccurate.


208-217: Docstring doesn't match implementation.

The docstring at Line 209 says "Test wait_for works with lambda functions" but the test uses a regular named function increment, not a lambda.

🧹 Nitpick comments (4)
tests/utils.py (1)

6-10: Verify parameter type in temp_dir_prefix.

The function accesses klass.__class__.__name__, suggesting klass is an instance. However, when called from TestCaseTmpDir.setUp() at Line 22, self is passed, which is correct. The naming is slightly misleading - consider renaming the parameter to instance or self_instance for clarity, or document that it expects an instance.

Apply this diff for clarity:

-def temp_dir_prefix(klass):
+def temp_dir_prefix(instance):
     """
     Returns a standard name for the temp dir prefix used by the tests
     """
-    return f"autils_{klass.__class__.__name__}_"
+    return f"autils_{instance.__class__.__name__}_"
tests/functional/modules/devel/wait.py (3)

13-37: Consider using addCleanup for thread.join().

The test correctly uses TestCaseTmpDir for automatic file cleanup. However, thread.join() at Line 35 should ideally be registered with self.addCleanup() to ensure the thread is always joined even if assertions fail, preventing thread leaks.

Apply this pattern:

 thread = threading.Thread(target=create_file_delayed)
 thread.start()
+self.addCleanup(thread.join)
 
 # Wait for file to exist
 result = wait.wait_for(
     lambda: os.path.exists(filepath),
     timeout=2.0,
     step=0.1,
     text="Waiting for file to appear",
 )
 
 thread.join()

39-65: Apply addCleanup pattern for thread.

Same recommendation as the previous test - register thread.join() with addCleanup to ensure threads are always joined.


80-109: LGTM! Excellent complex scenario test with addCleanup recommendation.

This test validates complex real-world behavior including file I/O, parsing, and error handling. Well-designed functional test. Consider adding self.addCleanup(thread.join) for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2c347b and 9416423.

📒 Files selected for processing (6)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
  • tests/utils.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/functional/modules/devel/wait.py (2)
tests/utils.py (1)
  • TestCaseTmpDir (13-26)
autils/devel/wait.py (1)
  • wait_for (15-104)
tests/unit/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (15-104)
🔇 Additional comments (18)
docs/source/utils.rst (1)

37-40: LGTM! Documentation addition is clear.

The new Wait subsection properly documents the autils.devel.wait module under the Devel section, following the existing documentation pattern.

tests/utils.py (1)

13-26: LGTM! Excellent test utility pattern.

The TestCaseTmpDir base class provides clean automatic temporary directory management with proper setup and teardown, following unittest best practices.
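
For context, a helper along these lines might look roughly like the sketch below; the attribute name self.tmpdir is an assumption, and only the prefix format is taken from the temp_dir_prefix helper shown elsewhere in this review:

import shutil
import tempfile
import unittest


class TestCaseTmpDir(unittest.TestCase):
    """Base class giving each test an isolated temporary directory."""

    def setUp(self):
        # Prefix derived from the concrete test class name, as in temp_dir_prefix().
        prefix = f"autils_{self.__class__.__name__}_"
        self.tmpdir = tempfile.mkdtemp(prefix=prefix)  # attribute name is assumed

    def tearDown(self):
        # Remove the directory and everything created inside it.
        shutil.rmtree(self.tmpdir, ignore_errors=True)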

metadata/devel/wait.yml (1)

1-17: Metadata structure looks good overall.

The metadata file properly defines the module name, description, category, maintainer information, and test paths.

autils/devel/wait.py (2)

1-12: LGTM! Clean module structure.

Module docstring clearly describes the purpose, and logging setup follows Python best practices.


14-85: LGTM! Comprehensive and well-structured docstring.

The docstring thoroughly documents parameters, behavior, return values, exceptions, and provides helpful examples. The notes about time.monotonic() and step sleep not being interrupted are particularly valuable.

tests/functional/modules/devel/wait.py (3)

1-11: LGTM! Clean test setup.

Imports are appropriate and the test class correctly inherits from TestCaseTmpDir for automatic temporary directory management.


67-78: LGTM! Good timeout validation test.

This test properly validates that wait_for respects the timeout and returns None when the condition never becomes true. The timing assertions are reasonable with appropriate tolerances.


111-122: LGTM! Good edge case test for fast polling.

This test validates that wait_for handles very fast polling intervals correctly and stops immediately upon success.

tests/unit/modules/devel/wait.py (10)

1-10: LGTM! Clean test module structure.

Imports are appropriate and the test class is well-organized with descriptive docstring.


11-44: LGTM! Excellent basic functionality tests.

These tests properly validate immediate success, value returns, timeout behavior, and retry logic with clear assertions and good use of mocks.
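
A minimal sketch of the mock-based style these tests use (class and method names are illustrative, not the PR's):

import unittest
from unittest import mock

from autils.devel import wait


class WaitRetrySketch(unittest.TestCase):
    def test_retries_until_truthy(self):
        """func returns falsy twice, then truthy: it is polled exactly three times."""
        func = mock.Mock(side_effect=[False, False, "ready"])
        result = wait.wait_for(func, timeout=5, step=0.01)
        self.assertEqual(result, "ready")
        self.assertEqual(func.call_count, 3)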


46-67: LGTM! Good timing parameter tests.

The test_first_delay, test_step_interval, and test_zero_timeout tests properly validate the timing parameters with appropriate assertions.


69-87: LGTM! Comprehensive argument passing tests.

These tests thoroughly validate that positional arguments, keyword arguments, and combinations are correctly passed to the target function.


89-104: LGTM! Well-designed logging tests.

Both tests properly validate that logging occurs when text is provided and doesn't occur when it's None, using the assertLogs context manager correctly.
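
The assertLogs pattern being praised looks roughly like this sketch; the logger name autils.devel.wait assumes the module uses logging.getLogger(__name__):

import unittest

from autils.devel import wait


class WaitLoggingSketch(unittest.TestCase):
    def test_logs_when_text_given(self):
        # assertLogs fails the test if no record is emitted on that logger.
        with self.assertLogs("autils.devel.wait", level="DEBUG") as cm:
            wait.wait_for(lambda: True, timeout=1, text="polling")
        self.assertTrue(any("polling" in line for line in cm.output))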


118-142: LGTM! Thorough return type validation.

These tests properly validate that wait_for correctly handles various return types including zero, empty strings, lists, and dictionaries.


144-149: LGTM! Proper exception propagation test.

This test correctly validates that exceptions raised by the target function are propagated to the caller.


151-176: LGTM! Excellent edge case coverage.

These tests validate negative timeout, large step vs timeout, and very small timeout scenarios with clear documentation and appropriate assertions.
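
As an illustration of the negative-timeout edge case, a sketch following the behavior described in this review (not the PR's test code):

import unittest
from unittest import mock

from autils.devel import wait


class WaitEdgeCaseSketch(unittest.TestCase):
    def test_negative_timeout_never_calls_func(self):
        # With timeout <= 0 the deadline is already in the past, so the
        # polling loop never runs and the callable is never invoked.
        func = mock.Mock(return_value=True)
        self.assertIsNone(wait.wait_for(func, timeout=-1))
        func.assert_not_called()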


178-206: LGTM! Comprehensive None/empty args and callable object tests.

These tests properly validate handling of None/empty arguments and callable objects with clear assertions.


219-228: LGTM! Good timing precision validation.

This test validates that the timeout is reasonably accurate with appropriate tolerance for system variance.
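
The timing-precision check amounts to something like the following sketch; the tolerance factor is illustrative:

import time
import unittest

from autils.devel import wait


class WaitTimingSketch(unittest.TestCase):
    def test_timeout_duration_is_close(self):
        timeout_val = 1.0
        start = time.monotonic()
        result = wait.wait_for(lambda: False, timeout=timeout_val, step=0.05)
        elapsed = time.monotonic() - start

        self.assertIsNone(result)
        self.assertGreaterEqual(elapsed, timeout_val)
        # Generous upper bound to tolerate scheduler jitter on loaded machines.
        self.assertLess(elapsed, timeout_val * 2.0)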

@harvey0100 harvey0100 force-pushed the wait branch 2 times, most recently from db631e6 to 00fe17a Compare November 3, 2025 16:11
@harvey0100 harvey0100 force-pushed the wait branch 2 times, most recently from a8df24a to 5546282 Compare November 10, 2025 15:54

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
tests/functional/modules/devel/wait.py (1)

24-37: Ensure thread cleanup with addCleanup.

The thread is started at line 24, but thread.join() is called at line 35 after assertions. If any assertion fails, the join won't execute. Although the thread is short-lived, using self.addCleanup(thread.join) immediately after thread.start() ensures cleanup runs even on test failure.

Apply this diff:

 # Start file creation in background
 thread = threading.Thread(target=create_file_delayed)
 thread.start()
+self.addCleanup(thread.join)

 # Wait for file to exist
 result = wait.wait_for(
     lambda: os.path.exists(filepath),
     timeout=2.0,
     step=0.1,
     text="Waiting for file to appear",
 )

-thread.join()
 self.assertTrue(result)
 self.assertTrue(os.path.exists(filepath))
tests/unit/modules/devel/wait.py (2)

115-116: Fix the misleading comment.

The comment on line 115 claims "Should have values left since it stopped early" but the assertion on line 116 verifies that len(values) == 0. All five values [False, 0, None, "", "found"] must be consumed before the truthy value is found, so no values remain.

Note: This issue was flagged in a previous review and marked as addressed, but the incorrect comment is still present in the code.

Apply this diff to correct the comment:

     result = wait.wait_for(func_with_sequence, timeout=5, step=0.05)
     self.assertEqual(result, "found")
-    # Should have values left since it stopped early
+    # All values were consumed to find the truthy one
     self.assertEqual(len(values), 0)

208-217: Docstring doesn't match implementation.

The docstring claims "Test wait_for works with lambda functions" but the test uses a regular named function increment, not a lambda.

Consider either solution:

Solution 1 (simpler): Update the docstring to match the implementation

-    def test_lambda_function(self):
-        """Test wait_for works with lambda functions."""
+    def test_closure_function(self):
+        """Test wait_for works with closure functions."""
         counter = {"value": 0}

Solution 2 (match docstring): Actually use a lambda

     def test_lambda_function(self):
         """Test wait_for works with lambda functions."""
         counter = {"value": 0}
-
-        def increment():
-            counter["value"] += 1
-            return counter["value"] >= 2
-
-        result = wait.wait_for(increment, timeout=5, step=0.1)
+        
+        result = wait.wait_for(
+            lambda: (counter.__setitem__("value", counter["value"] + 1), counter["value"] >= 2)[1],
+            timeout=5,
+            step=0.1
+        )
         self.assertTrue(result)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8df24a and 5546282.

📒 Files selected for processing (5)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/source/utils.rst
🚧 Files skipped from review as they are similar to previous changes (2)
  • autils/devel/wait.py
  • metadata/devel/wait.yml
🧰 Additional context used
🧬 Code graph analysis (2)
tests/unit/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (15-104)
tests/functional/modules/devel/wait.py (2)
tests/utils.py (1)
  • TestCaseTmpDir (13-26)
autils/devel/wait.py (1)
  • wait_for (15-104)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: modules-tests
  • GitHub Check: api-stability-tests

Comment on lines +44 to +50
start = time.time()
result = wait.wait_for(lambda: os.path.exists(filepath), timeout=0.5, step=0.1)
elapsed = time.time() - start

self.assertIsNone(result)
self.assertGreaterEqual(elapsed, 0.5)
self.assertLess(elapsed, 0.7)

⚠️ Potential issue | 🟡 Minor

Upper bound timing assertion may be flaky.

The assertion self.assertLess(elapsed, 0.7) checks that execution completes within 0.2 seconds of the 0.5s timeout. On slower systems or under load (CI environments), this 0.2s margin might be insufficient, leading to intermittent test failures.

Consider either removing the upper bound check or increasing the margin to ~1.0 second to account for system variance.

Apply this diff to increase the margin:

 self.assertIsNone(result)
 self.assertGreaterEqual(elapsed, 0.5)
-self.assertLess(elapsed, 0.7)
+self.assertLess(elapsed, 1.0)  # More lenient for slow systems
🤖 Prompt for AI Agents
In tests/functional/modules/devel/wait.py around lines 44 to 50, the upper-bound
timing assertion self.assertLess(elapsed, 0.7) is too tight and can be flaky on
slow/loaded CI; replace that assertion with a larger margin — e.g. change it to
self.assertLess(elapsed, 1.5) (or remove the upper-bound check entirely) so the
test tolerates ~1.0s of extra variance beyond the 0.5s timeout.

Added TestCaseTmpDir Class to Autils

Signed-off-by: Harvey Lynden <hlynden@redhat.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
tests/unit/modules/devel/wait.py (2)

106-116: Fix the misleading comment.

The comment on line 115 states "Should have values left since it stopped early" but the assertion on line 116 verifies len(values) == 0. All five values are consumed before finding the truthy value "found", so the comment is inaccurate.

Apply this diff:

         result = wait.wait_for(func_with_sequence, timeout=5, step=0.05)
         self.assertEqual(result, "found")
-        # Should have values left since it stopped early
+        # All values were consumed to find the truthy one
         self.assertEqual(len(values), 0)

208-217: Update docstring or use an actual lambda.

The docstring claims "Test wait_for works with lambda functions" but the test uses a regular named function increment, not a lambda.

Either update the docstring:

-    def test_lambda_function(self):
-        """Test wait_for works with lambda functions."""
+    def test_closure_function(self):
+        """Test wait_for works with closure functions."""
         counter = {"value": 0}

Or use an actual lambda (though this is more complex):

     def test_lambda_function(self):
         """Test wait_for works with lambda functions."""
         counter = {"value": 0}
-
-        def increment():
-            counter["value"] += 1
-            return counter["value"] >= 2
-
-        result = wait.wait_for(increment, timeout=5, step=0.1)
+        result = wait.wait_for(
+            lambda: (counter.__setitem__("value", counter["value"] + 1), counter["value"] >= 2)[1],
+            timeout=5, step=0.1
+        )
         self.assertTrue(result)
tests/functional/modules/devel/wait.py (1)

39-50: Timing assertion may be flaky on slower systems.

The assertion self.assertLess(elapsed, 0.7) on line 50 enforces a 0.2-second margin beyond the 0.5-second timeout. On slower systems or under load (CI environments), this tight margin might cause intermittent failures.

Consider increasing the margin:

     self.assertIsNone(result)
     self.assertGreaterEqual(elapsed, 0.5)
-    self.assertLess(elapsed, 0.7)
+    self.assertLess(elapsed, 1.0)  # More lenient for slow systems
🧹 Nitpick comments (2)
tests/utils.py (1)

6-10: Rename parameter for clarity.

The parameter klass suggests a class object, but the function receives an instance (self from line 22). The name should reflect this.

Apply this diff to improve clarity:

-def temp_dir_prefix(klass):
+def temp_dir_prefix(instance):
     """
     Returns a standard name for the temp dir prefix used by the tests
     """
-    return f"autils_{klass.__class__.__name__}_"
+    return f"autils_{instance.__class__.__name__}_"
tests/functional/modules/devel/wait.py (1)

13-37: Consider using addCleanup for thread synchronization.

While TestCaseTmpDir automatically cleans up the temporary directory, the thread join at line 35 won't execute if an earlier assertion fails. This could leave the thread running beyond the test lifetime.

Apply this diff to ensure thread cleanup:

     # Start file creation in background
     thread = threading.Thread(target=create_file_delayed)
     thread.start()
+    self.addCleanup(thread.join)

     # Wait for file to exist
     result = wait.wait_for(
         lambda: os.path.exists(filepath),
         timeout=2.0,
         step=0.1,
         text="Waiting for file to appear",
     )

-    thread.join()
     self.assertTrue(result)
     self.assertTrue(os.path.exists(filepath))
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 49e64e2 and df1786c.

📒 Files selected for processing (6)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
  • tests/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • autils/devel/wait.py
  • metadata/devel/wait.yml
🧰 Additional context used
🧬 Code graph analysis (2)
tests/unit/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (15-104)
tests/functional/modules/devel/wait.py (2)
tests/utils.py (1)
  • TestCaseTmpDir (13-26)
autils/devel/wait.py (1)
  • wait_for (15-104)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: modules-tests
🔇 Additional comments (2)
docs/source/utils.rst (1)

41-44: LGTM!

The Wait subsection follows the established documentation pattern and properly references the new autils.devel.wait module.

tests/utils.py (1)

13-26: LGTM!

The TestCaseTmpDir base class provides clean temporary directory management with proper lifecycle handling via setUp and tearDown.

Contributor

@richtja richtja left a comment


Hi @harvey0100, can you please fix the static-checks error. Thanks

Migration of Wait Module - New Functional Tests

Reference: avocado-framework#44
Assisted-By: Cursor Claude-4-Sonnet
Signed-off-by: Harvey Lynden <hlynden@redhat.com>
@harvey0100
Contributor Author

Hi @richtja, oh I did not notice this, thank you for noticing. Sorted that now! :)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
tests/functional/modules/devel/wait.py (1)

44-50: Relax upper‑bound elapsed‑time assertion in test_timeout_when_condition_never_true.

The assertion self.assertLess(elapsed, 0.7) for a 0.5s timeout gives only ~0.2s slack and has already been flagged as a potential flakiness source on slow CI. Since this is a functional test, timing variance can easily exceed that margin.

Consider either:

  • Dropping the upper‑bound check and only asserting elapsed >= 0.5, or
  • Making the bound more generous (e.g. elapsed < 1.5) to tolerate slow environments.

This keeps the test robust while still verifying that wait_for respects the timeout.

tests/unit/modules/devel/wait.py (2)

106-116: Fix misleading comment about remaining values in test_returns_first_truthy_value.

The test correctly asserts len(values) == 0, meaning all values are consumed before returning "found", but the comment says “Should have values left since it stopped early”, which is the opposite of what happens. This was flagged previously and still looks inconsistent.

Suggested fix:

-        # Should have values left since it stopped early
-        self.assertEqual(len(values), 0)
+        # All values should be consumed to reach the first truthy value
+        self.assertEqual(len(values), 0)

208-217: Align test_lambda_function name/docstring with implementation.

The test is named test_lambda_function and its docstring mentions lambda functions, but it actually uses a regular nested function increment. This was called out before and is still a bit misleading when reading test output.

One simple way to align everything is to rename the test and docstring to describe what it really does:

-    def test_lambda_function(self):
-        """Test wait_for works with lambda functions."""
+    def test_closure_function(self):
+        """Test wait_for works with closure / stateful functions."""
         counter = {"value": 0}

(You can keep the body as‑is; renaming the method and docstring is enough to make the intent clear.)

🧹 Nitpick comments (2)
autils/devel/wait.py (1)

52-57: Optional: use explicit is None checks for args / kwargs.

Using args = args or [] and kwargs = kwargs or {} is idiomatic but will also normalize other falsy values (e.g. an empty tuple) to a list/dict, which is slightly broader than the docstring’s “If None, defaults to empty …”.

If you want behavior that’s strictly aligned with the docs and avoids any surprise from falsy containers, consider:

-    args = args or []
-    kwargs = kwargs or {}
+    if args is None:
+        args = []
+    if kwargs is None:
+        kwargs = {}

Not required for correctness, just a small clarity/precision improvement.

Also applies to: 86-87

tests/unit/modules/devel/wait.py (1)

24-33: Loosen or drop tight upper‑bound timing assertions to reduce flakiness.

Both test_timeout_returns_none (elapsed < 0.7 for a 0.5s timeout) and test_timing_precision (elapsed < timeout_val * 1.2 for 1.0s) assert fairly tight upper bounds on wall‑clock time. On slower or heavily loaded CI, these can intermittently fail even when wait_for behaves correctly.

Consider either:

  • Removing the upper‑bound checks and only asserting elapsed >= timeout, or
  • Relaxing them substantially (e.g. elapsed < 1.5 for the 0.5s case, and something like < timeout_val * 2.0 for the 1.0s case) to account for scheduling variance.

This keeps the tests focused on “not returning too early” without being brittle about exact timing.

Also applies to: 219-228

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df1786c and 3702ba4.

📒 Files selected for processing (6)
  • autils/devel/wait.py (1 hunks)
  • docs/source/utils.rst (1 hunks)
  • metadata/devel/wait.yml (1 hunks)
  • tests/functional/modules/devel/wait.py (1 hunks)
  • tests/unit/modules/devel/wait.py (1 hunks)
  • tests/utils.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/source/utils.rst
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/utils.py
  • metadata/devel/wait.yml
🧰 Additional context used
🧬 Code graph analysis (2)
tests/unit/modules/devel/wait.py (1)
autils/devel/wait.py (1)
  • wait_for (15-104)
tests/functional/modules/devel/wait.py (2)
tests/utils.py (1)
  • TestCaseTmpDir (13-27)
autils/devel/wait.py (1)
  • wait_for (15-104)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: modules-tests
  • GitHub Check: api-stability-tests
🔇 Additional comments (1)
autils/devel/wait.py (1)

15-65: Good, self‑contained polling utility with clear behavior.

The overall shape of wait_for (monotonic clock, configurable first/step, optional logging, and rich docstring with examples) looks solid and matches typical polling helpers. No correctness issues spotted in the main control flow.

Comment on lines +34 to +37
    :param timeout: Maximum time in seconds to wait for func to return a
                    truthy value. Must be a non-negative number. If timeout
                    expires before func returns truthy, None is returned.
    :type timeout: float or int

⚠️ Potential issue | 🟡 Minor

Clarify timeout semantics for zero/negative values and alignment with tests.

The docstring states that timeout “Must be a non-negative number”, but the implementation accepts negative values and the unit test test_negative_timeout explicitly exercises timeout=-1 and expects None. For timeout <= 0 (or first > timeout), end_time <= start_time so the loop never runs and func is never called, but this subtle behavior is not clearly documented.

To avoid surprises for callers and future maintainers, I’d suggest rewording the timeout docs to describe the current behavior explicitly instead of implying validation that isn’t performed, for example:

-    :param timeout: Maximum time in seconds to wait for func to return a
-                    truthy value. Must be a non-negative number. If timeout
-                    expires before func returns truthy, None is returned.
+    :param timeout: Maximum time in seconds to wait for func to return a
+                    truthy value. If the timeout has already expired
+                    (for example, a zero or negative timeout, or when the
+                    initial ``first`` delay exceeds the timeout), ``func``
+                    will not be called and ``None`` is returned immediately.
+                    Otherwise, if the timeout expires before ``func`` returns
+                    a truthy value, ``None`` is returned.

(Adjust wording as you prefer, but making the “no call when already expired” behavior explicit would help a lot.)

Also applies to: 86-92

🤖 Prompt for AI Agents
In autils/devel/wait.py around lines 34-37 (and similarly 86-92), the docstring
misleadingly says "Must be a non-negative number" while the implementation
permits negative timeouts and the test suite expects timeout=-1 to result in no
calls and return None; update the docstrings to explicitly state the observed
behavior: timeout is interpreted as seconds (float|int), no validation is
performed, and if timeout <= 0 (or the computed end_time is already <=
start_time) the function will not call func and will return None immediately;
keep wording concise and mention alignment with existing tests.

@harvey0100
Contributor Author

Dependency to Process Module

@richtja richtja self-requested a review December 1, 2025 14:57
@richtja richtja merged commit 95cce4a into avocado-framework:main Dec 1, 2025
6 checks passed
@github-project-automation github-project-automation bot moved this from Review Requested to Done 113 in Default project Dec 1, 2025
@codecov

codecov bot commented Dec 1, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 0.00%. Comparing base (5c091a0) to head (3702ba4).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@    Coverage Diff     @@
##   main   #95   +/-   ##
==========================
==========================

☔ View full report in Codecov by Sentry.
