
Conversation

@luisacicolini
Contributor

This PR enables running the script run_mca.py, which evaluates the RISC-V backend verification, with multiple threads.
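A minimal sketch of what multi-threaded dispatch over a set of benchmark files could look like using only the Python standard library. The names `run_one` and the file list are hypothetical stand-ins for illustration, not the actual run_mca.py code:

```python
from concurrent.futures import ThreadPoolExecutor
import os

def run_one(path):
    # Stand-in for launching one benchmark/solver run on `path`.
    return (path, "ok")

def run_all(files, workers=None):
    # Default to one worker per available core.
    workers = workers or os.cpu_count() or 1
    # Each file is submitted once; pool.map returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one, files))

if __name__ == "__main__":
    results = run_all([f"case_{i}.smt2" for i in range(4)])
    assert [status for _, status in results] == ["ok"] * 4
```

In practice the worker function would shell out to the solver per file, so a process pool may be preferable if the per-file work is CPU-bound in Python itself.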

@github-actions
Contributor

bv_decide solved 0 theorems.
bitwuzla solved 0 theorems.
bv_decide found 0 counterexamples.
bitwuzla found 0 counterexamples.
bv_decide only failed on 0 problems.
bitwuzla only failed on 0 problems.
both bitwuzla and bv_decide failed on 0 problems.
In total, bitwuzla saw 0 problems.
In total, bv_decide saw 0 problems.
ran rg 'LeanSAT provided a counter' | wc -l, this file found 0, rg found 0, SUCCESS
ran rg 'Bitwuzla provided a counter' | wc -l, this file found 0, rg found 0, SUCCESS
ran rg 'LeanSAT proved' | wc -l, this file found 0, rg found 0, SUCCESS
ran rg 'Bitwuzla proved' | wc -l, this file found 0, rg found 0, SUCCESS
The InstCombine benchmark contains 4520 theorems in total.
Saved dataframe at: /home/runner/work/lean-mlir/lean-mlir/bv-evaluation/raw-data/InstCombine/instcombine_ceg_data.csv
all_files_solved_bitwuzla_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_rw_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_bb_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_sat_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_lratt_times_stddev avg: nan | stddev: nan
all_files_solved_bv_decide_lratc_times_stddev avg: nan | stddev: nan
mean of percentage stddev/av: nan%

@alexkeizer
Collaborator

This looks good to make use of all cores on a single machine, but it's not quite what is needed to scale over multiple machines.

You can have a look at the changes I made in #1629, where I parallelized the LLVM evaluation using a stride/offset strategy: In that script there was a clear set of files to be processed, so we could parallelize it over multiple machines just by distributing the files to be processed.

To be more precise, each job (i.e., invocation of the script) was passed an offset CLI argument that ranged from 0 to n-1 (inclusive) and a stride argument equal to n, where n is the number of jobs. Each job started with the same total list of all files to be processed, dropped the first offset-many files and then only processed every nth file, such that combined each file was processed by exactly one of the n jobs running.
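The offset/stride scheme described above can be sketched as a single slicing step; the function and file names here are illustrative, not taken from #1629:

```python
def select_files(all_files, offset, stride):
    """Return the subset of files this job should process.

    Every job receives the same sorted list of files. Dropping the first
    `offset` entries and then taking every `stride`-th entry guarantees
    that, across `stride` jobs with offsets 0..stride-1, each file is
    processed by exactly one job.
    """
    return all_files[offset::stride]

if __name__ == "__main__":
    files = [f"problem_{i}.smt2" for i in range(10)]
    n_jobs = 3
    slices = [select_files(files, off, n_jobs) for off in range(n_jobs)]
    # The union of all jobs' slices covers every file exactly once.
    assert sorted(f for s in slices for f in s) == files
```

Because the list is sorted identically on every machine, no coordination between jobs is needed beyond passing each one its offset and the shared stride.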

AFAICT, here too we have a fixed set of files to be processed? So we ought to be able to follow a similar strategy. Does the above make sense? If not, let's jump on a call and pair on it!

@alexkeizer
Collaborator

Actually, I had a go at this myself, at #1688. That PR implements both multi-threading and multi-machine parallelization, so it would supersede this PR.

@alexkeizer
Collaborator

Superseded by #1668

@alexkeizer closed this on Sep 19, 2025
