
Conversation

SuhailB (Contributor) commented Jul 3, 2025

Hi @codelion, this is the rebased version of #63

One thing I noticed from the visualization is that with parallel iterations, the depth of the tree (number of generations) tends to be smaller. I am not sure what the impact on performance is, but the experiments below show how it performs in practice.
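One plausible explanation for the shallower tree (a toy illustration only, not OpenEvolve's actual scheduling logic): when iterations run in parallel batches, every program in a batch is sampled from the population as it existed before the batch started, so the generation counter can advance at most once per batch rather than once per iteration.

```python
# Toy illustration only -- not OpenEvolve's real controller.
# With a batch size of 1 (sequential), each child can immediately become
# a parent, so 50 iterations can reach generation 50. With a batch size
# of 25, all 25 children in a batch share parents from before the batch,
# so 50 iterations can reach at most generation 2.

def max_generation(iterations: int, batch_size: int) -> int:
    population = [{"generation": 0}]  # seed program
    done = 0
    while done < iterations:
        batch = min(batch_size, iterations - done)
        # All parents are sampled from the population *before* this batch.
        parent_gen = max(p["generation"] for p in population)
        children = [{"generation": parent_gen + 1} for _ in range(batch)]
        population.extend(children)
        done += batch
    return max(p["generation"] for p in population)

print(max_generation(50, 1))   # 50 (sequential)
print(max_generation(50, 25))  # 2  (parallel, 25 at a time)
```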

I've tested the updated parallel-iterations version on the circle_packing_with_artifacts example, running 50 iterations per stage instead of 100 to reduce API costs. I used gemini-2.0-flash-lite and gemini-2.0-flash-lite.

Results:
Sequential:
Total Runtime: 30 minutes
Best Score (sum_radii): 1.88
Stage 1:

{
  "id": "788596e9-6d2b-4134-ade9-c309fdf812c2",
  "generation": 2,
  "iteration": 6,
  "timestamp": 1751574058.320835,
  "parent_id": "6eb73296-9b3c-42e1-91da-2063ce7acfaf",
  "metrics": {
    "validity": 1.0,
    "sum_radii": 1.8801588665719113,
    "target_ratio": 0.7135327766876325,
    "combined_score": 0.7135327766876325,
    "eval_time": 0.11612915992736816
  },
  "language": "python",
  "saved_at": 1751574423.6830008
}

Stage 2:

{
  "id": "084465de-efce-4e42-81b0-79f6a044e819",
  "generation": 0,
  "iteration": 0,
  "timestamp": 1751574425.2020102,
  "parent_id": null,
  "metrics": {
    "validity": 1.0,
    "sum_radii": 1.8801588665719113,
    "target_ratio": 0.7135327766876325,
    "combined_score": 0.7135327766876325,
    "eval_time": 0.11661648750305176
  },
  "language": "python",
  "saved_at": 1751575856.2608302
}

Parallel (25 cores):
Total Runtime: 2 minutes (~15x speedup)
Best Score (sum_radii): 2.038
Stage 1:

{
  "id": "b96d6e67-76bb-4765-9f51-584d92e5d2c2",
  "generation": 1,
  "iteration": 20,
  "timestamp": 1751577546.0471964,
  "parent_id": "49307652-3ead-485f-8c21-1ce11a16ec29",
  "metrics": {
    "validity": 1.0,
    "sum_radii": 1.8598312591600656,
    "target_ratio": 0.7058183146717517,
    "combined_score": 0.7058183146717517,
    "eval_time": 0.22881340980529785
  },
  "language": "python",
  "saved_at": 1751577556.0780315
}

Stage 2:

{
  "id": "1a044db7-811c-4ea6-92e3-1918c34b28a1",
  "generation": 1,
  "iteration": 16,
  "timestamp": 1751577569.0512633,
  "parent_id": "74a8ac96-f519-49e5-b3da-f4b9058cdc2e",
  "metrics": {
    "validity": 1.0,
    "sum_radii": 2.038107874942713,
    "target_ratio": 0.7734754743615609,
    "combined_score": 0.7734754743615609,
    "eval_time": 0.3030838966369629
  },
  "language": "python",
  "saved_at": 1751577604.0499
}
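
A quick consistency check on the metrics above (derived purely from the numbers in these records; the 2.635 target is inferred from the ratios, not taken from the example's source):

```python
# Sanity check of the reported metrics (values copied from the JSON above).
# The ratio sum_radii / target_ratio is ~2.635 in every record, which is
# consistent with combined_score = target_ratio = sum_radii / 2.635
# (2.635 being the presumed target sum of radii for this example).

records = [
    (1.8801588665719113, 0.7135327766876325),  # sequential, stage 1 & 2
    (1.8598312591600656, 0.7058183146717517),  # parallel, stage 1
    (2.038107874942713, 0.7734754743615609),   # parallel, stage 2
]

for sum_radii, target_ratio in records:
    print(f"{sum_radii / target_ratio:.3f}")   # prints 2.635 each time
```

In every record, combined_score equals target_ratio, so the improvement from 1.88 to 2.04 in sum_radii corresponds to the jump from roughly 0.71 to 0.77 in combined_score.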

CLAassistant commented Jul 3, 2025

CLA assistant check
All committers have signed the CLA.

codelion (Member) commented Jul 3, 2025

I left a few comments asking for clarification, but otherwise this looks good. Thank you for the contributions.

SuhailB (Contributor, Author) commented Jul 3, 2025

Let's wait for the original contributor to weigh in; otherwise, we can revert these modifications.

MashAliK (Contributor) commented Jul 4, 2025

@codelion @SuhailB I've left my reasoning for these changes in the comments here. I added these when I was using an older version of the library, so some things might have been fixed since then, like the edit distance bug I was encountering.
Ultimately, none of these changes are related to the actual parallel iteration functionality and were quick tweaks I made to try to improve my training. They can be reverted.

codelion (Member) commented Jul 4, 2025

> Ultimately, none of these changes are related to the actual parallel iteration functionality and were quick tweaks I made to try to improve my training. They can be reverted.

Yeah, let's revert these changes; the edit distance bug was fixed in main. @SuhailB, sorry, we'd need one more round of edits from you.

SuhailB (Contributor, Author) commented Jul 4, 2025

Thank you very much, @MashAliK, for the feedback. @codelion, I just reverted those changes.

MashAliK (Contributor) left a review comment

Thanks a lot for rebasing and making the changes. I just have some very minor suggestions here.

SuhailB and others added 3 commits July 5, 2025 13:45
Replaces process-based parallelism with a new thread-based parallel controller using shared memory for improved performance and reliability. Removes filelock usage and related code from the database, as thread-based parallelism does not require file-based locking. Updates the main controller to use the new parallel system, adds checkpoint resume support, and adapts iteration logic for thread safety. Cleans up dependencies by removing filelock from requirements.
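
A minimal sketch of the thread-based design this commit describes (an illustration under simplifying assumptions, not the actual OpenEvolve controller): iterations run on a ThreadPoolExecutor, and the program database lives in process memory behind a single threading.Lock, which is why file-based locking (filelock) is no longer required.

```python
# Minimal sketch of a thread-based parallel iteration loop with a
# shared in-memory database. Illustration only, not the actual
# OpenEvolve implementation.
import threading
import uuid
from concurrent.futures import ThreadPoolExecutor


class SharedDatabase:
    """In-memory program store; a single lock replaces file-based locking."""

    def __init__(self):
        self._programs = {}
        self._lock = threading.Lock()

    def add(self, program):
        with self._lock:
            self._programs[program["id"]] = program

    def sample_parent(self):
        with self._lock:
            # Pick the current best program as the parent (toy selection).
            return max(self._programs.values(), key=lambda p: p["score"], default=None)

    def size(self):
        with self._lock:
            return len(self._programs)


def run_iteration(db: SharedDatabase, iteration: int):
    parent = db.sample_parent()
    # In the real system the LLM would mutate the parent and the child
    # would be evaluated; here we just fabricate a child record.
    child = {
        "id": str(uuid.uuid4()),
        "iteration": iteration,
        "parent_id": parent["id"] if parent else None,
        "score": (parent["score"] if parent else 0.0) + 0.01,
    }
    db.add(child)


db = SharedDatabase()
db.add({"id": "seed", "iteration": 0, "parent_id": None, "score": 0.0})

with ThreadPoolExecutor(max_workers=25) as pool:
    list(pool.map(lambda i: run_iteration(db, i), range(1, 51)))

print(db.size(), "programs in the shared database")
```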
codelion (Member) commented Jul 7, 2025

While testing, I was running into race conditions with the parallel-process and file-based approach, so I moved to a shared-memory and threads approach here, which seems to work better.
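
For illustration, one common kind of race in read-modify-write workflows is a lost update: two workers read the same state, both modify it, and the second write clobbers the first. The generic demonstration below (not OpenEvolve code, and not a claim about the specific race encountered here) shows how a single in-process lock prevents it.

```python
# Generic lost-update demonstration -- not OpenEvolve code.
import threading

counter = {"value": 0}
lock = threading.Lock()

def unsafe_add(n):
    for _ in range(n):
        v = counter["value"]      # read
        counter["value"] = v + 1  # write; another thread may have written in between

def safe_add(n):
    for _ in range(n):
        with lock:                # the read-modify-write is now atomic
            counter["value"] += 1

for fn in (unsafe_add, safe_add):
    counter["value"] = 0
    threads = [threading.Thread(target=fn, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(fn.__name__, counter["value"])  # unsafe_add usually loses updates (< 400000)
```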

Update project version from 0.0.11 to 0.0.12 in both pyproject.toml and setup.py for a new release.
@codelion codelion merged commit 92c7f7c into algorithmicsuperintelligence:main Jul 7, 2025
0x0f0f0f pushed a commit to 0x0f0f0f/openevolve that referenced this pull request on Jul 7, 2025: Updated parallel iterations
wangcheng0825 pushed a commit to wangcheng0825/openevolve that referenced this pull request on Sep 15, 2025: Updated parallel iterations