Merged

62 commits
e6f6d6b feat: bump flame-sdk version (antidodo, Aug 7, 2025)
2ccf992 feat: bump flame-sdk version (antidodo, Aug 28, 2025)
78e09fe feat: bump flame-sdk version (antidodo, Aug 28, 2025)
9aa9c85 feat: bump flame-sdk version (antidodo, Aug 28, 2025)
f1d0ede feat: bump flame-sdk version (antidodo, Aug 28, 2025)
1743b72 feat: bump flame-sdk version (antidodo, Aug 29, 2025)
34c613a feat: bump flame-sdk version (antidodo, Sep 18, 2025)
e440841 feat: bump flame-sdk version (antidodo, Sep 22, 2025)
e3161f0 feat: bump flame-sdk version (antidodo, Sep 22, 2025)
94a84b6 feat: bump flame-sdk version (antidodo, Sep 23, 2025)
19d73ca feat: bump flame-sdk version (antidodo, Sep 23, 2025)
998249f feat: bump flame-sdk version (antidodo, Sep 23, 2025)
c9d52f6 feat: bump flame-sdk version (antidodo, Sep 23, 2025)
3d19717 feat: bump flame-sdk version (antidodo, Sep 23, 2025)
01f00d4 feat: bump flame-sdk version (antidodo, Sep 24, 2025)
87d0f44 feat: bump flame-sdk version (antidodo, Sep 24, 2025)
32d1d1c feat: bump flame-sdk version (antidodo, Sep 25, 2025)
fc4fbf4 feat: bump flame-sdk version (antidodo, Sep 29, 2025)
4b3f86d feat: bump flame-sdk version (antidodo, Sep 29, 2025)
bc75289 feat: bump flame-sdk version (antidodo, Oct 17, 2025)
1e0cd40 feat: bump flame-sdk version (antidodo, Oct 17, 2025)
9e7905e feat: bump flame-sdk version (antidodo, Oct 17, 2025)
0a66d16 feat: bump flame-sdk version (Nightknight3000, Oct 30, 2025)
38880d6 feat: bump flame-sdk version (Nightknight3000, Oct 31, 2025)
95e78e5 test: empty commit (Nightknight3000, Nov 3, 2025)
a6babd9 feat: bump flame-sdk version (Nightknight3000, Nov 6, 2025)
1650326 feat: bump flame-sdk version (Nightknight3000, Nov 6, 2025)
a71cde6 feat: bump flame-sdk version (Nightknight3000, Nov 6, 2025)
57799d7 feat: bump flame-sdk version (Nightknight3000, Nov 6, 2025)
c07cd24 refactor: fix typos in comments (Nightknight3000, Nov 19, 2025)
48cf411 Update README with project title and description (antidodo, Nov 24, 2025)
84942ec feat: bump to request_timeout branch (antidodo, Jan 19, 2026)
827f567 feat: bump to request_timeout branch (antidodo, Jan 19, 2026)
be1697f Merge branch 'main' into canary (Nightknight3000, Jan 19, 2026)
188d9d2 fix: merge from main (Nightknight3000, Jan 19, 2026)
fa2fb8b feat: bump to canary branch (antidodo, Jan 21, 2026)
69819cf feat: bump to canary branch (antidodo, Jan 21, 2026)
f8fa6fc feat: bump to canary branch (antidodo, Jan 21, 2026)
141d3cd feat: update node_finished method to log completion status (antidodo, Jan 21, 2026)
54945b3 feat: modify node_finished method and add infinite loop for orderly s… (antidodo, Jan 21, 2026)
ba41a88 feat: remove unused variable in star_model.py to streamline data anal… (antidodo, Jan 21, 2026)
e3a4dfc feat: bump version to 0.4.1 (antidodo, Jan 21, 2026)
cf22bd4 feat: bump version to request_timeout latest (antidodo, Jan 22, 2026)
44fc4a3 Merge remote-tracking branch 'origin/main' into canary (antidodo, Jan 26, 2026)
cad909b feat: update poetry.lock and pyproject.toml (antidodo, Jan 26, 2026)
a02d737 feat: bump to canary branch (antidodo, Feb 6, 2026)
64dc2a6 feat: bump to canary branch sdk back to canary (antidodo, Feb 6, 2026)
afdc79d feat: bump to canary branch (antidodo, Feb 9, 2026)
4e2852f feat: bump to canary branch (antidodo, Feb 9, 2026)
63a78f3 feat: bump to canary branch (antidodo, Feb 9, 2026)
6aa2a0b feat: bump to canary branch (antidodo, Feb 9, 2026)
672a0eb Merge branch 'main' into canary (Nightknight3000, Feb 9, 2026)
3f5f264 feat: update sdk (Nightknight3000, Feb 9, 2026)
568e199 feat: bump sdk version to 0.4.2 (Nightknight3000, Mar 3, 2026)
a7927f3 Merge remote-tracking branch 'origin/main' into canary (antidodo, Mar 10, 2026)
94a47ef feat: enable parallelization in local testing (antidodo, Mar 10, 2026)
2734901 feat: add collective termination and error logging in multi-threaded … (antidodo, Mar 13, 2026)
ff69c56 feat: bump canary sdk version (Nightknight3000, Mar 16, 2026)
01c4fbb feat: update flamesdk dependency to version 0.4.2 (antidodo, Mar 17, 2026)
0fbb53d Merge remote-tracking branch 'origin/main' into canary (antidodo, Mar 17, 2026)
a972033 feat: update flamesdk dependency to version 0.4.2 (antidodo, Mar 17, 2026)
b65c69e feat: reset shared stop event for thread failure handling before start (antidodo, Mar 17, 2026)
40 changes: 33 additions & 7 deletions flame/star/star_model_tester.py
@@ -2,8 +2,10 @@
import threading
import uuid
from typing import Any, Type, Literal, Optional, Union
import traceback

from flame.star import StarModel, StarLocalDPModel, StarAnalyzer, StarAggregator
from flame.utils.mock_flame_core import MockFlameCoreSDK


class StarModelTester:
@@ -28,6 +30,9 @@ def __init__(self,
participant_ids = [str(uuid.uuid4()) for _ in range(len(node_roles) + 1)]

threads = []
thread_errors = {}
results_queue = []
Comment on lines +33 to +34
⚠️ Potential issue | 🟡 Minor

Thread-safety issue: thread_errors and results_queue are shared across threads without synchronization.

Both thread_errors (dict) and results_queue (list) are accessed concurrently by multiple threads. While CPython's GIL makes individual operations like list.append() and dict.__setitem__() atomic, relying on this is fragile and non-portable. Consider using threading.Lock or thread-safe collections like queue.Queue for results_queue.

🔒 Suggested fix using Queue
 import pickle
 import threading
 import uuid
+from queue import Queue
 from typing import Any, Type, Literal, Optional, Union
 import traceback
 ...
         threads = []
         thread_errors = {}
-        results_queue = []
+        results_queue = Queue()
 ...
-                    results_queue.append(flame.final_results_storage)
+                    results_queue.put(flame.final_results_storage)
 ...
-        if results_queue:
-            self.write_result(results_queue[0], output_type, result_filepath, multiple_results)
+        if not results_queue.empty():
+            self.write_result(results_queue.get(), output_type, result_filepath, multiple_results)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `flame/star/star_model_tester.py` around lines 33-34, thread_errors and
results_queue are accessed by multiple threads without synchronization; replace
results_queue list with a thread-safe queue.Queue instance and protect
modifications to thread_errors with a threading.Lock (or alternatively push
error records into the same Queue). Concretely, import queue and threading,
change results_queue = [] to results_queue = queue.Queue(), replace all
results_queue.append(...) calls with results_queue.put(...), introduce
thread_errors_lock = threading.Lock() and wrap any reads/writes to thread_errors
(e.g., thread_errors[...] = err or del thread_errors[...]) inside with
thread_errors_lock: blocks (or convert thread_errors to a queue of (thread_id,
error) tuples and avoid the dict entirely).
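The Queue-based collection the reviewer suggests can be sketched in isolation. Everything below uses illustrative names, not code from this repository:

```python
import threading
from queue import Queue

results: Queue = Queue()  # thread-safe: put()/get() need no external locking

def worker(node_id: int) -> None:
    # Stand-in for the real model run; any result object works.
    results.put((node_id, node_id * node_id))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

collected = {}
while not results.empty():  # safe here because all producers have joined
    node_id, value = results.get()
    collected[node_id] = value
print(sorted(collected.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9)]
```

Note that `empty()` is only reliable after every producer thread has been joined; while producers are still running, `get(timeout=...)` is the safer idiom.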

MockFlameCoreSDK.stop_event = [] # shared stop event for all threads in case of failure in any thread
for i, participant_id in enumerate(participant_ids):
test_kwargs = {
'analyzer': analyzer,
@@ -54,13 +59,28 @@ def __init__(self,
test_kwargs['epsilon'] = epsilon
test_kwargs['sensitivity'] = sensitivity

results_queue = []
def run_node(kwargs=test_kwargs, use_dp=use_local_dp):
if not use_dp:
flame = StarModel(**kwargs).flame
else:
flame = StarLocalDPModel(**kwargs).flame
results_queue.append(flame.final_results_storage)
try:
if not use_dp:
flame = StarModel(**kwargs).flame
else:
flame = StarLocalDPModel(**kwargs).flame
results_queue.append(flame.final_results_storage)
except Exception:
stop_event = MockFlameCoreSDK.stop_event
if not stop_event:
stack_trace = traceback.format_exc()#.replace('\n', '\\n').replace('\t', '\\t')
thread_errors[(kwargs['test_kwargs']['role'],
kwargs['test_kwargs']['node_id'])] = f"\033[31m{stack_trace}\033[0m"
stop_event.append(kwargs['test_kwargs']['node_id'])
Comment on lines +70 to +75

⚠️ Potential issue | 🟠 Major

Class-level stop_event is not reset between test runs.

MockFlameCoreSDK.stop_event is a class-level mutable list that persists across all instances. If a test fails and appends to stop_event, subsequent StarModelTester instantiations will see the non-empty list, causing all threads to immediately fail in await_messages.

Consider resetting MockFlameCoreSDK.stop_event = [] at the start of StarModelTester.__init__ (along with other class-level state like message_broker and final_results_storage).

🐛 Proposed fix to reset shared state

Add at the beginning of __init__, before creating threads:

# Reset shared mock state for fresh test run
MockFlameCoreSDK.stop_event = []
MockFlameCoreSDK.message_broker = {}
MockFlameCoreSDK.final_results_storage = None
MockFlameCoreSDK.num_iterations = IterationTracker()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `flame/star/star_model_tester.py` around lines 69-74, reset the shared
MockFlameCoreSDK class-level mutable state at the start of
StarModelTester.__init__ (before threads are created) to avoid cross-test
contamination: set MockFlameCoreSDK.stop_event to an empty list,
MockFlameCoreSDK.message_broker to an empty dict,
MockFlameCoreSDK.final_results_storage to None, and
MockFlameCoreSDK.num_iterations to a fresh IterationTracker() instance; update
StarModelTester.__init__ to perform these resets so await_messages and other
methods see a clean slate for each test run.
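To see why the reviewer asks for a reset, a minimal stand-in class (names are illustrative, not the repository's) shows class-level mutable state leaking across runs and a reset clearing it:

```python
class MockSDK:  # illustrative stand-in for MockFlameCoreSDK
    stop_event: list = []   # class-level: shared by every instance
    message_broker: dict = {}

    @classmethod
    def reset_state(cls) -> None:
        """Give each test run a clean slate."""
        cls.stop_event = []
        cls.message_broker = {}

# First "test run" fails and records a stop signal.
MockSDK.stop_event.append("node-1")

# Without a reset, a second run still sees the stale signal.
assert MockSDK.stop_event == ["node-1"]

# Resetting before the next run removes the contamination.
MockSDK.reset_state()
assert MockSDK.stop_event == []
```

Calling such a reset at the top of `__init__` (or from a test fixture) is what keeps one failed run from poisoning the next.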

mock = MockFlameCoreSDK(test_kwargs=kwargs['test_kwargs'])
mock.__pop_logs__(failure_message=True)
else:
thread_errors[(kwargs['test_kwargs']['role'],
kwargs['test_kwargs']['node_id'])] = (Exception("Another thread already failed, "
"stopping this thread as well."))
return
Comment on lines +69 to +82

⚠️ Potential issue | 🟡 Minor

TOCTOU race condition on stop_event check-then-append.

The check if not stop_event (Line 70) and subsequent stop_event.append() (Line 74) is not atomic. Multiple threads could pass the check simultaneously before any append occurs, causing multiple threads to believe they are the "first" to fail and each recording their stack traces.

Additionally, Lines 79-80 store an Exception object directly rather than a string, which will print as Exception("Another thread...") rather than the message itself.

🔒 Suggested fix for atomicity and consistent error format
+        error_lock = threading.Lock()
+        first_failure = [False]  # Use list for mutable closure
 ...
             except Exception:
                 stop_event = MockFlameCoreSDK.stop_event
-                if not stop_event:
-                    stack_trace = traceback.format_exc()
-                    thread_errors[(kwargs['test_kwargs']['role'],
-                                   kwargs['test_kwargs']['node_id'])] = f"\033[31m{stack_trace}\033[0m"
-                    stop_event.append(kwargs['test_kwargs']['node_id'])
-                    mock = MockFlameCoreSDK(test_kwargs=kwargs['test_kwargs'])
-                    mock.__pop_logs__(failure_message=True)
-                else:
-                    thread_errors[(kwargs['test_kwargs']['role'],
-                                   kwargs['test_kwargs']['node_id'])] = (Exception("Another thread already failed, "
-                                                                                   "stopping this thread as well."))
+                with error_lock:
+                    is_first = not first_failure[0]
+                    if is_first:
+                        first_failure[0] = True
+                        stop_event.append(kwargs['test_kwargs']['node_id'])
+                if is_first:
+                    stack_trace = traceback.format_exc()
+                    thread_errors[(kwargs['test_kwargs']['role'],
+                                   kwargs['test_kwargs']['node_id'])] = f"\033[31m{stack_trace}\033[0m"
+                    mock = MockFlameCoreSDK(test_kwargs=kwargs['test_kwargs'])
+                    mock.__pop_logs__(failure_message=True)
+                else:
+                    thread_errors[(kwargs['test_kwargs']['role'],
+                                   kwargs['test_kwargs']['node_id'])] = "Another thread already failed, stopping this thread as well."
                 return
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 68-68: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `flame/star/star_model_tester.py` around lines 68-81, the code has a TOCTOU
race on MockFlameCoreSDK.stop_event and stores Exception objects in
thread_errors; fix by making the check-and-append atomic using a lock (e.g., use
or add MockFlameCoreSDK.stop_event_lock and wrap the check and append in a with
MockFlameCoreSDK.stop_event_lock: block) so only one thread can claim first
failure, and ensure you store plain string messages in thread_errors (replace
Exception("Another thread...") with the message string) and include the
formatted stack_trace string for the winning thread; update the block around
stop_event, stop_event.append(...), and thread_errors assignments accordingly.
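The lock-protected first-failure claim can be sketched as follows; the names and error strings are illustrative, and the module-level lock plays the role the review assigns to a `MockFlameCoreSDK`-level lock:

```python
import threading

lock = threading.Lock()
stop_event: list = []   # mimics the shared stop list
errors: dict = {}

def fail(node_id: str) -> None:
    # The check and the append happen under one lock acquisition,
    # so exactly one thread can observe an empty stop_event.
    with lock:
        if not stop_event:
            stop_event.append(node_id)
            errors[node_id] = "stack trace of the first failure"
        else:
            errors[node_id] = "another thread already failed"

threads = [threading.Thread(target=fail, args=(f"node-{i}",)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# One winner, regardless of scheduler interleaving.
assert len(stop_event) == 1
assert sum(v.startswith("stack trace") for v in errors.values()) == 1
```

Writing `errors` inside the same critical section also sidesteps the unsynchronized-dict concern from the earlier comment.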


thread = threading.Thread(target=run_node)
threads.append(thread)

@@ -70,8 +90,14 @@ def run_node(kwargs=test_kwargs, use_dp=use_local_dp):
for thread in threads:
thread.join()


# write final results
self.write_result(results_queue[0], output_type, result_filepath, multiple_results)
if results_queue:
self.write_result(results_queue[0], output_type, result_filepath, multiple_results)
else:
print("No results to write. All threads failed with errors:")
for (role, node_id), error in thread_errors.items():
print(f"\t{(role if role != 'default' else 'analyzer').capitalize()} {node_id}: {error}")


@staticmethod
29 changes: 24 additions & 5 deletions flame/utils/mock_flame_core.py
@@ -44,11 +44,23 @@ def __init__(self, test_kwargs) -> None:
self.finished: bool = False


class IterationTracker:
def __init__(self):
self.iter = 0

def increment(self):
self.iter += 1

def get_iterations(self):
return self.iter


class MockFlameCoreSDK:
num_iterations: int = 0
num_iterations: IterationTracker = IterationTracker()
logger: dict[str, list[str]] = {}
message_broker: dict[str, list[dict[str, Any]]] = {}
final_results_storage: Optional[Any] = None
stop_event: list[tuple[str]] = []
Comment on lines +59 to +63

⚠️ Potential issue | 🟠 Major

Class-level mutable attributes persist across test runs and may cause test pollution.

The class-level num_iterations, logger, message_broker, final_results_storage, and stop_event are shared across all instances and persist between test runs. This is intentional for intra-test thread communication, but problematic for test isolation:

  • num_iterations will accumulate across multiple StarModelTester instantiations
  • stop_event will remain populated after failures (flagged in star_model_tester.py)

Consider either:

  1. Resetting these in StarModelTester.__init__ before each test run, or
  2. Adding a class method reset_state() to be called before tests
🔧 Proposed class reset method
@classmethod
def reset_state(cls):
    """Reset all shared state for a fresh test run."""
    cls.num_iterations = IterationTracker()
    cls.logger = {}
    cls.message_broker = {}
    cls.final_results_storage = None
    cls.stop_event = []
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 60-60: Mutable default value for class attribute

(RUF012)


[warning] 61-61: Mutable default value for class attribute

(RUF012)


[warning] 63-63: Mutable default value for class attribute

(RUF012)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `flame/utils/mock_flame_core.py` around lines 59-63, the class-level mutable
state (num_iterations, logger, message_broker, final_results_storage,
stop_event) is shared across instances and must be reset between tests; update
StarModelTester to either (a) reinitialize those attributes at the start of
StarModelTester.__init__ by setting num_iterations = IterationTracker(), logger
= {}, message_broker = {}, final_results_storage = None, stop_event = [] or (b)
add a classmethod reset_state() that performs those exact assignments
(reset_state should set cls.num_iterations = IterationTracker(), cls.logger =
{}, cls.message_broker = {}, cls.final_results_storage = None, cls.stop_event =
[]) and ensure tests call StarModelTester.reset_state() before each run. Ensure
you reference the IterationTracker type when reinitializing.
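As a possible alternative to the list-based flag, the standard library's `threading.Event` covers both findings: `is_set()` replaces the truthiness check in `await_messages`, and `clear()` gives the per-run reset. A sketch with illustrative names:

```python
import threading

stop = threading.Event()
log: list = []

def run_node(node_id: str, should_fail: bool) -> None:
    if stop.is_set():      # plays the role of `if self.stop_event:`
        log.append(f"{node_id}: skipped, another node failed")
        return
    if should_fail:
        stop.set()         # signal every other thread, atomically
        log.append(f"{node_id}: failed")
    else:
        log.append(f"{node_id}: finished")

# Sequential calls keep the demo deterministic.
run_node("node-0", True)
run_node("node-1", False)
print(log)  # ['node-0: failed', 'node-1: skipped, another node failed']

stop.clear()  # reset between test runs; no stale state survives
```

The list does carry extra information (which node failed), so if that matters, a lock-guarded list alongside the Event is one option.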


def __init__(self, test_kwargs):
self.sanity_check(test_kwargs)
@@ -202,6 +214,8 @@ def await_messages(self,
break
raise KeyError
except KeyError:
if self.stop_event:
raise Exception
time.sleep(.01)
pass

@@ -323,12 +337,17 @@ def _node_finished(self) -> bool:
self.config.finished = True
return self.config.finished

def __pop_logs__(self) -> None:
print(f"--- Starting Iteration {self.num_iterations} ---")
def __pop_logs__(self, failure_message: bool = False) -> None:
print(f"--- Starting Iteration {self.__get_iteration__()} ---")
if failure_message:
self.flame_log("Exception was raised (see Stacktrace)!", log_type='error')
for k, v in self.logger.items():
role, log = self.logger[k]
print(f"Logs for {'Analyzer' if role == 'default' else role.capitalize()} {k}:")
self.logger[k] = [role, '']
print(log, end='')
print(f"--- Ending Iteration {self.num_iterations} ---\n")
self.num_iterations += 1
print(f"--- Ending Iteration {self.__get_iteration__()} ---\n")
self.num_iterations.increment()

def __get_iteration__(self):
return self.num_iterations.get_iterations()
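The switch from a plain int to an `IterationTracker` in this diff is not cosmetic: augmented assignment on an immutable class attribute through `self` rebinds a new instance attribute, so other instances (and threads) never observe the increment. Mutating a shared object avoids that. A minimal demonstration, independent of the repository code:

```python
class Counter:
    n = 0  # class-level int

a, b = Counter(), Counter()
a.n += 1  # reads the class attribute, then writes a NEW instance attribute on `a`
print(a.n, b.n, Counter.n)  # 1 0 0 -- the "shared" counter never moved

class Tracker:  # mutating a shared object works as intended
    def __init__(self):
        self.iter = 0
    def increment(self):
        self.iter += 1

class Holder:
    shared = Tracker()  # one object, referenced by every instance

x, y = Holder(), Holder()
x.shared.increment()
print(y.shared.iter)  # 1 -- both instances observe the increment
```

This is why `self.num_iterations += 1` on the old int attribute would have silently diverged per thread, while `self.num_iterations.increment()` updates the counter every mock instance sees.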
58 changes: 29 additions & 29 deletions poetry.lock


4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "flame"
version = "0.6.0"
version = "0.6.1"
description = ""
authors = ["Alexander Röhl <alexander.roehl@uni-tuebingen.de>", "David Hieber <david.hieber@uni-tuebingen.de>"]
readme = "README.md"
@@ -9,7 +9,7 @@ packages = [{ include = "flame" }]

[tool.poetry.dependencies]
python = ">=3.9,<4.0"
flamesdk = {git = "https://github.com/PrivateAIM/python-sdk.git", tag = "0.4.1"}
flamesdk = {git = "https://github.com/PrivateAIM/python-sdk.git", tag = "0.4.2"}
opendp = ">=0.12.1,<0.13.0"

[tool.poetry.group.dev.dependencies]