The longest part of preprocessing is the detect_bad_channels step, even if I slice the recording to just 30 seconds (384 channels).
Can it be optimized, perhaps using parallelization?
Thanks
import cProfile
import pstats

# Assumes a SpikeInterface recording object named `recording` already exists.
from spikeinterface.preprocessing import detect_bad_channels

methods = ["coherence+psd", "std", "mad", "neighborhood_r2"]
for method in methods:
    # Create a Profile object and start profiling
    profiler = cProfile.Profile()
    profiler.enable()
    # Run the call under test; pass `method` so each pass profiles a different detector
    detect_bad_channels(recording, method=method)
    # Stop profiling
    profiler.disable()
    # Save the profiling results to a file in the .prof format
    profiler.dump_stats(f"profile_{method}_results.prof")
    # Load the saved .prof file and print the stats to the console
    stats = pstats.Stats(f"profile_{method}_results.prof")
    stats.print_stats()
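Sorting the stats by cumulative time usually makes the hot spot inside detect_bad_channels obvious. A minimal, standard-library-only sketch; `function_to_profile` here is just a toy workload standing in for the real call:

```python
import cProfile
import io
import pstats

def function_to_profile():
    # Toy CPU-bound workload standing in for detect_bad_channels
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
function_to_profile()
profiler.disable()

# Sort entries by cumulative time and capture the report as text
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)  # top 10 entries
report = stream.getvalue()
print(report)
```

The same `sort_stats("cumulative")` call works on the `Stats` objects loaded from the saved `.prof` files above.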