Increase maximum number of harts #429

Merged (1 commit) on Dec 6, 2019

Conversation

@bluewww (Contributor) commented on Dec 3, 2019

OpenOCD can't deal with systems that have more than 32 harts.
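
For context, the limit in question is a compile-time constant in the RISC-V target code, so the change is essentially a one-line bump of that constant. A sketch of its shape (the file path, the old value of 32, and the new value of 1024 are inferred from this thread, not copied from the commit):

    /* src/target/riscv/riscv.h (path and values assumed) */
    -#define RISCV_MAX_HARTS 32
    +#define RISCV_MAX_HARTS 1024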

@timsifive (Collaborator)

How much more memory does OpenOCD use when you do this? To keep supporting -rtos riscv (which is deprecated and will eventually be removed), I think memory usage is O(n^2) in the number of harts. Once we get rid of that, we should be able to remove the limit altogether.
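
To make the scaling concern concrete, here is a hypothetical back-of-the-envelope sketch. It is not OpenOCD's real data layout; it only assumes that each possible hart (up to RISCV_MAX_HARTS) carries its own register cache, and the per-register numbers are made up so the output lands near the measurements quoted later in this thread:

    /* Hypothetical illustration of the scaling argument, not OpenOCD's actual
     * data structures: if a register cache is kept for each of the
     * RISCV_MAX_HARTS possible harts, the cost per configured hart grows with
     * that constant, and a system that really uses n harts pays roughly
     * n * RISCV_MAX_HARTS times some per-register cost. */
    #include <stdio.h>

    #define REGS_PER_HART 5000  /* assumed order of magnitude */
    #define BYTES_PER_REG 8     /* assumed per-register overhead */

    static double extra_mib(int max_harts, int configured_harts)
    {
        return (double)configured_harts * max_harts
               * REGS_PER_HART * BYTES_PER_REG / (1024.0 * 1024.0);
    }

    int main(void)
    {
        printf("1 configured hart,   RISCV_MAX_HARTS=32:   %7.1f MiB\n",
               extra_mib(32, 1));
        printf("1 configured hart,   RISCV_MAX_HARTS=1024: %7.1f MiB\n",
               extra_mib(1024, 1));
        printf("32 configured harts, RISCV_MAX_HARTS=1024: %7.1f MiB\n",
               extra_mib(1024, 32));
        return 0;
    }

With these assumed constants the three cases come out to roughly 1 MiB, 39 MiB, and 1.2 GiB, which is the shape of the problem discussed below.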

@bluewww (Contributor, Author) commented on Dec 3, 2019

It increases memory usage from 2.4MiB to 45.1MiB on my machine.

@timsifive (Collaborator)

How many harts are actually configured in that measurement?

@bluewww (Contributor, Author) commented on Dec 4, 2019

One. Aren't we interested in how this patch affects the regular case (low hart count / low hart id number, since that seems to be the same)?

@timsifive (Collaborator)

So if 1 hart uses 43 MiB more after this change, then (assuming somebody was using OpenOCD for a 32-hart system) that could incur about 1.3 GiB of extra memory usage. That's kind of high.

Do you need the full 1024 harts, or did you just pick a number that should be good for a long time? I'd be a lot more comfortable with half or a quarter of this memory increase.

Over the long term, -rtos riscv is going to go away, we can get rid of RISCV_MAX_HARTS, and memory usage won't blow up the way it currently does when increasing that number.
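
For reference, the 1.3 GiB estimate above is just the measured per-hart increase scaled up to 32 configured harts:

    extra per configured hart ~= 45.1 MiB - 2.4 MiB ~= 43 MiB
    32 configured harts       ~= 32 * 43 MiB ~= 1376 MiB ~= 1.3 GiB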

@bluewww (Contributor, Author) commented on Dec 4, 2019

Well, in our case it's a combination of both: we have setups with more than 32 cores, and the maximum hartid is just slightly below 1000-ish.
Are there any low-hanging fruit to alleviate the memory issues?

@timsifive (Collaborator)

Turns out there's some very low-hanging fruit: #431.
Once that merges, your change will be fine.
