
Workaround for an error: conflicting erase polarities, was 0x00, requested 0xFF #329

Open
xaionaro opened this issue Feb 15, 2021 · 9 comments

Comments

@xaionaro
Member

Hello!

I'm parsing firmware images to analyze corruptions. However, there is one non-corrupted firmware image (provided by our ODM) that fiano cannot parse; it returns an error:

conflicting erase polarities, was 0x00, requested 0xFF

Also, I read multiple firmware images concurrently (in separate goroutines), so switching the global Attributes.ErasePolarity value is not a good option. So I'm wondering: is it possible to work around this problem?

@GanShun
Member

GanShun commented Feb 15, 2021

I vaguely remember there's supposed to be a header that tells you what the polarity is; I think it might be in the firmware volume header. We should read that and apply it automatically, though that raises the question of what happens if we see conflicting volumes. The erase polarity should really be stored in a global header of some sort instead of the FV header, I think.

@xaionaro
Member Author

xaionaro commented Feb 15, 2021

OK, thanks for the response.

Since I do not modify firmware images: is it safe to just ignore this error (i.e., patch our local copy of fiano to skip this check)?

@GanShun
Member

GanShun commented Mar 9, 2021

Yes, it should be safe. If it becomes a pain, we can turn that into some kind of warning that doesn't terminate the parsing.

@trynity
Collaborator

trynity commented Jan 18, 2022

yes it should be safe. If it becomes a pain, we can work that into some kind of warning that doesn't terminate the parsing

It has now become a bit of a pain, since we want to get back to vanilla upstream rather than relying on our fork. Would it be possible to emit a warning rather than a hard error?

@trynity
Collaborator

trynity commented Mar 4, 2022

@GanShun Following up on this again: should we just make a PR for what we have to bypass this, and continue the work there? It'd be grand to be back on upstream and not have to deal with forks.

@rihter007
Collaborator

Maybe it is time to add per-call settings instead of a global one?

@GanShun
Member

GanShun commented Jun 30, 2022

I'll look into making it a per call setting next week!

@xaionaro
Member Author

I don't want to be annoying, but I'm curious whether I can help move this forward somehow :)

@xaionaro
Member Author

xaionaro commented Jan 17, 2023

I've just created a PR from the code I patched our internal fork/derivative with in 2021. Feel free to reject it; just sharing :)
