
Make math more clear and documented #2

Closed
Pipeliner opened this issue Sep 8, 2017 · 8 comments
@Pipeliner

weight = (((senderCumulative / 5) * absRating) / 10) + multiplier;

Since senderCumulative is senderScore rounded to whole 100s here, the expression is equivalent to
(senderCumulative / 50 * absRating) + multiplier, ranging from 100 to 150. It's unclear where these exact calculations come from and why they were chosen.
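To make the claimed equivalence concrete, here is an illustrative check in Python (not the actual contract code, and assuming a multiplier of 100, which matches the stated 100–150 range): when senderCumulative is always a multiple of 100, the nested integer divisions collapse into a single division by 50.

```python
# Illustrative check, not the contract code. Integer division truncates in
# both Solidity and Python's //, and with senderCumulative a multiple of
# 100 every intermediate division is exact, so the two forms agree.
def weight_original(sender_cumulative, abs_rating, multiplier=100):
    return ((sender_cumulative // 5) * abs_rating) // 10 + multiplier

def weight_simplified(sender_cumulative, abs_rating, multiplier=100):
    return (sender_cumulative // 50) * abs_rating + multiplier

# Compare the two forms over multiples of 100 and absRating in 1..5.
for cumulative in range(0, 1100, 100):
    for rating in range(1, 6):
        assert weight_original(cumulative, rating) == \
               weight_simplified(cumulative, rating)
```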

@mikeshultz
Member

It's a little sloppy, yeah. I'll clean up that line a bit in the next release. I'll use this issue to do a full review of all inline documentation as well.

The point of this calculation is to give weight according to the score of the rater. So if the rater has a good cumulative score, their vote has more impact towards the positive or negative.

That said, I don't have a lot of justification for why this calculation specifically other than it worked well during testing. I'm open to suggestions for improvement and pull requests for better ideas.
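As a toy model of the intent described above (the contract's real aggregation may differ), a rater's weight scales how far their vote moves the target's effective score:

```python
# Toy model of score-weighted voting, not the contract's actual math.
# Each vote contributes rating * weight; the effective score is the
# weight-normalized average of all votes.
def weighted_score(votes):
    """votes: iterable of (rating, weight) pairs."""
    total = sum(rating * weight for rating, weight in votes)
    weight_sum = sum(weight for _, weight in votes)
    return total / weight_sum

# Two equal-weight raters with opposite votes cancel out...
neutral = weighted_score([(5, 100), (-5, 100)])
# ...but a rater with a higher cumulative score (hence higher weight)
# tips the balance toward their vote.
skewed = weighted_score([(5, 150), (-5, 100)])
```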

@Pipeliner
Author

Pipeliner commented Sep 10, 2017

As far as I can see, either the math does not really work or the dapp interface is confusing.

Simple scenario:
user1 votes for user2
user2 votes for user1

user1 now has a cumulative score of 750.
user1 votes for user2, user2 votes for user1, and so on. Their ratings quickly approach infinity.

Is this intended behavior? My first impression was that ratings should never go beyond ±5.
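The feedback loop can be sketched with a toy simulation. The assumptions here are mine, not the contract's (the simplified weight formula with multiplier 100, and a cumulative score that is simply the sum of weights received), but the shape is the same: if each vote's weight grows with the rater's cumulative score, mutual voting grows both scores without bound.

```python
# Toy feedback-loop model; exact numbers differ from the real contract.
def weight(cumulative, abs_rating=5, multiplier=100):
    # Simplified form of the weight calculation discussed above.
    return (cumulative // 50) * abs_rating + multiplier

score1 = score2 = 0
history = []
for _ in range(10):
    score2 += weight(score1)   # user1 rates user2 +5
    score1 += weight(score2)   # user2 rates user1 +5
    history.append((score1, score2))
# Both scores increase on every round and never converge.
```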

@mikeshultz
Member

I thought I built this into the contract, but apparently it was built into the dapp. Perhaps I just wanted to leave that detail to the dapp.

https://github.com/gointollc/etherep/blob/master/etherep/src/js/etherep.js#L433

But that's where it was handled: by limiting the displayed rating to a maximum of five.
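In pseudocode terms, the dapp-side handling amounts to a display clamp like the following sketch (the linked etherep.js does the real work; the contract itself stores the raw value):

```python
# Sketch of a dapp-side display clamp, not the actual etherep.js code.
def display_rating(raw):
    """Clamp a computed rating into the displayed [-5, 5] range."""
    return max(-5.0, min(5.0, raw))
```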

It would not go to infinity, as the max score with full weight was 650 (IIRC). After that gets averaged out, there is a significant impact, but that's kind of the point: someone with a full score of 5 should have a stronger rating than someone with a 0.

Or at least that was the intention. Now that I see the max of 5 isn't implemented in the contract, the rater's score being used could be too high. I clearly need to do some more testing to double-check this math; I should be able to get to that in a few days.

Thanks for the feedback, I built this in a vacuum and probably left a significant error or two.

@Pipeliner
Author

If you don't mind me asking, how much time have you spent designing and developing contracts and the dapp?

I'm currently comparing Ethereum and eGaaS / Apla as platforms, and have chosen your application to port to our system since it seemed to be of reasonable complexity.

@mikeshultz
Member

I don't mind at all, but I'm not sure how best to answer your question. If you mean how much time was put into etherep, combined it was probably a little under a man-month. The timespan was much longer (I think I started in July) because I released on Ropsten and tried to get some reviews/critiques from third parties. That never came, so I released to the main net to generate interest and eyeballs.

It was fully my expectation to have to upgrade the Etherep contract in the future, though, so I tried to build in migration paths that won't impact data. So I appreciate any criticism and critique and I'll do my best to address any errors or shortcomings.

@mikeshultz
Member

mikeshultz commented Oct 1, 2017

Sorry for the delay. After further testing, it was a lot worse than I thought. I'm not sure where along the way the math got so chunked up, but thank you for pointing it out. After some testing, I came up with a much simpler and more effective calculation (in that it actually works). Commit cb85082 has the major math changes, along with a few minor docstring updates.

Here's the output of the new rating algorithm test. I'd appreciate any thoughts on it.

I'm still going to do the documentation review, but wanted to get this message out there first in case you wanted to provide feedback. I'm hoping to roll out a new version ASAP.

@mikeshultz
Member

I made a bunch of docstring updates as well to correct some inaccuracies and add clarity where I thought it would help. Let me know if there's anything in particular you think would help.

@mikeshultz
Member

Deployed. New address 0x9d7fbcc17a1c9adc5b9601d871d348b5c7d3bb61. Going to close this issue, but feel free to leave any related feedback here and I'll reopen if needed. Thanks again for pointing out the issues.
