NetHack/src/attrib.c, lines 637 to 638 in 44d5be6:

```c
for (i=0; (i<A_MAX) && ((x-=g.urole.attrdist[i]) >0); i++)
```
The result of `rn2(100)` is in [0,99]. This should either be `x = rnd(100)` (which is in [1,100]) or the `for` loop condition should be `>= 0` instead of `> 0`.
Currently there is a slight bias in favor of the first attribute (strength) and against the last (charisma). If both STR and CHA are defined to have a 10% probability, then STR is chosen when `x` is in [0,10] (11 possible values), while CHA is chosen only when `x` is in [91,99] (9 possible values).
I found this while simulating the mean and standard deviation of the starting attribute values (the results are in this NetHackWiki edit). I was wondering why there was a ~0.3 difference between the simulated Wizards' mean starting CHA and WIS, and between WIS and STR, when all of these attributes are defined with the same base and distribution values in role.c (7 starting points and a 10% distribution each). It seems like a small difference, but after running 10 million samples multiple times I was fairly sure it was statistically significant, yet I could not explain it. After some hours spent debugging my simulation program (I suspected RNG bias at first), I figured out that the bug existed in the original NetHack code my simulation was based on. Replacing `> 0` with `>= 0` in my simulation code gave the simulated Wizards similar means for STR, WIS, and CHA, as expected.
The similar logic (selecting a random number and subtracting item probabilities until the number reaches 0) in NetHack/src/makemon.c, lines 1779 to 1780 in 44d5be6, uses `rnd` and `<= 0` correctly.