MAJOR DISCUSSION. Standardize all random checks to a single json object and set of function calls #44738
Comments
Some more thoughts about your calculation of recipe difficulty. The normal curve has drawbacks: you never get a 100% success rate. No matter how high your level is, there is always a chance to fail. The normal curve is easy to understand, but not so easy to calculate with. We can make it friendlier for recipe writers. For example, when you are writing a recipe, you may think: "Well, I've decided squeezing fruit juice needs some practice but is not very hard, so skill levels 0 and 1 may fail, skill 2 (or above) will always succeed with edible results, but skills 2 and 3 have no chance of making delicious (top-quality) fruit juice." It's better to just write 2 and 4 in the recipe than to have to solve the problem "choose the parameters of the normal curve so that the success probability is 99.9% when skill level is 2, and the masterwork probability is 99.9% when skill level is 4".
Normal curves are already managed through the function normal_roll in the code. This issue would not require any changes to recipe JSON in the slightest. Having a minuscule chance to fail is fine: if you're trying a recipe where your skill level is 4+ levels above the difficulty, you're effectively never going to fail.
It is unintuitive that the difficulty score indicates the aggregate skill required to succeed half of the time. If a recipe has a difficulty of 1, I need to be level 3 to have a 96% success chance and level 4 for 99.7% (focus and intelligence mods notwithstanding). A more intuitive approach might be to have the recipe take two parameters: a difficulty to indicate the skill required for a guaranteed success, and a standard deviation; the mean score is then difficulty - (3 * stdev). You can increase the multiplier from 3 if you want more randomness in the failure chance, but 99.7% is pretty damn close to 100%. This allows for a few things:
Have you got any examples of recipes where we would want to change the stdev? I'm not particularly opposed to the idea, but I think it adds a large degree of variability that recipe designers are not necessarily going to follow. The specifics of where exactly things should fall numerically are up for debate, but I think the difficulty being the turning point, whereafter you are more likely to succeed than fail, is pretty dang intuitive.
It is unintuitive to use stdev together with a "guaranteed success level". If you increase the stdev but don't modify the guaranteed success level, the result is actually less random, because you are moving the curve to the left and decreasing the half-success level. And although the difficulty being the turning point is intuitive mathematically, it is not intuitive in gameplay. A recipe is not very useful if it fails you 50% of the time: when your skill level equals the half-success level, you have only a 75% chance to succeed if you prepare 2 times the ingredients, and to achieve a 99% success chance you have to prepare 7 times the ingredients. It's more intuitive to specify two parameters: the guaranteed success level (gsl) and the half-success level (hsl). People don't need any mathematical knowledge to understand gsl and hsl, but the mathematical parameters of the normal distribution can easily be derived from them. An example of recipes where we would want to change the stdev: some simple assembly tasks at a high skill level would have a smaller stdev than normal high-level tasks. Making a forge rig from a forge needs a moderate skill level, but once you know how to do it, it is unlikely you'll fail and destroy the forge. So the forge rig recipe could have gsl=hsl=4, while normal level 4 (gsl) recipes have hsl=2.
I have massively expanded the concept here (as I previously mentioned I would). The earlier discussion does still apply, and you will note I included standard deviation variance in the criteria for the newly proposed standardized check. Of note, I have adjusted the formula a little here: in this version, your attributes just add to your skill roll, by default +1 per every 4 points (averaged between whatever attributes we use, so +2 for a default character with 8 in everything). The reason for that is to avoid the counterintuitive bit pointed out before: when your
You could always just display the chance of success instead of relying on people's intuition.
That is often not feasible in the UI. We're talking about a function that will apply every time something random is going to happen. How would we display the chance of spotting something at a distance? Or every time you interact with an object in a way that has a chance of failing? Once it's more easily calculated, it might be reasonable to display the chance of success in more common places, though, like in crafting.
It would be standard enough to have a function that calculates the actual chance display, to be placed in locations where it makes sense.
@I-am-Erk: I think @SunshineDistillery was referring to things like crafting and skill levels.
Question: why do skills, attributes, and proficiencies even have their own places instead of them all being in additional_factors (edit: with them all being just "factors")? They basically do the same thing: a flat roll adjustment, which is allowed in that section.
Skills and attributes use a different mechanic, as a weighted average. Proficiencies are separate because they're syntactically different, to keep them the same as recipes: the listed effects are what happens if you don't have the proficiency. Everything in "additional factors" is what happens if you do have the factor in question. Keeping them separate helps avoid confusion there.
Okay, got the idea about placement. I still don't get why skills/attributes are designed to only work as a flat roll adjustment using only a weighted sum (edit: of unmodified values), but not a multiplier and/or any other mechanic from the additional group. Is it because most skill checks currently work like this?
Basically, we need some source of "original numbers", which currently pretty much always comes from skills. You can't have a multiplier if there's nothing to multiply. In addition, what we're aiming for is standardisation, so we want some kind of clear backbone to how we do these checks. That said, I'm looking at a different way to write the block up now that I've tested out a basic version of this: something more like breaking it into roll multipliers and roll modifiers, rather than breaking it up by source. Time modifiers should be moved out of here and be part of activity definitions. I'll be editing the post in a bit.
I like the idea of a data-driven approach, but the problem I could see in the future is feature creep causing this to end up as some kind of super-detailed DSL covering all the different use cases. Like "I want to roll night vision level at weight X, but only if the light level is below Y; otherwise factor in glare protection if light is above Z". You could define a whole JSON grammar, with nestable operators, conditionals, and variable testers, but I'm not sure that will solve your problem of testability or ease of understanding. I know there was some history with Lua, but honestly it sounds like you might want a full scripting language for this? Either way, I like standardizing the checks for a few reasons:
We already have a JSON grammar for that: the JSON conditional syntax. There's possibly some argument for including it here, but I'm not convinced it adds anything beyond what simply describing a modifier related to lighting and a modifier related to glare would add. UI-wise, I would suggest we could have a standardized, non-interrupting "chance of success, time to complete" dialogue that pops up when preparing to do the action, with an expand hotkey that shows modifiers.
Here are some technical alternatives for some of the terms used in the proposal:
This one actually is important not to confuse: various distributions have "shape parameters", which can change things like skewness and kurtosis (as opposed to the mean and variance, in general).
I picked up your curve -> distribution and shape -> family suggestions. Thanks for pointing these out. There's some debate on "linear" vs "uniform": while the latter is more correct, the former is unlikely to cause confusion and is more likely to be understood by people adding content. I think roll -> random is a lateral change that just adds letters; I suggested "roll" in the hopes that it will help people understand this is analogous to dice in our game.
Well, "linear" actually confused me when I first saw it. "Flat" might work; "linear" could mean a linearly increasing or decreasing probability.
Agreed - about the only concern I'd have on it is to remind people that we're not limited to what one can get with (a standard set of, or indeed any) dice (continuous is possible, for instance).
Is your feature request related to a problem? Please describe.
There are a number of very standard things we want to do when the player takes an action: some kind of random check. Although these are all pretty standard things, in the code these are handled entirely on a case-by-case basis with no standardization at all. For example, consider this beauty from the disarming traps code:
Wow. I do not even know how to begin trying to calculate the effect of dexterity and perception on this die roll.
Describe the solution you'd like
What we need is a standardized system for all the common elements of an action in game, regardless of what that action is. When the player takes an action, the code need only call a single line stating which JSON object contains the data for the action or random check, and then all the information for how that roll is processed moves out into JSON. It is possible to make virtually every action and random roll in the game a data-driven process. In theory, it isn't even that hard! The trick, of course, will be the gradual revision of virtually every part of the game.
Note that I am not going to touch combat mechanics in this proposal. Combat has a lot of very complex systems for targeting that I think can be adapted to this system, but I don't want to work it out right now.
STAGE 1: A new `roll_check` JSON object
We need a generic factory for a `roll_check` JSON object. Such an object would look a little something like this:
The primary goal of this function is to produce a `roll` stat and tell the game what to do with it. The `roll` stat is generally the average roll we assume the player is getting for performing an action.
Let's break this down term by term.
1. `type` and `id`
Standard header info.
2. `distribution`
What shape (`family`) is this random roll going to take? The default should be `normal`, as in a normal distribution, but we probably want a few other options. I would suggest `linear` (`uniform` is the more correct term, but I'm not convinced it's a better one for general use), `exponential_increase`, `diminishing_returns`, and perhaps `logistic`.
`normal`: Your calculated `roll` is the average value you will get when you make this check, and extreme highs and lows become increasingly unlikely compared to it. This is about the same as rolling a lot of small dice, like 10d6. The point where your `roll` equals the task `difficulty` is the point where you tip over into the "more likely to succeed than fail" end of the curve. In this case we specify a `stdev`, which is the width of the curve: most of your rolls will fall within the stdev, and the further they get from it, the less likely they become. Basically, the stdev tells the game how wide a range of values we will get from this function. I'm not yet certain where the default should be... probably around 2-3, because most things are going to involve a skill level and an average of +2 from attributes, and we don't want too high a guaranteed success rate when your skill equals the difficulty and your attributes are dead average.
`linear`: Your chance of getting a high or low random roll is always the same as getting an average roll, like if you were rolling 1d20. This would be controlled with a `range` attribute, which is how much higher or lower than the `roll` we can get. So, for example, if the `roll` is 10 and the `range` is 5, then we're gonna do a `rng( 5, 15 )` to determine the final success/fail state.
The following curves were requested by others and I have not yet looked enough at them to work out how I'd math them.
`exponential_increase`: The higher your roll bonuses get, the exponentially higher your roll gets.
`diminishing_returns`: The higher your roll bonuses get, the less each additional bonus adds.
`logistic`: An S-curve where roll bonuses add little at low or high levels, but the probability increases a lot in the mid range.
Another feature we may want to consider, but should debate the applications of first, is `skew`: bending the curve in one direction or another so that higher or lower results are more common. While this could be a potent tool, I think we should first consider what purpose we'd put it to. Too many features delay implementation and confuse content addition.
3. `base_difficulty`
If not specified anywhere else, this is how difficult the task is going to be: the value the `roll` has to beat in the end. This will need to be modifiable by all kinds of things. For example, if we're doing an iexamine action on a piece of furniture, the furniture would pass arguments that modify the base_difficulty of our check.
4. `skills` and `skills_value`, `attributes` and `attributes_value`
These are the skills that, usually at least, form the backbone of determining the `roll`. This is an optional field, though: we could do a pure attribute check (eg. perception for spotting or strength for prying) by leaving this term out entirely.
`weight` is how much each skill matters. The final contribution of skills to `roll` is the weighted average of the terms. So if a character with devices 4 (weight 4) and mechanics 2 (weight 1) is picking a lock, their skills add (4*4 + 2*1)/5 = 3.6 to the `roll`.
`skills_value`, default 1, is how much skills contribute to the roll. Normally this optional field won't be needed, but we may want it in areas where the skill contributes only a little.
For crafting and perhaps some other complex checks, we might want a flag here like `inherit` that tells the process to pull the skill requirements from the recipe rather than from the roll_check.
Attributes work the same as skills for weight and value. The only difference is that the default `attributes_value` is 0.25, rather than 1: by default your skills contribute more than your attributes on a skill check. However, if we're doing something like a raw strength check, we might want to flip this (eg. a prying check that relies on strength but allows athletics skill to contribute a bit).
5. `proficiencies`
Update: Having made some prototype normal_rolls for traps and lockpicking now, I think a better way to do this would be to define a penalty applied for no proficiency, and then the effective bonus applied by each proficiency. I am finding this much more intuitive. I will edit this to suit shortly.
This block works exactly as it does in a recipe definition, except it will also allow `roll_modifier` and `roll_adjust` attributes. Here is a list of all the allowed attributes in here:
`time_multiplier`, `fail_multiplier`: These are worse the higher they are. Time multiplier increases the duration of the activity, and fail multiplier decreases your entire `roll`. We're calling it a multiplier but the math would probably be `roll / fail_multiplier`; the term is kept this way for standardization between these and recipes. In proficiencies, as in recipes, these are applied when you lack the proficiency.
`required`: If true, you automatically fail this check without the listed proficiency. Default false.
6. `additional_factors`
In some ways this is actually the meat of where we get into standardizing. In this block you specify anything else that could contribute to the check, and how it could contribute.
Each element here can contain any of the following attributes, although they don't always make sense and will be ignored if they don't:
`time_multiplier`: Multiply the time required for the task by this amount if this factor is present.
`roll_multiplier`: This is good the higher it is. It multiplies all other factors in your roll, increasing your success.
`roll_adjust`: This is good the higher it is and can be negative. It is basically a flat skill bonus.
`per_level`: true/false. Only works for things like qualities that have levels. Whatever attributes are listed in this factor are multiplied by the level of the highest-level item you have (eg. lockpicking 3). If referring to an NPC, then we calculate the NPC's `roll` using the same function, and whatever their `roll` is gets passed into your `check` as a level.
`level_adjust`: Usually negative. Reduces the quality level of whatever items you have by this amount before applying the per_level benefits. In the example, this is to keep a level 1 lockpick (like a bobby pin) from giving you a bonus: you need that item just to do the task; you don't get any boosts from having the bare minimum.
I have given some examples of possible additional_factors; I don't think it's possible to list them all. However, basically, we want `overall_status`, which creates a standardized amalgam of all of these general factors and ascribes a level that can then be smoothly applied to the `check`. This is obviously material for a separate issue and does not need to be implemented in the first pass.
What does this accomplish?
First and foremost, this makes adding things like iuse and iexamine actions, as well as all kinds of other interactions, incredibly easy: it removes huge swaths of code and replaces them with a standardized function call. This will reduce a lot of the weird black-box stuff in our game that requires extensive testing to make sure crazy randomization functions like the one I shared above actually work. Instead, we can just know they all work, because they all use the same format. Players can also get a much better understanding of what skills and attributes actually mean and how they interact.
From a content addition end, this would be the beginning - and most of the work - of making it possible to have interactions that are defined entirely in JSON without ever entering C++. State the target, state the check, state the fail/success states and what they do - bam, done.
It opens up core game mechanics to modding, which is cool. By making this JSON, Magiclysm or Aftershock can get a much different game feel just by adjusting how certain skill checks work. Eventually, when this applies to all checks, mods could even change basic combat rules by adjusting the check JSON.
From an AI perspective, this standardizes how to determine if an action will pass or fail. We only need to write a single algorithm for the AI to examine the `check` and decide if the action is going to work or not. That's enormous, especially when we look at things like that trap disarm roll: how, in the current code, do we instruct an NPC AI on whether or not it's safe to disarm a trap when the roll looks like that?
First: Determine your mean roll. Intelligence adds `( int - 8 ) / 4`.
Second: Determine where your roll actually falls on the normal curve.
Use the `normal_roll` function in `rng.cpp`, with `mean_roll` as the mean (obviously) and a stdev of 1.
I thought this section would be larger, but apparently this is quite easy.
Finally: Judge success
If your `normal_roll` is greater than the difficulty of the recipe, you succeeded! If your mean roll equals the difficulty of the recipe, this happens 50% of the time.
Stage 2: Pass the results of roll_check to the function calling it.
For our first point, we would make roll_check a function we'd call from within C++, passing the ID of the roll we want and whatever appropriate modifiers we need. The roll_check function in C++ would then return a success/fail state: our returned value would be negative if the roll had failed, positive if it had passed, and the further from 0 it is, the more severe or impressive the success or failure.
Describe alternatives you've considered
I'll be honest: After looking through this, I don't think this is optional. We need this. The only details to sort out are what exactly we consider the standard roll_check and stuff.
Additional context
We will need a further issue posted for how to integrate roll_check with the existing activity_type object: the next step in making it possible to define unique actions entirely in JSON.