MAJOR DISCUSSION. Standardize all random checks to a single json object and set of function calls #44738

I-am-Erk opened this issue Oct 8, 2020 · 20 comments

@I-am-Erk (Contributor) commented Oct 8, 2020

Is your feature request related to a problem? Please describe.

There are a number of very standard things we want to do when the player takes an action:

  • Consider a number of factors such as skill level(s), attribute level(s), mutations, proficiencies, and item qualities, and make a random check of some sort to see if these are enough to exceed a difficulty level. Let's call this a check.
  • Have actions take a length of time.
  • Have actions grant XP to skills and proficiencies, modified by difficulty level and sometimes by success/fail states

Although these are all pretty standard things, in the code these are handled entirely on a case-by-case basis with no standardization at all. For example, consider this beauty from the disarming traps code:

while( ( rng( 5, 20 ) < player_character.per_cur ||  rng( 1, 20 ) < player_character.dex_cur ) && roll < 50 ) {
  roll++;
}

Wow. I do not even know how to begin trying to calculate the effect of dexterity and perception on this die roll.

Describe the solution you'd like

What we need is a standardized system for all the common elements of an action in game, regardless of what that action is. When the player takes an action, the code would need only a single call stating which JSON object contains the data for the action or random check, and all the information about how that roll is processed would move out into JSON. It is possible to make virtually every action and random roll in the game a data-driven process. In theory, it isn't even that hard! The trick, of course, will be the gradual revision of virtually every part of the game.

Note that I am not going to touch combat mechanics in this proposal. Combat has a lot of very complex systems for targeting that I think can be adapted to this system, but I don't want to work it out right now.

STAGE 1: A new roll_check json object

We need a generic factory for a roll_check json object.

Such an object would look a little something like this:

[
  {
    "type": "roll_check",
    "id": "check_lockpicking",
    "distribution": { "family": "normal", "stdev": 3 },
    "base_difficulty": 2,
    "skills": [ { "skill": "devices", "weight": 4 }, { "skill": "mechanics", "weight": 1 } ],
    "skills_value": 1,
    "attributes": [ { "stat": "dexterity", "weight": 2 }, { "stat": "perception", "weight": 1 } ],
    "attributes_value": 0.25,
    "NPC_value": { "success": 0.1, "time": 0.1 },
    "proficiencies": [
      { "proficiency": "prof_lockpicking", "required": true },
      { "proficiency": "prof_lockpicking_expert", "required": false, "time_multiplier": 3, "fail_multiplier": 2 }
    ],
    "additional_factors": [
      { "environment": "light", "per_level": true, "time_multiplier": 0.99 },
      { "environment": "NPC_helper", "per_level": true, "time_multiplier": 0.98, "roll_adjust": 0.1 },
      { "quality": "LOCKPICK", "per_level": true, "level_adjust": -1, "roll_adjust": 1, "time_multiplier": 0.95 },
      { "trait": "NOPAIN", "roll_multiplier": 0.8 },
      { "enchantment": "sneakihands", "roll_multiplier": 1.1 },
      { "item": "lockpickers_loupe", "roll_adjust": 1 }
    ]
  }
]

The primary goal of this function is to produce a roll stat and tell the game what to do with it. The roll stat is generally the average roll we assume the player is getting for performing an action.

Let's break this down term by term.

1. type and id

Standard header info.

2. distribution

What shape (family) is this random roll going to take? The default should be normal, as in a normal distribution, but we probably want a few other options. I would suggest linear (uniform is the more correct term, but I'm not convinced it's better for general use), exponential_increase, diminishing_returns, and perhaps logistic. A small sampling sketch follows the lists below.

  • normal: Your calculated roll is the average value you will get when you make this check, and extreme highs and lows become increasingly unlikely compared to it. This is about the same as rolling a lot of small dice, like 10d6. The point where your roll equals the task difficulty is the point where you tip over into the "more likely to succeed than fail" end of the curve. In this case we specify a stdev, which is the width of the curve. Most of your rolls are going to fall within the stdev, and the further they get from it the less likely they become. Basically the stdev tells the game how wide a range of values we will get from this function. I'm not yet certain where the default should be... probably around 2-3, because most things are going to involve a skill level and an average of +2 from attributes, and we don't want too high a guaranteed success rate when your skill equals the difficulty and your attributes are dead average.
  • linear: Your chance of getting a high or low random roll is always the same as getting an average roll, like if you were rolling 1d20. This would be controlled with a range attribute, which is how much higher or lower than the roll we can get. So for example, if the roll is 10 and the range is 5, then we do rng( 5, 15 ) to determine the final success/fail state.

The following curves were requested by others and I have not yet looked enough at them to work out how I'd math them.

  • exponential_increase: The higher your roll bonuses get, the exponentially higher your roll gets.
  • diminishing_returns: The higher your roll bonuses get, the less the bonuses add.
  • logistic: An S-curve where at low or high levels the roll bonuses don't add much, but the probability increases a lot in the mid range.
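As a rough illustration of the two concrete families, here is a minimal standalone C++ sketch (using the standard library purely for demonstration; in the game this would presumably go through the existing helpers in rng.cpp such as normal_roll, and the function names here are made up):

#include <random>

static std::mt19937 gen( std::random_device{}() );

// "normal": the result is centred on the calculated roll; stdev controls the spread.
double roll_normal( double calculated_roll, double stdev )
{
    return std::normal_distribution<double>( calculated_roll, stdev )( gen );
}

// "linear"/"uniform": every value in [roll - range, roll + range] is equally likely,
// e.g. a roll of 10 with a range of 5 behaves like rng( 5, 15 ).
double roll_linear( double calculated_roll, double range )
{
    return std::uniform_real_distribution<double>( calculated_roll - range,
                                                   calculated_roll + range )( gen );
}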

Another feature we may want to consider, but should debate the applications of first, is skew - bending the curve in one direction or another so that higher or lower results are more common. While this could be a potent tool I think we should first consider what purpose we'd put it to. Too many features delay implementation and confuse content addition.

3. base_difficulty

If not specified anywhere else, this is how difficult the task is going to be, the value the roll has to beat in the end. This will need to be modifiable by all kinds of things. For example, if we're doing an iexamine action on a furniture, the furniture would pass arguments that modify the base_difficulty of our check.

4. skills and skills_value, attributes and attributes_value

These are the skills that, usually at least, form the backbone of determining the roll. This is an optional field though: we could do an attribute check (eg perception for spotting or strength for prying) by leaving this term out entirely.

  • weight is how much each skill matters. The final contribution of skills to the roll is the weighted average of the terms. So if a character with devices 4, mechanics 2 is picking a lock, their skills add (4*4 + 1*2)/5 = 3.6 to the roll.
  • skills_value, default 1, is how much skills contribute to the roll. Normally this optional field won't be needed, but we may want it in areas where the skill contributes only a little.

For crafting and perhaps some other complex checks, we might want a flag here like inherit that tells the process to pull the skill requirements from the recipe rather than from the roll_check.

Attributes work the same as skills for weight and value. The only difference is that the default attributes_value is 0.25, rather than 1. By default your skills contribute more than your attributes on a skill check. However, if we're doing something like a raw strength check, we might want to flip this (e.g. a prying check that relies on strength but allows the athletics skill to contribute a bit).
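To make the weighted-average maths concrete, here is a small standalone sketch using the lockpicking example above (devices 4 at weight 4, mechanics 2 at weight 1, dexterity 8 at weight 2, perception 8 at weight 1). The names are ad hoc for illustration, not existing game code:

#include <utility>
#include <vector>

// Weighted average of ( level, weight ) pairs.
double weighted_average( const std::vector<std::pair<double, double>> &level_weight )
{
    double sum = 0.0;
    double weight_sum = 0.0;
    for( const auto &lw : level_weight ) {
        sum += lw.first * lw.second;
        weight_sum += lw.second;
    }
    return weight_sum > 0.0 ? sum / weight_sum : 0.0;
}

// skills: (4*4 + 2*1) / 5 = 3.6, scaled by skills_value 1.0
// attributes: (8*2 + 8*1) / 3 = 8, scaled by attributes_value 0.25 -> +2
double lockpick_roll_base()
{
    const double skills = weighted_average( { { 4, 4 }, { 2, 1 } } ) * 1.0;
    const double attributes = weighted_average( { { 8, 2 }, { 8, 1 } } ) * 0.25;
    return skills + attributes; // 5.6 before proficiencies and additional factors
}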

5. proficiencies

Update: Having made some prototype normal_rolls for traps and lockpicking now, I think a better way to do this would be to define a penalty applied for no proficiency, and then the effective bonus applied by each proficiency. I am finding this much more intuitive. I will edit this to suit shortly.

This block works exactly as it does in a recipe definition, except it will also allow roll_multiplier and roll_adjust attributes. Here is a list of all the allowed attributes:

  • time_multiplier, fail_multiplier: These are worse the higher they are. Time multiplier increases the duration of the activity, and fail multiplier decreases your entire roll. We're calling it a multiplier but the math would probably be roll/fail_multiplier; the term is kept this way for standardization between these and recipes. In proficiencies, as in recipes, these are applied when you lack the proficiency.
  • required: If true, you automatically fail this check without the listed proficiency. Default false.
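As a minimal sketch of how these would apply when the character lacks a listed proficiency (struct and function names are hypothetical, not existing game code):

struct prof_penalty {
    bool required = false;
    double time_multiplier = 1.0;
    double fail_multiplier = 1.0;
};

// Returns false if the check auto-fails because a required proficiency is missing.
bool apply_missing_proficiency( const prof_penalty &p, double &roll, double &time )
{
    if( p.required ) {
        return false;
    }
    time *= p.time_multiplier;  // the activity takes longer
    roll /= p.fail_multiplier;  // the "fail multiplier" actually divides the whole roll
    return true;
}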

6. additional_factors

In some ways this is actually the meat of where we get into standardizing. In this block you specify anything else that could contribute to the check, and how it could contribute.

Each element here can contain any of the following attributes, although not all of them make sense for every factor and will be ignored where they don't (a rough application sketch follows the list):

  • time_multiplier: Multiply the time required for the task by this amount if this factor is present.
  • roll_multiplier: This is good the higher it is. It multiplies all other factors in your roll, increasing your success.
  • roll_adjust: This is good the higher it is and can be negative. It is basically a flat skill bonus.
  • per_level: true/false. Only works for things like qualities that have levels. Whatever attributes are listed in this factor are multiplied by the level of the highest-level item you have (eg lockpicking 3). If referring to an NPC, then we calculate the NPC's roll using the same function, and their roll is passed into your check as a level.
  • level_adjust: Usually negative. Reduce the quality level of whatever items you have by this amount before applying the per_level benefits. In the example, this is to keep a level 1 lockpick (like a bobby pin) from giving you a bonus: you need that item just to do the task, you don't get any boosts from having the bare minimum.
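As a rough sketch of how a single parsed factor might feed into the roll and the activity time (the struct is hypothetical, field names mirror the proposal, and the per-level handling is simplified to the roll_adjust term only):

#include <algorithm>

struct parsed_factor {
    double roll_adjust = 0.0;     // flat bonus, can be negative
    double roll_multiplier = 1.0; // scales the whole roll
    double time_multiplier = 1.0; // scales the activity duration
    bool per_level = false;       // scale the bonus by quality/helper level
    int level_adjust = 0;         // e.g. -1 so a level 1 lockpick adds nothing
};

void apply_factor( const parsed_factor &f, int level, double &roll, double &time )
{
    const double scale = f.per_level ? std::max( 0, level + f.level_adjust ) : 1;
    roll = roll * f.roll_multiplier + f.roll_adjust * scale;
    time *= f.time_multiplier;
}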

I have given some examples of possible additional_factors; I don't think it's possible to list them all. However, basically, we want:

  • item qualities
  • mutations
  • CBMs
  • specific items you have access to
  • environmental factors such as light level, temperature, weather, friendly NPCs
  • status factors like specific effects, morale, focus, pain, fatigue, hunger, thirst, weariness. Note that overall, most of these should work automatically by applying penalties to attributes and speed and things, but sometimes we may want a specific bonus or penalty from one. I would also suggest we include a general entry for overall_status, which creates a standardized amalgam of all of these general factors and ascribes a level that can then be smoothly applied to the check. This is obviously material for a separate issue and does not need to be implemented in the first pass.

What does this accomplish?

First and foremost, this makes adding things like iuse and iexamine actions, as well as all kinds of other interactions, incredibly easy: it removes huge swaths of code and replaces them with a standardized function call. This will reduce a ton of the weird black-box stuff in our game that requires a lot of testing to make sure crazy randomization functions like the one I shared above actually work. Instead we can just know they all work, because they all use the same format. Players can also get a much better understanding of what skills and attributes actually mean and how they interact.

From a content addition end, this would be the beginning - and most of the work - of making it possible to have interactions that are defined entirely in JSON without ever entering C++. State the target, state the check, state the fail/success states and what they do - bam, done.

It opens up core game mechanics to modding, which is cool. By making this JSON, Magiclysm or Aftershock can get a much different game feel just by adjusting how certain skill checks work. Eventually, when this applies to all checks, mods could even change basic combat rules by adjusting the check JSON.

From an AI perspective, this standardizes how to determine if an action will pass or fail. We only need to write a single algorithm for the AI to examine the check and decide if the action is going to work or not. This is enormous, especially when we look at things like that trap disarm roll: in the current code, how do we teach the NPC AI to decide whether it's safe to disarm a trap when the roll looks like that?


Below this line is my original issue, which was about using normal_roll for crafting checks only. I am leaving it up both because it's helpful for understanding the comments, and also because it breaks down a lot more of the details about what a specific normal_roll based check would look like. Under the above proposal this would still be written as a standardized roll_check.

**First**: Calculate what your `mean_roll` is. This is pretty much your skill level plus/minus modifiers (a rough code sketch follows the list).

  1. Your starting mean_roll equals `(3*(primary_skill_level) + (sum of all secondary_skill_levels)) / (number of secondary skills + 3)`, or basically a weighted average where the primary skill level is 3x as important as any secondary skills.
     • This is a change from the current ratio, where the primary skill is always 75% of your success: the more secondary skills are needed, the less weight the primary skill has. However, when you have one primary and one secondary skill, it remains the same as current.
  2. Add in modifiers as currently used in crafting_success_roll. For example, the current code penalizes electronics and tailoring crafts specifically based on myopic and hyperopic. This should probably become data-driven, with a recipe flag based on skill, but keep that for now. Instead of a dice penalty it is a penalty to mean_roll.
  3. Adjust based on intelligence. Currently intelligence increases the size of your dice. Instead, let's modify your mean_roll by ± (int - 8)/4.
  4. If focus > 50, add 20% to mean_roll. If focus > 100, add 40%.
  5. Add crafting helper modifiers, same as current, except instead of boosting your dice they increase your mean_roll.
  6. Divide mean_roll by any fail_multipliers in proficiencies for the recipe.
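A rough standalone sketch of the mean_roll calculation above, assuming one primary skill and a list of secondary skills; steps 2 and 5 (recipe-specific penalties and crafting helpers) are omitted, and the names are illustrative rather than existing game code:

#include <numeric>
#include <vector>

double mean_roll( int primary_skill, const std::vector<int> &secondary_skills,
                  int intelligence, int focus, double fail_multiplier )
{
    const double skill_sum = 3.0 * primary_skill +
        std::accumulate( secondary_skills.begin(), secondary_skills.end(), 0 );
    double roll = skill_sum / ( secondary_skills.size() + 3.0 );  // step 1
    roll += ( intelligence - 8 ) / 4.0;                           // step 3
    if( focus > 100 ) {
        roll *= 1.4;                                              // step 4: +40%
    } else if( focus > 50 ) {
        roll *= 1.2;                                              // step 4: +20%
    }
    return roll / fail_multiplier;                                // step 6
}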

**Second**: Determine where your roll actually falls on the normal curve. Use the normal_roll function in rng.cpp, with mean_roll as the mean (obviously) and a stdev of 1.

I thought this section would be larger but apparently this is quite easy.

**Finally**: Judge success (a code sketch of this banding follows the list).

  • If your final value after the normal_roll is greater than the difficulty of the recipe, you succeeded! If your mean_roll equals the difficulty of the recipe, this happens 50% of the time.
  • If your final value is less than 1 point below the difficulty of the recipe, you fail but waste nothing. This represents about 34% of rolls if your mean_roll equals the difficulty.
  • If your final value is more than 1 point below the difficulty of the recipe, you fail and waste some components. This represents about 13% of rolls if your mean_roll equals the difficulty.
  • If your final value is more than 2 points below the difficulty of the recipe, you fail and destroy your in-progress craft and all components. This represents the final 3% of rolls if your mean_roll equals the difficulty.
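Putting that banding into a minimal sketch, with a standard-library normal distribution standing in for rng.cpp's normal_roll:

#include <random>

enum class craft_outcome { success, fail_no_loss, fail_some_loss, fail_total_loss };

craft_outcome judge_craft( double mean_roll, double difficulty, std::mt19937 &gen )
{
    const double roll = std::normal_distribution<double>( mean_roll, 1.0 )( gen );
    if( roll >= difficulty ) {
        return craft_outcome::success;        // ~50% when mean_roll == difficulty
    }
    if( roll >= difficulty - 1.0 ) {
        return craft_outcome::fail_no_loss;   // ~34%
    }
    if( roll >= difficulty - 2.0 ) {
        return craft_outcome::fail_some_loss; // ~13%
    }
    return craft_outcome::fail_total_loss;    // remaining ~3%
}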
---

STAGE 2: Pass the results of roll_check to the function calling it

As a first pass, we would make roll_check a function we'd call from within C++, passing the ID of the roll we want and whatever appropriate modifiers we need. The roll_check function in C++ would then return a success/fail state: the returned value would be negative if the roll had failed, positive if it had passed, and the further from 0 it is, the more severe or impressive the success or failure.
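A minimal sketch of that return convention, assuming the final roll is drawn from a normal distribution (the signature is hypothetical, not existing game code):

#include <random>

// Negative means the check failed, positive means it passed; the further from
// zero, the more severe or impressive the result.
double roll_check_margin( double calculated_roll, double difficulty,
                          double stdev, std::mt19937 &gen )
{
    std::normal_distribution<double> dist( calculated_roll, stdev );
    return dist( gen ) - difficulty;
}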

Describe alternatives you've considered

I'll be honest: after looking through this, I don't think this is optional. We need this. The only details to sort out are exactly what we consider the standard roll_check to contain.

Additional context

We will need a follow-up issue for how to integrate roll_check with the existing activity_type object, the next step in making it possible to define unique actions entirely in JSON.

@lcy03406 (Contributor) commented Oct 9, 2020

Some more thoughts about your calculation of recipe difficulty.

The normal curve has drawbacks.

You never get a 100% success rate. No matter how high your level is, there is always a chance to fail.

The normal curve is a curve: easy to understand, but not so easy to calculate with.

We can make it more friendly for recipe writers.

For example, when you are writing a recipe, you may think, "well, I've decided squeezing fruit juice needs some practice but is not very hard, so skill levels 0 and 1 may fail, skill 2 (or above) will always succeed with edible results, but skills 2 and 3 have no chance of making delicious (top-quality) fruit juice."

It's better to just write 2 and 4 in the recipe, rather than having to solve the problem of "choose the parameters of the normal curve so that the success probability is 99.9% when skill level is 2, and the masterwork probability is 99.9% when skill level is 4".

@I-am-Erk (Contributor, Author) commented Oct 9, 2020

Normal curves are already managed through the function normal_roll in the code. This issue would not require any changes to recipe JSON in the slightest.

Having a minuscule chance to fail is fine. If you're trying a recipe where your skill level is 4+ levels above the difficulty, you're effectively never going to fail.

@natsirt721 (Contributor) commented:

It is unintuitive that the difficulty score indicates the aggregate skill required to succeed half of the time. If a recipe has a difficulty of 1, I need to be level 3 to have a 96% success chance and level 4 for 99.7% (focus and intelligence mods notwithstanding). A more intuitive approach might be to have the recipe take two parameters: a difficulty to indicate the skill required for a guaranteed success, and a standard deviation - the mean score is then difficulty - (3*stdev). You can increase the multiplier from 3 if you want more randomness in the failure chance, but 99.7% is pretty damn close to 100%. This allows for a few things:

  • A recipe can have a high difficulty and a large stdev, allowing a low-skill player to attempt it earlier than they might otherwise and have a small chance of success.
  • A recipe can have a very small stdev, indicating that the task is basically pass/fail - either you have the skill to do it successfully, or you don't and you will fail (almost) every time.

Additionally, you can skip the call to normal_roll if the skill is 6 stdevs below the difficulty (guaranteed fail) or greater than or equal to the difficulty (guaranteed success).

@I-am-Erk (Contributor, Author) commented:

Have you got any examples of recipes where we would want to change the stdev? I'm not particularly opposed to the idea, but I think it adds a large degree of variability that recipe designers are not necessarily going to follow.

The specifics of where exactly things should fall numerically are up for debate, but I think the difficulty being the turning point after which you are more likely to succeed than fail is pretty dang intuitive.

@lcy03406 (Contributor) commented:

It is unintuitive to use stdev together with a "guaranteed success level". If you increase the stdev but don't modify the guaranteed success level, the check does not simply become more random, because you are also moving the curve to the left and decreasing the half-success level.

And although the difficulty being the turning point is intuitive in math, it is not intuitive in gameplay. A recipe is not very useful if it fails 50% of the time. When your skill level equals the half-success level, you have only a 75% chance of success even if you prepare twice the ingredients, and to achieve a 99% success chance you have to prepare about 7 times the ingredients.

It's more intuitive to specify two parameters: the guaranteed success level (gsl) and the half success level (hsl). People don't need any mathematical knowledge to understand gsl and hsl, but the mathematical parameters of the normal distribution can be easily deduced from them.

An example of recipes where we would want to change the stdev: some simple assembly tasks at high skill level would have a smaller stdev than normal high-level tasks. Making a forgerig from a forge needs a moderate skill level, but once you know how to do it, it is unlikely you will fail and destroy the forge. So the forgerig recipe could have gsl = hsl = 4, while normal level-4 (gsl) recipes might have hsl = 2.
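For illustration only, treating "guaranteed success" as roughly three standard deviations (the 99.7% point mentioned above), the deduction could be as simple as:

#include <utility>

// Returns { difficulty to roll against, stdev } deduced from the two
// gameplay-facing levels.
std::pair<double, double> normal_params_from_levels( double hsl, double gsl )
{
    return { hsl, ( gsl - hsl ) / 3.0 };
}

With gsl equal to hsl the stdev collapses to 0, i.e. a pure pass/fail check, which matches the forgerig example.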

@I-am-Erk changed the title from "change crafting skill check from dice to a normal curve." to "(WIP) Standardize all random checks to a single json object and set of function calls" on Oct 15, 2020
@I-am-Erk changed the title from "(WIP) Standardize all random checks to a single json object and set of function calls" to "Standardize all random checks to a single json object and set of function calls" on Oct 15, 2020
@I-am-Erk changed the title from "Standardize all random checks to a single json object and set of function calls" to "MAJOR DISCUSSION. Standardize all random checks to a single json object and set of function calls" on Oct 15, 2020
@I-am-Erk pinned this issue on Oct 15, 2020
@KorGgenT self-assigned this on Oct 15, 2020
@I-am-Erk (Contributor, Author) commented Oct 15, 2020

I have massively expanded the concept here (as I previously mentioned I would). The earlier discussion does still apply, and you will note I included a configurable standard deviation in the criteria for the newly proposed standardized roll_check.

Of note, I have adjusted the formula a little here. In this version, your attributes just add to your skill roll, by default +1 for every 4 points (averaged between whatever attributes we use, so +2 for a default character with 8 in everything). The reason for that is to avoid the counterintuitive bit pointed out before: when your roll equals the difficulty, you are at a 50/50 chance of success/failure. However, your roll will be higher than your average skill level, because your attributes give you a bit of a boost. Thus your skill level being equal to the difficulty will mean your chance of success is well over 50%, unless you somehow have 0 for your attributes. In this way we keep the nice, intuitive probability bit where roll = difficulty is the tipping point between "likely to succeed" and "likely to fail", but we also keep it intuitive for players, where having a skill that equals the task difficulty means you expect to succeed more often than not.

@SunshineDistillery (Contributor) commented:

You could always just display the chance of success instead of relying on people's intuition.

@I-am-Erk (Contributor, Author) commented:

That is often not feasible in the UI. We're talking about a function that will apply every time something random is going to happen. How would we display the chance of spotting something at a distance? Or every time you interact with an object in a way that has a chance of failing?

Once it's more easily calculated, it might be reasonable to display the chance of success in more common places though, like in crafting.

@KorGgenT (Member) commented:

It would be standard enough to have a function that calculates the actual chance for display, to be used in locations where it makes sense.

@Zireael07 (Contributor) commented:

@I-am-Erk: I think @SunshineDistillery was referring to things like crafting and skill levels.

@GGgatherer (Contributor) commented Oct 16, 2020

Question: why do skills, attributes and proficiencies even have their own places instead of them all being in additional_factors (edit: with them all being just "factors")? They basically do the same thing: a flat roll adjustment, which is already allowed in that section.
And why are skills/attributes only allowed to result in a flat adjustment but not a multiplier? Are there zero interactions in the game where that would be the case?

@I-am-Erk (Contributor, Author) commented:

Skills and attributes use a different mechanic, a weighted average. Proficiencies are separate because they're syntactically different, to keep them consistent with recipes: the listed effects are what happens if you don't have the proficiency. Everything in "additional factors" is what happens if you do have the factor in question. Keeping them separate helps avoid confusion there.

@GGgatherer (Contributor) commented Oct 18, 2020

Okay, got the idea about placement. Still don't get why skills/attributes are designed to only work as a flat roll adjustment using only a weighted sum (edit: of unmodified values), but not a multiplier and/or any other mechanic from the additional_factors group. Is it because most skill checks currently work like this?

@I-am-Erk (Contributor, Author) commented:

Basically, we need some source of "original numbers", which currently pretty much always comes from skills. You can't have a multiplier if there's nothing to multiply. In addition, what we're aiming for is standardisation, so we want some kind of clear backbone to how we do these checks.

That said, I'm looking at a different way to write the block up now that I've tested out a basic version of this. Something more like breaking it into roll multipliers and roll modifiers, rather than breaking it up by source. Time modifiers should be moved out of here and be part of activity definitions. I'll be editing the post in a bit.

@Squishums (Contributor) commented Nov 13, 2020

I like the idea of a data-driven approach, but the problem I could see in the future is feature creep causing this to end up as some kind of super-detailed DSL to cover all the different use cases. Like "I want to roll night vision level at weight X but only if the light level is below Y, otherwise factor glare protection if light is above Z". You could define a whole JSON grammar, with nestable operators, conditionals, and variable testers, but I'm not sure if that will solve your problem of testability or ease of understanding. I know there was some history with Lua, but honestly it sounds like you might want a full scripting language for this?

Either way, I like standardizing the checks for a few reasons:

  • If you separate the threshold determination (all these skill weights and other factors) and the random number generator (your curve shape), you could display the success chance for even complicated situations, which could improve the UI in some places. I could see there being a complete set of functions to act on a roll definition.
  • Somehow conveying to the player what's influencing their action in a completely data-driven way. Tired and doing an action with a high fatigue weight -> "You're very tired. Your ability to focus on this task is reduced."

@I-am-Erk (Contributor, Author) commented Nov 15, 2020

We already have a JSON grammar for that: the JSON conditional syntax. There's possibly some argument for including it here, but I'm not convinced it adds anything beyond what simply describing a modifier related to lighting and a modifier related to glare would add.

UI-wise, I would suggest we could have a standardized, non-interrupting "chance of success, time to complete" dialogue that pops up when preparing to do the action, with an expand hotkey that shows modifiers.

@rohanlean (Contributor) commented:

Here are some technical alternatives for some of the terms used in the proposal:

current term → technical term
curve → (probability) distribution
linear → uniform
roll check → random check
shape → family

@actual-nh (Contributor) commented May 31, 2021

> shape → family

This one actually is important not to confuse - various distributions have "shape parameters", which can change things like skewness and kurtosis (as opposed to mean and variance, in general).

@I-am-Erk (Contributor, Author) commented Jun 1, 2021

> Here are some technical alternatives for some of the terms used in the proposal:
>
> current term → technical term
> curve → (probability) distribution
> linear → uniform
> roll check → random check
> shape → family

I took you up on curve → distribution and shape → family. Thanks for pointing these out.

There's some debate on "linear" vs "uniform". While the latter is more correct, the former is unlikely to cause confusion and is more likely to be understood by people adding content.

I think roll → random is a lateral change that just adds letters. I suggested 'roll' in the hopes that it will help people understand this is analogous to dice in our game.

@actual-nh (Contributor) commented:

> > Here are some technical alternatives for some of the terms used in the proposal:
> >
> > current term → technical term
> > curve → (probability) distribution
> > linear → uniform
> > roll check → random check
> > shape → family
>
> I took you up on curve → distribution and shape → family. Thanks for pointing these out.
>
> There's some debate on "linear" vs "uniform". While the latter is more correct, the former is unlikely to cause confusion and is more likely to be understood by people adding content.

Well, "linear" actually confused me when I first saw it. "Flat" might work; "linear" could mean a linearly increasing or decreasing probability.

> I think roll → random is a lateral change that just adds letters. I suggested 'roll' in the hopes that it will help people understand this is analogous to dice in our game.

Agreed - about the only concern I'd have on it is to remind people that we're not limited to what one can get with (a standard set of, or indeed any) dice (continuous is possible, for instance).
