Request for sigfig support #629

Closed

CharlesBClarke opened this issue Mar 9, 2024 · 2 comments

CharlesBClarke commented Mar 9, 2024

Feature

Add significant figures (sigfigs) support to Qalculate for calculations.

Why?

To ensure accuracy in scientific and engineering calculations by automatically adjusting output precision based on input sigfigs.

How?

  • Automatic Sigfig Detection: Automatically determine and apply sigfigs from input to output.
  • Manual Sigfig Setting: Allow manual specification of sigfigs for output.
  • Examples & Guidance: Provide examples and brief explanations on sigfig application.

Examples

  1. Input: 1.23 * 2.1
    Current Output: 2.583
    Desired Output: 2.6 (2 sigfigs)
  2. Input: 12.0 / 3.4
    Current Output: 3.52941176
    Desired Output: 3.5 (2 sigfigs)
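
For illustration, here is a minimal Python sketch of the requested rule (count the significant figures of each input and round the result to the smallest count); the helper names are hypothetical and this is not how Qalculate itself handles precision:

  def sig_figs(text: str) -> int:
      # Count significant figures in a plain decimal literal such as "1.23" or "12.0".
      digits = text.replace("-", "").replace(".", "").lstrip("0")
      return len(digits)

  def round_sig(value: float, figures: int) -> float:
      # Round value to the given number of significant figures.
      return float(f"{value:.{figures}g}")

  a, b = "1.23", "2.1"
  result = float(a) * float(b)             # 2.583
  figures = min(sig_figs(a), sig_figs(b))  # min(3, 2) = 2
  print(round_sig(result, figures))        # 2.6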

Benefits

  • Enhances calculation precision.
  • Promotes understanding of sigfigs.
  • Differentiates Qalculate from other calculators.
hanna-kn (Contributor) commented Mar 9, 2024

This is already supported by the "read precision" (readprec) setting. This can be activated in qalc using set readprec. The number of significant digits in output can be changed using the precision or max decimals settings.

> help readprec

read precision (readprec)
If activated, numbers are interpreted as approximate with precision equal to the number of significant digits
(3.20 = 3.20+/-0.005).
(0* = off, 1 = always, 2 = when decimals)

> set readprec
> 1.23*2.1

  ≈ 1.2300±0.0050 × 2.100±0.050 ≈ 2.583±0.062
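
The ±0.0050 and ±0.050 in that output follow from the rule quoted in the help text: each literal is read as its value plus or minus half a unit in the last shown digit. A small Python sketch of that reading rule, for illustration only (not Qalculate's internals):

  def read_with_precision(text: str) -> tuple[float, float]:
      # Interpret a decimal literal as value ± half a unit in its last digit (3.20 -> 3.20±0.005).
      decimals = len(text.split(".")[1]) if "." in text else 0
      return float(text), 0.5 * 10 ** -decimals

  print(read_with_precision("1.23"))  # (1.23, 0.005) -> shown as 1.2300±0.0050
  print(read_with_precision("2.1"))   # (2.1, 0.05)   -> shown as 2.100±0.050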

Qalculate also supports specification of the precision/uncertainty/error of a value using ± (or +/-), or the interval() and uncertainty() functions.

By default the precision of the result (uncertainty propagation) is determined using the variance formula, but interval arithmetic can also be used (this is controlled by the "interval arithmetic", or ia, setting). The common simple method of using the same number of significant digits in output as the number with the least number of significant digits in input is not supported.

With "read precision" enabled 1.23*2.1 will return 2.583±0.062 (or 2.583±0.072 with interval arithmetic enabled). Note that ± will by default be omitted for values with higher precision (e.g. 1.23 * 2.10 ≈ 2.6).
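
For reference, a short Python sketch that reproduces the two figures quoted above, applying the variance formula and interval arithmetic to 1.23±0.005 times 2.1±0.05; this is only an outside check, not Qalculate code:

  from math import sqrt

  a, da = 1.23, 0.005
  b, db = 2.1, 0.05

  # Variance formula (default): for a product, the absolute uncertainties add in quadrature.
  u = sqrt((b * da) ** 2 + (a * db) ** 2)
  print(round(a * b, 3), round(u, 3))  # 2.583 0.062

  # Interval arithmetic (the "ia" setting): multiply the interval endpoints.
  lo, hi = (a - da) * (b - db), (a + da) * (b + db)
  print(round((lo + hi) / 2, 3), round((hi - lo) / 2, 3))  # 2.583 0.072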

CharlesBClarke (Author) commented

Apologies for adding a request without doing the proper research. The real implementation is better than what I requested.
