A hacker's guide to numerical analysis

Life may toss us ill-conditioned problems, but it is too short to settle for unstable algorithms. - D.P. O'Leary

Measures of error

If $x$ is a number and $\hat{x}$ is its approximation, then there are two notions of error:

  1. absolute error: $|x - \hat{x}|$.
  2. relative error: $|x - \hat{x}|/|x|$.

Since the relative error is invariant under scaling ($x \mapsto \alpha x$), we will mostly be interested in relative error.
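
A minimal sketch of the two measures (the values and variable names are mine, not from the post):

#include <cmath>
#include <stdio.h>

int main() {
    double x    = 3.14159265358979;  // the "true" value
    double xhat = 3.14;              // its approximation
    double abs_err = fabs(x - xhat);       // absolute error
    double rel_err = abs_err / fabs(x);    // relative error, invariant under scaling
    printf("absolute error: %g\nrelative error: %g\n", abs_err, rel_err);
}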

Significant digits

The significant digits in a number are the first nonzero digit and all succeeding digits. Thus 1.7320 has five significant digits. 0.0491 has only three significant digits. It is not transparent to me why this definition is sensible.

Correct Significant digits --- a first stab

We can naively say that $\hat{x}$ agrees with $x$ to $p$ significant digits if $\hat{x}$ and $x$ round to the same number to $p$ significant digits. This definition is seriously problematic. Consider the numbers (the subscript denotes rounding to that many significant digits):

  • $x = 0.9949$, $x_1 = 1.0$, $x_2 = 0.99$, $x_3 = 0.9950$
  • $y = 0.9951$, $y_1 = 1.0$, $y_2 = 1.0$, $y_3 = 0.9950$

Here, $y$ has correct one and three significant digits relative to $x$, but incorrect two significant digits, since the roundings $x_2$ and $y_2$ do not agree even in the first significant digit.

Correct Significant digits --- the correct definition

We say that $\hat{x}$ agrees with $x$ to $p$ significant digits if $|x - \hat{x}|$ is less than half a unit in the $p$th significant digit of $x$.
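
A small sketch of this definition (the helper name `agrees_to_p_digits` is mine, not from the post), checked against the $x = 0.9949$, $y = 0.9951$ pair from above:

#include <cmath>
#include <stdio.h>

// True iff |x - xhat| is less than half a unit in the p-th significant digit of x.
bool agrees_to_p_digits(double x, double xhat, int p) {
    // A unit in the p-th significant digit of x is 10^(floor(log10|x|) - p + 1).
    double unit = pow(10.0, floor(log10(fabs(x))) - p + 1);
    return fabs(x - xhat) < 0.5 * unit;
}

int main() {
    double x = 0.9949, y = 0.9951;
    for (int p = 1; p <= 4; ++p)
        printf("p = %d: %s\n", p, agrees_to_p_digits(x, y, p) ? "agrees" : "disagrees");
}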

Accuracy v/s precision

  • Accuracy: absolute or relative error of a quantity.
  • Precision: the accuracy with which the basic arithmetic operations +, -, *, / are performed. For floating point, this is measured by the unit round-off (which we have not met yet).

Accuracy is not limited by precision: by using fixed-precision arithmetic, we can emulate arbitrary-precision arithmetic. The problem is that this emulation is often too expensive to be useful.
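
As a tiny illustration of this point (not from the original post), Dekker's Fast2Sum recovers the exact rounding error of a single addition using only ordinary double arithmetic; it is the building block of "double-double" emulated higher precision:

#include <stdio.h>

int main() {
    // Fast2Sum (Dekker): for |a| >= |b|, a + b == s + e exactly, where s is
    // the rounded sum and e is the rounding error, recovered in working precision.
    double a = 1.0, b = 1e-20;
    double s = a + b;        // rounds to 1.0; the contribution of b is lost
    double e = b - (s - a);  // the exact rounding error of the addition
    printf("s = %.17g, e = %.17g\n", s, e);
}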

Backward, Forward errors

Let $y = f(x)$, where $f: \mathbb{R} \rightarrow \mathbb{R}$. Let us compute $\hat{y}$ as an approximation to $y$, in an arithmetic of precision $u$. How do we measure the quality of $\hat{y}$?

  1. In many cases, we may be happy with a $\hat{y}$ such that the relative error between $y$ and $\hat{y}$ is equal to $u$: we did the best we could with the precision that was given. This is the forward error.
  2. An alternative question we can ask is: for what $\delta x$ do we have $\hat{y} = f(x + \delta x)$? That is, how far from the input do we need to stray to get a matching output? There may be many such $\delta x$, so we ask for $\min |\delta x|$. We can divide this error by $x$ as a normalization factor. This is the backward error (a small sketch follows this list).
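
As a concrete sketch (the choice of sqrt as the example is mine, not from the post): for $\hat{y} = \texttt{sqrt}(x)$, a perturbed input that reproduces the computed output exactly is $x + \delta x = \hat{y}^2$, so the relative backward error is approximately $|\hat{y}^2 - x|/|x|$:

#include <cmath>
#include <stdio.h>

int main() {
    double x    = 2.0;
    double yhat = sqrt(x);  // computed approximation to sqrt(2)
    // sqrt solves the perturbed problem x + dx = yhat^2 exactly, so the
    // relative backward error is |yhat^2 - x| / |x|.  (The multiplication
    // below rounds too, so this is only an estimate.)
    double backward = fabs(yhat * yhat - x) / fabs(x);
    printf("relative backward error of sqrt(2): %g\n", backward);
}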

There are two reasons we prefer backward error.

  1. It converts error analysis into "data analysis". The data itself tends to be uncertain. If the error given by the backward analysis is smaller than the data uncertainty, then we can write off our error as being too small to matter: for all we know, we have 'fixed' the uncertain data with our small error.
  2. It reduces the question of error analysis to perturbation theory, which is very well understood for a large class of problems.

Backward stable

A method for computing $y = f(x)$ is called backward stable if it produces a $\hat{y}$ with small backward error. That is, we need a small $\delta x$ such that $\hat{y} = f(x + \delta x)$.

Mixed forward-backward error

We assume that addition and subtraction are backward stable, where $u$ is the unit round-off: the relative precision to which our arithmetic operations are performed:

$$x \pm y = x(1 + \Delta) \pm y(1 + \Delta), \qquad |\Delta| \leq u$$

Another type of error we can consider is that of the form:

$$\hat{y} + \delta y = f(x + \Delta x)$$

That is, for a small perturbation in the output ($\delta y$), we can get a backward error of $\Delta x$. This is called mixed forward-backward error. We say that an algorithm with mixed forward-backward error is stable iff:

$$\begin{aligned} &\hat{y} + \delta y = f(x + \Delta x) \\ &|\delta y|/|\hat{y}| < \epsilon \\ &|\Delta x|/|x| < \eta \\ &\text{$\epsilon, \eta$ are small} \end{aligned}$$

This definition of stability is useful when rounding errors are the dominant form of errors.

Conditioning

The relationship between forward and backward error is governed by conditioning: the sensitivity of solutions to perturbations of the data. Suppose we have an approximate solution $\hat{y} = f(x + \delta x)$. Then:

$$\begin{aligned} \hat{y} - y &= f(x + \delta x) - f(x) = f'(x)\,\delta x + O((\delta x)^2) \\ (\hat{y} - y)/y &= (x f'(x)/f(x))\,(\delta x/x) + O((\delta x)^2) \\ (\hat{y} - y)/y &= c(x)\,(\delta x/x) + O((\delta x)^2) \\ c(x) &\equiv |x f'(x)/f(x)| \end{aligned}$$

The quantity $c(x)$ is the scaling factor that takes the relative change in the input to the relative change in the output. Note that this is a property of the function $f$, not of any particular algorithm.

Example: $\log x$

If $f(x) = \log x$, then $c(x) = |x (\log x)' / \log x| = |1/\log x|$. This quantity is very large for $x \simeq 1$. So, a small change in $x$ can produce a drastic change in $\log x$ around $1$.

  • Note that the absolute change is quite small: $\log(x + \delta x) \simeq \log x + \delta x/x$. However, relative to $\log x$, this change of $\delta x/x$ is quite large (see the sketch below).
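
A sketch (the numbers are mine) that checks the estimate: perturb an $x \simeq 1$ by a small relative amount and compare the relative change in $\log x$ with $c(x) = |1/\log x|$ times the relative input change:

#include <cmath>
#include <stdio.h>

int main() {
    double x   = 1.0001;       // close to 1, where log is ill-conditioned
    double rel = 1e-8;         // relative perturbation of the input
    double dx  = rel * x;
    double rel_out = fabs(log(x + dx) - log(x)) / fabs(log(x));
    double c       = fabs(1.0 / log(x));   // condition number of log at x
    printf("relative change in output:    %g\n", rel_out);
    printf("c(x) * relative input change: %g\n", c * rel);
}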

Rule of thumb

We now gain access to the useful rule:

$$\text{forward error} \lesssim \text{condition number} \times \text{backward error}$$

  • Glass half empty: Ill-conditioned problems can have large forward error.
  • Glass half full: Well-conditioned problems do not amplify error in data.

Forward stable

If a method produces answers with forward errors of similar magnitude to those produced by a backward stable method, then it is called forward stable. Backward stability implies forward stability, but not vice versa. (TODO: why?)

Cancellation

Consider the following program:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 12e-9;
    double c = cos(x);
    double one_sub_c = 1 - c;   // catastrophic cancellation happens here
    double denom = x*x;
    double yhat = one_sub_c / denom;

    printf("x:         %20.16f\n"
           "cx:        %20.16f\n"
           "one_sub_c: %20.16f\n"
           "denom:     %20.16f\n"
           "yhat:      %20.16f\n",
           x, c, one_sub_c, denom, yhat);
}

which produces the output:

x:           0.0000000120000000
cx:          0.9999999999999999
one_sub_c:   0.0000000000000001
denom:       0.0000000000000001
yhat:        0.7709882115452477

This is clearly wrong, because we know that $(1 - \cos x)/x^2 \leq 1/2$. The reason for this terrible result is that:

  • we know $\cos x$ to high accuracy, since $x$ was some fixed quantity.
  • $1 - \cos x$ converted the error in $\cos x$ into its value.
  • $1 - \cos x$ has only one significant figure.
  • This makes it practically useless for anything else we are interested in doing.

In general:

$$\begin{aligned} x &\equiv 1 + \epsilon \qquad \text{(error of order $\epsilon$)} \\ y &\equiv 1 - x = -\epsilon \qquad \text{(value of order $\epsilon$)} \end{aligned}$$

That is, subtracting values close to each other (in this case, $1$ and $x$) converts error order of magnitude into value order of magnitude. Alternatively, it brings earlier errors into prominence as values.
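
As a concrete fix for the program above (a sketch, not from the original post): the half-angle identity $1 - \cos x = 2\sin^2(x/2)$ avoids the subtraction of nearly equal numbers entirely:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 12e-9;
    // Naive form: catastrophic cancellation in 1 - cos(x).
    double naive = (1 - cos(x)) / (x * x);
    // Rewritten form: 1 - cos(x) = 2*sin^2(x/2), no cancellation.
    double s = sin(x / 2);
    double stable = 2 * s * s / (x * x);
    printf("naive:  %20.16f\nstable: %20.16f\n", naive, stable);
}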

Analysis of subtraction

We can consider the subtraction:

$$\begin{aligned} x &= a - b; \qquad \hat{x} = \hat{a} - \hat{b} \\ \hat{a} &= a(1 + \Delta a), \qquad \hat{b} = b(1 + \Delta b) \\ \left|\frac{x - \hat{x}}{x}\right| &= \frac{|{-a\,\Delta a} + b\,\Delta b|}{|a - b|} \leq \frac{\max(|\Delta a|, |\Delta b|)\,(|a| + |b|)}{|a - b|} \end{aligned}$$

This quantity will be large when $|a - b| \ll |a| + |b|$: that is, when there is heavy cancellation in the subtraction used to compute $x$.
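
A tiny numerical sketch of this bound (the inputs are made up): perturb $a$ and $b$ by a relative $10^{-9}$ and watch the relative error of $a - b$ blow up to order 1, because $|a - b| \ll |a| + |b|$:

#include <cmath>
#include <stdio.h>

int main() {
    double a = 1.000000001, b = 1.0;   // nearly equal inputs
    double da = 1e-9, db = -1e-9;      // relative perturbations of a and b
    double x    = a - b;               // "true" difference
    double xhat = a * (1 + da) - b * (1 + db);
    printf("relative error of the inputs: %g\n", fabs(da));
    printf("relative error of a - b:      %g\n", fabs(x - xhat) / fabs(x));
}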

Underflow

#include <cmath>
#include <stdio.h>

int main() {
    double x = 1000;
    // Take 60 square roots...
    for(int i = 0; i < 60; ++i) {
        x = sqrt(x);
    }
    // ...then square 60 times; mathematically this is the identity.
    for(int i = 0; i < 60; ++i) {
        x = x*x;
    }
    printf("x: %10.20f\n", x);
}

This produces the output:

./sqrt-pow-1-12
...
x: 1.00000000000000000000

That is, even though the function is an identity function, the answer collapses to 1. What is happening?
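
As a hint (a sketch, not from the original post): watch how far $x$ is from 1 after each square root. Repeated square roots drive $x$ toward 1, and once the gap falls below the unit round-off of a double (about $1.1 \times 10^{-16}$), $x$ rounds to exactly 1; squaring 1 can then never recover the lost digits:

#include <cmath>
#include <stdio.h>

int main() {
    double x = 1000;
    for (int i = 1; i <= 60; ++i) {
        x = sqrt(x);
        // Print the gap every 10 iterations; once it hits 0, all the
        // information about the original 1000 has been rounded away.
        if (i % 10 == 0) printf("after %2d sqrts: x - 1 = %.17e\n", i, x - 1);
    }
}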

Computing $(e^x - 1)/x$

One way to evaluate this function is as follows:

double f(double x) { return x == 0 ? 1 : (pow(M_E, x) - 1) / x; }

This can suffer from catastrophic cancellation in the numerator. When $x$ is close to $0$, $e^x$ is close to 1, and $e^x - 1$ will magnify the error in $e^x$.

A more accurate alternative (the listing is truncated in the source; the sketch below uses the standard rewrite $(e^x - 1)/x = (w - 1)/\log w$ with $w = e^x$, in which the cancellation error in the numerator is compensated by a matching error in the denominator):

double f(double x) {
    // NOTE: reconstructed completion of the truncated listing.
    double w = exp(x);
    // When w rounds to exactly 1, (e^x - 1)/x is 1 to working precision.
    return w == 1 ? 1 : (w - 1) / log(w);
}
