nsajko February 1, 2026, 9:02pm 1
under construction, recalculating and moving the post to Discourse
Introduction
How about visualizing the discrepancy between the exact value of a univariate real function, such as the sine, and the approximate value returned by a call such as sin(0.3)? By putting an interval from the function’s domain on the x-axis of a plot, and this discrepancy, the error, on the y-axis, it might be possible to tell on which parts of the function’s domain the implementation is less accurate than on others. That is, it might be possible to read the worst-case regions of the approximation off the plot.
I choose to measure the error in units in the last place/units of least precision (ULPs). Other commonly used error measures are the absolute error, which is not usually appropriate for floating-point numbers, and the relative error, which has these drawbacks compared to the error in ULPs:
it is usually less immediately useful and less intuitive
plotting software tends to have trouble with the tiny values
One advantage of the error in ULPs is that it has some convenient and easy interpretations in the context of floating-point numbers. For example:
If the error between the exact value and the approximate (floating-point) value is less than half an ULP, the approximate value is the closest to the exact value among the numbers representable in that floating-point format. The technical term is correctly rounded.
If the error is merely less than one ULP, the approximate value is one of the two floating-point numbers closest to the exact value. The technical term is faithfully rounded (although faithful rounding is not technically a rounding).
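To make this concrete, here is a minimal sketch of how the error in ULPs can be computed, assuming Float64 and using BigFloat as a stand-in for the exact value (ulp_error is a hypothetical helper written for this post, not part of the app described below):

setprecision(BigFloat, 256)  # precision of the reference value

# error of f(x) in ULPs, with BigFloat standing in for the exact value
function ulp_error(f, x::Float64)
    exact = f(big(x))   # high-precision reference value
    approx = f(x)       # value returned by the Float64 implementation
    # eps(approx) is one ULP at approx (ignoring edge cases at powers of two)
    abs(big(approx) - exact) / eps(approx)
end

ulp_error(sin, 0.3)  # a result below 0.5 would mean sin(0.3) is correctly rounded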
If we tried to, say, evaluate the error in ULPs on an evenly spaced grid over the chosen interval, the plot would just look like noise if each evaluation were plotted as a data point. However, given that the worst cases are what is most interesting, it might be possible to achieve a comprehensible visualization by giving up on visualizing the best cases. In other words, we’re interested in the upward spikes of the error, and eliding the downward spikes might make the plot a lot less noisy.
To accomplish this, the approach I choose here is vaguely similar to downsampling/decimation from signal analysis: take n values, where n is some large positive integer, and represent them on the plot by a single aggregate value, in this case the maximum of the n points. In a signal-analysis context, it is also common to apply a lowpass filter before the decimation itself, to reduce noise/high-frequency components of the signal and thus prevent artifacts in the resulting output. Here, a sliding-window maximum seems like an appropriate "lowpass filter" to smooth out the data before decimation. A sketch of this pipeline follows.
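As an illustration, here is a rough Julia sketch of that pipeline; the helper names are made up for this post, and the actual app may be structured differently:

# smooth the raw per-point errors with a sliding-window maximum
function sliding_window_maximum(errors::Vector{Float64}, w::Int)
    n = lastindex(errors)
    [maximum(@view errors[i:min(i + w - 1, n)]) for i in eachindex(errors)]
end

# decimate: represent each run of k consecutive values by its maximum
function decimate_by_max(errors::Vector{Float64}, k::Int)
    n = lastindex(errors)
    [maximum(@view errors[i:min(i + k - 1, n)]) for i in 1:k:n]
end

# e.g., with errs holding the ULP error at each point of a dense grid:
# ys = decimate_by_max(sliding_window_maximum(errs, 32), 1000)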
Julia app on GitHub
The app used for creating these visualizations is published on GitHub:
NB:
Neither the package nor the app it exposes is registered as of this writing.
I should probably move the Git repository from my personal namespace to the JuliaMath organization on GitHub.
The plots
These plots might help both contributors and users of Julia better understand where there’s room for improvement in the current implementations. For some functions, though, the worst-case error spikes are difficult or impossible to fix efficiently without reaching for an arbitrary-precision implementation like MPFR (BigFloat) or Arb.
[ULP error plot for each of the following functions]
acos, acosd, acosh, acot, acotd, acoth, acsc, acscd, acsch, asec, asecd, asech, asin, asind, asinh, atan, atand, atanh, cos, cosc, cosd, cosh, cospi, cot, cotd, coth, csc, cscd, csch, deg2rad, exp, exp10, exp2, expm1, log, log10, log1p, log2, rad2deg, sec, secd, sech, sin, sinc, sind, sinh, sinpi, tan, tand, tanh, tanpi
Miscellaneous
Connected PRs
Julia itself
redo `sinpi` and `cospi` kernel polynomial approximation by nsajko · Pull Request #59031 · JuliaLang/julia · GitHub
improve `cosc(::Float32)` and `cosc(::Float64)` accuracy by nsajko · Pull Request #59087 · JuliaLang/julia · GitHub
more accurate `rad2deg` and `deg2rad` for `Float16` and `Float32` by nsajko · Pull Request #59097 · JuliaLang/julia · GitHub
Julia package LogExpFunctions.jl
fix accuracy of `logit` by nsajko · Pull Request #99 · JuliaStats/LogExpFunctions.jl · GitHub
fix accuracy of `logcosh(::Union{Float16, Float32, Float64})` by nsajko · Pull Request #101 · JuliaStats/LogExpFunctions.jl · GitHub
add `logabstanh` function by nsajko · Pull Request #104 · JuliaStats/LogExpFunctions.jl · GitHub
Version, platform info
julia> versioninfo()
Julia Version 1.14.0-DEV.1670
Commit 6e64d0c3442 (2026-02-02 03:21 UTC)
Build Info:
Official https://julialang.org release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 8 × AMD Ryzen 3 5300U with Radeon Graphics
WORD_SIZE: 64
LLVM: libLLVM-20.1.8 (ORCJIT, znver2)
GC: Built with stock GC
Threads: 5 default, 1 interactive, 5 GC (on 8 virtual cores)
jebej February 1, 2026, 11:54pm 2
Nice! But the plots are hard to see due to the low contrast, low resolution, and small text. If it’s not too much trouble, an SVG (or higher-resolution) version with better sizing and colors would be appreciated!
Interesting, thank you for sharing.
We recently had a conversation about ULPs over at IntervalArithmetic.
We used to have a rounding mode based on prevfloat/nextfloat (fast, not tight, but GPU compatible), though we ultimately pulled the plug because determining the ULPs turned out to be tricky and architecture-dependent.
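Roughly, the idea was something like the following minimal sketch (enclose is a made-up name for this post, not IntervalArithmetic.jl’s actual API):

function enclose(f, x::Float64)
    y = f(x)
    # widening the result by one ULP on each side encloses the exact value
    # whenever f is faithfully rounded, i.e. its error is below 1 ULP
    (prevfloat(y), nextfloat(y))
end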
I am not an expert on this subject, but since Julia itself provides implementations of some functions:
1. Where can we find the list of implemented functions?
2. For these functions, this means that the ULP error is architecture-independent, correct?
3. Do any of these implementations provide proven bounds on the ULP error? I would not expect correct rounding in general, but even a pessimistic, certified upper bound on the ULP error would already be very useful for our purposes. Maybe one day we can even get an implementation of the CORE-Math functions.
This sounds interesting. Am I the only one who can’t see the plots? I only see an “X” below each function name and I thought “X” never, ever marks the spot.
nsajko February 3, 2026, 7:30am 6
Yeah, the post is not done yet 
The first line says so anyway:
under construction, recalculating and moving the post to Discourse
For context, this was originally a blog post on the Julia Forem. Forem does not accept SVG, so I provided the plots as PNG, but Forem then downscaled all the images, which I only realized after publishing the Forem post and linking to it from here. Since then I’ve been sidetracked by a possible Julia bug, so I haven’t gotten around to regenerating the plots.