In the world of computational simulations, balancing accuracy with performance is a constant challenge. When we talk about GPU accuracy, that balance centers on the choice between fast single-precision computations and the more precise, albeit more demanding, double-precision calculations.

Let's delve into this delicate balance, understanding the unique advantages that GPU computations bring to the table in the quest for optimal accuracy and efficiency.

What is GPU Accuracy?

When we speak of GPU accuracy, we are referring to the quality and reliability of calculations executed on a Graphics Processing Unit (GPU). However, contrary to what one might instinctively think, this doesn’t directly stem from the GPU possessing a superior Floating Point Unit (FPU).

The distinction arises from the GPU's architecture and how work is organized on it: with its vast array of threads, each handling a minuscule unit of work, a GPU typically combines numbers of comparable magnitude at each step.

This often yields more accurate results than the equivalent CPU computation. The advantage comes not from inherently superior arithmetic, but from the sheer volume and structure of the concurrent operations.
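A quick NumPy illustration of why operand magnitude matters:

```python
import numpy as np

# Adding a small value to a much larger one loses the small value entirely:
big = np.float32(1e8)
tiny = np.float32(1.0)
print(big + tiny)   # 1e+08 -- the 1.0 falls below float32's resolution at this scale

# Adding values of comparable magnitude is far better behaved:
print(np.float32(1.0) + np.float32(1.0))   # 2.0, exact
```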

Demystifying Double Precision

Double precision, or "double precision floating point," signifies a way to represent numbers using 64 bits, which provides a larger range of values and roughly 15-16 significant decimal digits, compared to about 7 digits for single precision (32 bits).

For those knee-deep in computational problems, the transition from single- to double-precision floating point can be vital: it can capture the fine details and nuances of a problem that would be lost in a 32-bit representation.
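A small NumPy comparison makes the digit counts concrete:

```python
import numpy as np

pi32 = np.float32(np.pi)   # 32-bit: ~7 significant decimal digits
pi64 = np.float64(np.pi)   # 64-bit: ~15-16 significant decimal digits
print(f"{pi64:.15f}")      # 3.141592653589793
print(f"{pi32:.15f}")      # 3.141592741012573 -- drifts after the 7th digit
```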

However, wielding this power comes with a trade-off. Double precision demands twice the memory and memory bandwidth of single precision, leading to potential performance drawbacks. This is especially relevant in Python, where double-precision calculations can weigh down execution speed, and where NumPy arrays default to float64 unless you request float32 explicitly.

The Performance Cost of Double Precision

Optimizing simulation software often runs into a familiar hurdle: single-precision floating point isn't accurate enough, so we turn to double precision.

This urgency arises especially in real-world physical problems, where the solutions often necessitate double-precision floating point calculations. 

Yet on GPUs, where memory and bandwidth are at a premium, the performance benefits of 32-bit over 64-bit data types become evident. This disparity is even more pronounced considering machine learning's recent inclination toward 16-bit operations to optimize speed.
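The bandwidth argument follows directly from the element sizes, which a quick NumPy check confirms:

```python
import numpy as np

for dtype in (np.float16, np.float32, np.float64):
    print(dtype.__name__, np.dtype(dtype).itemsize, "bytes per element")
# float16: 2, float32: 4, float64: 8 --
# halving the width doubles how many values fit in the same memory and bandwidth
```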

Classical Sum Of An Array

Take, as a simple example, summing the elements of an array. A classic sequential CPU loop looks something like this:
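Sketched in Python with a float32 accumulator (a stand-in for the equivalent compiled loop):

```python
import numpy as np

def sequential_sum(values: np.ndarray) -> np.float32:
    """Sum an array one element at a time, left to right, in float32."""
    total = np.float32(0.0)
    for v in values:
        total += v   # the accumulator keeps growing while each addend stays small
    return total
```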

As the sum gets larger, so does the error. This is a simple consequence of representing floating point numbers with a fixed number of bits: once the accumulator is much larger than the next addend, the addend's low-order bits fall below the accumulator's precision and are rounded away.

GPU Sum Of An Array

For a GPU, we must write our code to use as many threads as possible. At a very high level, our code may look something like the sketch below. For a real implementation, one may look at any of the standard CUDA parallel-reduction examples.
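As a rough model of the idea, here is a pairwise (tree) reduction in plain Python, where each loop pass stands in for one parallel round across the threads; a real kernel would add thread blocks, shared memory, and synchronization:

```python
import numpy as np

def tree_sum(values: np.ndarray) -> np.float32:
    """Pairwise reduction: every round halves the array, as parallel threads would."""
    vals = values.astype(np.float32)
    while vals.size > 1:
        if vals.size % 2:                     # pad odd-length rounds with a zero
            vals = np.append(vals, np.float32(0.0))
        vals = vals[0::2] + vals[1::2]        # each "thread" sums one adjacent pair
    return vals[0]
```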

Visually, the GPU algorithm forms a binary tree: pairs of elements are summed in parallel, then pairs of partial sums, and so on until a single value remains.

Due to this tree-like structure, the operands sent to the FPU tend to have comparable magnitudes, so the summation accumulates less rounding error.

Practical Example of Double Precision Using Python

Here I will use NumPy and PyCUDA to briefly demonstrate the point. First, looking at the sum:
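A sketch of the comparison, where gpuarray.sum performs the reduction on the device and a naive Python loop stands in for a sequential float32 CPU sum (the exact figures depend on the random input; the numbers below are from one representative run):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.gpuarray as gpuarray

n = 10_000_000
data = np.random.default_rng(0).random(n).astype(np.float32)

true_sum = np.sum(data.astype(np.float64))            # float64 reference value
cpu_sum = np.float32(0.0)
for v in data:                                        # naive sequential float32 loop
    cpu_sum += v
gpu_sum = gpuarray.sum(gpuarray.to_gpu(data)).get()   # device-side reduction

print(f"True sum: {true_sum}")
print(f"CPU 32bit precision: {cpu_sum} error: {abs(true_sum - cpu_sum)}")
print(f"GPU 32bit precision: {gpu_sum} error: {abs(true_sum - gpu_sum)}")
```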

True sum: 4999471.5791
CPU 32bit precision: 4.99947e+06 error: 0.579096186906
GPU 32bit precision: 4999471.5 error: 0.0790961869061

Let’s look at another practical example, performing the dot product:
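A sketch along the same lines, using gpuarray.dot for the device-side multiply-and-reduce (again, the input data here is illustrative):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.gpuarray as gpuarray

n = 10_000_000
rng = np.random.default_rng(0)
a = rng.random(n).astype(np.float32)
b = rng.random(n).astype(np.float32)

true_dot = np.dot(a.astype(np.float64), b.astype(np.float64))   # float64 reference
cpu_dot = np.float32(0.0)
for x, y in zip(a, b):                    # sequential float32 multiply-accumulate
    cpu_dot += x * y
gpu_dot = gpuarray.dot(gpuarray.to_gpu(a), gpuarray.to_gpu(b)).get()

print(f"True dot product: {true_dot}")
print(f"CPU 32bit precision: {cpu_dot} error: {abs(true_dot - cpu_dot)}")
print(f"GPU 32bit precision: {gpu_dot} error: {abs(true_dot - gpu_dot)}")
```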

True dot product: 3332515.05789
CPU 32bit precision: 3.33231e+06 error: 200.557887515
GPU 32bit precision: 3332515.0 error: 0.0578875150532

Conclusions

Of course, there are many other variables at play in a real-world scenario: library implementation, compiler flags, etc. Perhaps in this simple example you may find the magnitude of the error too small to be exciting.

Yet, the GPU solution is "closer to double".

The pattern of the GPU implementation, with thousands of threads and tree-style reductions, repeats in the real world and often leads to more accurate results. In some cases, the difference is enough to drop the double-precision requirement, to the benefit of performance.

Does your problem require some serious number crunching? Send Ryan an email!
