Numeric variable types allow us to store and manipulate numeric values in our programs.
In almost all our programs, we will need to manage and manipulate numbers. That makes sense: a program is essentially “a big calculator” that does things (and many of those things involve numbers).
A number can represent any quantity or measurement. For example, a person’s age, the number of occupants in a classroom, the temperature of a room, or coordinates on a map.
If you remember your mathematics, there are different kinds of numbers: natural numbers, integers, real numbers. In programming, broadly speaking, we are going to differentiate between two:
- Integers (without decimals)
- Numbers with decimals
Integers
Integers represent numeric values without decimals. They can be positive, negative, or zero.
Integers are the simplest to understand and implement in a computer. They are the numbers that arise, for example, when counting cows in a field.
Programming languages offer different types of integer variables, which vary in their size and range of values.
In general, the differences are:
- How large a number we can store
- Whether it allows negative numbers or not
Internally, numbers in a computer are stored in binary format. And we can store 1, 2, 3… up until we exhaust the computer’s memory (or, more specifically, until we exceed the capacity of the variable storing it).
When we try to store a number larger than a variable’s maximum capacity, we get what is called an “overflow”.
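As an illustrative sketch in C# (the variable name is my own), here is what an overflow looks like with the byte type, which holds values from 0 to 255:
// a byte holds values from 0 to 255
byte counter = 255;
counter++; // overflow: the value wraps around to 0
Console.WriteLine(counter); // prints 0
In C#, arithmetic is unchecked by default, so the value wraps silently; wrapping the same operation in a checked block would throw an OverflowException instead.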
Numbers with Decimals
The other major family of numbers in the programming realm is numbers with decimals.
It must be said that representing numbers with decimals in a computer is not as simple as it may seem at first glance.
To do this, two mechanisms are commonly used.
- Floating-Point
- Fixed-Point
Floating-Point is the most common representation. It uses a mantissa and an exponent to store numbers with decimals. It provides a wide range of values and efficient representation, but can have rounding errors.
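As a rough intuition (shown here in base 10 for readability, though the hardware actually works in base 2), floating-point behaves like scientific notation: a mantissa scaled by an exponent. A minimal sketch in C#:
// scientific notation: mantissa × 10^exponent (the hardware uses base 2)
double avogadro = 6.022e23; // mantissa 6.022, exponent 23
double tiny = 1.5e-8;       // the decimal point "floats" with the exponent
Console.WriteLine(avogadro);
Console.WriteLine(tiny);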
Fixed-Point numbers are stored as integers, and use a convention to determine the location of the decimal point. It offers precise representation, but the number of decimal digits is limited.
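A minimal sketch of the fixed-point idea in C# (the money example is my own): we store the amount as a plain integer and agree, by convention, that the last two digits are the decimal part:
// fixed-point sketch: store money as an integer number of cents,
// with the convention that the last two digits are the decimals
long priceInCents = 1999;                  // represents 19.99
long taxInCents = priceInCents * 21 / 100; // 21% tax, pure integer arithmetic
long totalInCents = priceInCents + taxInCents;
Console.WriteLine(totalInCents / 100.0);   // prints 24.18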
There are other more specific types of representation like fractions or scaled integers. They are less commonly used, but can be useful in some specific cases.
Examples of Numeric Types
As we said, the representation of numbers varies across languages, especially between statically and dynamically typed ones.
For example, languages like C++, C# or Java define different types of numbers.
The differences between them are:
- Whether they allow positive and negative numbers
- Whether they allow numbers with decimals or not
The exact maximum sizes vary between languages and, ultimately, can also depend on the operating system and the compiler we are using.
// unsigned integers (positive values only)
byte smallUnsignedByte = 255;
ushort smallUnsignedShort = 5;
uint unsignedInteger = 10;
ulong unsignedLong = 1000000000;
// signed integers (positive and negative)
short smallInteger = 5;
int integer = 10;
long largeInteger = 1000000000;
// numbers with decimals
float floatingPoint = 3.14f;
double doublePrecision = 3.14159265359;
decimal highPrecisionDecimal = 3.1415926535897932384626433832m;
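If you want to check the exact limits on your platform, C#’s numeric types expose them as MinValue and MaxValue constants. A quick sketch:
// each numeric type exposes its range as constants
Console.WriteLine(byte.MinValue);  // 0
Console.WriteLine(byte.MaxValue);  // 255
Console.WriteLine(int.MinValue);   // -2147483648
Console.WriteLine(int.MaxValue);   // 2147483647
Console.WriteLine(float.MaxValue); // ~3.4e38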
On the other hand, JavaScript essentially has a single numeric type: Number (leaving aside BigInt, a more recent addition for arbitrarily large integers).
Unlike other languages, JavaScript does not distinguish between integers and numbers with decimals: all numbers are stored as 64-bit floating-point values, following the IEEE 754 standard.
So the previous example would look like this.
let integer = 10;
let largeInteger = 1000000000;
let smallInteger = 5;
let smallByte = 255;
let floatingPoint = 3.14;
let doublePrecision = 3.14159265359;
let highPrecisionDecimal = 3.1415926535897932384626433832; // digits beyond double precision are silently dropped
Finally, if we look at Python, here it is also not necessary to specify the type of the variable we are going to create.
Internally, Python offers different types of numeric data, such as int, float, and complex. The size of numeric variables in Python can vary depending on the specific implementation and the machine architecture on which it is running.
So the example would look like this.
integer = 10
largeInteger = 1000000000
smallInteger = 5
smallByte = 255
floatingPoint = 3.14
doublePrecision = 3.14159265359
highPrecisionDecimal = 3.1415926535897932384626433832  # digits beyond double precision are silently dropped
However, in most common implementations, the int type can grow to store integers of arbitrary size (limited only by available memory). The float type, on the other hand, is implemented using the IEEE 754 standard for 64-bit floating-point numbers.
Of course, there are different peculiarities and more specific cases in different programming languages. But as we see, they have more in common than differences.
Floating-Point Precision Problems (Advanced)
We have mentioned that floating-point number representation has precision limitations due to the way they work.
Most of the time it’s not a problem, but it’s good to understand it correctly (because sometimes “strange” or unintuitive situations occur when working with them).
For example, consider this snippet in Python.
result = 0.1 + 0.2
print(result) # Result: 0.30000000000000004
Why does such a strange result appear? Why doesn’t it give 0.3, which is what it should be? Well, this is the problem of working with floating-point number representation.
The problem is that computers have a finite number of bits to represent numbers, and in binary both 0.1 and 0.2 are repeating (infinite) fractions, so they cannot be stored with perfect precision.
It’s important to note that this precision problem is not specific to a particular programming language. The same would happen in C#, in JavaScript, or in any other language. It’s a problem inherent to the number’s representation.
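To see that it really is language-independent, here is the same check in C# (a quick sketch):
// the same surprise in C#: 0.1 + 0.2 is not exactly 0.3
double result = 0.1 + 0.2;
Console.WriteLine(result == 0.3); // False
Console.WriteLine(result);        // 0.30000000000000004 on modern .NET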
I won’t go into great detail about the internal implementation (if you want, you can easily find a lot of information about it). But, very briefly, a floating-point number is represented with the following expression.
value = (-1)^s × 1.f × 2^(e − bias)
Where:
- s is the sign bit (0 for positive, 1 for negative)
- f is the fraction (mantissa) of the number, in binary
- e is the exponent, in binary
- bias is a constant value used to adjust the exponent range
As I said, I won’t go very deep into the mathematical part of the problem, but in summary, a floating-point number is not continuous; we are counting in very tiny steps.
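To make those “tiny steps” concrete, modern .NET (Core 3.0 and later) can give us the next representable double after a given value. A small sketch:
// doubles are not continuous: this is the next representable value after 1.0
double next = Math.BitIncrement(1.0);
Console.WriteLine(next - 1.0); // ~2.22e-16, the size of one "step" near 1.0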
For example, if we add 0.1 and 0.2 in a floating-point system, we might expect to get 0.3 as a result. However, due to precision limitations, the actual result might be 0.30000000000000004.
It’s a very small difference, but it can affect sensitive calculations. These precision problems must be considered when working with calculations that require high precision, such as financial or scientific operations.
To deal with this problem, it is recommended to consider the precision of calculations and avoid direct comparison of floating-point numbers using operators like == (equality).
float myVariable = 0.3f;
// do not do this: direct equality comparison of floating-point values
if (myVariable == 0.3f)
{
    // may fail whenever rounding errors creep in
}
// better this way: compare against an acceptable margin of error
const float THRESHOLD = 0.0001f;
if (Math.Abs(myVariable - 0.3f) < THRESHOLD)
{
    // runs when the values are "close enough"
}
Instead, the usual techniques are to compare within an acceptable margin of error (as above) or, when needed, to use a numeric type that offers greater precision.
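For example, C#’s decimal type works in base 10, so the sum from earlier behaves as you would expect. A quick sketch:
// decimal works in base 10, so 0.1 and 0.2 are represented exactly
decimal a = 0.1m;
decimal b = 0.2m;
Console.WriteLine(a + b == 0.3m); // True
Console.WriteLine(a + b);         // 0.3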
