Numerical Types

Numeric variables allow us to store and manipulate numerical values in our programs.

In almost all our programs, we will need to handle and manipulate numbers. That makes sense: a program is, at heart, nothing more than “a big calculator” that does things, and many of those things will involve numbers.

A number can represent any quantifiable value or measure: for example, a person’s age, the number of occupants in a classroom, the temperature of a room, or the coordinates of a point on a map.

How numbers are represented varies greatly from one programming language to another, especially between statically typed and dynamically typed languages.

If you remember your mathematics, there are different kinds of numbers: natural, integer, real. In programming, broadly speaking, we are going to distinguish between two:

  • Integers (without decimals)
  • Numbers with decimals

Integers

Integers represent numerical values without decimals. They can be positive, negative, or zero.

Integers are the simplest to understand and implement on a computer. They are the numbers that arise, for example, when counting cows in a field.

Programming languages offer different types of integer variables, which vary in their size and range of values.

In general, the differences are:

  • How large a number they can store
  • Whether or not they allow negative numbers

Internally, a computer stores numbers in binary format. We can count 1, 2, 3… until we exhaust the computer’s memory or, more precisely, until we exceed the capacity of the variable that stores the number.

When we try to store a number larger than the maximum a variable can hold, an “overflow” occurs.
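As an illustration, here is a minimal Python sketch that uses the ctypes module to simulate a fixed-size 8-bit unsigned integer (Python’s own int does not overflow, so ctypes is just a convenient way to observe the behavior).

import ctypes

# An 8-bit unsigned integer can hold values from 0 to 255
counter = ctypes.c_uint8(255)

# Adding 1 exceeds its capacity: the value wraps around to 0 (overflow)
counter.value += 1
print(counter.value)  # 0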

Decimal Numbers

The other large family of numbers in programming is decimal numbers.

It must be said that representing decimal numbers on a computer is not as simple as it may seem at first.

To represent them, two mechanisms are commonly used:

  • Floating point
  • Fixed point

Floating-Point is the most common representation. It uses a mantissa and an exponent to store numbers with decimals. It provides a wide range of values and an efficient representation, but it can have rounding errors.
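As a quick illustration (a Python sketch, since Python’s float is a standard floating point type), math.frexp decomposes a number into exactly these two pieces:

import math

# frexp splits a float so that number == mantissa * 2 ** exponent
mantissa, exponent = math.frexp(3.14)
print(mantissa, exponent)        # 0.785 2
print(mantissa * 2 ** exponent)  # 3.14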

Fixed-point numbers are stored as integers, using a convention to determine where the decimal point goes. This offers an exact representation, but the number of decimal digits is limited.
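A typical hand-rolled example of fixed point is storing money as an integer number of cents, with an implicit convention of two decimal digits (a minimal Python sketch):

# Prices stored as integers with 2 implied decimal digits (cents)
price_cents = 1999             # represents 19.99
total_cents = price_cents * 3  # exact integer arithmetic
print(total_cents)             # 5997
print(total_cents / 100)       # 59.97, converted only for display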

There are other, more specific representations, such as fractions or scaled integers. They are less common, but can be useful in some particular cases.

Representation of Numbers in Different Programming Languages

As we said, the representation of numbers varies between languages, especially between statically typed and dynamically typed ones.

For example, languages like C++, C#, or Java define different types of numbers.

// positive integers
byte byteSmallUnsigned = 255;
ushort shortUnsigned = 5;
uint integerUnsigned = 10;
ulong longUnsigned = 1000000000;

// integers with positive and negative numbers
short shortInt = 5;
int integer = 10;
long longInt = 1000000000;

// numbers with decimals
float floating = 3.14f;
double doubleNumber = 3.14159265359;
decimal longDecimal = 3.1415926535897932384626433832m;

The differences between them are:

  • Whether they allow positive and negative numbers
  • Whether or not they allow numbers with decimals

The exact maximum sizes vary between languages and, ultimately, also depend on the operating system and the compiler being used.

Meanwhile, JavaScript only has the Number type, so the previous example would look like this.

// integers of various sizes, all just Number
let integer = 10;
let long = 1000000000;
let short = 5;
let smallByte = 255;

// numbers with decimals, also Number
let floating = 3.14;
let double = 3.14159265359;
let longDecimal = 3.1415926535897932384626433832;

Unlike other languages, JavaScript does not distinguish between integers and numbers with decimals: every number is treated as a 64-bit floating point value, following the IEEE 754 standard.

Finally, looking at the Python example, it is not necessary to specify the type of the variable we are creating, so the example would look like this.

# integers of various sizes, all of type int
integer = 10
long = 1000000000
short = 5
smallByte = 255

# numbers with decimals, all of type float
floating = 3.14
double = 3.14159265359
longDecimal = 3.1415926535897932384626433832

Internally, Python offers different types of numerical data, such as int, float, and complex.

The size of numerical variables in Python may vary depending on the specific implementation and the architecture of the machine on which it is running.

However, in the most common implementations, the int type can grow to store integers of arbitrary size. For its part, the float type is implemented using the IEEE 754 standard for 64-bit floating point numbers.
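We can check both behaviors directly from the interpreter (the variable names here are just illustrative):

import sys

big = 2 ** 100             # int grows as needed: no overflow
print(big)                 # 1267650600228229401496703205376

print(sys.float_info.max)  # largest 64-bit float: 1.7976931348623157e+308
print(sys.float_info.dig)  # about 15 reliable decimal digits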

Of course, there are different peculiarities and more specific cases in different programming languages. But as we can see, they have more in common than differences.

Precision Problems in Floating Point Numbers (Advanced)

We have mentioned that the representation of floating point numbers has precision limitations due to the way they work.

Most of the time it is not a problem, but it is important to understand it properly, because “strange” or unintuitive situations can sometimes occur when working with them.

For example, consider this snippet in Python.

result = 0.1 + 0.2
print(result)  # Result: 0.30000000000000004

Why does such a strange result come out? Why doesn’t it give 0.3, as it should? Well, this is the problem with the floating point representation of numbers.

The problem is that computers have a finite number of bits to represent numbers, and in binary both 0.1 and 0.2 are recurring (infinite) fractions, so they cannot be stored with perfect precision.
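In Python we can see the exact value that gets stored for 0.1 by converting it to a Decimal, which prints its full binary expansion in decimal form:

from decimal import Decimal

# Decimal(float) exposes the exact value the float really holds
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625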

It is important to note that this precision problem is not specific to a particular programming language. The same would happen in C#, JavaScript, or any other language. It is a problem inherent in the number representation.

I am not going to go into great detail about the internal implementation; if you want, you can look up more information about it. But, in a very summarized way, a floating point number is represented by the following expression:

value = (-1)^s × (1 + f) × 2^(e − bias)

Where:

  • s, is the sign bit (0 for positive, 1 for negative).
  • f, is the fraction (mantissa) of the number in binary.
  • e, is the exponent in binary.
  • bias, is a constant value used to adjust the range of the exponent.

As I said, I am not going to go very deep into the mathematical side of the problem but, in short: floating point numbers are not continuous; rather, we are counting in very small steps.

For example, if we add 0.1 and 0.2 in a floating point system, we would expect to obtain 0.3 as the result. However, due to this precision limitation, the actual result is 0.30000000000000004.
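You can even measure the size of those steps. As a small sketch, in Python 3.9+ math.ulp returns the gap between a float and the next representable one, and math.nextafter gives that next float explicitly:

import math

print(math.ulp(1.0))             # step size around 1.0: 2.220446049250313e-16
print(math.nextafter(0.3, 1.0))  # next float after 0.3: 0.30000000000000004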

It is a very small difference, but it can affect sensitive calculations. These precision problems should be considered when working with calculations that require high precision, such as financial or scientific operations.

To deal with this problem, it is recommended to take the precision of your calculations into account and to avoid comparing floating point numbers directly with operators such as == (equality).

float my_variable = 0.3f;

// don't do this: == compares exact binary representations
if (my_variable == 0.3)
{
}

// better this way: compare against an acceptable margin of error
const float THRESHOLD = 0.0001f;
if (Math.Abs(my_variable - 0.3) < THRESHOLD)
{
}

Instead, techniques such as comparing within an acceptable margin of error, or using a numeric type with greater precision, are employed as needed.
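In Python, for example, math.isclose implements the margin-of-error comparison, and the decimal module provides a type with exact decimal arithmetic (a minimal sketch of both):

import math
from decimal import Decimal

# Comparison with a tolerance instead of ==
print(math.isclose(0.1 + 0.2, 0.3))  # True

# Exact decimal arithmetic, at the cost of speed
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True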