
Asynchrony, Concurrency, and Parallelism

When we talk about asynchronous programming, the first thing we need to do is learn a handful of terms, almost jargon: asynchrony, concurrency, and parallelism.

I have already said that the world of asynchrony is complicated; I'm not going to lie to you. In fact, it would easily provide enough material for a course of its own. But that is not the purpose of this article.

But, as a programmer, there are certain terms you should be familiar with. And I can tell you right now that they are closely related words, a little tangled up with each other.

I won’t go into too much detail about the differences between them. But it is important to understand the concepts, and what each of them is.

So let’s start with the most basic, the synchronous process 👇

Synchronous and Blocking Process

A synchronous and blocking process is one that is executed sequentially, waiting for each operation to finish before starting the next one.

That is, if your program had two tasks, “process 1” and “process 2”, the second one would wait for the first to finish, and would start immediately after.


This type of process is the first thing you are going to learn. It is the simplest to understand and program. It’s the “traditional” one. There is no asynchrony here.
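As a minimal sketch in Python (the task names and the one-second waits are mine, purely for illustration), a synchronous and blocking flow looks like this:

```python
import time

def process_1():
    time.sleep(1.0)  # simulate one second of work
    print("process 1 done")

def process_2():
    time.sleep(1.0)
    print("process 2 done")

# Synchronous and blocking: process 2 only starts once process 1 has finished
start = time.time()
process_1()
process_2()
elapsed = time.time() - start
print(f"total: {elapsed:.1f}s")  # roughly the sum of both durations
```

The total time is always the sum of the individual times; nothing overlaps.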


Concurrency

Concurrency refers to a system’s ability to execute multiple tasks that overlap in time (concurrently).

That is, by definition, two tasks are concurrent if one of them starts between the beginning and the end of the other. For example like this:


By saying that two tasks are concurrent, we haven’t said anything about “how” that concurrency is going to be achieved, or by what mechanism. Nothing at all.

We are just saying that both run during overlapping periods of time. Literally, we have only said that they overlap in time.
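A quick way to see two concurrent tasks in Python is with asyncio (the names and delays here are an example I'm making up). Task 2 starts before task 1 finishes, so by the definition above they are concurrent:

```python
import asyncio
import time

async def task(name, delay):
    print(f"{name} started")
    await asyncio.sleep(delay)  # yields control while "waiting"
    print(f"{name} finished")

async def main():
    # task 2 starts between the beginning and the end of task 1
    await asyncio.gather(task("task 1", 1.0), task("task 2", 0.5))

start = time.time()
asyncio.run(main())
elapsed = time.time() - start
print(f"total: {elapsed:.1f}s")  # close to 1s, not 1.5s: they overlapped
```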

Parallelism and Semi-Parallelism

Now we come to parallelism and semi-parallelism. Here we are talking about a way to achieve concurrency, by the simultaneous execution of multiple processes.

The first thing to say is that, in general, a single-core processor can only execute one process at a time.

In the case that our processor has multiple cores, we can perform parallelism. This means that each core can handle the execution of one of the processes.
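A sketch of parallelism in Python, using the standard multiprocessing module (the heavy function and the numbers are invented for the example). With two worker processes, each CPU-bound call can run on a different core:

```python
from multiprocessing import Pool

def heavy(n):
    # CPU-bound work: sum of squares up to n
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        # Each call may be executed by a different core, truly in parallel
        results = pool.map(heavy, [200_000, 200_000])
    print(results)
```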


However, even when our processor does not have multiple cores to execute processes in parallel, we can still emulate it with semi-parallelism.

Basically, the processor switches from task to task, dedicating time to each of them. The operating system takes care of this switching.

In this way, an illusion of both processes running simultaneously is created.


Logically, parallelism reduces overall processing times. If you have two cores (let’s imagine each one is as powerful as the single core would be), the processing time will be smaller.

But even in the case of semi-parallelism, with a single core, processing times can be reduced. This is because processes often have waits.


In that case, process 1 has a wait, and semi-parallelism allows the processor to interleave process 2 (or a part of it) in between. And that’s how you save time.
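This can be sketched with Python threads on a single core (the one-second sleeps stand in for process 1's wait; the numbers are illustrative). While one thread waits, the OS gives the core to the other:

```python
import threading
import time

def process_1():
    time.sleep(1.0)  # a wait (I/O, a response...): the core is free meanwhile

def process_2():
    time.sleep(1.0)

start = time.time()
t1 = threading.Thread(target=process_1)
t2 = threading.Thread(target=process_2)
t1.start()
t2.start()  # the OS interleaves both threads
t1.join()
t2.join()
elapsed = time.time() - start
print(f"total: {elapsed:.1f}s")  # about 1s instead of 2s: process 2 ran during process 1's wait
```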


Asynchrony

Finally, we arrive at asynchrony. An asynchronous process is a process that is not synchronous (I’m just telling it like it is). And as we can see, it is a very broad term.

Concurrency is related to asynchrony, to parallelism, and to semi-parallelism. Everything is more or less related, and in the end it all comes under asynchrony.

But, in general, a process is usually called asynchronous when it involves a long wait or a blocking operation. For example, waiting for something to happen, reading a file, or receiving a communication.

To prevent these processes from blocking the main program flow, they are launched through a concurrency mechanism that makes them non-blocking. It is then usually said that they have been launched asynchronously.
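As a sketch (the blocking_read function is hypothetical, standing in for a file read or a network call), with Python's asyncio we can launch the blocking work on a worker thread and keep the main flow running:

```python
import asyncio
import time

def blocking_read():
    # hypothetical long blocking operation (file read, network call...)
    time.sleep(0.5)
    return "data"

async def main():
    # launch the blocking call "asynchronously", without blocking this flow
    pending = asyncio.create_task(asyncio.to_thread(blocking_read))
    print("main flow keeps running")  # executes before the read finishes
    return await pending  # collect the result only when we need it

result = asyncio.run(main())
print(result)
```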

Formal Definition

Now that we have seen the different terms, let’s go for a somewhat more rigorous definition of each of them.


Asynchrony

Execution model in which operations do not block the program flow and can be executed non-sequentially.


Concurrency

The ability of a system to manage multiple tasks that overlap (concur) in time.


Parallelism

Execution of multiple processes on different processor cores, allowing truly simultaneous processing.


Semi-Parallelism

“Simulated” parallelism within a single core, achieved by activating and pausing tasks and dedicating processor time to them in an alternating manner.

As we have seen, these terms are more or less simple, but very mixed up with each other. The important thing is not so much the formal definition as knowing what each one is and understanding how they work.