Hasitha A
4 min read · Sep 22, 2019


Parallelism vs Async / Non-Blocking (Introduction)

There is a reasonable amount of debate about when to use async (non-blocking) programming versus parallel programming (threading) to achieve optimal performance and the expected outcome. This article discusses when each approach is appropriate, with a few practical examples.

This article uses C#/.NET for its code samples.

When to use threads vs async?

In the parallel programming approach, creating a new thread means the application has at least two threads: the new worker thread and the main thread.

Async operations, by contrast, can run on the same thread the current program is running on.

So async does not necessarily create a new thread. The compiler rewrites an async method into a state machine, and the work is resumed via context switching when the awaited operation completes.

There is no single answer to the question. Generally, CPU-bound operations (calculations, complex data-processing algorithms, etc.) are better suited to running in parallel with the main thread, while operations that take a long time to complete but are not heavily CPU-bound are better run asynchronously. Examples include a REST API call, which is network-bound, or a time-consuming file read.
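As a rough illustration of this split, the snippet below contrasts the two styles: an awaited HTTP call for the network-bound case and Task.Run for the CPU-bound case. The URL and the ComputeChecksumAsync method are placeholders for illustration, not part of the original article.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class Examples
{
    static readonly HttpClient client = new HttpClient();

    // Network-bound: await frees the thread while the response is pending.
    static async Task<string> CallRestApiAsync()
    {
        // Placeholder URL, for illustration only.
        return await client.GetStringAsync("https://example.com/api/data");
    }

    // CPU-bound: push the heavy loop onto a thread-pool thread.
    static Task<long> ComputeChecksumAsync(byte[] data)
    {
        return Task.Run(() =>
        {
            long sum = 0;
            foreach (byte b in data)
                sum += b;
            return sum;
        });
    }
}
```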

Threads vs cores in CPU

Almost every computer today has a CPU with at least two cores, but not many programs take full advantage of all the cores in a processor.

Strategically spreading an application's computing work across all available cores to gain optimal performance is what we call parallelism.

The main ideas are:

  1. Distribute complex work across parallel threads, so that the main thread is not blocked
  2. Design a proper threading strategy so the threads are shared across multiple cores

Threads in parallelism

Use the namespace below to get started:

using System.Threading;

In C#, the lines below ask the application to start a function on a separate thread. Assume SyncDataToCloud is a private static method in the class.

Thread newWorkerThread = new Thread(() => SyncDataToCloud());
newWorkerThread.Start();

Directly creating a new thread like this will NOT use the thread pool.

Running a function on a new thread does not guarantee it will be scheduled on a less-used core. The new thread can still run on a core that is already heavily loaded, which can degrade performance instead of improving it.

Explicitly setting a thread's CPU affinity is generally not recommended, as the extra work rarely produces a reliable outcome.

Therefore, programmers should use a thread-pooling strategy and let the scheduler use the available cores appropriately, which generally ensures better performance. Microsoft introduced the Task Parallel Library (TPL) for this purpose, to give applications the advantage of thread pooling.

Multithreading vs Performance

Multithreaded environments often lead to performance degradation rather than improvement, particularly when the threads share resources with each other.

The more sharing, the more waiting.

Therefore it is recommended to keep parallel threads as independent of each other as possible.
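As a sketch of this idea, the example below (not from the original article) gives each parallel iteration its own slot in a results array, so the threads never contend for shared state:

```csharp
using System.Threading.Tasks;

class IndependentWork
{
    static int[] SquareAll(int[] input)
    {
        int[] results = new int[input.Length];

        // Each iteration writes only to its own index,
        // so no locking or sharing between threads is needed.
        Parallel.For(0, input.Length, i =>
        {
            results[i] = input[i] * input[i];
        });

        return results;
    }
}
```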

C# Task Parallel Library (TPL)

TPL provides a supporting mechanism that lets engineers work with the thread pool effectively, unlike the scenario where the application creates a new thread directly. Thread pooling lets the scheduler handle thread placement and run work on less-used cores, which is an optimal strategy for getting the best possible performance out of the CPU.

Check the example below:

Use the namespace below to get started with TPL:

using System.Threading.Tasks;

The code below uses the TPL wrapper to run the SyncDataToCloud method on one or more thread-pool threads, over single or multiple cores, effectively and efficiently.

Task.Run(() => SyncDataToCloud());
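Task.Run also returns a Task that can be awaited when the caller needs to know the work has finished. A minimal sketch, assuming SyncDataToCloud is the same method as above and the caller is an async method:

```csharp
// Start the work and wait for completion:
Task syncTask = Task.Run(() => SyncDataToCloud());
await syncTask;

// Or start several pieces of work and wait for all of them:
await Task.WhenAll(
    Task.Run(() => SyncDataToCloud()),
    Task.Run(() => SyncDataToCloud()));
```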

Engineers have some control over how TPL manages threads by changing the thread-pool settings, as below. However, this is more suitable for applications that run in their own controlled environments; otherwise, limiting threads to a fixed number is not a good idea if the application needs room to scale.

The example below also assumes the thread pool is not being used for other purposes in your application at the time:

ThreadPool.SetMaxThreads(4, 4);
ThreadPool.SetMinThreads(1, 1);

Thread Safety

In some scenarios, functions that run on multiple threads have to be managed carefully to avoid unexpected failures or inaccurate results, especially when multiple threads access the same resources, such as application variables.

Check the two application functions below, which are run on two threads:

int bankBalance;

void CreditAccount(int amount)
{
    bankBalance = bankBalance + amount;
}

void DebitAccount(int amount)
{
    bankBalance = bankBalance - amount;
}

As in the example above, the bankBalance variable is used in both the CreditAccount and DebitAccount methods.

Assume they are called by two or more threads at the same time. The total account balance may end up inaccurate, because one method is trying to increase it while the other is trying to decrease it:

Task.Run(() => CreditAccount(1000));
// Below statement can be triggered from a different event
Task.Run(() => DebitAccount(1000));

We need a reliable way to control access to the variable by making one thread wait until the other finishes. This is called a concurrency problem.

How to use a lock

The easiest way to ensure the reliability of the debit and credit functions is to introduce a lock block. The lock keeps other threads waiting until the current code block has finished executing.

static readonly object _object = new object();

void CreditAccount(int amount)
{
    lock (_object)
    {
        bankBalance = bankBalance + amount;
    }
}

void DebitAccount(int amount)
{
    lock (_object)
    {
        bankBalance = bankBalance - amount;
    }
}
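For a simple numeric update like this, the Interlocked class offers a lighter-weight alternative to a full lock. A minimal sketch (not from the original article), assuming bankBalance is the same int field as above:

```csharp
using System.Threading;

// Interlocked.Add performs the read-modify-write atomically,
// so no explicit lock object is required.
void CreditAccount(int amount)
{
    Interlocked.Add(ref bankBalance, amount);
}

void DebitAccount(int amount)
{
    Interlocked.Add(ref bankBalance, -amount);
}
```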