In operating systems, concurrency is defined as the ability of a system to run two or more programs in overlapping time periods.
On a single-core processor, at any given time there is only one process actually in execution; the processor switches rapidly between processes. Concurrency is therefore only an approximation of real parallel execution. This kind of situation is found in systems with a single-core processor.
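The idea of one processor interleaving several tasks can be sketched in a few lines. The following is a minimal, illustrative round-robin scheduler: the task names and step counts are made up for the example, and Python generators stand in for processes that yield control back to the scheduler after each time slice.

```python
# Concurrency without parallelism: a single "core" (the main thread)
# interleaves two tasks by giving each one time slice in turn.
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"    # hand control back to the scheduler

def run_round_robin(tasks):
    """Run each task for one step in turn until all of them finish."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))
            queue.append(current)   # not finished yet: re-queue it
        except StopIteration:
            pass                    # task finished: drop it
    return trace

trace = run_round_robin([task("A", 2), task("B", 2)])
print(trace)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Only one step ever executes at a time, yet both tasks make progress in overlapping time periods — which is exactly the distinction between concurrency and parallelism.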
In this Concurrency tutorial, you will learn:
- What is Concurrency (Single-Core)?
- What is Parallel Execution (Multi-Core)?
- What is a Thread?
- What is Multithreading?
- How Does Multithreading Work?
- What is a CPU Core?
- What is the Main Issue with a Single Core?
- The Solution Provided by Multi-Core
- Benefits of a Multi-Core Processor
- Difference between Core and Threads
- What is Hyper-Threading?
In parallel execution, the tasks to be performed by a process are broken down into sub-parts, and multiple CPUs (or multiple cores) process each sub-task at precisely the same time.
With multiple cores, several processes can be in execution at the same time. In reality, it is the sub-tasks of a process that execute in parallel, but for easier understanding you can visualize them as processes.
Therefore, parallelism is the real way in which multiple tasks can be processed at the same time. This type of situation can be found in systems having multicore processors, which includes almost all modern, commercial processors.
A thread is a unit of execution in concurrent programming. Multithreading is a technique that allows a CPU to execute many tasks of one process at the same time. These threads execute individually while sharing the process's resources.
Multithreading refers to running multiple threads of execution within an operating system; those threads can belong to several system processes.
For example, most modern CPUs support multithreading. A simple app on your smartphone can give you a live demo of the same.
When you open an app that needs to fetch data from the internet, the app's content area is replaced by a spinner, which rotates until the data is fetched and displayed.
In the background, there are two threads:
- One fetching the data from a network, and
- One rendering the GUI that displays the spinner
These two threads take turns executing, giving the illusion of concurrent execution.
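The spinner-plus-fetch pattern above can be sketched with Python's `threading` module. The network request is simulated with a short sleep and the spinner is printed to the terminal; both are illustrative stand-ins for real network and GUI code.

```python
# Two threads: one "fetches" data in the background while the main
# thread keeps the interface responsive by animating a spinner.
import itertools
import threading
import time

result = {}

def fetch_data():
    time.sleep(0.3)               # stand-in for a slow network request
    result["data"] = "payload"

fetcher = threading.Thread(target=fetch_data)
fetcher.start()

spinner = itertools.cycle("|/-\\")
while fetcher.is_alive():         # animate until the fetch completes
    print(f"\rloading {next(spinner)}", end="", flush=True)
    time.sleep(0.05)

fetcher.join()
print(f"\rdone: {result['data']}")
```

Even on a single core this feels simultaneous: the OS interleaves the two threads quickly enough that the spinner stays smooth while the data arrives.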
In everyday language, the core of something is its central, essential part. In a computer system, a CPU core is the processing unit inside the CPU that reads and executes instructions.
Based on core count, there are basically two types of processors:
- Single-Core Processor
- Multi-Core Processor
There are mainly two issues with a single core:
- To execute tasks faster, you need to increase the clock speed.
- Increasing the clock speed increases power consumption and heat dissipation to extremely high levels, which makes the processor inefficient.
- Creating two or more cores on the same die increases processing power while keeping the clock speed at an efficient level.
- A processor with two cores running at an efficient clock speed can process instructions at a speed similar to a single-core processor running at twice that clock speed, yet the multi-core processor consumes less energy.
Here are some advantages of the multicore processor:
- More transistors per chip
- Shorter connections
- Lower capacitance
- Smaller circuits can run at higher speeds
| Parameter | Core | Thread |
| --- | --- | --- |
| Definition | CPU cores are the actual hardware components. | Threads are virtual components that manage the tasks. |
| Process | The CPU is fed tasks from a thread; it accesses a second thread only when the information sent by the first thread is not reliable. | There are many different variations of how a CPU can interact with multiple threads. |
| Implementation | Performed through the use of multiple CPUs. | Achieved through interleaving operations. |
| Benefit | Improves throughput and computational speed-up. | Increases the amount of work accomplished at a time. |
| Makes use of | Uses multiple CPUs to operate numerous processes. | Uses context switching. |
| Processing units required | Requires multiple processing units. | Requires only a single processing unit. |
| Example | Running a web crawler on a cluster. | Running multiple applications at the same time. |
Hyper-threading was Intel's first effort to bring parallel computation to end user's PCs. It was first used on desktop CPUs with the Pentium 4 in 2002.
Pentium 4 processors of that time featured only a single CPU core, so they could execute only one task at a time.
A single CPU core with hyper-threading appears to the operating system as two logical CPUs. The physical core is still one, and the hardware has only a single set of execution resources per core, but the OS schedules work as if there were two CPUs for each core.
In other words, the operating system assumes the processor has more cores than it actually does, seeing two logical CPUs for every physical core.
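You can observe this logical-CPU view from software. The sketch below uses `os.cpu_count()`, which reports the number of logical CPUs the operating system sees; on a hyper-threaded machine this is typically twice the number of physical cores, though the exact ratio depends on the hardware.

```python
# Query how many logical CPUs the operating system exposes.
# On a hyper-threaded machine this usually exceeds the physical
# core count (often by a factor of two).
import os

logical = os.cpu_count()
print(f"logical CPUs seen by the OS: {logical}")
```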
- A thread is a unit of execution in concurrent programming.
- Multithreading refers to running multiple threads of execution within an operating system.
- Today many modern CPUs support multithreading
- Hyper-threading was Intel's first effort to bring parallel computation to end user's PCs.
- A CPU core is the processing unit inside the CPU that reads and executes instructions.
- In operating systems, concurrency is defined as the ability of a system to run two or more programs in overlapping time periods.
- In parallel execution, the tasks to be performed by a process are broken down into sub-parts.
- The main issue with a single-core processor is that executing tasks faster requires increasing the clock speed.
- Multi-core resolves this issue by creating two or more cores on the same die, which increases processing power while keeping the clock speed at an efficient level.
- The biggest benefit of a multi-core system is that it allows more transistors per chip.
- CPU cores are the actual hardware components, whereas threads are virtual components that manage the tasks.