Simple Concurrency And Parallelism in Python

An advanced Python guide

INTRODUCTION

In this article, we’ll discuss parallelism and concurrency in Python. We’ll look at multithreading, multiprocessing, asynchronous programming, concurrency, and parallelism, and see how we can use these concepts to speed up computation tasks in Python. So, let’s start.

What Is Concurrency?

Concurrency means multiple things happening, or appearing to happen, at the same time. In Python, the things that occur simultaneously are called by different names (thread, task, process), but at a high level they all refer to a sequence of instructions that run in order.

What Is Parallelism?

So far you’ve looked at concurrency that happens on a single processor. What about all of those cores your CPU has? How can you make use of them? The answer is multiprocessing.

With multiprocessing, Python creates new processes. A process can be thought of as almost a completely different program, though technically a process is usually defined as a set of resources, where the resources include memory, file handles, and things like that. One way to think about it is that each process runs in its own Python interpreter.

Because they’re separate processes, each one in a multiprocessing program can run on a different core. Running on a different core means they really can run at the same time, which is great. There are some complications that arise from doing this, but Python does a reasonably good job of smoothing them over most of the time.

Python offers several ways to handle this. First, you can execute functions in parallel using the multiprocessing module. Second, an alternative to processes is threads. Technically, these are lightweight processes, and they are outside the scope of this article; for further reading, have a look at the Python threading module. Third, you can call external programs using the os.system() function, or the methods provided by the subprocess module, and collect the results afterwards.
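As a minimal sketch of that third option, here is how an external program might be called with the subprocess module and its output collected afterwards. The command shown (the current Python interpreter printing a sum) is just a stand-in for whatever external program you actually need to run:

```python
import subprocess
import sys

# Run an external program and capture what it prints.
# sys.executable (the current Python interpreter) stands in here
# for any external command you might want to call.
completed = subprocess.run(
    [sys.executable, "-c", "print(2 + 2)"],
    capture_output=True,
    text=True,
)

print(completed.returncode)      # 0 means the program exited successfully
print(completed.stdout.strip())  # the program's standard output: "4"
```

subprocess.run() blocks until the external program finishes, which makes it the simplest way to collect results; for truly parallel external calls you would start several subprocess.Popen objects and wait on them afterwards.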

Multiprocessing:

Multiprocessing means distributing your tasks over CPU cores. To find out how many cores your CPU has, type lscpu in a terminal and it will display the number of cores in your machine. For CPU-bound tasks (like doing numerical computations), we can use Python’s multiprocessing module. We simply create a Pool object, which offers a convenient means of parallelizing the execution of a function across multiple input values.

from multiprocessing import Pool

def square(x):
    # calculate the square of the value of x
    return x * x

if __name__ == '__main__':
    # Define the dataset
    dataset = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

    # Output the dataset
    print('Dataset: ' + str(dataset))

    # Run this with a pool of 5 agents having a chunksize of 3 until finished
    agents = 5
    chunksize = 3
    with Pool(processes=agents) as pool:
        result = pool.map(square, dataset, chunksize)

    # Output the result
    print('Result: ' + str(result))

Output of the above code:

$ python3 pool_multiprocessing.py

Dataset: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

Result: [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196]

What’s Synchronous and Asynchronous execution?

In multiprocessing, there are two kinds of execution: synchronous and asynchronous.

A synchronous execution is one where the processes are completed in the same order in which they were started. This is achieved by locking the main program until the respective processes are finished.

Asynchronous execution, on the other hand, doesn’t involve locking. As a result, the order of the results can get mixed up, but the work usually gets done quicker.

There are two main objects in multiprocessing for implementing parallel execution of a function: the Pool class and the Process class.

  1. Pool Class
     • Synchronous execution
       ◦ Pool.map() and Pool.starmap()
       ◦ Pool.apply()
     • Asynchronous execution
       ◦ Pool.map_async() and Pool.starmap_async()
       ◦ Pool.apply_async()
  2. Process Class

So how do we achieve parallelism in any function?

The general method for parallelizing any operation is to take a function that needs to be run multiple times and make those runs execute in parallel on different processors.

To do this, you initialize a Pool with n processes and pass the function you would like to parallelize to one of the Pool’s parallelization methods.

multiprocessing.Pool() provides the apply(), map() and starmap() methods to make any function run in parallel.

Nice! So what’s the difference between apply() and map()?

Both apply() and map() take the function to be parallelized as their main argument. The difference is that apply() takes an args argument that accepts the parameters to be passed to the function, whereas map() takes only a single iterable as an argument.

So, map() is really more suitable for simpler iterable operations, and it gets the job done faster.

Asynchronous Parallel Processing

The asynchronous variants are apply_async(), map_async() and starmap_async(). These functions let you execute processes in parallel asynchronously: a subsequent task can be scheduled as soon as a previous one finishes, without regard to the starting order of the different processes. As a result, there is no guarantee that the results will be in the same order as the input.

Asynchronous Programming

In asynchronous programming, the OS isn’t participating. As far as the OS is concerned, you have one process, and there is a single thread within that process, yet you can still do multiple things at once. So how do we achieve this?

For that we have the asyncio module.

asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web servers, database connection libraries, distributed task queues, etc.

asyncio provides a set of high-level APIs to:

  • Run Python coroutines concurrently and have full control over their execution;
  • Perform network IO and IPC;
  • Control subprocesses;
  • Distribute tasks via queues;
  • Synchronize concurrent code.

There are low-level APIs for library and framework developers to:

  • Create and manage event loops, which provide asynchronous APIs for networking, running subprocesses, handling OS signals, etc;
  • Implement efficient protocols using transports;
  • Bridge callback-based libraries and code with async/await syntax.
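A minimal sketch of the first high-level capability, running coroutines concurrently, might look like this. The coroutine names and delays are illustrative; asyncio.sleep() stands in for real network IO:

```python
import asyncio

async def fetch(name, delay):
    # await hands control back to the event loop while "waiting on IO",
    # letting other coroutines make progress on the same thread.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather() runs both coroutines concurrently and returns their
    # results in the order the coroutines were passed in.
    return await asyncio.gather(
        fetch("task-a", 0.02),
        fetch("task-b", 0.01),
    )

if __name__ == "__main__":
    print(asyncio.run(main()))  # ['task-a done', 'task-b done']
```

Note that everything here runs on a single thread in a single process, which is exactly the situation described above: the OS sees one thread, but the event loop interleaves the waiting tasks.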

Best Practices For Writing Parallel Code in Python:

Writing high-performance parallel code in Python can be challenging, since you have many new factors to keep track of, from the type of your CPU to its L2 cache size, not to mention Python’s machinery for handling communication between parallel programs. In this section, we share our top three recommendations on how to get parallelization right in a Python project.

Know your machine architecture

When writing a parallel program, knowing what’s under the hood on your development and production machines will determine whether your parallel program is only a little faster or significantly faster than a single-core program. Find out how many CPU cores your machine has and whether it can run more processes than there are physical cores (commonly available on Intel processors with Hyper-Threading). Know the sizes of your Python data structures in memory and how those sizes compare with the sizes of your CPU’s L1 and L2 caches. Since memory access is significantly slower than most CPU instructions, taking advantage of the CPU’s caching can dramatically improve your program’s performance.
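You don’t need to leave Python (or run lscpu) to check the core count; the standard library exposes it directly:

```python
import multiprocessing
import os

# Logical cores visible to the OS. On CPUs with Hyper-Threading this
# counts hyper-threads too, so it may be double the physical core count.
print(os.cpu_count())

# multiprocessing reports the same number, and it is a reasonable
# default for the number of worker processes in a Pool.
print(multiprocessing.cpu_count())
```

Cache sizes, by contrast, are not exposed by the standard library; on Linux, lscpu or /sys/devices/system/cpu report them.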

Make use of messages rather than shared state

When processing a large data structure with parallel processes or threads, you’ll eventually need a way for the individual tasks to coordinate with one another. The simplest method of coordination is to have threads or processes write to a shared data structure, for instance multiprocessing.Array. However, using shared memory creates a variety of coordination problems where tasks can overwrite each other’s data and cause runtime problems.

Instead, use primitives like locks and semaphores to coordinate processes and threads, and use queues or pipes to send messages between tasks.

What’s worth parallelizing and what’s not

A parallel program is considerably more complex than a normal one. As software gets increasingly complex, it becomes less maintainable over time, and other engineers can’t dive in as quickly to make changes. In teams where it’s essential to have multiple developers working on the same application, this extra complexity can slow down progress for the whole group or organization.

When writing a new program in Python, think about whether the speed gain from parallelization will be worth the additional complexity for your team. Consider the long-term consequences of writing complex parallel code, as opposed to single-threaded programs, which are slower but more straightforward, more understandable, and easier to debug.

Conclusion:

We use Python’s multiprocessing module to achieve parallelism, whereas concurrency in Python is achieved with the help of the threading and asyncio modules. A program running in parallel can be called concurrent, but the reverse isn’t true.

Reference: Python Official Documentation

Data Analyst with Data Engineering Experience and full time Python Addict .
