Python Multiprocessing Exercises
While multithreading is great for I/O-bound tasks, it doesn’t offer true parallelism in Python due to the Global Interpreter Lock (GIL). For CPU-bound tasks, where you need real concurrent processing across multiple cores, Python provides the multiprocessing module, which spawns separate processes — each with its own memory space and Python interpreter.
This enables Python programs to fully utilize multiple CPU cores, making it ideal for tasks like data processing, mathematical computation, image manipulation, and other heavy CPU operations.
Consider the following code:
import multiprocessing
import time

def square_numbers():
    for i in range(5):
        print(f"Squaring {i}: {i ** 2}")
        time.sleep(1)

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=square_numbers)
    p2 = multiprocessing.Process(target=square_numbers)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print("Main process finished.")
What does this code demonstrate about multiprocessing in Python?
This code uses the multiprocessing module to create two separate processes that each execute the square_numbers() function:
Each process runs in its own Python interpreter and memory space.
p1.start() and p2.start() run both processes in parallel.
p1.join() and p2.join() ensure the main program waits until both processes finish.
Because each process is isolated, they can run truly in parallel, taking full advantage of multi-core CPUs.
This is a powerful technique for speeding up CPU-intensive tasks. Unlike threads, processes don’t share memory by default, which reduces the risk of race conditions but adds inter-process communication overhead.
Python’s multiprocessing module is essential for building efficient data pipelines, parallel processors, and heavy-computation programs.
Which Python module provides built-in support for multiprocessing?
The multiprocessing module provides support for process-based parallelism. It allows you to create and manage separate Python processes, taking advantage of multiple CPU cores — unlike threading, which is limited by the Global Interpreter Lock (GIL).
What does the Process class in the multiprocessing module represent?
The Process class represents a separate process that runs independently from the main program, with its own memory space. This enables true parallelism and avoids the limitations of the Global Interpreter Lock (GIL).
Which method starts the process and invokes its run() method?
Calling start() on a Process object creates a new process and invokes its run() method in that separate process. Calling run() directly runs the code in the current process instead of starting a new one.
What does the following code output?
from multiprocessing import Process

def greet():
    print("Hello from process")

if __name__ == "__main__":
    p = Process(target=greet)
    p.start()
    p.join()
The greet function is passed as the target of the Process. When p.start() is called, the new process runs greet() and prints "Hello from process". p.join() waits for the process to finish before continuing.
Which of the following correctly describes the relationship between processes created using the multiprocessing module?
Processes created with the multiprocessing module run in separate memory spaces, unlike threads which share memory. This isolation ensures processes don't interfere but requires explicit inter-process communication (IPC) to share data.
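This isolation can be seen directly in a short sketch (the variable and function names here are illustrative, not from the exercise):

```python
from multiprocessing import Process

counter = 0  # lives in each process's own copy of the module

def increment():
    # Modifies the *child's* copy of the module-level variable;
    # the parent's copy is untouched.
    global counter
    counter += 1
    print(f"Child sees counter = {counter}")

if __name__ == "__main__":
    p = Process(target=increment)
    p.start()
    p.join()
    # Still 0 in the parent: processes do not share memory.
    print(f"Parent sees counter = {counter}")
```

The child prints `Child sees counter = 1`, yet the parent still sees 0, which is exactly why explicit IPC is needed to share data.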
What will the following code output?
from multiprocessing import Process

def worker(num):
    print(f"Worker {num}")

if __name__ == "__main__":
    processes = []
    for i in range(3):
        p = Process(target=worker, args=(i,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
Processes run independently and concurrently, so the print statements can appear in any order. join() waits for processes to complete but does not control the order of execution.
What is the purpose of multiprocessing.Queue?
multiprocessing.Queue provides a thread- and process-safe FIFO queue for inter-process communication (IPC). It allows different processes to share data safely without corruption or race conditions.
What happens if you call Process.run() directly instead of Process.start()?
Calling run() directly executes the target function in the current process, behaving like a normal function call. To start a separate process, you must call start(), which internally calls run() in a new process.
Which method is used to terminate a process before it finishes its task?
terminate() forcefully stops the process (sending SIGTERM on Unix). join() only waits for a process to finish. kill(), added in Python 3.7, is similar to terminate() but sends SIGKILL on Unix; stop() is not a method of multiprocessing.Process at all.
What will the following code snippet print?
from multiprocessing import Process, current_process

def task():
    print(f"Process name: {current_process().name}")

if __name__ == "__main__":
    p = Process(target=task, name="MyProcess")
    p.start()
    p.join()
You can name a process when creating it with the name parameter. current_process().name returns this name inside the running process, so the code prints "Process name: MyProcess".
What is a key difference between multiprocessing.Pool and manually managing Process objects?
multiprocessing.Pool offers a high-level interface for running tasks in parallel, managing a fixed number of worker processes efficiently. When using Process directly, you have to manage process creation, start, join, and termination manually.
Which of the following best describes how multiprocessing.Manager helps with sharing data?
multiprocessing.Manager creates a server process that holds Python objects and allows other processes to access them via proxies. This allows sharing of complex data types safely between processes, unlike shared memory arrays which only support simple data types.
Consider the following code snippet:
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        results = pool.map(square, [1, 2, 3, 4, 5])
        print(results)
What will be the output?
Pool.map() applies the function to each item in the iterable using the pool of worker processes, and returns the results in input order. The function square returns the square of its input, so the output is [1, 4, 9, 16, 25].
What is the correct way to share a mutable object like a list across multiple processes using multiprocessing?
Each process has its own memory space, so passing a list as an argument creates a copy, not shared memory. multiprocessing.Manager() provides proxy objects like list() that are shared and synchronized across processes. Locks alone don’t share memory.
What will be the output of the following code?
from multiprocessing import Process, Value

def add_10(num):
    num.value += 10

if __name__ == "__main__":
    shared_num = Value('i', 5)
    p = Process(target=add_10, args=(shared_num,))
    p.start()
    p.join()
    print(shared_num.value)
Value creates a shared object between processes. The add_10 function increments the shared integer value by 10 in the separate process. After joining, the main process prints the updated value, which is 15.
What issue can arise when using the multiprocessing module on Windows if you forget to protect the entry point with if __name__ == "__main__":?
On Windows, the multiprocessing module starts new processes by importing the main module. Without the if __name__ == "__main__": guard, the process creation code runs recursively, causing infinite subprocess spawning and crashing the program.
Consider the following code snippet using a Pool of workers:
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == "__main__":
    with Pool(4) as p:
        result = p.map(f, [1, 2, 3, 4, 5])
        print(result)
What is true about this code?
Using Pool as a context manager automatically starts worker processes, waits for their completion, and closes the pool. It is a clean, recommended way to manage worker pools and ensures results are gathered before exiting.
What will happen if a multiprocessing.Process targets a function that returns a value?
Functions run in separate processes do not return values directly. To get results, you need to use inter-process communication methods such as Queues, Pipes, or shared memory. The return value is discarded after the function finishes in the child process.
How does multiprocessing.Queue differ from queue.Queue when used in multiprocessing programs?
multiprocessing.Queue is designed for inter-process communication, using IPC mechanisms under the hood. queue.Queue is thread-safe but works only within a single process’s threads and does not support cross-process communication.
About This Exercise: Python – Multiprocessing
Welcome to the Python Multiprocessing exercises — a focused collection of challenges designed to help you harness the power of parallel processing in Python. Multiprocessing allows your programs to run multiple processes simultaneously, taking full advantage of multi-core CPUs to improve performance for CPU-bound tasks. Whether you’re new to parallelism or looking to deepen your understanding, this section will guide you through the fundamentals and practical applications of multiprocessing in Python.
In this section, you’ll learn how to create and manage processes using Python’s multiprocessing module. You will practice starting processes, sharing data between them, and coordinating their execution. These exercises will teach you how to use process pools, queues, pipes, and synchronization primitives to build efficient and scalable applications.
Multiprocessing is particularly valuable for CPU-intensive workloads such as data analysis, scientific computing, image processing, and simulations. By running processes in parallel, you can bypass Python’s Global Interpreter Lock (GIL) limitation, which restricts true multi-threading for CPU-bound tasks. These exercises will help you write programs that maximize CPU usage and reduce execution time.
Mastering multiprocessing is essential for Python developers aiming to build high-performance software and handle computationally heavy tasks. These exercises will prepare you for real-world projects and technical interviews that test your knowledge of concurrent programming.
Alongside hands-on coding problems, this section highlights best practices such as managing process lifecycles, avoiding deadlocks, and ensuring data integrity during inter-process communication. Understanding these concepts will enable you to write robust, maintainable multiprocessing applications.
We recommend supplementing your practice with related topics like multithreading, asynchronous programming, and the concurrent.futures module to develop a comprehensive concurrency skill set. Quizzes and multiple-choice questions on multiprocessing are also available to reinforce your learning.
Start practicing the Python Multiprocessing exercises today to unlock the full potential of parallel computing in Python. With regular practice, you’ll be able to build efficient, scalable, and fast Python applications that leverage modern multi-core processors.