Students will walk away with a high-level understanding of both parallel problems and how to reason about parallel computing frameworks, as well as hands-on experience using a variety of frameworks easily accessible from Python.
These materials are adapted and enhanced from original materials developed by Rocklin, Ragan-Kelley, and Zaitlen for SciPy 2016.
For the first half, we will cover basic ideas and common patterns encountered when analyzing large data sets in parallel, working through a sequence of examples that require increasingly sophisticated tools. Starting from the most basic parallel API, map, we move on to general asynchronous programming with Futures, high-level APIs for large data sets such as Spark RDDs and Dask collections, and streaming patterns. For the second half, we focus on the traits of particular parallel frameworks, including strategies for picking the right tool for the job. We will finish with some common challenges in parallel analysis, such as debugging parallel code when it goes wrong, as well as deployment and setup strategies.
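As a small taste of the first half, here is a minimal sketch of the map and Futures patterns using only the standard library's concurrent.futures module; the `square` function and its inputs are illustrative placeholders, not tutorial material.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def square(x):
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # map: the simplest parallel API -- apply a function to every input
        results = list(pool.map(square, range(8)))
        print(results)

        # Futures: submit tasks individually and react as each one finishes,
        # which allows more flexible, asynchronous workflows than map
        futures = [pool.submit(square, x) for x in range(8)]
        for fut in as_completed(futures):
            print(fut.result())
```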
Part one: We dive into common problems with a variety of tools.
Part two: We analyze common traits of parallel computing systems.
We intend to cover the following tools: concurrent.futures, multiprocessing/threading, joblib, IPython Parallel, Dask, and Spark.
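To give a flavor of the high-level collection APIs on this list, here is a minimal sketch using a Dask bag; the toy computation (squaring numbers and summing the even results) is purely illustrative.

```python
import dask.bag as db

# Partition a sequence so operations on it can run in parallel.
b = db.from_sequence(range(1000), npartitions=4)

# Operations build a lazy task graph; nothing executes until .compute().
total = b.map(lambda x: x * x).filter(lambda x: x % 2 == 0).sum()
print(total.compute())
```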