Friday 10:45 AM–12:15 PM in Room #370B/C (3rd Floor)

Parallel Python - Analyzing Large Data Sets

Aron Ahmadia, Matthew Rocklin

Audience level:
Intermediate

Description

Students will walk away with a high-level understanding of parallel problems and of how to reason about parallel computing frameworks, along with hands-on experience using a variety of frameworks easily accessible from Python.

Abstract

These materials are adapted and enhanced from the originals developed by Rocklin, Ragan-Kelley, and Zaitlen for SciPy 2016.

In the first half, we cover basic ideas and common patterns encountered when analyzing large data sets in parallel, working through a sequence of examples that require increasingly complex tools. Starting from the most basic parallel API, map, we move on to general asynchronous programming with Futures, high-level APIs for large data sets such as Spark RDDs and Dask collections, and streaming patterns. In the second half, we focus on the traits of particular parallel frameworks, including strategies for picking the right tool for your job. We finish with common challenges in parallel analysis, such as debugging parallel code when it goes wrong, along with deployment and setup strategies.
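
As a taste of where the first half begins, here is a minimal sketch of the map pattern using concurrent.futures from the standard library; the parse function and file names are placeholders, not part of the tutorial materials.

    from concurrent.futures import ProcessPoolExecutor

    def parse(filename):
        # Hypothetical per-file work: read one file and summarize it.
        with open(filename) as f:
            return len(f.readlines())

    # Guard so worker processes can import this module safely.
    if __name__ == '__main__':
        filenames = ['data-1.csv', 'data-2.csv', 'data-3.csv']

        # Sequential baseline: the built-in map.
        sequential = list(map(parse, filenames))

        # The same call shape, run across worker processes.
        with ProcessPoolExecutor() as executor:
            parallel = list(executor.map(parse, filenames))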

Part one: We dive into common problems with a variety of tools; brief sketches of these patterns follow the list.

  1. Parallel Map
  2. Asynchronous Futures
  3. High Level Datasets
  4. Streaming
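
To make the Futures and high-level dataset items above concrete, here is a minimal sketch using concurrent.futures and dask.bag; the parse function and file names are placeholders rather than tutorial material.

    from concurrent.futures import ProcessPoolExecutor, as_completed
    import dask.bag as db

    def parse(filename):
        # Hypothetical per-file work, as in the map sketch above.
        with open(filename) as f:
            return len(f.readlines())

    if __name__ == '__main__':
        filenames = ['data-1.csv', 'data-2.csv', 'data-3.csv']

        # Asynchronous Futures: submit tasks eagerly, handle results as they finish.
        with ProcessPoolExecutor() as executor:
            futures = [executor.submit(parse, fn) for fn in filenames]
            for future in as_completed(futures):
                print(future.result())

        # High-level datasets: build a lazy collection describing the whole
        # computation, then let the scheduler execute it in parallel.
        records = db.from_sequence(filenames).map(parse)
        total = records.sum().compute()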

Part two: We analyze common traits of parallel computing systems.

  1. Processes and threads. The GIL, inter-worker communication, and contention (illustrated briefly after this list)
  2. Latency and overhead. Batching, profiling.
  3. Communication mechanisms. Sockets, MPI, Disk, IPC.
  4. Stuff that gets in the way. Serialization, Native v. JVM, Setup, Resource Managers, Sample Configurations
  5. Debugging async and parallel code / Historical perspective
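
As one concrete illustration of the first point, a CPU-bound pure-Python function typically gains little from threads because the GIL serializes the work, while processes run it in parallel at the cost of startup and serialization overhead. The rough timing sketch below assumes four workers and a made-up workload; it is illustrative, not tutorial material.

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def cpu_bound(n):
        # A pure-Python loop holds the GIL for its whole duration.
        return sum(i * i for i in range(n))

    def timed(executor_cls, label):
        start = time.time()
        with executor_cls(max_workers=4) as executor:
            list(executor.map(cpu_bound, [2000000] * 4))
        print(label, time.time() - start)

    if __name__ == '__main__':
        timed(ThreadPoolExecutor, 'threads:')    # little speedup: the GIL serializes the work
        timed(ProcessPoolExecutor, 'processes:') # parallel, but pays startup and serialization costs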

We intend to cover the following tools: concurrent.futures, multiprocessing/threading, joblib, IPython parallel, Dask, and Spark.
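
To give a flavor of how the same pattern looks across these tools, here is the earlier map example expressed in joblib's idiom; the parse function and file names remain placeholders.

    from joblib import Parallel, delayed

    def parse(filename):
        # Hypothetical per-file work, as in the sketches above.
        with open(filename) as f:
            return len(f.readlines())

    if __name__ == '__main__':
        filenames = ['data-1.csv', 'data-2.csv', 'data-3.csv']

        # joblib expresses the same parallel map as a generator of delayed calls.
        results = Parallel(n_jobs=2)(delayed(parse)(fn) for fn in filenames)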