Utilities for parallel computation.
Return the number of processes to use in parallel.
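The selection logic is not reproduced here; a minimal sketch of how such a helper might behave, assuming it simply clamps a requested count to the CPUs available (the `requested` parameter is illustrative, not the real signature):

```python
import os

def get_num_processes(requested=None):
    # Hypothetical sketch: fall back to all available CPUs, and never
    # exceed them when a specific count is requested.
    available = os.cpu_count() or 1
    if requested is None or requested <= 0:
        return available
    return min(requested, available)
```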
A picklable wrapper suitable for passing exception tracebacks through instances of multiprocessing.Queue.
Parameters: exception (Exception) – The exception to wrap.
Re-raise the exception.
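A sketch of how such a wrapper might work: traceback objects are not picklable, so the formatted traceback text is captured at wrap time instead (the class and attribute names here are illustrative, not the real API):

```python
import pickle
import traceback

class PicklableException:
    """Carry an exception across process boundaries with its traceback text."""

    def __init__(self, exception):
        self.exception = exception
        # Traceback objects cannot be pickled; keep the formatted text instead.
        self.traceback_text = traceback.format_exc()

    def reraise(self):
        # Re-raise the original exception in the receiving process.
        raise self.exception
```

The wrapper survives a pickle round-trip, as it would when passed through a queue between processes.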
An engine for doing heavy computations over an iterable.
This is similar to multiprocessing.Pool, but allows computations to short-circuit, and supports both parallel and sequential computations.
- iterable (Iterable) – A collection of objects to perform a computation over.
- *context – Any additional data necessary to complete the computation.
Any subclass of MapReduce must implement three methods:
- ``empty_result``,
- ``compute`` (map), and
- ``process_result`` (reduce).
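The exact signatures are not reproduced here, but a minimal sequential analogue of this three-method interface might look like the following (the class names and the run method are illustrative stand-ins, not the real API):

```python
class SequentialEngine:
    """Illustrative stand-in for a map-reduce engine over an iterable."""

    def __init__(self, iterable, *context):
        self.iterable = iterable
        self.context = context
        self.done = False  # set to True inside process_result to short-circuit

    def empty_result(self, *context):
        raise NotImplementedError

    def compute(self, obj, *context):
        raise NotImplementedError  # map step

    def process_result(self, new_result, old_result):
        raise NotImplementedError  # reduce step

    def run(self):
        result = self.empty_result(*self.context)
        for obj in self.iterable:
            result = self.process_result(self.compute(obj, *self.context), result)
            if self.done:
                break
        return result


class SumOfSquares(SequentialEngine):
    def empty_result(self):
        return 0

    def compute(self, obj):
        return obj * obj

    def process_result(self, new_result, old_result):
        return old_result + new_result
```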
The engine includes a built-in tqdm progress bar; this can be disabled in the PyPhi configuration.
Parallel operations start a daemon thread which handles log messages sent from worker processes.
Subprocesses spawned by MapReduce cannot spawn more subprocesses; be aware of this when composing nested computations. In practice this is not an issue, because it is typically most efficient to parallelize only the top-level computation.
Return the default result with which to begin the computation.
Map over a single object from the iterable.
Every time a new result is generated by compute, this method is called with that result and the previous (accumulated) result. This method compares or collates the two values, returning the new result. Setting the engine's done flag to True in this method will abort the remainder of the computation, returning this final result.
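Short-circuiting can be sketched functionally: here the reduce step returns a done flag alongside the result (a simplified stand-in, not the real signature):

```python
def map_reduce_shortcircuit(iterable, compute, process_result, empty_result=None):
    """process_result returns (result, done); done=True aborts the computation."""
    result = empty_result
    seen = 0
    for obj in iterable:
        seen += 1
        result, done = process_result(compute(obj), result)
        if done:
            break
    return result, seen

# Find the first even square, stopping as soon as one appears.
first_even_square, consumed = map_reduce_shortcircuit(
    [3, 5, 4, 7, 8],
    compute=lambda x: x * x,
    process_result=lambda new, old: (new, new % 2 == 0),
)
```

Only three of the five inputs are consumed: the computation aborts at the first even square.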
Initialize and return a progress bar.
worker(compute, task_queue, result_queue, log_queue, complete, *context)
A worker process, run by the MapReduce engine.
Initialize all queues and start the worker processes and the log thread.
Load the input queue to capacity.
Because queue.put blocks when the queue is full, overfilling it would cause a deadlock; further tasks are therefore enqueued as results are returned.
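The pattern can be illustrated with a bounded queue: prime it to capacity, then enqueue one new task for each result drained (a single-threaded sketch; the real engine does this across processes):

```python
from queue import Queue

tasks = iter(range(10))
task_queue = Queue(maxsize=4)

def enqueue_next():
    # Enqueue the next task, if any remain.
    obj = next(tasks, None)
    if obj is not None:
        task_queue.put(obj)

# Load the queue to capacity, but no further: put() would block if full.
for _ in range(task_queue.maxsize):
    enqueue_next()

results = []
while not task_queue.empty():
    results.append(task_queue.get() ** 2)  # stand-in for a worker's computation
    enqueue_next()                         # refill as each result comes back
```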
Enqueue the next task, if there are any waiting.
Perform the computation in parallel, reading results from the output queue and passing them to process_result.
Terminate all processes and the log thread.
Perform the computation sequentially, holding at most two computed objects in memory at a time.
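Sequential mode is essentially a streaming fold: only the accumulated result and the value just computed are alive at any moment, so even a lazy generator can be consumed in constant memory. A sketch:

```python
def fold(iterable, compute, process_result, empty_result):
    result = empty_result
    for obj in iterable:
        # Only `result` and the freshly computed value are held at once.
        result = process_result(compute(obj), result)
    return result

# Works on a generator: inputs are never materialized as a list.
total = fold((x for x in range(1000)), lambda x: x + 1, lambda new, old: old + new, 0)
```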
Perform the computation.
Keyword Arguments: parallel (bool) – If True, run the computation in parallel; otherwise, run it sequentially.
Thread which handles log records sent from worker processes.
It listens to an instance of multiprocessing.Queue, rewriting log messages to the PyPhi log handler.
Configure a worker process to log all messages to the given log queue.
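This is the standard queue-based logging pattern; the stdlib's QueueHandler/QueueListener pair sketches the same idea, with the listener playing the role of the log thread (the collecting handler below is only for illustration):

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue()
received = []

class ListHandler(logging.Handler):
    # Collects messages so the example is easy to inspect.
    def emit(self, record):
        received.append(record.getMessage())

# Main-process side: a background thread drains the queue into the handler.
listener = logging.handlers.QueueListener(log_queue, ListHandler())
listener.start()

# Worker side: route every record into the shared queue.
worker_log = logging.getLogger("example.worker")
worker_log.addHandler(logging.handlers.QueueHandler(log_queue))
worker_log.warning("result computed")

listener.stop()  # flushes remaining records and stops the thread
```

In a real multi-process setting the queue would be a multiprocessing.Queue, but the handler/listener split is the same.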