+ """An executor that uses processes on remote machines to do work.
+ It works by creating "bundles" of work, each containing pickled
+ code to be executed. Each bundle is assigned to a remote worker
+ according to a selection policy. Once a bundle is assigned, a
+ local subprocess copies its pickled code to the remote machine
+ via ssh/scp, starts the work on the remote machine via ssh, and,
+ when the work is complete, copies the results back to the local
+ machine.
+
+ So there is essentially one "controller" machine (which may also be
+ in the remote executor pool and therefore do task work in addition to
+ controlling) and N worker machines. This code runs on the controller
+ whereas on the worker machines we invoke pickled user code via a
+ shim in :file:`remote_worker.py`.
+
+ Some redundancy and safety provisions are made: e.g. redundant
+ backup bundles are created for slower-than-expected tasks, and a
+ task that fails repeatedly is considered poisoned and abandoned.
+
+ .. warning::
+
+ The network overhead / latency of copying work from the
+ controller machine to the remote workers is relatively high.
+ This executor probably only makes sense to use with
+ computationally expensive tasks such as jobs that will execute
+ for ~30 seconds or longer.
+
+ See also :class:`ProcessExecutor` and :class:`ThreadExecutor`.
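+ 
+ Example (an illustrative sketch only; it assumes this class follows
+ the standard ``Executor.submit`` / ``Future.result`` interface like
+ its siblings above, and ``expensive_function`` is a hypothetical
+ picklable callable)::
+ 
+     workers = [...]  # list of RemoteWorkerRecord
+     policy = ...     # a RemoteWorkerSelectionPolicy
+     executor = RemoteExecutor(workers, policy)
+     future = executor.submit(expensive_function)
+     result = future.result()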
+ """
+
+ def __init__(
+ self,
+ workers: List[RemoteWorkerRecord],
+ policy: RemoteWorkerSelectionPolicy,
+ ) -> None:
+ """Construct a RemoteExecutor.
+
+ Args:
+ workers: A list of remote workers we can call on to do tasks.
+ policy: A policy for selecting remote workers for tasks.
+ """
+