dask_jobqueue.LSFCluster

class dask_jobqueue.LSFCluster(queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, **kwargs)

Launch Dask on an LSF cluster

Parameters:
queue : str

Destination queue for each worker job. Passed to the #BSUB -q option.

project : str

Accounting string associated with each worker job. Passed to the #BSUB -P option.

ncpus : int

Number of CPUs. Passed to the #BSUB -n option.

mem : int

Memory to request, in bytes. Passed to the #BSUB -M option.

walltime : str

Walltime for each worker job in HH:MM. Passed to the #BSUB -W option.

job_extra : list

List of other LSF options, for example -u. Each option will be prepended with the #BSUB prefix.

name : str

Name of Dask workers.

cores : int

Total number of cores per job.

memory : str

Total amount of memory per job.

processes : int

Number of processes per job.

interface : str

Network interface like 'eth0' or 'ib0'.

death_timeout : float

Seconds to wait for a scheduler before closing workers.

local_directory : str

Dask worker local directory for file spilling.

extra : str

Additional arguments to pass to dask-worker.

env_extra : list

Other commands to add to the script before launching the worker.

kwargs : dict

Additional keyword arguments to pass to LocalCluster.

Examples

>>> from dask_jobqueue import LSFCluster
>>> cluster = LSFCluster(queue='general', project='DaskonLSF',
...                      cores=15, memory='25GB')
>>> cluster.scale(10)  # this may take a few seconds to launch
>>> from dask.distributed import Client
>>> client = Client(cluster)
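
The LSF-specific parameters can be combined with the worker parameters in the same call. The sketch below is illustrative rather than canonical: the queue name, walltime, email address, and module command are site-specific placeholders.

>>> cluster = LSFCluster(queue='normal', project='DaskOnLSF',
...                      cores=8, memory='24GB', walltime='02:00',
...                      job_extra=['-u user@example.com'],
...                      env_extra=['module load python'])

Each job_extra entry becomes an additional #BSUB line in the generated submission script, and env_extra commands run in the job script before the worker is launched.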

This also works with adaptive clusters, which automatically launch and kill workers based on load.

>>> cluster.adapt()
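
Bounds may be passed through to the adaptive deployment; a minimal sketch using the standard minimum and maximum keyword arguments of Dask's adaptive scaling:

>>> cluster.adapt(minimum=2, maximum=20)  # keep between 2 and 20 workers, scaling with load
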
__init__(queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, **kwargs)

Methods

__init__([queue, project, ncpus, mem, …])
adapt(**kwargs) Turn on adaptivity
close() Stops all running and pending jobs and stops scheduler
job_file() Write job submission script to temporary file
job_script() Construct a job submission script (see the example after this list)
scale(n) Scale cluster to n workers
scale_down(workers) Close the workers with the given addresses
scale_up(n, **kwargs) Brings total worker count up to n
start_workers([n]) Start workers and point them to our local scheduler
stop_all_jobs() Stops all running and pending jobs
stop_jobs(jobs) Stop a list of jobs
stop_workers(workers) Stop a list of workers
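
The job_script method is a convenient way to check the generated submission script, including the #BSUB directives derived from the parameters above, before any jobs are submitted:

>>> print(cluster.job_script())  # prints the #BSUB header and the dask-worker launch command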

Attributes

cancel_command
dashboard_link
finished_jobs Jobs that have finished
job_id_regexp
pending_jobs Jobs pending in the queue
running_jobs Jobs with currently active workers
scheduler The scheduler of this cluster
scheduler_address (see the example after this list)
scheduler_name
submit_command
worker_threads
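
For example, scheduler_address and dashboard_link identify where the scheduler is listening and where its diagnostic dashboard is served; a brief sketch:

>>> cluster.scheduler_address  # address workers connect to, e.g. 'tcp://<host>:<port>'
>>> cluster.dashboard_link     # URL of the Dask dashboard for this cluster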