LSFCluster(queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, **kwargs)
Launch Dask on an LSF cluster
- queue : str
Destination queue for each worker job. Passed to #BSUB -q option.
- project : str
Accounting string associated with each worker job. Passed to #BSUB -P option.
- ncpus : int
Number of cpus. Passed to #BSUB -n option.
- mem : int
Request memory in bytes. Passed to #BSUB -M option.
- walltime : str
Walltime for each worker job in HH:MM. Passed to #BSUB -W option.
- job_extra : list
List of other LSF options, for example -u. Each option will be prepended with the #BSUB prefix.
- name : str
Name of Dask workers.
- cores : int
Total number of cores per job
- memory : str
Total amount of memory per job
- processes : int
Number of processes per job
- interface : str
Network interface like ‘eth0’ or ‘ib0’.
- death_timeout : float
Seconds to wait for a scheduler before closing workers
- local_directory : str
Dask worker local directory for file spilling.
- extra : str
Additional arguments to pass to dask-worker
- env_extra : list
Other commands to add to script before launching worker.
- kwargs : dict
Additional keyword arguments to pass to LocalCluster
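The worker-resource parameters above are passed through to #BSUB directives in the generated job submission script. The mapping can be sketched as follows (the `render_bsub_header` helper is hypothetical and for illustration only; dask-jobqueue builds the real script internally from a template):

```python
def render_bsub_header(queue=None, project=None, ncpus=None, mem=None,
                       walltime=None, job_extra=None):
    """Render LSF #BSUB directives from LSFCluster-style parameters.

    Hypothetical helper sketching how each parameter maps to a #BSUB
    option; not part of the dask-jobqueue API.
    """
    lines = ["#!/bin/bash", "#BSUB -J dask-worker"]
    if queue is not None:
        lines.append("#BSUB -q %s" % queue)       # destination queue
    if project is not None:
        lines.append("#BSUB -P %s" % project)     # accounting string
    if ncpus is not None:
        lines.append("#BSUB -n %d" % ncpus)       # number of cpus
    if mem is not None:
        lines.append("#BSUB -M %d" % mem)         # memory request in bytes
    if walltime is not None:
        lines.append("#BSUB -W %s" % walltime)    # walltime in HH:MM
    for opt in job_extra or []:
        lines.append("#BSUB %s" % opt)            # extra options, e.g. '-u user@host'
    return "\n".join(lines)

header = render_bsub_header(queue='general', project='DaskonLSF',
                            ncpus=15, mem=25000000000, walltime='01:00',
                            job_extra=['-u me@example.com'])
print(header)
```

Note that `mem` is a raw byte count for the scheduler's #BSUB -M request, while the separate `memory` parameter is a human-readable string (e.g. '25GB') used by the Dask workers themselves.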
>>> from dask_jobqueue import LSFCluster
>>> cluster = LSFCluster(queue='general', project='DaskonLSF',
...                      cores=15, memory='25GB')
>>> cluster.scale(10)  # this may take a few seconds to launch
>>> from dask.distributed import Client
>>> client = Client(cluster)
This also works with adaptive clusters, which automatically launch and kill workers based on load.
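The core decision in adaptive scaling, picking a target worker count from the current load and clamping it to configured bounds, can be sketched as below. The `target_workers` function is purely illustrative; the real adaptive policy in dask.distributed uses richer scheduler metrics (occupancy, memory pressure) to make this decision:

```python
import math

def target_workers(pending_tasks, tasks_per_worker, minimum=0, maximum=10):
    """Pick a worker count proportional to load, clamped to [minimum, maximum].

    Illustrative sketch only; not the actual Adaptive implementation.
    """
    wanted = math.ceil(pending_tasks / tasks_per_worker) if pending_tasks else 0
    return max(minimum, min(maximum, wanted))

print(target_workers(0, 4))                  # idle cluster: scale to minimum
print(target_workers(25, 4))                 # 25 tasks / 4 per worker -> 7
print(target_workers(1000, 4, maximum=10))   # heavy load: clamped at maximum
```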
__init__(queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, **kwargs)
Methods
- __init__([queue, project, ncpus, mem, …])
- adapt(**kwargs) : Turn on adaptivity
- close() : Stops all running and pending jobs and stops scheduler
- job_file() : Write job submission script to temporary file
- job_script() : Construct a job submission script
- scale(n) : Scale cluster to n workers
- scale_down(workers) : Close the workers with the given addresses
- scale_up(n) : Brings total worker count up to n
- start_workers([n]) : Start workers and point them to our local scheduler
- stop_all_jobs() : Stops all running and pending jobs
- stop_jobs(jobs) : Stop a list of jobs
- stop_workers(workers) : Stop a list of workers
Attributes
- finished_jobs : Jobs that have finished
- pending_jobs : Jobs pending in the queue
- running_jobs : Jobs with currently active workers
- scheduler : The scheduler of this cluster
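The pending/running/finished bookkeeping above can be sketched as dicts keyed by job id, with jobs moving between them as workers start and exit. The `JobTracker` class below is an illustrative stand-in, not the cluster's actual implementation, which updates these collections from scheduler events:

```python
class JobTracker:
    """Minimal sketch of pending/running/finished job bookkeeping.

    Illustrative only: the real cluster moves jobs between these dicts
    as workers register with, and disconnect from, the scheduler.
    """
    def __init__(self):
        self.pending_jobs = {}   # submitted, no worker seen yet
        self.running_jobs = {}   # jobs with currently active workers
        self.finished_jobs = {}  # jobs whose workers have exited

    def submit(self, job_id, spec):
        # Record a newly submitted job as pending
        self.pending_jobs[job_id] = spec

    def worker_started(self, job_id):
        # A worker from this job connected: pending -> running
        self.running_jobs[job_id] = self.pending_jobs.pop(job_id)

    def worker_exited(self, job_id):
        # The job's worker disconnected: running -> finished
        self.finished_jobs[job_id] = self.running_jobs.pop(job_id)

tracker = JobTracker()
tracker.submit("12345", {"ncpus": 15})
tracker.worker_started("12345")
tracker.worker_exited("12345")
```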