Dynamic task scheduling using online optimization

Babak Hamidzadeh, Lau Ying Kit, David J. Lilja

Research output: Contribution to journal › Article › peer-review


Abstract

Algorithms for scheduling independent tasks onto the processors of a multiprocessor system must trade off processor load balance, memory locality, and scheduling overhead. Most existing algorithms, however, do not adequately balance these conflicting factors. This paper introduces the Self-Adjusting Dynamic Scheduling (SADS) class of algorithms that use a unified cost model to explicitly account for these factors at runtime. A dedicated processor performs scheduling in phases by maintaining a tree of partial schedules and incrementally assigning tasks to the least-cost partial schedule. The scheduling phase terminates whenever any processor becomes idle, at which time the partial schedules are distributed to the processors. An extension of the basic SADS algorithm, called DBSADS, controls the scheduling overhead by giving higher priority to partial schedules with more task-to-processor assignments. These algorithms are compared to two distributed scheduling algorithms within a database application on an Intel Paragon distributed-memory multiprocessor system.
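The core of SADS is a best-first search over a tree of partial schedules, where each expansion assigns the next task to the processor that yields the least-cost partial schedule. The sketch below is a minimal illustration of that idea, not the paper's implementation: the cost model (a weighted sum of load imbalance and a locality penalty), the `affinity` table, and the weights `alpha`/`beta` are illustrative assumptions, and the phase simply places a fixed batch of tasks rather than stopping when a processor becomes idle.

```python
import heapq
import itertools

def schedule_phase(task_costs, affinity, num_procs, alpha=1.0, beta=1.0):
    """Best-first search over partial schedules (illustrative SADS-style sketch).

    task_costs: list of task execution times
    affinity:   affinity[t][p] = extra cost if task t runs on processor p
                (a stand-in for the memory-locality term)
    alpha, beta: hypothetical weights on load imbalance and locality
    Returns a list mapping task index -> processor index.
    """
    counter = itertools.count()  # tie-breaker so heap never compares tuples of loads
    # Heap entry: (cost, tie, next_task_index, per-processor loads, assignment so far)
    heap = [(0.0, next(counter), 0, (0.0,) * num_procs, ())]

    while heap:
        cost, _, t, loads, assign = heapq.heappop(heap)
        if t == len(task_costs):
            return list(assign)  # least-cost complete schedule for this batch
        for p in range(num_procs):  # branch: place task t on each processor
            new_loads = list(loads)
            new_loads[p] += task_costs[t] + affinity[t][p]
            imbalance = max(new_loads) - min(new_loads)
            locality = sum(affinity[task][proc]
                           for task, proc in enumerate(assign + (p,)))
            new_cost = alpha * imbalance + beta * locality
            heapq.heappush(heap, (new_cost, next(counter), t + 1,
                                  tuple(new_loads), assign + (p,)))
    return []

# Example: 4 tasks on 2 processors, with tasks 0 and 2 having better locality on processor 0.
tasks = [3.0, 2.0, 4.0, 1.0]
aff = [[0.0, 1.0], [0.5, 0.0], [0.0, 1.5], [0.5, 0.0]]
print(schedule_phase(tasks, aff, num_procs=2))
```

A DBSADS-style variant could be approximated in this sketch by making the heap key favor deeper partial schedules (more task-to-processor assignments), which bounds the time the dedicated scheduler spends per phase.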

Original language: English (US)
Pages (from-to): 1151-1163
Number of pages: 13
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 11
Issue number: 11
DOIs
State: Published - Nov 2000

Bibliographical note

Funding Information:
This work was supported in part by the Research Grants Council of Hong Kong grants no. DAG-93/94.EG03 and no. CERG-667/95E, by the Minnesota Supercomputer Institute, and by US National Science Foundation grant no. MIP-9221900. The authors would like to thank the referees for taking the time to review this paper. Their valuable comments improved the quality of the paper significantly.
