Design an Efficient Scheduler

The electronics forum deals with topics related to analog and digital circuits and systems (e.g. ASICs, FPGAs, DSPs, microcontrollers, single/multi-processors, PCBs, etc.) and their programming, such as HDL, C/C++, etc.

Long-term Scheduling

Unread post by syedkazmi » Thu Apr 02, 2015 9:45 am

Syed Ali Haider Kazmi, CMS: 7478
Scheduling is needed whenever a new process is created. If the number of active processes is high, the overhead of maintaining long queues, context switching and dispatching increases, so only a limited number of processes should be allowed into the ready queue. The long-term scheduler manages this: it determines which programs are admitted into the system for processing. Once a job is admitted, it becomes a process and is added to the queue for the short-term scheduler. In some systems, a newly created process begins in a swapped-out condition, in which case it is added to a queue for the medium-term scheduler instead. The schedulers manage these queues to minimize queueing delay and to optimize performance.
The long-term scheduler limits the number of processes admitted for processing by deciding when to add one or more new jobs, either on a first-come, first-served (FCFS) basis or according to priority, expected execution time or I/O requirements. The long-term scheduler executes relatively infrequently.
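To make the admission idea concrete, here is a minimal C++ sketch (not taken from any real OS; the Job struct, the queue names and the limit of 4 are assumptions for illustration). Jobs wait in a job queue, and the long-term scheduler moves them into the ready queue only while the degree of multiprogramming stays below the limit:

#include <cstdio>
#include <queue>
#include <string>

// Illustrative structures only; the degree-of-multiprogramming limit of 4
// is an arbitrary choice for this sketch.
struct Job { int id; std::string name; };

const std::size_t kMaxDegreeOfMultiprogramming = 4;

std::queue<Job> job_queue;    // newly created jobs wait here (job/new queue)
std::queue<Job> ready_queue;  // admitted processes, handed to the short-term scheduler

// Long-term (admission) scheduler: runs relatively infrequently and admits
// waiting jobs FCFS until the multiprogramming limit is reached.
void long_term_schedule() {
    while (!job_queue.empty() &&
           ready_queue.size() < kMaxDegreeOfMultiprogramming) {
        Job j = job_queue.front();
        job_queue.pop();
        ready_queue.push(j);
        std::printf("admitted job %d (%s)\n", j.id, j.name.c_str());
    }
}

int main() {
    for (int i = 1; i <= 6; ++i)
        job_queue.push({i, "job" + std::to_string(i)});
    long_term_schedule();   // only 4 of the 6 jobs get into the ready queue
    std::printf("%zu jobs still waiting for admission\n", job_queue.size());
}

In a real kernel the queues would hold process control blocks and the limit would depend on memory and load, but the shape of the admission decision is the same.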
syedkazmi
 
Posts: 1
Joined: Thu Apr 02, 2015 9:40 am
Has thanked: 0 time
Been thanked: 0 time

long term scheduler

Unread post by rizwan » Thu Apr 02, 2015 9:46 am

RIZWAN (7664)
In computing, scheduling is the method by which threads, processes or data flows are given access to system resources (for example, processor time). This is usually done to load-balance and share system resources effectively, or to achieve a target quality of service. The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking and multiplexing.
LONG-TERM SCHEDULER:
The long-term scheduler, or admission scheduler, decides which jobs or processes are admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. The long-term scheduler is responsible for controlling the degree of multiprogramming. It should select a good mix of I/O-bound and CPU-bound processes: if all processes are I/O-bound, the ready queue will almost always be empty and the short-term scheduler will have little to do; if all processes are CPU-bound, the I/O waiting queues will almost always be empty and the devices will sit idle. The system with the best performance therefore has a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is also used to make sure that real-time processes get enough CPU time to finish their tasks. Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms.
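As a rough illustration of the process-mix point, here is a hedged C++ sketch of one possible admission policy; the I/O-bound/CPU-bound classification and the roughly 50/50 target are assumptions made up for this example, not the policy of any particular long-term scheduler:

#include <cstdio>
#include <queue>

// Illustrative job classification; a real system would have to estimate
// this from past CPU-burst behaviour instead of being told it up front.
enum class Kind { IoBound, CpuBound };
struct Job { int id; Kind kind; };

std::queue<Job> io_jobs;   // waiting I/O-bound jobs
std::queue<Job> cpu_jobs;  // waiting CPU-bound jobs
int io_admitted = 0, cpu_admitted = 0;

// Admit up to `slots` jobs, always drawing from whichever class is currently
// under-represented, so that neither the CPU nor the I/O devices go idle.
void admit_mixed(int slots) {
    while (slots-- > 0 && (!io_jobs.empty() || !cpu_jobs.empty())) {
        bool pick_io = cpu_jobs.empty() ||
                       (!io_jobs.empty() && io_admitted <= cpu_admitted);
        std::queue<Job>& q = pick_io ? io_jobs : cpu_jobs;
        Job j = q.front();
        q.pop();
        if (pick_io) ++io_admitted; else ++cpu_admitted;
        std::printf("admitted %s job %d\n",
                    pick_io ? "I/O-bound" : "CPU-bound", j.id);
    }
}

int main() {
    for (int i = 1; i <= 4; ++i) io_jobs.push({i, Kind::IoBound});
    for (int i = 5; i <= 8; ++i) cpu_jobs.push({i, Kind::CpuBound});
    admit_mixed(6);   // admits 3 I/O-bound and 3 CPU-bound jobs
}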
rizwan
 
Posts: 1
Joined: Thu Apr 02, 2015 9:40 am
Has thanked: 0 time
Been thanked: 0 time

Re: Design an Efficient Scheduler

Unread post by samiullah » Thu Apr 02, 2015 10:07 am

Sami Ullah, CMS # 8041
Lazy scheduling is a run-time scheduler for task-parallel codes that effectively coarsens parallelism under load in order to significantly reduce its overheads compared to existing approaches, thus enabling the efficient execution of more fine-grained tasks. Unlike other adaptive dynamic schedulers, lazy scheduling does not maintain any additional state to infer system load and does not make irrevocable serialization decisions. These two features allow it to scale well and to provide excellent load balancing in practice, but at a much lower overhead cost than work stealing, the gold standard of dynamic schedulers. The authors evaluate three variants of lazy scheduling on a set of benchmarks on three different platforms and find it to substantially outperform popular work-stealing implementations on fine-grained codes. Furthermore, they show that the vast performance gap between manually coarsened and fully parallel code is greatly reduced by lazy scheduling, and that, with minimal static coarsening, lazy scheduling delivers performance very close to that of fully tuned code.

The manual coarsening required by the best existing work-stealing schedulers, and its damaging effect on performance portability, have kept novice and general-purpose programmers from parallelizing their codes. Lazy scheduling offers the foundation for a declarative parallel programming methodology that should attract those programmers by minimizing the need for manual coarsening and by greatly enhancing the performance portability of parallel code.
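To show the spawn-or-run-inline decision this describes, here is a hedged C++ sketch. It is not the paper's algorithm: real lazy scheduling infers load from the worker's own task deque and keeps no extra shared state, whereas this sketch uses a global in-flight counter and std::async purely as stand-ins. What it does illustrate is that executing a potential task inline is not an irrevocable serialization, because the spawn points deeper in the recursion are still reached:

#include <algorithm>
#include <atomic>
#include <cstdio>
#include <future>
#include <thread>

// Stand-in load signal: a global count of tasks currently in flight.
// Real lazy scheduling infers load from the worker's own deque; this counter
// exists only for illustration, and the check-then-increment is deliberately
// approximate.
static std::atomic<int> tasks_in_flight{0};
static const int max_tasks =
    static_cast<int>(std::max(2u, std::thread::hardware_concurrency()));

long fib(long n) {
    if (n < 2) return n;
    // Potential spawn point: create a real task for fib(n - 1) only when the
    // machine looks underloaded; otherwise run both calls inline. Inline
    // execution still reaches the spawn points deeper in the recursion, so
    // this is not an irrevocable serialization of the whole subtree.
    if (tasks_in_flight.load(std::memory_order_relaxed) < max_tasks) {
        tasks_in_flight.fetch_add(1, std::memory_order_relaxed);
        std::future<long> left = std::async(std::launch::async, [n] {
            long r = fib(n - 1);
            tasks_in_flight.fetch_sub(1, std::memory_order_relaxed);
            return r;
        });
        long right = fib(n - 2);        // keep working while the task runs
        return left.get() + right;
    }
    return fib(n - 1) + fib(n - 2);     // coarsened: no task, no overhead
}

int main() {
    std::printf("fib(32) = %ld\n", fib(32));
}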
samiullah
 
Posts: 1
Joined: Thu Apr 02, 2015 8:25 am
Has thanked: 0 time
Been thanked: 0 time

Re: Design an Efficient Scheduler

Unread post by Imran Yousaf » Thu Apr 02, 2015 11:11 am

Imran Yousaf, CMS # 7678
Lazy Binary-Splitting: A Run-Time Adaptive Work-Stealing Scheduler

Lazy Binary Splitting (LBS) is a user-level scheduler of nested parallelism for shared-memory multiprocessors that builds on the existing Eager Binary Splitting work-stealing (EBS) implemented in Intel's Threading Building Blocks (TBB), but improves performance and ease of programming. In its simplest form (SP), EBS requires manual tuning by repeatedly running the application under carefully controlled conditions to determine a stop-splitting threshold (sst) for every do-all loop in the code. This threshold limits the parallelism and prevents excessive overheads for fine-grained parallelism. Besides being tedious, this tuning also over-fits the code to a particular dataset, platform and calling context of the do-all loop, resulting in poor performance portability. LBS overcomes both the performance-portability and ease-of-programming pitfalls of a manually fixed threshold by adapting dynamically to run-time conditions without requiring any tuning. The authors compare LBS to Auto-Partitioner (AP), the latest default scheduler of TBB, which does not require manual tuning either but lacks context portability; LBS outperforms it by 38.9% using TBB's default AP configuration, and by 16.2% after AP was tuned to their experimental platform. They also compare LBS to SP by manually finding SP's sst using a training dataset and then running both on a different execution dataset; LBS outperforms SP by 19.5% on average, while providing better performance portability without tedious manual tuning. LBS also outperforms SP with sst = 1, its default value when left unspecified, by 56.7%, and serializing work-stealing (SWS), another work-stealer, by 54.7%. Finally, compared to serializing inner parallelism (SI), as used by OpenMP, LBS is 54.2% faster. Overall, LBS is both efficient and fast.
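Below is a hedged C++ sketch of the lazy-splitting idea for a do-all loop. The names ppt (the paper's profitable parallelism threshold) and idle_workers are used loosely: the value 64 is arbitrary, the idle-worker counter is a stand-in for the real signal (whether the worker's own deque is empty), and std::async stands in for the work-stealing runtime. Eager binary splitting would instead split the range down to a fixed sst no matter the load:

#include <algorithm>
#include <atomic>
#include <cstdio>
#include <functional>
#include <future>
#include <vector>

// Pretend a few workers start out idle; in a real work-stealing runtime this
// signal would come from the deques, not from a global counter.
static std::atomic<int> idle_workers{3};

// Do-all loop with lazy binary splitting. `ppt` is how many iterations to
// run between checks of the load signal.
void lbs_for(long lo, long hi, const std::function<void(long)>& body,
             long ppt = 64) {
    std::vector<std::future<void>> given_away;
    while (lo < hi) {
        // Lazy check: split the *remaining* range in half only when the load
        // signal says another worker is starving for work.
        if (hi - lo > ppt &&
            idle_workers.load(std::memory_order_relaxed) > 0) {
            idle_workers.fetch_sub(1, std::memory_order_relaxed);  // claim it
            long mid = lo + (hi - lo) / 2;
            given_away.push_back(std::async(std::launch::async,
                [mid, hi, &body, ppt] {
                    lbs_for(mid, hi, body, ppt);                   // right half
                    idle_workers.fetch_add(1, std::memory_order_relaxed);
                }));
            hi = mid;  // this call keeps the left half
        }
        long chunk_end = std::min(lo + ppt, hi);
        for (long i = lo; i < chunk_end; ++i) body(i);  // run ppt iterations
        lo = chunk_end;
    }
    for (auto& f : given_away) f.wait();  // join the halves we handed away
}

int main() {
    std::vector<double> a(1 << 16);
    // Iteration i only touches a[i], so the disjoint ranges never conflict.
    lbs_for(0, static_cast<long>(a.size()), [&a](long i) { a[i] = i * 0.5; });
    std::printf("a[1000] = %g\n", a[1000]);
}

Because the split decision looks at the remaining range and only fires while an idle worker is available, a handful of splits is enough to feed the other workers, instead of paying task-creation costs on every iteration.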
Imran Yousaf
 
Posts: 2
Joined: Thu Apr 02, 2015 11:03 am
Has thanked: 0 time
Been thanked: 0 time
