Maximum Schedulable Set Zero-Lateness Deadline Scheduling
Problem
This is a homework problem for my introduction to algorithms course.
Recall the scheduling problem from Section 4.2 in which we sought to
minimize the maximum lateness. There are $n$ jobs, each with a deadline
$d_i$ and a required processing time $t_i$, and all jobs are available to be
scheduled starting at time $s$. For a job $i$ to be done, it needs to be assigned
a period from $s_i \geq s$ to $f_i = s_i + t_i$, and different jobs should be assigned
nonoverlapping intervals. As usual, such an assignment of times will be
called a schedule.
In this problem, we consider the same setup, but want to optimize a
different objective. In particular, we consider the case in which each job
must either be done by its deadline or not at all. We’ll say that a subset $J$ of
the jobs is schedulable if there is a schedule for the jobs in $J$ so that each
of them finishes by its deadline. Your problem is to select a schedulable
subset of maximum possible size and give a schedule for this subset that
allows each job to finish by its deadline.
(a) Prove that there is an optimal solution $J$ (i.e., a schedulable set of
maximum size) in which the jobs in $J$ are scheduled in increasing
order of their deadlines.
(b) Assume that all deadlines $d_i$ and required times $t_i$ are integers. Give
an algorithm to find an optimal solution. Your algorithm should
run in time polynomial in the number of jobs $n$, and the maximum
deadline $D = \max_i d_i$.
I've solved the problem as worded with the recurrence
$$Opt(i, d) = \max \begin{cases} Opt(i-1,\, d - t_i) + 1 & \text{if } d \leq d_i \\ Opt(i-1,\, d) & \end{cases}$$
but our instructor added a new requirement that our algorithm must not depend on $D$. Implemented with dynamic programming, this recurrence gives an $O(nD)$ running time.
I can't figure out how to reduce its running time from $O(nD)$ to $O(n^k)$ for some constant $k$. To me it seems like it's a variant of the knapsack problem.
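For reference, a minimal Python sketch of that $O(nD)$ dynamic program might look like the following (the function name and the rolling one-dimensional table are illustrative choices; it assumes the common start time $s$ is $0$ by default and that the jobs are sorted by deadline, as part (a) justifies):

def max_schedulable_pseudopoly(jobs, s=0):
    # jobs: list of (t, d) pairs with integer processing times and deadlines.
    # Returns the size of a maximum schedulable subset in O(nD) time.
    if not jobs:
        return 0
    jobs = sorted(jobs, key=lambda job: job[1])   # part (a): deadline order
    D = max(d for _, d in jobs)
    # opt[d] = Opt(i, d): the most jobs among the first i that can run
    # back to back with the last one finishing exactly at time d.
    opt = [0] * (D + 1)
    for t, d_i in jobs:
        # Sweep d downward so each job is used at most once (0/1-knapsack style).
        for d in range(d_i, s + t - 1, -1):       # guard d <= d_i; leave room for t
            opt[d] = max(opt[d], opt[d - t] + 1)
    return max(opt)

The table has $D + 1$ columns, which is exactly where the pseudopolynomial $O(nD)$ cost comes from.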
Solution
Use a dynamic programming algorithm to compute an $n \times n$ table $T$, where the entry $T(j,k)$ answers the question: suppose you wish to schedule $j$ out of the first $k$ jobs. What is the earliest time at which you can complete processing them? Here the jobs are assumed to be sorted in increasing order of deadline, which part (a) shows loses no generality.
We initialize by setting $T(j,k) = \infty$ if $j > k$, $T(1,1) = \infty$ if $t_1 > d_1$, and $T(1,1) = t_1$ otherwise; it is also convenient to take $T(0,k) = 0$ for all $k$, since scheduling zero jobs takes no time.
How do we compute $T(j,k+1)$? Either job $k+1$ is included in the best set of $j$ out of the first $k+1$ jobs, so $$T(j,k+1) = T(j-1,k) + t_{k+1},$$ or job $k+1$ is not included in this set, so $$T(j,k+1) = T(j,k).$$ We also have to worry about the deadline. We can handle this by setting the entries of $T$ where the task is impossible to $\infty$, and checking at every step whether the deadline would be exceeded. So the pseudocode for computing the $T(j,k+1)$ entry of the table is
if T[j-1,k] + t[k+1] > d[k+1]
then T[j,k+1] = T[j,k]
else T[j,k+1] = min ( T[j-1,k] + t[k+1], T[j,k] )
The size of a maximum schedulable set is then the largest $j$ for which $T(j,n)$ is finite, and the set itself can be recovered by tracing back through the table; this takes $O(n^2)$ time overall, independent of $D$.
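To make the recurrence and the traceback concrete, here is a self-contained Python sketch (the function name, the full $(n+1) \times (n+1)$ array, and the start-time parameter are my own choices; it assumes the jobs become available at time $s$, with $s = 0$ by default):

import math

def max_schedulable(jobs, s=0):
    # jobs: list of (t, d) pairs (processing time, deadline).
    # Returns (size, indices) of a maximum schedulable subset in O(n^2) time,
    # independent of the maximum deadline D.
    n = len(jobs)
    order = sorted(range(n), key=lambda i: jobs[i][1])   # part (a): deadline order
    INF = math.inf
    # T[j][k] = earliest finishing time of j jobs chosen from the first k
    # (in deadline order) such that every chosen job meets its deadline.
    T = [[INF] * (n + 1) for _ in range(n + 1)]
    for k in range(n + 1):
        T[0][k] = s                                      # zero jobs: done at time s
    for k in range(1, n + 1):
        t, d = jobs[order[k - 1]]
        for j in range(1, k + 1):
            T[j][k] = T[j][k - 1]                        # job k excluded
            if T[j - 1][k - 1] + t <= d:                 # job k meets its deadline
                T[j][k] = min(T[j][k], T[j - 1][k - 1] + t)
    best = max(j for j in range(n + 1) if T[j][n] < INF)
    chosen, j = [], best
    for k in range(n, 0, -1):                            # trace back the choices
        if T[j][k] != T[j][k - 1]:                       # job k must be included
            chosen.append(order[k - 1])
            j -= 1
    return best, sorted(chosen)

For example, max_schedulable([(1, 2), (2, 4), (2, 6)]) returns (3, [0, 1, 2]): sorted by deadline, the jobs finish at times 1, 3, and 5, each before its deadline. Scheduling the chosen jobs back to back in deadline order then gives the required schedule.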
Context
StackExchange Computer Science Q#6202, answer score: 3