Open-loop admission control schemes for a single-server queueing system are studied. The underlying system model is a tandem queue in which a controller buffer is followed by a single-server queue. Admission control takes the form of holding a customer in the controller buffer, in order to prevent congestion in the single-server queue, until the customer is admitted to the single-server queue. The controller's objective is to minimize the weighted sum of the expected queueing delays in the controller buffer and in the single-server queue, with more weight on the delay in the single-server queue. This thesis studies open-loop schemes, in which the controller cannot observe the state of the single-server queue. The single-server queue can represent a wide variety of systems; possible areas of application include high-speed communication networks, distributed computing systems, manufacturing networks, and human service organizations.
Control schemes are discussed in this thesis under two settings: dynamic and static. In the dynamic setting, streams of customers arrive at the controller randomly. The leaky bucket scheme used in modern communication networks is applied and analyzed. A control scheme that simply ensures a minimal time between consecutive admissions, referred to as the "Ensured Minimal Interadmission Time" (EMIT) scheme, is also proposed and analyzed. In the static setting, we assume that the controller buffer holds N customers initially and that no additional customers arrive; the control problem then reduces to off-line scheduling of admissions. Properties of the optimal schedule, such as its admission rate, are discussed.
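As a rough illustration (not the thesis's exact formulation), the two dynamic schemes can be viewed as admission gates in front of the single-server queue. The sketch below assumes a token-bucket view of the leaky bucket — tokens accrue at a given rate up to the bucket size, and each admission consumes one token — and nondecreasing request times; function names and parameters are hypothetical.

```python
def leaky_bucket_admit(request_times, rate, bucket_size):
    """Earliest conforming admission times under a token (leaky) bucket.

    Tokens accumulate at `rate` up to `bucket_size`, and the bucket
    starts full; a customer requesting at time t is admitted at the
    earliest time >= t at which a full token is available.
    """
    admissions = []
    tokens = bucket_size
    last = 0.0                 # time of the last token update
    for t in request_times:
        now = max(t, last)
        tokens = min(bucket_size, tokens + (now - last) * rate)
        last = now
        if tokens < 1.0:
            now += (1.0 - tokens) / rate   # wait to earn a full token
            tokens = 1.0
            last = now
        tokens -= 1.0
        admissions.append(now)
    return admissions


def emit_admit(request_times, min_gap):
    """EMIT: admit each customer at the earliest time that keeps
    consecutive admissions at least `min_gap` apart."""
    admissions = []
    prev = float("-inf")
    for t in request_times:
        admit = max(t, prev + min_gap)
        admissions.append(admit)
        prev = admit
    return admissions
```

Note the contrast in state: the EMIT gate remembers only the last admission time, while the leaky bucket carries a token count, which is what allows it to admit a burst of up to `bucket_size` customers at once.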
For the leaky bucket scheme, this thesis focuses only on the average delay in the single-server queue, under the assumption of deterministic service times. This thesis shows that the worst customer admission times allowed by the leaky bucket are characterized as the repetition of three phases: simultaneous admission of a number of customers related to the bucket size, then admission of a number of customers at the earliest allowed times, followed by an interval of no admissions until the bucket is full again. For this worst-case input, the queueing delay averaged over all customers is determined as a function of the parameters of the leaky bucket.
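For any given admission schedule, the queueing delay averaged over all customers in a FIFO single-server queue with deterministic service can be computed with the Lindley recursion; a minimal sketch (illustrative only — the thesis derives the worst-case average in closed form):

```python
def average_queue_delay(admission_times, service_time):
    """Average waiting time before service in a FIFO single-server
    queue with deterministic service, via the Lindley recursion.
    `admission_times` must be sorted in nondecreasing order.
    """
    total = 0.0
    depart = 0.0                    # time at which the server frees up
    for a in admission_times:
        start = max(a, depart)      # service starts when server is free
        total += start - a          # time spent waiting in the queue
        depart = start + service_time
    return total / len(admission_times)
```

For example, a burst of three simultaneous admissions into a server with unit service time yields waits of 0, 1, and 2, i.e. an average delay of 1, while admissions spaced at least one service time apart yield zero delay.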
For the EMIT scheme, a stochastic algorithm that optimizes the minimal ensured time between admissions is proposed. Techniques of perturbation analysis are applied to estimate, through simulation, the sensitivity of the average cost to the minimal interadmission time. Unbiasedness and strong consistency of this method are proven, and conditions for convergence of the algorithm are specified.
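A standard infinitesimal-perturbation-analysis (IPA) estimator for this kind of sensitivity can be sketched under the simplifying assumption of a constant interadmission time θ, using the Lindley recursion W_{n+1} = max(W_n + S_n − θ, 0); the thesis's estimator and cost may differ in detail. Along a busy period the derivative dW/dθ decreases by 1 per customer, and it resets to 0 whenever the queue empties.

```python
def ipa_wait_sensitivity(theta, num_customers, service_sampler, rng=None):
    """Simulate a queue with constant interadmission time `theta` and
    service times drawn by `service_sampler`; return the average wait
    and the IPA estimate of d(average wait)/d(theta).

    Lindley recursion: W[n+1] = max(W[n] + S[n] - theta, 0).
    IPA: while the busy period continues (the max is attained by its
    first argument), dW/d(theta) decreases by 1; on an idle period the
    perturbation washes out and the derivative resets to 0.
    """
    wait = 0.0
    dwait = 0.0                  # dW/d(theta) for the current customer
    total_w = 0.0
    total_d = 0.0
    for _ in range(num_customers):
        total_w += wait
        total_d += dwait
        s = service_sampler(rng)
        if wait + s - theta > 0.0:
            wait = wait + s - theta
            dwait -= 1.0         # busy period continues
        else:
            wait = 0.0
            dwait = 0.0          # server went idle
    return total_w / num_customers, total_d / num_customers
```

With deterministic unit service and θ = 0.5, the n-th wait is 0.5·n and its derivative is −n, so the IPA average matches the exact sensitivity — the single-sample-path property that makes such estimators attractive inside a stochastic optimization loop.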
In the static setting, the admission rate of the optimal schedule, under the constraint of a constant admission rate, converges to the service rate as the number of customers increases. For an unconstrained optimal schedule, the average admission rate is no less than the service rate if the number of customers N is sufficiently large. The admission rate of the optimal schedule averaged over initial time intervals of different lengths is also studied. For an initial interval shorter than O(ln N), the average admission rate is at least the service rate if N is sufficiently large. For an initial interval of length between O(ln N) and some fraction of N, the average admission rate converges to the service rate as N increases. For an initial interval longer than this fraction of N, the average admission rate is no more than the service rate if N is sufficiently large. Finally, the interval between the first two admissions becomes arbitrarily close to 0 as N becomes large.
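A toy illustration of the constant-rate result (function names and parameters are hypothetical, and deterministic service is assumed, so the optimum sits exactly at the service time; in the thesis's stochastic setting the optimal rate only approaches the service rate as N grows): with all N customers present at time 0, a constant admission spacing T, and cost weights satisfying c_queue > c_ctrl, a grid search over T locates the minimum at the service time.

```python
def schedule_cost(spacing, n, service, c_ctrl, c_queue):
    """Per-customer weighted cost when n customers, all present at
    time 0, are admitted at constant `spacing` into a deterministic
    single server: c_ctrl * (controller delay) + c_queue * (queue wait).
    """
    total = 0.0
    depart = 0.0
    for i in range(n):
        admit = i * spacing              # controller delay of customer i
        start = max(admit, depart)       # Lindley-style queue wait
        depart = start + service
        total += c_ctrl * admit + c_queue * (start - admit)
    return total / n


def best_spacing(n, service=1.0, c_ctrl=1.0, c_queue=5.0):
    """Grid-search the best constant spacing on [0.5, 2.0]."""
    grid = [0.5 + 0.05 * k for k in range(31)]
    return min(grid, key=lambda t: schedule_cost(t, n, service, c_ctrl, c_queue))
```

Spacing below the service time trades cheap controller delay for the more expensive queue delay, while spacing above it buys nothing once the queue is already empty; hence the service rate is the natural benchmark that the thesis's optimal schedules approach.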