THE COMPUTATION PROBLEM WITH SEQUENTIAL DECODING
MASSACHUSETTS INST OF TECH LEXINGTON LINCOLN LAB
Fano Sequential Decoding is a technique for communicating at a high information rate and with high reliability over a large class of channels. However, its use is limited by equipment cost and by the variation in the time required to decode successive transmitted digits. This work is concerned with the latter limitation. Others have shown that the average processing time per decoded digit is small if the information rate of the source is less than a rate R_comp. This report studies the probability distribution of the processing-time random variable and applies the results to the buffer overflow probability, i.e., the probability that the decoding delay forces incoming data to fill and overflow a finite buffer. It is shown that the overflow probability is relatively insensitive to the buffer storage capacity and to the computational speed of the decoder, but quite sensitive to the information rate. In particular, halving the source rate more than squares the overflow probability. These sensitivities are found to be basic to Sequential Decoding: they arise because the computation per decoded digit is large during an interval of high channel noise and grows exponentially with the length of such an interval. A conjecture is presented concerning the exact behavior of the overflow probability with information rate. This conjecture agrees well with the limited experimental evidence available.
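The sensitivities described above can be illustrated with a simple model that is not taken from the report itself: suppose the per-digit computation C has a power-law (Pareto-like) tail, P(C > x) ≈ x^(-alpha), where the exponent alpha grows as the information rate drops. A buffer of effective size B then overflows with probability roughly B^(-alpha), which is only polynomial in B (hence insensitive to buffer capacity and decoder speed) but very sensitive to alpha. The function name, parameter values, and the assumption that halving the rate doubles alpha are all illustrative, not results from the report:

```python
def overflow_prob(buffer_size: float, alpha: float) -> float:
    """Power-law tail approximation: P(overflow) ~ buffer_size ** (-alpha).

    alpha is the tail exponent of the per-digit computation distribution;
    larger alpha corresponds to a lower source rate (illustrative assumption).
    """
    return buffer_size ** (-alpha)

B = 10_000.0
p_high_rate = overflow_prob(B, alpha=1.0)   # exponent at some rate R
p_low_rate = overflow_prob(B, alpha=2.0)    # exponent doubled at a lower rate

# Doubling the exponent squares the overflow probability ...
assert abs(p_low_rate - p_high_rate ** 2) < 1e-18
# ... while doubling the buffer only scales it by 2 ** (-alpha):
assert abs(overflow_prob(2 * B, 1.0) - p_high_rate * 2 ** (-1.0)) < 1e-18
```

Under this toy model, buying a buffer twice as large gains only a constant factor, while lowering the rate (raising alpha) improves the overflow probability multiplicatively in the exponent, matching the qualitative behavior the abstract describes.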