
FECs report

How FECs Work

In a communication system that employs forward error-correction coding, a digital information source sends a data sequence comprising k bits of data to an encoder. The encoder inserts redundant (or parity) bits, thereby outputting a longer sequence of n code bits called a codeword. On the receiving end, codewords are used by a suitable decoder to extract the original data sequence.

Codes are designated with the notation (n, k), where n is the number of output code bits and k is the number of input data bits. The ratio k/n is called the rate, R, of the code and is a measure of the fraction of information contained in each code bit. For example, each code bit produced by a (6, 3) encoder contains 1/2 bit of information.

Another metric often used to characterize code bits is redundancy, expressed as (n−k)/n. Codes introducing large redundancy (that is, large n−k or small k/n) convey relatively little information per code bit. Codes that introduce less redundancy have higher code rates (up to a maximum of 1) and convey more information per code bit. Large redundancy is advantageous because the extra parity bits make it less likely that channel errors in a single transmission will wipe out all of the original data.

On the down side, the addition of parity bits will generally increase the transmission bandwidth or the message delay (or both). For real-time applications, such as voice communications, the code-bit rate must be increased by a factor of n/k = 1/R to avoid a reduction in data throughput. Hence, for a given modulation scheme, the transmission bandwidth increases by that same factor n/k. If, however, the communication application does not require the real-time transfer of information, then additional message delay (rather than increased bandwidth) is the usual trade-off.
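To make these relationships concrete, here is a minimal sketch computing the rate, redundancy, and bandwidth-expansion factor for the (6, 3) example above; the function name is illustrative, not from the article.

    def code_metrics(n: int, k: int) -> dict:
        """Rate, redundancy, and bandwidth-expansion factor of an (n, k) code."""
        return {
            "rate": k / n,              # R = k/n, information per code bit
            "redundancy": (n - k) / n,  # fraction of parity bits per codeword
            "expansion": n / k,         # n/k = 1/R, bandwidth (or delay) growth
        }

    print(code_metrics(6, 3))
    # {'rate': 0.5, 'redundancy': 0.5, 'expansion': 2.0}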

Represented graphically, the general error-performance characteristics of most digital communication systems have a waterfall-shaped appearance: system performance improves (i.e., the bit-error rate decreases) as the signal-to-noise ratio increases. Two such curves (the figure is not reproduced here) compare the performance of a typical system with and without forward error-correction coding. The coded system, operating with a received signal-to-noise ratio of 8 decibels, has a bit-error rate smaller by a factor of 100 than the uncoded system at the same signal-to-noise ratio.

Viewed another way, the graphs indicate that the coded system can achieve the same bit-error rate as the uncoded system at a lower signal-to-noise ratio. This reduction in required signal-to-noise ratio, called the coding gain, is a common metric used to measure the performance of different coding schemes.

The importance of coding gain is evident when the system is viewed from the designer's perspective. For example, to obtain the same improvement in bit-error rate without the use of coding, a designer would have to achieve a larger signal-to-noise ratio (12 decibels instead of 8). Doing so would require larger power supplies, bigger antennas, or higher-quality components that introduce less noise. If none of these modifications can be provided, then the designer will have to tolerate some type of performance degradation, such as reduced service range or lower operating margins, to obtain the same improvement.
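As a quick check of the arithmetic, the coding gain is simply the difference between the two required signal-to-noise ratios; the sketch below uses the 12 dB and 8 dB figures quoted above.

    # Coding gain: the reduction in required SNR at a fixed bit-error rate.
    snr_uncoded_db = 12.0  # SNR the uncoded system needs for the target BER
    snr_coded_db = 8.0     # SNR the coded system needs for the same BER

    coding_gain_db = snr_uncoded_db - snr_coded_db
    print(f"coding gain = {coding_gain_db} dB")  # coding gain = 4.0 dB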

Types

The two main categories of FEC codes are block codes and convolutional codes.

Block codes work on fixed-size blocks (packets) of bits or symbols. Practical block codes can generally be decoded in time polynomial in their block length. Convolutional codes work on bit or symbol streams of arbitrary length. They are most often decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code can be turned into a block code, if desired, by "tail-biting".

There are many types of block codes, but among the classical ones the most notable is Reed-Solomon coding, because of its widespread use on the compact disc, the DVD, and in hard disk drives. Golay, BCH, multidimensional parity, and Hamming codes are other examples of classical block codes.

In coding theory, block codes are one of the two common types of channel codes (the other one being convolutional codes), which enable reliable transmission of digital data over unreliable communication channels subject to channel noise.

A block code transforms a message m consisting of a sequence of information symbols over an alphabet Σ into a fixed-length sequence c of n encoding symbols, called a code word. In a linear block code, each input message has a fixed length of k < n input symbols. The redundancy added to a message by transforming it into a larger code word enables a receiver to detect and correct errors in a transmitted code word, and, using a suitable decoding algorithm, to recover the original message. The redundancy is described in terms of its information rate or, more simply for a linear block code, in terms of its code rate, k/n.
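As a concrete illustration of block encoding, the sketch below uses the classic (7, 4) Hamming code (an assumed example; the article does not single out a particular code). A k-bit message becomes an n-bit code word by multiplication with a generator matrix over GF(2).

    import numpy as np

    # Generator matrix for the (7, 4) Hamming code in systematic form [I | P];
    # this particular parity arrangement is one common convention.
    G = np.array([
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ])

    def encode(message: np.ndarray) -> np.ndarray:
        """Map a k-bit message to an n-bit code word: c = m G (mod 2)."""
        return message @ G % 2

    m = np.array([1, 0, 1, 1])
    print(encode(m))  # [1 0 1 1 0 1 0]; the first 4 bits are the message itself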

The error-correction performance of a block code is described by the minimum Hamming distance d, taken over all pairs of distinct code words; this quantity is called the distance of the code.
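For a small code, the distance can be verified by brute force. The sketch below continues the (7, 4) Hamming example assumed above; for a linear code, the minimum pairwise distance equals the minimum weight of a nonzero code word.

    from itertools import product

    import numpy as np

    G = np.array([
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ])

    # Enumerate all 2^k code words and take the minimum nonzero Hamming weight;
    # for a linear code this equals the minimum distance between code words.
    codewords = [np.array(msg) @ G % 2 for msg in product([0, 1], repeat=4)]
    d_min = min(int(c.sum()) for c in codewords if c.any())
    print(d_min)  # 3: the code detects 2-bit errors and corrects 1-bit errors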

In telecommunication, a convolutional code is a type of error-correcting code in which:

- each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m), and

- the transformation is a function of the last k information symbols, where k is the constraint length of the code.

Convolutional codes are used extensively in numerous applications to achieve reliable data transfer, including digital video, radio, mobile communication, and satellite communication. These codes are often implemented in concatenation with a hard-decision code, particularly Reed-Solomon. Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.

Convolutional encoding

To convolutionally encode data, start with k memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate, where the logic is 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0) and n generator polynomials, one for each adder. An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits. Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m-1) and wait for the next input bit. If there are no remaining input bits, the encoder continues to output bits until all registers have returned to the zero state.
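This procedure maps directly onto a few lines of code. Below is a minimal sketch of a rate-1/2 encoder with constraint length 3 and the common generator polynomials 7 and 5 in octal; these parameters are an assumed example, not ones the text specifies.

    def conv_encode(bits, generators=(0b111, 0b101), constraint_len=3):
        """Rate-1/n convolutional encoder built from a shift register.

        Each generator is a bit mask selecting which register taps feed
        one modulo-2 adder (an XOR over the tapped bits).
        """
        state = 0  # shift register contents, all zeros initially
        out = []
        # Appending (constraint_len - 1) zero bits flushes the register
        # back to the all-zero state, as described above.
        for bit in list(bits) + [0] * (constraint_len - 1):
            # Shift right; the new bit enters at the leftmost position.
            state = (state >> 1) | (bit << (constraint_len - 1))
            for g in generators:
                out.append(bin(state & g).count("1") % 2)  # modulo-2 adder
        return out

    print(conv_encode([1, 0, 1, 1]))
    # [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]  (pairs: 11 10 00 01 01 11)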
