Sunday, April 10, 2011
Error detection schemes
Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.
There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors).
Random-error-correcting codes based on minimum distance coding can provide a suitable alternative to hash functions when a strict guarantee on the minimum number of errors to be detected is desired. Repetition codes, described below, are special cases of error-correcting codes: although rather inefficient, they find applications for both error correction and detection due to their simplicity.
A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data is divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, thus producing "1011 1011 1011". However, if this twelve-bit pattern was received as "1010 1011 1011" – where the first block is unlike the other two – it can be determined that an error has occurred. Repetition codes are not very efficient, and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., "1010 1010 1010" in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions of numbers stations.
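The scheme above can be sketched in a few lines. This is a minimal illustration, not a library implementation; the function names are made up.

```python
# Sketch of a triple-repetition code for error detection.

def repeat_encode(block: str, times: int = 3) -> list[str]:
    """Transmit the same bit block a fixed number of times."""
    return [block] * times

def repeat_check(received: list[str]) -> bool:
    """Detect an error: all received copies must be identical."""
    return all(copy == received[0] for copy in received[1:])

# The four-bit block "1011" repeated three times:
sent = repeat_encode("1011")                      # ["1011", "1011", "1011"]
assert repeat_check(sent)

# A corrupted first copy is detected...
assert not repeat_check(["1010", "1011", "1011"])
# ...but the same error in every copy slips through undetected:
assert repeat_check(["1010", "1010", "1010"])
```

Note the last assertion reproduces the weakness described above: identical corruption in every copy looks like a valid transmission.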
A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Extensions and variations on the parity bit mechanism are horizontal redundancy checks, vertical redundancy checks, and "double," "dual," or "diagonal" parity (used in RAID-DP).
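A short even-parity sketch makes the even/odd distinction concrete (illustrative code, not any particular library's API):

```python
# Even parity: the added bit makes the total count of 1s even.

def even_parity_bit(bits: str) -> str:
    """Return the parity bit that makes the total number of 1s even."""
    return "1" if bits.count("1") % 2 else "0"

def parity_ok(codeword: str) -> bool:
    """An even-parity codeword must contain an even number of 1s."""
    return codeword.count("1") % 2 == 0

word = "1011001"
codeword = word + even_parity_bit(word)           # "10110010"
assert parity_ok(codeword)

# One flipped bit is caught, but flipping two bits is not:
assert not parity_ok("0" + codeword[1:])          # single error detected
assert parity_ok("01" + codeword[2:])             # double error missed
```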
A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a one's-complement prior to transmission to detect errors resulting in all-zero messages.
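As a concrete instance of this, the following sketches the 16-bit one's-complement checksum (the style used by Internet protocol headers); the sample word values are made up for illustration:

```python
# One's-complement checksum over 16-bit words, with end-around carry.

def ones_complement_sum(words: list[int]) -> int:
    """Add 16-bit words, folding any carry back into the low bits."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return total

def checksum(words: list[int]) -> int:
    """Negate the sum, so an all-zero message cannot verify as correct."""
    return ~ones_complement_sum(words) & 0xFFFF

msg = [0x4500, 0x0073, 0x0000, 0x4000]
c = checksum(msg)
# Receiver check: summing the message plus its checksum yields all ones.
assert ones_complement_sum(msg + [c]) == 0xFFFF
```

The final negation is what implements the all-zero protection mentioned above: a message of all zeros sums to zero, but its transmitted checksum would be 0xFFFF, so an all-zero wire pattern fails the check.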
Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Luhn algorithm and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.
Cyclic redundancy checks (CRCs)
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is characterized by specification of a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, and where the remainder becomes the result.
Cyclic codes have favorable properties in that they are well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives.
Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x+1.
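This relationship can be checked directly: dividing by the generator x + 1 (bit pattern "11") over GF(2) leaves a one-bit remainder equal to the message's even-parity bit. The division routine below is a generic textbook sketch, not a production CRC.

```python
# Polynomial long division over GF(2): XOR the generator in wherever
# the current leading bit is 1; the trailing bits are the remainder.

def crc_remainder(bits: str, generator: str) -> str:
    pad = len(generator) - 1
    work = list(bits + "0" * pad)        # append zeros for the remainder
    for i in range(len(bits)):
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-pad:])

def even_parity_bit(bits: str) -> str:
    return "1" if bits.count("1") % 2 else "0"

# The single-bit CRC with generator x + 1 is exactly the even-parity bit:
for msg in ("1011", "1100", "1111011"):
    assert crc_remainder(msg, "11") == even_parity_bit(msg)
```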
Cryptographic hash functions
A cryptographic hash function can provide strong assurances about data integrity, provided that changes of the data are only accidental (i.e., due to transmission errors). Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is infeasible to find some input data (other than the one given) that will yield the same hash value. Message authentication codes, also called keyed cryptographic hash functions, provide additional protection against intentional modification by an attacker.
Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d-1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired.
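The d−1 guarantee can be demonstrated with a toy codebook (the codewords below are purely illustrative):

```python
# Minimum Hamming distance of a codebook determines how many bit
# errors are guaranteed detectable: d - 1.

from itertools import combinations

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

# A tiny illustrative codebook of four 5-bit codewords.
codebook = ["00000", "01011", "10101", "11110"]

d = min(hamming(a, b) for a, b in combinations(codebook, 2))
assert d == 3        # so any 1- or 2-bit error is guaranteed detectable

# Flipping up to d - 1 = 2 bits of a codeword can never land on
# another valid codeword, so the receiver always notices.
corrupted = "01011"[:3] + "00"       # two trailing bits flipped
assert corrupted not in codebook
```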
Codes with minimum Hamming distance d=2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.
The Berger code is an early example of a unidirectional error-detecting code that can detect any number of errors on an asymmetric channel, provided that only transitions of cleared bits to set bits, or of set bits to cleared bits, can occur.
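A Berger code appends the count of 0 bits in the data as its check part; the sketch below (with an assumed fixed check width) shows why purely one-directional errors are always caught: a 1→0 error in the data raises its zero count while leaving the stored count untouched, so they can no longer match.

```python
# Hedged sketch of a Berger code with a fixed 3-bit check field.

def berger_encode(data: str, check_len: int = 3) -> str:
    """Append the binary count of 0 bits in the data."""
    zeros = data.count("0")
    return data + format(zeros, f"0{check_len}b")

def berger_ok(word: str, data_len: int) -> bool:
    data, check = word[:data_len], word[data_len:]
    return data.count("0") == int(check, 2)

word = berger_encode("10110")             # two zeros -> check "010"
assert berger_ok(word, 5)

# Unidirectional 1 -> 0 errors in the data are detected, however many:
assert not berger_ok("10010" + "010", 5)  # one 1 dropped to 0
assert not berger_ok("00000" + "010", 5)  # every 1 dropped to 0
```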
Error detecting methods
The most popular error detecting methods are:
- Parity check method
- Cyclic Redundancy check method (CRC)
Parity Check Method
Errors may occur in recording data on magnetic media due to bad tracks or sectors on the recording surface. Errors may also be caused by electrical disturbances during data transmission between two distant computers. It is thus necessary to devise methods to guard against such errors. The main principle used for this purpose in coded data is the introduction of extra bits in the code to aid error detection.
A common method is the use of a parity check bit along with each character code to be transmitted. As a simple example of an error-detecting code, consider a code in which a single parity bit is appended to the data. The parity bit is chosen so that the number of 1 bits in the codeword or character code to be transmitted or recorded is even or odd.
For example, when 10110101 is sent in even parity by adding a bit at the end, it becomes 101101011, whereas 10110001 becomes 101100010 with even parity. A code with a single parity bit has a distance of 2, since any single-bit error produces a codeword with the wrong parity, so it can be used to detect single errors. Two errors cannot be detected by this scheme, as the total number of 1s in the code will remain even after two bits change. As the probability of more than one error occurring is in practice very small, this scheme is commonly accepted as sufficient.
Instead of appending a parity check bit which makes the total number of 1s in the code even, one may choose to append a parity check bit which makes the number of 1s in the code odd. Such a parity check is known as an odd parity bit. This scheme also facilitates detection of a single error in a code.
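The worked examples above can be verified mechanically; the snippet below is a minimal sketch of the even-parity transmitter side.

```python
# Append an even-parity bit, reproducing the worked examples above.

def with_even_parity(bits: str) -> str:
    """Append the bit that makes the total number of 1s even."""
    return bits + ("1" if bits.count("1") % 2 else "0")

assert with_even_parity("10110101") == "101101011"   # 5 ones -> add a 1
assert with_even_parity("10110001") == "101100010"   # 4 ones -> add a 0

# A single flipped bit leaves an odd number of 1s, so it is detected;
# flipping two bits keeps the count even and goes unnoticed.
```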
Cyclic Redundancy check method (CRC)
Another popular method in widespread use for error detection is the polynomial code, also known as the cyclic redundancy code or CRC code. CRC codes are based upon treating bit streams as representations of polynomials with coefficients of zero and one only.
A k-bit frame is regarded as the coefficient list for a polynomial with k terms, ranging from x^(k-1) to x^0. Such a polynomial is said to be of degree k-1. The high-order (leftmost) bit is the coefficient of x^(k-1), the next bit the coefficient of x^(k-2), and so on. For example, 110001 has six bits and thus represents a six-term polynomial with coefficients 1, 1, 0, 0, 0, and 1: x^5 + x^4 + 1.
When the polynomial code method is employed, the sender and receiver must agree upon a generator polynomial, G(x), in advance. Both the high- and low-order bits of the generator must be 1. To compute the checksum for some frame with m bits, corresponding to the polynomial M(x), the frame must be longer than the generator polynomial.
The idea is to append a checksum to the end of the frame in such a way that the polynomial represented by the checksummed frame is divisible by G(x). When the receiver gets the checksummed frame, it tries dividing it by G(x). If there is a remainder, there has been a transmission error!
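The whole procedure can be sketched end to end: append zeros for the checksum, divide by G(x) over GF(2), transmit the frame with the remainder appended, and verify divisibility on receipt. The frame and generator below are illustrative examples, not a standard.

```python
# CRC transmit/receive sketch using GF(2) polynomial long division.

def gf2_remainder(bits: str, gen: str) -> str:
    """Remainder of (bits * x^(len(gen)-1)) divided by gen, over GF(2)."""
    pad = len(gen) - 1
    work = list(bits + "0" * pad)
    for i in range(len(bits)):
        if work[i] == "1":
            for j, g in enumerate(gen):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-pad:])

frame, gen = "11010011101100", "1011"     # G(x) = x^3 + x + 1 (example)
check = gf2_remainder(frame, gen)
sent = frame + check                      # checksummed frame

# Receiver: the checksummed frame divides evenly by G(x)...
assert set(gf2_remainder(sent, gen)) == {"0"}
# ...while a transmission error leaves a nonzero remainder.
corrupted = "0" + sent[1:]
assert set(gf2_remainder(corrupted, gen)) != {"0"}
```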
Three polynomials have become international standards:
CRC-12 = x^12 + x^11 + x^3 + x^2 + x + 1
CRC-16 = x^16 + x^15 + x^2 + 1
CRC-CCITT = x^16 + x^12 + x^5 + 1
All three contain x + 1 as a prime factor. CRC-12 is used when the character length is 6 bits; the other two are used for 8-bit characters. A 16-bit checksum, such as CRC-16 or CRC-CCITT, catches all single and double errors, all errors with an odd number of bits, all burst errors of length 16 or less, 99.997 percent of 17-bit error bursts, and 99.998 percent of 18-bit and longer bursts.
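One of these claims (every single-bit error is caught) can be spot-checked on a short frame with the CRC-CCITT generator; the 16-bit sample frame is made up for the demonstration.

```python
# Exhaustively flip each bit of a CRC-CCITT-checksummed frame and
# confirm that every single-bit error yields a nonzero remainder.

CCITT = "10001000000100001"        # bits of x^16 + x^12 + x^5 + 1

def gf2_remainder(bits: str, gen: str) -> str:
    """Plain GF(2) division; the caller appends any needed padding."""
    work = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if work[i] == "1":
            for j, g in enumerate(gen):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-(len(gen) - 1):])

frame = "1100101011110000"                         # illustrative data
check = gf2_remainder(frame + "0" * 16, CCITT)
sent = frame + check
assert set(gf2_remainder(sent, CCITT)) == {"0"}    # verifies as correct

# Every possible single-bit flip in the 32-bit frame is detected:
for i in range(len(sent)):
    flipped = sent[:i] + ("0" if sent[i] == "1" else "1") + sent[i + 1:]
    assert set(gf2_remainder(flipped, CCITT)) != {"0"}
```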