Vector Symbol Concatenated Code Decoding with Symbol Erasures, Errors and List Decisions.
- Format: PDF
- Size: 720.99 KB
- Pages: 22
vector / Compiling Code with DaVinci

I. What is a vector?
`vector` is a container in the C++ STL (Standard Template Library) that can store objects of any one type. It is a dynamic array that grows or shrinks automatically as needed. Internally, a vector stores its elements in contiguous memory, so random access is fast.

II. Why use vector?
1. Dynamic size: a vector grows or shrinks automatically, with no manual memory management.
2. Efficient access: because elements are stored contiguously, random access performs well.
3. Any element type: a vector can hold built-in types, user-defined classes, and so on.
4. Rich operations: vectors support insertion, deletion, search, and other operations.

III. How to use vector?
1. Header file
```c++
#include <vector>
```
2. Declaration and initialization
```c++
// Declare a vector of int
std::vector<int> v;
// Declare and initialize a vector of int
std::vector<int> v2{1, 2, 3};
// Declare and initialize a vector of std::string
std::vector<std::string> v3{"hello", "world"};
```
3. Inserting elements
```c++
// Append an element at the end
v.push_back(4);
// Insert an element at a given position
v.insert(v.begin() + 1, 5);
```
4. Removing elements
```c++
// Remove the last element
v.pop_back();
// Remove the element at a given position
v.erase(v.begin() + 1);
```
5. Accessing elements
```c++
// First element
int first = v.front();
// Last element
int last = v.back();
// Element at a given index, with bounds checking
int third = v.at(2);
// Element at a given index, without bounds checking
int second = v[1];
```
6. Iterating over a vector
```c++
for (auto it = v.begin(); it != v.end(); ++it) {
    std::cout << *it << " ";
}
for (auto& i : v) {
    std::cout << i << " ";
}
```
IV. How to compile code with DaVinci?
DaVinci is professional video-editing software that supports many encoding formats.
C/C++ Compiler Error Messages

The C language is provided to the user as a set of functions that are convenient to call, together with loop and conditional statements that control program flow, so that programs can be fully structured. Below is a collection of common C/C++ compiler error messages.

Compiler error C2001 — newline in constant. A string constant cannot continue onto a second line unless you do one of the following: end the first line with a backslash, or close the string on the first line with a double quote and reopen it on the next line with another double quote. Simply ending the first line is not enough.

Compiler error C2002 — invalid wide-character constant. The multibyte character constant is illegal. Check the following possible causes to fix it:
1. The wide-character constant contains more bytes than needed.
2. The standard header file STDDEF.H was not included.
3. Wide characters cannot be concatenated with ordinary strings.
4. A wide-character constant must be preceded by the character "L".

Compiler error C2003 — expected 'defined id'. An identifier must follow the preprocessor keyword.

Compiler error C2004 — expected 'defined(id)'. The identifier must appear in parentheses after the preprocessor keyword. This error can also be generated as a result of compiler conformance work done for Visual Studio .NET 2003: a parenthesis missing from a preprocessor directive. If a preprocessor directive is missing a closing parenthesis, the compiler generates an error.

Compiler error C2005 — #line expected a line number, found 'token'. The #line directive must be followed by a line number.

Compiler error C2006 — 'directive' expected a filename, found 'token'. Directives such as #include or #import require a filename. To resolve the error, make sure token is a valid filename, and enclose it in double quotes or angle brackets.

Compiler error C2007 — #define syntax. No identifier appears after #define. To resolve the error, supply an identifier.

Compiler error C2008 — 'character': unexpected in macro definition. The character appears immediately after the macro name. To resolve the error, there must be a space after the macro name.
Using OpenCV in Python 3 to Locate the Gap in a Slider-Puzzle CAPTCHA

Foreword
The difficulty with slider-puzzle CAPTCHAs is that the gap appears in a different place in each image, so the gap position in the puzzle must be detected from the image. Python's OpenCV library can do this.

Environment setup
Install opencv-python with pip: pip install opencv-python

OpenCV (Open Source Computer Vision Library) is an open-source computer vision library that provides many methods for processing images and video. It provides a method, matchTemplate(), that searches a larger image for a smaller image and computes the similarity between the small image and each region of the large one. The call returns a two-dimensional array (a numpy ndarray object) from which the coordinates of the best-matching region can be obtained. This fits the slider-CAPTCHA scenario: the background image is the large image and the slider is the small one.

Preparing the two images
First cut out the two images, background.png and target.png.

Computing the gap position

```python
import cv2
# Author: Shanghai Youyou. QQ group: 730246532
# blog: https:///yoyoketang/


def show(name):
    """Display the circled region"""
    cv2.imshow('Show', name)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


def _tran_canny(image):
    """Reduce noise"""
    image = cv2.GaussianBlur(image, (3, 3), 0)
    return cv2.Canny(image, 50, 150)


def detect_displacement(img_slider_path, image_background_path):
    """detect displacement"""
    # The 0 flag reads the image in grayscale
    image = cv2.imread(img_slider_path, 0)
    template = cv2.imread(image_background_path, 0)

    # Find the best match
    res = cv2.matchTemplate(_tran_canny(image), _tran_canny(template), cv2.TM_CCOEFF_NORMED)
    # Minimum value, maximum value, and the index of each
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    top_left = max_loc[0]  # x coordinate

    # Show the matched region
    x, y = max_loc            # x, y position
    w, h = image.shape[::-1]  # width and height
    cv2.rectangle(template, (x, y), (x + w, y + h), (7, 249, 151), 2)
    show(template)
    return top_left


if __name__ == '__main__':
    top_left = detect_displacement("target.png", "background.png")
    print(top_left)
```

Result
The region circled in black marks the detected gap position. Once debugging is done, remove the display code:

```python
    # Show the matched region
    # x, y = max_loc            # x, y position
    # w, h = image.shape[::-1]  # width and height
    # cv2.rectangle(template, (x, y), (x + w, y + h), (7, 249, 151), 2)
    # show(template)
```

Only the x coordinate of the gap is needed; the distance from the left edge, top_left, is 184.
create symbol illegal element

Summary:
1. Background and motivation for creating illegal elements
2. Concrete steps for creating illegal elements
3. Cautions when creating illegal elements
4. A practical example of creating illegal elements
5. Conclusion: the importance and necessity of creating illegal elements

Body:

I. Background and motivation
In programming, an illegal symbol element is an invalid character or a syntactic construct that violates the language's rules. Such illegal elements can make a program crash, run incorrectly, or fail to execute at all. To deal with these problems, programmers deliberately construct illegal elements in test code, so that errors in programs can be recognized and fixed more reliably.

II. Concrete steps
1. Learn the language's syntax rules: different programming languages have different rules, so you must first know the specification of the language in use in order to judge which elements are illegal.
2. Write test code: write test code containing illegal elements to simulate error situations that could occur in real programs.
3. Run the test code: run it and observe whether the program executes normally. If the program raises an error or exception, the illegal element in the test code was created successfully.
4. Analyze the cause: for each error or exception, analyze its cause to find the exact location and effect of the illegal element.
5. Fix the illegal element: based on the analysis, repair the illegal element so that the program runs correctly.

III. Cautions
1. Keep the test environment safe: while creating illegal elements, make sure the test environment is protected, so the illegal elements cannot damage the program or the system.
2. Follow coding standards: even though illegal elements are created deliberately to simulate errors, keep good programming habits.
3. Fix illegal elements promptly: once an illegal element is found, repair it quickly so it does not affect normal operation.

IV. A practical example
In Python, a simple test program containing an illegal element:

```python
def illegal_function():
    # Illegal element: adding an int to a str
    x = "global"
    y = 10 + x
    print(y)


# A normal function
def legal_function():
    x = 10
    y = x + 5
    print(y)


# Call the faulty function
illegal_function()
# Call the normal function
legal_function()
```

In the code above, the expression `10 + x` in `illegal_function` is the illegal element: since `x` holds the string "global", adding it to the integer 10 raises a TypeError at runtime, so the program stops before `legal_function` is reached.
Vector Symbol Concatenated Code Decoding with Symbol Erasures, Errors and List Decisions

John J. Metzner
Pennsylvania State University
Department of Computer Science and Engineering
University Park, Pennsylvania 16802
Email: metzner@

Abstract. Vector Symbol Decoding (VSD) is compared with Reed-Solomon Decoding (RSD) as an outer code of a concatenated code for two examples, where the inner decoder may provide a combination of errors, erasures or list decisions for a symbol. One example is for idealized orthogonal inner code signals with random noise and soft inner code decisions. The other example is for multiple access fast frequency or pulse position hopping within the inner symbol. VSD with a randomly-chosen code shows an advantage over RSD error-erasure decoding for the examples.

Index terms: Vector symbol decoding, List decoding, Reed-Solomon decoding, Concatenated codes

I. Introduction

Vector Symbol Decoding (VSD) for the outer code of a concatenated code has been described in [1-6]. The code symbols are vectors in F^r over a finite field F. Assume that F is GF(2). The r-bit vectors are r data bits from the decisions of an inner code, and the vector symbol code is the outer code of a concatenated code. List decoding is an old concept [7,8] in which interest has reawakened recently [9-15]. The work on list (block) decoding is based on a list of candidate code words for the whole block. Recent references [11-15] report significant gains, but the methods become very complex if the list size is greater than about 32. If there were 30 ambiguous inner code symbols, each with a list of 2, the whole-block list would have 2^30 candidates, which is far too many at the code word level. In [4,5], it was shown how a vector symbol decoder could, in most cases, with little extra effort, automatically discover a correct symbol value from a short list of alternative decisions for each vector symbol, as supplied by the inner code.
Thus, discovery of correct values on the 30 lists occurs easily. For some symbols it is better to call the symbol an erasure than to provide a list decision. No code can be better than a Maximum Distance or Reed-Solomon (RSD) code at decoding pure erasures. It was shown previously in [5] that VSD with a randomly-chosen code can have a lower decoded error rate than RSD, where RSD corrects up to the guaranteed correction capability, with the same size symbols, same symbol error probability, and same rate and block length. But this work allowed VSD to use lists, while not allowing RSD to use inner-symbol erasures. Thus it is interesting to see how VSD and RSD compare with a mixture of erasures and errors. This paper makes such a comparison, using a randomly-chosen VSD code. The inner code and error statistics are idealized, but provide a convenient analytical means of comparison.

II. Review of VSD, with and without lists

The outer code is an (n, k) code where each of the n symbols is an r-bit vector. Let s = n−k. An s × n parity-check matrix H, usually with entries over GF(2), is chosen. In decoding without lists or erasures, the outer decoder is given an r-bit decision vector for each symbol, forming a received n × r matrix Y, and an s × r syndrome matrix S is then computed according to

S = HY.    (1)

The decoder performs a Gauss-Jordan reduction of S via elementary column operations, which reveals the rank of S and an error-locating vector which normally has 0's exactly in the error locations. The Gauss-Jordan operations are simple because the elements are binary, and addition of columns can be done with vector XOR. Null combinations are discovered from this as members of the row space of H that would give a zero syndrome vector. The error-locating vector is formed as the logical vector OR of the null combinations.
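The list-free decoding core just described can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the sizes are invented, the all-zero codeword is transmitted, the error vectors are forced to be linearly independent (distinct powers of two), and the null combinations are found by Gaussian elimination on the rows of S, which is equivalent to the column-operation formulation above.

```python
import random

random.seed(2)

n, s, r, t = 20, 12, 16, 3    # toy sizes: n symbols, s checks, r bits/symbol, t errors

# Random binary parity-check matrix H (s x n)
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(s)]

# Send the all-zero codeword; add t linearly independent error vectors
err_pos = sorted(random.sample(range(n), t))
Y = [0] * n
for idx, j in enumerate(err_pos):
    Y[j] = 1 << idx                      # independent by construction

# Syndrome matrix S = H*Y over GF(2); row i is an r-bit integer
S = [0] * s
for i in range(s):
    for j in range(n):
        if H[i][j]:
            S[i] ^= Y[j]

# Find "null combinations": row combinations c with c*S = 0, via
# Gaussian elimination on (row value, combination mask) pairs.
basis = {}          # leading bit -> (value, combination)
null_combos = []
for i in range(s):
    val, combo = S[i], 1 << i
    while val:
        lead = val.bit_length() - 1
        if lead not in basis:
            basis[lead] = (val, combo)
            break
        bval, bcombo = basis[lead]
        val ^= bval
        combo ^= bcombo
    else:
        null_combos.append(combo)        # this combination of rows kills S

# Each c*H is zero in every error position; the OR of these n-bit vectors
# is the error-locating vector, whose zeros normally mark the errors.
or_vec = [0] * n
for c in null_combos:
    cH = [0] * n
    for i in range(s):
        if (c >> i) & 1:
            for j in range(n):
                cH[j] ^= H[i][j]
    for j in range(n):
        or_vec[j] |= cH[j]

located = [j for j in range(n) if or_vec[j] == 0]
print("true error positions:", err_pos)
print("zeros of error-locating vector:", located)
```

Since rank(S) ≤ t, at least s − t null combinations exist, and a non-error position survives as a false zero only with probability about 2^−(s−t); with the paper's s = 32 this is why the locating vector isolates the errors reliably for moderate t.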
The decoder successfully decodes in every case where the t error vectors are of rank t and the t errors do not cover all but one position of any nonzero code word of the binary code corresponding to the check matrix H. Error locations are revealed by an error-location vector which contains exactly t zero values in the t error positions. The error values are solved for by a t × t binary sub-matrix of H. Ways also have been found [5] for correcting cases where the t errors are of rank t−1. Also, a method of transforming the data within a vector symbol has been proposed [5], which may decrease the chance of error-vector linear dependence.

In decoding with lists, the outer decoder is given, for each symbol, a first choice, and also zero or more alternative r-bit vector possibilities. VSD has a simple method of finding correct choices on the lists with negligible additional effort. The difference between each alternative choice and its first choice is written as an r-bit row appended to the bottom of the S matrix. When an alternative choice is right, this vector is the error in the symbol. When the Gauss-Jordan column operations are done on S, these extra rows participate in the column operations. This reveals whether any of the alternate-choice differences are in the row space of S. If S is of rank D, a member of the row space will have zeroes in the last r−D entries of the transformed S. The row space of S is a subset of the space of the error vectors. Let H_t be the submatrix of H corresponding to the t error positions. If H_t is of rank t, all the errors are in the row space of S. In the latter case the candidate correct choices are all revealed. This will be true whether or not the errors are linearly independent. When errors are discovered by the list method they are corrected immediately, since their exact values are revealed. Any remaining errors can be solved for, usually, by the regular error-locating vector method.
In the case where the t columns of H are of rank t−1, pairs of correct alternate choices can be recognized by the last r−D positions being identical after the Gauss-Jordan reduction. These can also be corrected immediately. If a true error is discovered that is not a member of a linearly dependent set of error vectors, substitution of a correct second choice will reduce the rank of S by 1, which confirms the substitution; if the error being corrected happened to be in a dependent set, the rank would be left unchanged. But correction of an error in a dependent set may make the set independent, which would be revealed by an increase in the number of zeroes in the error-location vector.

There also is a (usually small) chance 2^−(r−D) that a false difference will happen to be in the space, but this will often be contradictory to its position as well as rare. If not otherwise discovered, it adds an error that is already in the space. However, adding a dependent error in most cases reduces the number of zeroes in the error-location vector to a value less than the rank D. This would be another indication of a false correction. To reach a correctable stage after the alternate-choice discoveries, the new rank D should equal the new number of zeroes in the error-location vector. If the remaining t errors are not linearly independent, but of rank t−1, there is still a possibility of decoding by the method in [5].

It will be assumed that the code matrix is obtained by selecting a random s × n matrix, where each entry is equally likely, independently, to be 0 or 1. Of course many codes will perform better than the average. The decoding complexity is not greatly affected by choosing a code at random, although a cyclic code would simplify some steps. It is possible that a row or a column will consist entirely of 0's. In actuality,
For large n and s, these choices which occur with probabilities 2-n and 2-s for a zero row or column, respectively, have negligible probability. However, with erasures, some rows and columns of H get deleted, and then there could be a zero column even though there was none originally. III. VSD with erasures as well as errors.Assume the s x n matrix is of rank s, which is almost certain for a (255,223) randomly-chosen code in the examples. With symbol lists there is flexibility in drawing the line for an erasure. Some symbols are clearly erasures. In some cases, there may be a modest size list where the probability of correct first choice is a bit less than one-half. It might be called an erasure; but if the number of erasures is too great, the erasure could be changed to a decision for a second try. Suppose there are e erasures. For ease of explanation assume they are in the first e positions. Due to the random choice of matrix, the positions of the erasures have no relevance for average performance. Let H e represent the submatrix consisting of the columns of H corresponding to the erasure positions. Let D e be the rank of H e . Usually, D e < e will be taken as a decoding failure, but we will see later that VSD can sometimes decode when certain erasures are turned into decisions. By row operations H can be transformed as follows:I D e is a D e x D e identity matrix. The reduced H(s-D e , n-e) can use VSD decoding to correct t errors in the unerased positions. Once these are corrected, if D e = e, the erased symbols can be found. If D e < e,there is a failure unless a second try is made with some erasures converted into decisions with lists.IV. VSD outer code failure probability analysis with errors and erasures.Failure probability for VSD is computed as the average over all choices of (n-k) by n binary H matrices.Actually there are two basic sources of failure. One is related to the rank of a subset of the H matrix columns. 
The other is related to the error vector patterns. In VSD without lists, a main concern was the linear independence of the vector symbols. With lists, the ability to discover correct second choices does not require linear independence of error vectors. It depends rather on an error vector being in the row vector space of the syndrome matrix. Only after correct second choices are made does linear independence matter for correcting the remaining errors.

Consider the case where there are e erasures and t first-choice errors. One contribution to the failure event is Rank{H_e} < e. Define this failure probability contribution as P_e and let s = n−k. For a randomly-chosen H,

P_e = 1 − (1 − 2^−s)(1 − 2^(1−s)) ··· (1 − 2^((e−1)−s)).    (2)

As stated in III, it is possible to succeed even if Rank(H_e) = D_e < e. This possibility will be pursued further in Example 2, but in the computations we assume failure according to (2). If this failure event doesn't occur, the problem reduces to decoding an (n−e, k) random vector symbol code with t errors. If t = 0, the erased positions can be filled in and built-in error detection can be used to verify correctness. If t > 0, let H_t(s−e, t) be the columns of H(s−e, n−e) that correspond to the t error positions. Also, let p_2 be the probability that the correct choice is not on the list for a symbol, given that the first choice is not correct. There are then the following failure contributions.

a) Rank{H_t(s−e, t)} = t. Define this probability as Q_{e,t}. Since in this case all errors are in the row space of S, all correct alternate choices are found, reducing the number of errors to t′ (neglect the chance of mistaken corrections, since this is highly unlikely unless e + t is close to 32). Decoding then fails under the following condition: one of the other n−e−t columns of H(s−e, n−e) is in the column space of H_{t′}(s−e, t′). Define P_{t,t′} as the conditional failure probability given Rank{H_t(s−e, t)} = t. Then, with the aid of the inequality 1 − (1−x)^n ≤ nx, this conditional probability reduces to the bound of (5).
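The rank-based failure contributions above rest on a standard fact: a random s × e binary matrix has full column rank e with probability ∏_{i=0}^{e−1}(1 − 2^(i−s)). The sketch below, at invented toy sizes (s = 4, e = 3), verifies the closed form by exhaustively enumerating all 2^(s·e) matrices.

```python
from itertools import product

s, e = 4, 3   # toy sizes: s check rows, e erased columns

# Closed form: P(full column rank) for a random s x e binary matrix
p_full = 1.0
for i in range(e):
    p_full *= 1.0 - 2.0 ** (i - s)

def gf2_rank(cols):
    """Rank over GF(2) of columns given as s-bit integers."""
    basis = {}
    rank = 0
    for c in cols:
        while c:
            lead = c.bit_length() - 1
            if lead not in basis:
                basis[lead] = c
                rank += 1
                break
            c ^= basis[lead]
    return rank

# Exhaustive check over all 2^(s*e) matrices (each column is an s-bit value)
full = sum(1 for cols in product(range(2 ** s), repeat=e) if gf2_rank(cols) == e)
frac = full / 2.0 ** (s * e)
print(f"exhaustive: {frac:.9f}   closed form: {p_full:.9f}")
```

With the paper's s = 32, the rank-loss probability P_e = 1 − ∏(1 − 2^(i−s)) is roughly 2^(e−s), i.e., negligible until the number of erasures e approaches 32.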
Define this as Q e, t-1Rank {H t (s-e,t)} = t -1 corresponds to the errors covering exactly one code word of theH(s-e, n-e) binary associated code. Let P t, t-1 be the probability of failure given this event. The covered code word is of some weight w. If w>1,VSD was shown [4] capable of correcting the errors if two or more of the w positions have a correct alternate choice on their list. There will be a failure if there are not at least two correct second choices among the w positions covering a code word of weight w.First, let P[no two] be the probability that there won’t be at least 2 correct second choices among w.With a randomly chosen code,7Equation (7) reduces toIf there are at least two correct second choices, these two will reduce the number of errors to t-2, and H t-2 will be rank t-2. Then, ignoring that other errors among the t-w not covered might also be corrected,For example 1 it will be assumed that all cases where rank {H t (s-e,t)} < t-1 result in failure, even though some cases could be decoded. However, for example 2, where the probability of a correct alternative choice being on the list is much greater, the case of rank {H t (s-e,t)} = t-2 will be included.Thus, for e xample 1, the probability of failure due to e errors and t erasures isV. Decision lists and erasures for a symbol.How large a list for each symbol should the VSD use? Each false list member has a small chance of causing a misinterpretation that could turn a correct symbol decision into an incorrect decision. For some symbol value decided with high confidence, the list size should be one. For less certain decisions the list size for the symbol could vary from two up to some maximum.Possible alternatives should not be added to the list if they do not add significantly to theprobability the correct choice is on the list, or if the maximum size is reached. 
If a maximum-size list is not sufficiently likely to contain the correct value, an erasure should be recorded. For codes with a certain guaranteed error-correcting capability based on minimum distance, one error is roughly equivalent to two erasures. In the case of trying to make a single decision about a particular symbol, if the estimated probability that the decision is correct is less than 1/2, it would be better to record an erasure, while if it is greater than 1/2 it would be better to record that decision. With VSD the equivalence of one error to two erasures is not valid.

The value of VSD and alternative symbol list decisions depends on the channel and the inner coding or modulation method. To illustrate the tradeoff of lists and erasures, and as a basis for comparison, we will consider two idealized models of a particular inner symbol code. The outer code will be a (255,223) code, to match a popular form of a Reed-Solomon code. The inner code symbol size will be presumed to be 32 bits. The H matrix for the vector symbol code will be a randomly-chosen binary-entry 32 × 255 matrix.

VI. First example - inner code orthogonal signaling

Assume orthogonal signaling with coherent demodulation and Gaussian white noise. It is obviously impractical in complexity and bandwidth to communicate by sending one of 2^32 signals and to pick the best or two best using maximum-likelihood decision. However, error probability is readily computed, which allows for convenient comparison. A partial justification is that there may be practical codes for deciding 32 bits by soft decision that are almost as good as with orthogonal signaling. Normalize the noise to σ² = N_0/2 = 1 and the signal level to s, so E_s = s². Let m = 2^32, and define the score statistics as in (13). Then, with threshold th, the erasure and error probabilities (14)-(16) follow. For RSD, the threshold is chosen to minimize P(erasure) + 2·P(error); the erasure probability is from (14) (nothing above th), and the error probability is from (15) (error if the maximum is picked when ≥ 1 scores are above threshold).
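The threshold trade-off just described can be illustrated with a toy Monte Carlo. All parameters here are invented for illustration: m is reduced from 2^32 to 64 signals so the simulation is fast, the correct matched-filter score is modeled as N(s, 1) and the m − 1 wrong scores as N(0, 1), matching the normalization σ² = 1.

```python
import random

random.seed(7)

m, s, th, trials = 64, 4.0, 2.0, 3000   # toy: 64 signals, amplitude 4, threshold 2

n_erase = n_err = n_ok = 0
for _ in range(trials):
    correct = random.gauss(s, 1.0)
    best_wrong = max(random.gauss(0.0, 1.0) for _ in range(m - 1))
    if correct <= th and best_wrong <= th:
        n_erase += 1        # nothing above threshold: erasure
    elif best_wrong > correct:
        n_err += 1          # a wrong signal wins among above-threshold scores
    else:
        n_ok += 1           # correct signal wins

p_erase, p_err = n_erase / trials, n_err / trials
print(f"P(erasure) ~ {p_erase:.4f}, P(error) ~ {p_err:.4f}, "
      f"RSD cost P(er) + 2*P(e) ~ {p_erase + 2 * p_err:.4f}")
```

Sweeping th and minimizing p_erase + 2*p_err mimics the RSD threshold choice; for VSD one would instead track whether the correct signal lands in the top two scores, since the top two form the symbol list.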
The failure probability for RSD is the probability that e + 2t ≥ 33. For VSD, e + t is a more important parameter, unless p_2 is small, which it is not in this example. Thus, choose th = −4, which minimizes e + t. The top two scores form the symbol list of 2. Thus VSD observes no erasures in this example. Then the results from (15) and (16) are used in (5)-(10), with e = 0, to obtain frame failure probability. However, because of the moderate risk of false symbol "correction" when t is close to 32, it is assumed that all cases of t > 25 or 26 will fail.

Figure 1 shows the comparative performance. For VSD two curves are shown: one where all cases of t > 25 are failures, and one where all cases of t > 26 are failures. There is a small difference favoring vector symbol decoding, even though the code for vector symbol decoding is randomly-chosen and some cases that could be decoded are neglected. The random choice affects VSD performance at very low failure rates due to the contribution of bad codes to the averaging. Both performances are pretty respectable at low E_b/N_0, considering the outer code uses primarily hard decisions. However, the inner 32-bit orthogonal signal set is not practical. The outer Reed-Solomon code would need to work with four 8-bit portions of the symbols, so as to work in GF(2^8) rather than GF(2^32). In this example with orthogonal signals, an error in the 32-bit symbol has probability about 255/256 of being in error in any 8-bit section, so all 8-bit portions usually experience the same error events.

Figure 1. Comparative performance, VSD and RSD, for a (255,223) outer code.

For t = 25 first-choice errors, in the plotted range, about 30%, or 7-8 of the errors, would likely be revealed and removed prior to further decoding. However, correct first choices each would have a probability of about 1/128 (2^(D−32), where D ≤ 25 is the rank of the S matrix) of having their difference in the row space. There are 230 of these candidates for added error.
However, these potential false corrections still have a small probability of being consistent with their position. A good procedure would first remove the candidates that each reduce the rank of S by one. This would shrink the space further and reduce the false-correction risk. Another, better procedure may be to declare a list of one for cases where the top score is very high, and use the list of 2 only for doubtful cases. Then the risk of false substitution is greatly reduced since there are fewer candidates. However, it is difficult to quantify all effects, so a list of two for all symbols is assumed for simplicity.

VII. Example 2 - multiple access, interference only

The application is code-division multiple access by fast frequency or pulse-position hopping. A multi-bit inner symbol modulation which has been considered for multi-access wireless communication is an MFSK fast frequency hopping scheme [16]. In the scheme, k bits are sent by selecting one of 2^k frequencies and prescribing m data repetition intervals. The true value x, 0 ≤ x ≤ 2^k − 1, is sent as {x + h_i} modulo 2^k, where the h_i, 1 ≤ i ≤ m, form a hopping pattern known to the sender and receiver. Different senders use randomly different hopping sequences. After de-hopping, the correct frequency x will likely have a score of m or close to m hits, but there is a probability that some wrong frequency value happens to score as high as or higher than the true value due to an accumulation of hits from other senders.

To simplify the analysis, assume that there are no chip cancellations, no fading, and all senders are synchronized relative to the chip interval. A presence decision will be made in any chip that has one or more signals present, and an absence decision otherwise. To create 32 bits for the inner code, consider 2^8 frequencies or pulse positions to carry 8 bits, with a hopping/repetition interval of m time instants.
There will be five such 8-bit sets, creating 40 bits, of which 32 are data bits and 8 are check bits. Assume the 8 check bits are all in the last 8-bit section, and each checks an independent random set of data bits. By the assumption of no cancellation, the correct code word will have, after de-hopping, full rows in all five sections, and the associated bit sequence will check all parities. Sometimes, a false path or paths will have full rows in all five sections, and also check parity.

A. Inner code error statistics

Say there are n simultaneous senders. In the interval of sending 32 data bits, there are 5·m·256 chips, and 32n bits are sent. The rate in bits/chip is n/(40m). Consider a chip position that is not used by the sender that is to be decoded. The chance it would be falsely filled, P_F, by a random occurrence from one or more of the n−1 other senders is

P_F = 1 − (1 − 1/256)^(n−1).    (17)

To be on the list after de-hopping, a sequence of 5m chips would have to be filled and would have to check with the 8 parity bits. There are 2^40 bit sequences, of which 2^32 would satisfy parity. Some wrong sequences will have 8-bit portions which agree with parts of the correct sequence, and thus would not need as many accidental fills. It is important to carefully enumerate the various cases. From the enumeration, the average number L_av of false list members can be computed. Since the code is chosen at random and 2^32 is a large number of valid sequences, it is a good approximation to use a Poisson distribution for the probability that the list has i wrong members, based on the computed average number of wrong members. We find L_av by adding the probability that each of the 2^32 − 1 wrong sequences that check is filled. Note that a wrong sequence that checks must depart from the right sequence in at least two blocks, because the check block can be devised to ensure that any single block is uniquely derivable from the other 4 blocks.
The successive terms in (18) represent the cases of 2, 3, 4, and 5 disagreeing blocks, in order. For example, for the first term, one of the disagreeing blocks could be any of 255 values, and the other is determined by the parity-check requirement. For the second term, the first two could be picked as any of the 255² combinations, except the ones which force the third wrong block to be the correct value; for all the remaining 255² − 255 combinations, the third block is unique for parity checking. Similar reasoning extends to the cases of 4 and 5 disagreeing blocks, and the total number of contributing false paths is 2^32 − 1, as it should be. With the Poisson approximation, the probability that the list has i wrong members is e^(−L_av)·L_av^i / i!.    (19)

B. Outer code failure events

Let VSD operate with a maximum list size of L. A list of size j will be recorded for a symbol if j checking sequences are filled, and j ≤ L; an erasure will be recorded if j > L. For RSD, L = 1, so an erasure will be recorded if j > 1 of the inner symbol code word sequences are filled. Since the correct sequence is always filled in this simple model, RSD will never experience a symbol error, but can have erasures. A (255,223) RSD code can fill in up to any 32 erasures. The probability of a symbol error is zero, and the probability of symbol erasure is the probability that more than one checking sequence is filled. For VSD, an average over all randomly-chosen (255,223) linear codes will be computed. The symbol erasure probability and the probability that the first choice is wrong follow from the Poisson model. In this example the correct symbol is always on the list when there is no erasure. Thus p_2 = 0. Also, if Rank{H_t(s−e, t)} = t, all correct alternate choices will be revealed after the Gauss-Jordan reduction. Thus, P_{t,t′} = 0, as is verified by equation (5) with p_2 = 0. Also, rank t−1 only fails if H_t(s−e, t) has a zero column.
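The Poisson model above is easy to exercise numerically. In the sketch below, L_av = 0.3 is an assumed average number of false list members (not a value computed in the paper), and the symbol statistics follow the rules just stated: recorded list size j = 1 + (number of false members filled), an erasure whenever j > L, and all filled candidates equally likely to be ranked first.

```python
import math

L_av = 0.3   # assumed average number of false list members (illustrative)
L = 2        # maximum list size kept by the VSD outer decoder

def poisson(i, lam):
    """P(i false list members) under the Poisson approximation."""
    return math.exp(-lam) * lam ** i / math.factorial(i)

# Erasure: more than L - 1 false members, so list size j = i + 1 > L
p_erase = 1.0 - sum(poisson(i, L_av) for i in range(L))

# First choice wrong: with i false members and no erasure, all i + 1 filled
# candidates are equally likely to be ranked first, so P(wrong) = i/(i + 1)
p_first_wrong = sum(poisson(i, L_av) * i / (i + 1) for i in range(1, L))

print(f"P(symbol erasure)     ~ {p_erase:.5f}")
print(f"P(first choice wrong) ~ {p_first_wrong:.5f}")
```

Setting L = 1 models RSD in this scenario: p_first_wrong becomes 0 and every filled false candidate forces an erasure, matching the statement that RSD sees erasures but never symbol errors.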
Since decoding fails for any t > 0 case where H_t(s−e, t) has one or more zero columns, it is easier to lump this into one expression (23). Cases where the rank of H_t(s−e, t) is t−2 and there are no zero columns are frequently decoded successfully, so it is worthwhile seeing when this can be done. If the rank is less than t−2, however, we will assume failure to decode. Excluding the occurrence of zero columns, we find (24). Next, we seek the fraction of these cases that cannot be decoded in Example 2. This requires the following

Theorem: Given the inner code model of Example 2, if H_t(s−e, t) is rank t−2 with no zero columns, either all error patterns will be discovered, or the number of errors will be reduced to three, and the rank of H_3(s−e, 3) will be one.

Proof: If a pair of correct alternate choices is discovered so that elimination of two errors reduces the rank by only one, decoding will be successful because the new H_{t′}(s−e, t′) will be of rank t′−1. Without changing the row space except for a reordering of column positions, the row space is spanned by t−2 independent row vectors in the form

1 0 0 0 ............. x x
0 1 0 0 ............. x x
.........................
0 0 ............. 1 0 x x
0 0 ............. 0 1 x x

In the last two positions, if any pair is 01 or 10, two corrections are made by the pair-discovery property. This yields t′ = t − 2, rank reduced by 1 to r′ = t − 3 = t′ − 1, so the decoding is successful. To fail, we need 00 or 11 pairs only. They can't all be 00, because we are considering only cases with a nonzero column. (If there were a zero column in these t−2 independent vectors that span the space, there would have to be a zero column in H_t(s−e, t).) If there are two or more pairs of 11, two rows will add to a weight-2 vector in the space, so again t is reduced by 2 while the rank is reduced by 1, to t′−1, so decoding is successful.
However, what remains is three errors, and the only remaining equation is the sum of the three error vectors. This latter event will be recorded as a failure, though the errors could be discovered with additional work.From the theorem, we can obtain an expression for the fraction of arrangements of the (t-2)x 2 binary array having nonzero columns, for which one horizontal pair is 11 and all others are 00.C. Mass substitution of second choice, L = 2.Expressions (23) - (25) are functions of t, the number of symbol errors before decoding. For this example, all members of a symbol list are equally likely. For L = 2, there is something we can do early on that reduces the average value of t substantially. We know the number of symbols z with R = 2. All symbols with R = 1 are correct, by the model. Take the first choices, compute the syndrome,16and start a Gauss-Jordan reduction. If we reach a rank greater than z/2, we know that more than half our R = 2 decisions were wrong. Reverse all z decisions and start over. Then t will be a smaller number z-r, or less, where r is the observed rank before the change. For L = 2, defineFor the minimum to be t, z must be at least 2t. Then the probability function of t, the minimum of t first and z - t first , isWhere z > 2(s-e), the rank might be the maximum s-e without exceeding z/2. In this case, it might be worthwhile trying both all first choices and all second choices, at the expense of at most doubling the decoding work. Or the second choices might be tried any time there is trouble with the original first choices.The idea of mass substitution could also be used to improve Reed-Solomon Decoding. In RSD, lists >1 are considered erasures, but if the list 2 cases are saved and the number of erasures proves to be greater than 32, additional decoding could be tried with either all first choices or all second choices. The minimum t might be small enough that e + 2t < 33, and then decoding would be possible.D. 
D. Additional correction possibilities

1) Although the case of t = 3, rank{H_3(s−e, 3)} = 1 is considered a failure, with some extra work the correct alternate choices could be found. Rank one is observed from the syndrome matrix. All nonzero rows of S will be the same nonzero value, after the number of errors has been reduced to 3. Add one such row of H and S to the other nonzero rows so that exactly one S-row is nonzero. Let that correspond to the first row of a new H. All the rows except the first row will contain 0 in the three true positions. Take the logical OR of these s−e−1 row vectors and the inversion of the first. The three positions of the true errors will be zero, and none of the other positions in the OR will be zero unless that column were all zeroes except for a 1 in inverted row 1. This has probability 2^−(s−e) for each column. If L = 2, the three indicated alternate second choices can be inserted, and the resulting syndrome should now be all zeroes. If L = 3 and some of the indicated positions have R = 3, two values for each such case would have to be tried to ensure that the sum of the three selected alternate second-choice differences adds to the single nonzero syndrome value.

2) Similar possibilities exist even if the rank is less than t−2. Say the rank is t−3. Similar to the rank t−2 argument, the t−3 independent row equations, with position rearrangements, can be chosen in the form:

1 0 0 0 ............. xxx
0 1 0 0 ............. xxx
.........................
0 0 ............. 1 0 xxx
0 0 ............. 0 1 xxx

Not all xxx's can be 000. All cases where xxx is 000 lead to correction of an error, but the new rank after all these corrections will be t′−3. If any remaining xxx is 001, 010, or 100, 2 errors will be eliminated, rank down by 1, which reduces to the case of rank deficiency two. Two identical xxx's can reduce the errors by two by combining the two equations, but the rank is reduced by only one, because the vector 000---------xxx will be independent, with xxx not all zero.
Thus in many cases the errors can be found in stages. The only cases where the H_{t′} cannot be reduced to rank t′−2 are where the xxx's have r different values, 1 ≤ r ≤ 4, all from the set 011, 101, 110, 111. This corresponds to r+3 errors with rank-r check equations. Assume the unknown error vectors are linearly independent, which is extremely likely for 7 or fewer 32-bit vectors. Diagonalize the syndrome matrix by row operations on S, and do the same operations on the corresponding rows of H(s−e, n−e).