## A7.1 Statistical testing of receiver performance

3GPP TS 51.010-1: Mobile Station (MS) conformance specification; Part 1: Conformance specification

Receiver performance can be tested either in the classical way, with a fixed minimum number of samples, or using statistical methods that lead to an early pass/fail decision, significantly reducing test time for an MS whose error rate is not close to the limit.

Statistical testing of the receiver performance is based on the evaluation of error rates, such as the bit error rate, the block error rate, or the rate of missing bad frame indications.

## A7.1.1 Basics

### A7.1.1.1 Definition of (error) events

1) Bit Error Ratio (BER)

The Bit Error Ratio is defined as the ratio of wrongly received bits to all data bits sent.

2) Block Error Ratio (BLER)

The Block Error Ratio is defined as the ratio of the number of erroneous blocks received to the total number of blocks sent. An erroneous block is defined as a Transport Block whose cyclic redundancy check (CRC) is wrong.

3) Rate of missing Bad Frame Indications (BFI)

The rate of missing Bad Frame Indications is the ratio of frames not marked as bad to all frames sent, although every frame sent is corrupted. This mechanism is used to test the Bad Frame Indication of the MS.
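As an illustration, all three (error) event ratios defined above reduce to the same events-per-samples computation; the sample counts below are hypothetical, not values from the specification:

```python
# Sketch of the three (error) event ratios, with hypothetical counts.

def error_ratio(events: int, total: int) -> float:
    """Generic ratio of (error) events to samples sent."""
    if total <= 0:
        raise ValueError("total number of samples must be positive")
    return events / total

# Bit Error Ratio: wrongly received bits / all data bits sent
ber = error_ratio(events=12, total=10_000)

# Block Error Ratio: blocks with wrong CRC / all blocks sent
bler = error_ratio(events=3, total=500)

# Rate of missing Bad Frame Indications: frames not marked as bad
# (although every frame sent was corrupted) / all frames sent
bfi_miss_rate = error_ratio(events=2, total=1_000)
```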

### A7.1.1.2 Test Method

Each test is performed in the following manner:

a) Set up the required test conditions.

b) Continuously record the number of samples tested and the number of (error) events (bit error, block error or missing BFI).

c) While recording samples and errors, continuously check whether it is time to make a decision. The possible outcomes of a decision are: early pass, early fail, continue measuring the error rates, pass, or fail.

### A7.1.1.3 Test Criteria

The test shall fulfil the following requirements:

a) good pass/fail decision with high confidence level

1) to keep reasonably low the probability (risk) of passing a bad unit for each individual test;

2) to have high probability of passing a good unit for each individual test;

b) good balance between test time and statistical significance

3) to perform measurements with a high degree of statistical significance;

4) to keep the test time as low as possible.

### A7.1.1.4 Calculation assumptions

#### A7.1.1.4.1 Statistical independence

(a) It is assumed that error events are rare (error rate close to zero) and statistically independent.

The assumption of rare events is justified by the error rates that the DUT needs to meet. Statistical independence is given because the data bits of completely transmitted bursts are evaluated with no receiver memory still active. Samples and errors are summed up after every time-slot interval, so the assumption of independent error events is justified.

(b) In error rate tests with fading, the memory of the multipath fading channel interferes with the statistical independence. A minimum test time is introduced to average out fluctuations of the multipath fading channel, so the assumption of independent error events is approximately justified.

#### A7.1.1.4.2 Applied formulas

The formulas, applied to describe the error rate test, are based on the following experiments:

(1) After having observed a certain number of (error) events (ne), the number of samples is counted to calculate the error rate. Provisions are made such that the complementary experiment is valid as well:

(2) After a certain number of samples (ns), the number of errors that occurred is counted to calculate the error rate.

Experiment (1) stipulates the use of the Chi-square distribution with 2·ne degrees of freedom, evaluated at twice the mean number of errors NE:

$$\chi^2\!\left(2N_E \mid 2n_e\right)=\frac{(2N_E)^{\,n_e-1}\,e^{-N_E}}{2^{\,n_e}\,\Gamma(n_e)}$$

where χ² denotes the Chi-square probability density.

Experiment (2) stipulates the use of the Poisson distribution:

$$P(n_e)=\frac{N_E^{\,n_e}}{n_e!}\,e^{-N_E}$$

where P(ne) is the Poisson probability of observing ne errors, with NE as the mean of the distribution (the expected number of errors).

To determine the early stop conditions, the following inverse cumulative operation is applied:

$$N_E=\tfrac{1}{2}\,C^{-1}\!\left(D \mid 2n_e\right)$$

where D is the wrong decision risk per test step. This is applicable for experiments (1) and (2).

NOTE: C⁻¹ is the inverse cumulative distribution function of the χ² distribution with 2·ne degrees of freedom (the D-quantile function). Other inverse cumulative operations are available; however, only this one is suited for both experiments (1) and (2).
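The inverse cumulative operation can be evaluated numerically without a statistics library, using the identity between the χ² CDF with 2·ne degrees of freedom and the Poisson CDF. The following sketch (the function names and the bisection approach are illustrative, not from the specification) computes NE = ½·C⁻¹(D | 2·ne):

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), summed term by term."""
    term, total = math.exp(-lam), 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return total

def half_chi2_quantile(d: float, ne: int) -> float:
    """NE = 0.5 * C^-1(d | 2*ne): half the d-quantile of the chi-square
    distribution with 2*ne degrees of freedom, found by bisection via
    P(chi2_{2*ne} <= 2*lam) = 1 - poisson_cdf(ne - 1, lam)."""
    lo, hi = 0.0, 10.0 * ne + 10.0
    for _ in range(200):  # bisect until the interval is negligibly small
        mid = 0.5 * (lo + hi)
        cdf = 1.0 - poisson_cdf(ne - 1, mid)  # chi-square CDF at 2*mid
        if cdf < d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with the document's per-step risk D = 0.0085 % and ne = 5 errors:
NE = half_chi2_quantile(0.000085, 5)
```

For ne = 1 the quantile has the closed form NE = -ln(1 - D), which can be used to verify the bisection.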

## A7.1.2 Definition of good pass fail decision

A correct pass/fail decision requires knowledge of the exact (true) error ratio of the DUT. However, the true error ratio of the DUT is generally unknown: measuring it would require evaluating an infinite number of samples, which is not possible. Any error rate measurement within limited time is therefore affected by an uncertainty, so two kinds of wrong decisions are possible: if the measured error rate is higher than the true error rate, a good DUT may be failed; conversely, if the measured error rate is lower, a bad DUT may be passed.

Error rate tests within limited time hence require the acceptance of a wrong decision risk. The measure of a good pass fail decision is given by the probability (risk) F of the wrong decision at the end of the test. The probability of a correct decision is 1-F.

Wrong decision risk F for one single error ratio test:

The probability (risk) of failing a good DUT shall be Ffail, according to the following definition: a DUT is failed, accepting a probability Ffail that the DUT is still better than the test requirement.

The probability (risk) to pass a bad DUT shall be Fpass according to the following definition: A DUT is passed, accepting a probability Fpass that the DUT is still worse than M times the specified error ratio. (M>1 is the bad DUT factor).

The wrong decision risk F explained above applies to one single error ratio test. In most test cases where only one or few error ratio tests are done the wrong decision risk acceptable for an erroneous pass is identical to the acceptable risk for an erroneous fail:

Fpass = Ffail = F, e.g. F = 0.2%

If a test is repeated several times under different conditions, the total wrong decision risk for the DUT increases. The increasing risk of a wrong fail decision is not acceptable for test cases that are composed of many single error rate tests, such as the blocking test, which implies approximately 3000 error rate tests (depending on the design of the MS). A DUT on the limit will fail approximately 6 to 7 times for purely statistical reasons (wrong decision probability at the end of the test F = 0.2%). 30 fails (6 in the inband range and 24 outside) are allowed in the blocking test, but these fails are reserved for spurious responses. This problem shall be solved by the following rules:

– All passes (based on Fpass  =  0.2%) are accepted, including the wrong decisions due to statistical reasons.

– An early fail limit based on Ffail = 0.02% instead of 0.2% is established, which ensures that wrong decisions due to statistical reasons are reduced to fewer than one.

These asymmetric test conditions ensure that a DUT on the test limit consumes hardly more test time for a blocking test than in the symmetric case and, on the other hand, discriminate sufficiently between statistical fails and spurious response cases.
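The figures above can be checked directly: with roughly 3000 independent single tests, the expected number of purely statistical wrong fail decisions is the mean of a binomial distribution. A minimal sketch (the test count of 3000 is the approximate figure from the text):

```python
# Expected number of wrong fail decisions for a DUT exactly on the limit,
# modelling each single error rate test as an independent Bernoulli trial.
n_tests = 3000  # approximate number of single error rate tests (blocking)

# Symmetric case, F = 0.2 % per test: about 6 statistical fails expected.
expected_fails_symmetric = n_tests * 0.002

# Asymmetric early fail limit, Ffail = 0.02 %: fewer than one expected.
expected_fails_asymmetric = n_tests * 0.0002
```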

Wrong decision probability D per test step:

As one single error ratio test is composed of several test steps the wrong decision probability per test step needs to be sufficiently small to keep the wrong decision risk F (the wrong decision risk at the end of the test) within the requirements. The wrong decision probability D per test step is a numerically evaluated fraction of F. Considerations regarding symmetry between probability of wrong pass and wrong fail decision are identical to those given for F.

For most test cases where only one or few error rate tests are done the wrong decision probability D per test step for a pass decision is identical to the wrong decision probability for a fail decision.

Dpass = Dfail = D, e.g. D = 0.0085%

For test cases where Fpass ≠ Ffail (e.g. blocking), this also applies to D: Dpass ≠ Dfail.

## A7.1.3 Implementation

### A7.1.3.1 Proceeding

a) Set up the required test conditions.

b) Continuously record the number of samples tested and the number of (error) events (bit error, block error or missing BFI). Calculate the preliminary error rates ber0 and ber1 from the number of samples and the number of (error) events. Regarding ber0 and ber1, refer to A7.1.3.2 Limit lines.

c) While recording samples and errors, continuously check whether it is time to make a decision. The possible outcomes of a decision are: early pass, early fail, continue measuring the error rates, pass, or fail.

– 1st decision after the minimum test time due to fading (refer to Table A7.1.4.2: Minimum test time due to fading) has elapsed. If the test runs without fading conditions, this time is zero; if this time exceeds the target test time (refer to A7.1.3.2 Limit lines), the test is already finished, requiring a pass/fail decision.

– 2nd and possibly further (early) decisions after a certain cyclic interval or the occurrence of the next error event. As long as no early decision can be made the test is continued.

– If the target test time has elapsed the test is definitively finished and a pass/fail decision can be made. In case the minimum test time due to fading exceeds the target test time this point is reached already in the 1st step.

### A7.1.3.2 Limit lines

Early decisions require that the actual error rate is checked both against a limit line for early pass and a limit line for early fail.

Limit line for early pass decision (for ne ≥1):

The condition for an early pass decision is: ber1 < berlimbadpass

ber1 is the normalised bit error rate with error counting started from one: an artificial error is introduced at the beginning to prevent the early pass condition from being met when the test starts. After the first real error event has occurred, the artificial error is removed so that the error rate is calculated correctly.

Limit line for early fail decision (for ne ≥ 7):

The condition for an early fail decision is: ber0 > berlimfail

ber0 is the normalised bit error rate with error counting started from zero, meaning that no artificial erroneous sample is introduced at the beginning of the test.

Due to the nature of the test, namely discrete error events, the early fail condition shall not be valid when fractional errors <1 would be used to calculate the early fail limit: any early fail decision is postponed until the number of errors ne ≥ 7. In the blocking test, any early fail decision is postponed until ne ≥ 8.
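As an illustration, the interplay of ber0, ber1, the artificial error and the postponement rule can be sketched as follows. The limit-line functions are passed in as callables, since their exact shape follows from the parameters D and M; everything here is an illustrative skeleton under those assumptions, not the normative algorithm:

```python
from typing import Callable, Optional

def early_decision(ns: int, ne: int,
                   berlimbadpass: Callable[[int], float],
                   berlimfail: Callable[[int], float],
                   min_errors_for_fail: int = 7) -> Optional[str]:
    """One decision step of the statistical receiver test (sketch).

    ns -- number of samples tested so far
    ne -- number of real (error) events counted so far
    The limit-line callables map a number of errors to an error rate
    limit; their definition (via D and M) is outside this sketch.
    """
    # ber1: counting started from one -- an artificial error is assumed
    # until the first real error occurs, to block an immediate early pass.
    ne1 = max(ne, 1)
    ber1 = ne1 / ns

    # ber0: counting started from zero, no artificial error.
    ber0 = ne / ns

    if ber1 < berlimbadpass(ne1):
        return "early pass"
    # Early fail is postponed until at least 7 errors (8 for blocking).
    if ne >= min_errors_for_fail and ber0 > berlimfail(ne):
        return "early fail"
    return None  # continue measuring
```

With constant dummy limit lines, a very low measured error rate triggers an early pass and a very high one an early fail, while a test with too few samples or errors simply continues.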

Parameters for limit lines:

1. D: wrong decision probability per test step.

2. M = 1.5: bad DUT factor.

3. ne: number of (error) events. This parameter is the x-coordinate in Figure A7.1.3.1 Limit lines.

4. ns: number of samples. This parameter is not needed for the limit lines, but is listed here because it is closely related to ne; the bit error rate is calculated from ne and ns.

The parameters D and M define the limit lines for early pass and early fail. With the two curves known, the intersection point of the two limit lines can be calculated. The x-coordinate of this intersection point is the target number of errors (TNE) and the y-coordinate is the (normalised) test limit (TL). This intersection point is reached when the target test time has elapsed; in this case a decision against the test limit (column "derived test limit") can be made.

5. TL = 1.234: for tests with F = 0.2% the parameters given above lead to this (normalised) test limit. The BER limit given in the core specifications (column "Orig. BER requirement" in the tables defining the test limits) is multiplied by the test limit factor TL to obtain the limit for the pass/fail decision (column "derived test limit"). For example, an original BER requirement of 2% would yield a derived test limit of 2% × 1.234 ≈ 2.47%.

   TL = 1.251: normalised test limit for tests with F = 0.02% (e.g. blocking test).

6. TNE: the parameters given above lead to a target number of errors: 345 for tests with F = 0.2%, and 403 for tests with F = 0.02% (e.g. blocking test).

Figure A7.1.3.1 Limit lines

A typical error rate test, calculated from the number of samples and errors using experiment (1) or (2) (see A7.1.1.4 Calculation assumptions), runs along the yellow trajectory. With an errorless sample the trajectory goes down vertically; with an erroneous sample it jumps up and to the right. Making a pass/fail decision means checking whether the error rate ("BER trajectory") intersects the limit lines for early pass or early fail. The term 'test limit' used in the figure denotes the term 'derived test limit' used in this document.

## A7.1.4 Good balance between test time and statistical significance

Three independent test parameters are introduced into the test, shown in Table A7.1.4.1; they form the basis of the test time and the statistical significance. From the first two of them, four dependent test parameters are derived. The third independent test parameter is justified separately.

Table A7.1.4.1 Independent and dependent test parameters

| Independent test parameter | Value | Reference | Dependent test parameter | Value | Reference |
|---|---|---|---|---|---|
| Bad DUT factor M | 1.5 | Section A7.1.3.2 | Early pass/fail condition | curves | Section A7.1.3.2, Figure A7.1.3.1 |
| Final probability of wrong pass/fail decision F | 0.2% (0.02% for blocking) | Section A7.1.2 | Target number of error events TNE | 345 (403 for blocking) | Section A7.1.3.2 |
| | | | Probability of wrong pass/fail decision per test step D | 0.0085% (0.0008% for blocking) | Section A7.1.2 |
| | | | Test limit factor TL | 1.234 (1.251 for blocking) | Section A7.1.3.2 |
| Minimum test time | | Table A7.1.4.2 | | | |

The minimum test time is derived from the following justification:

1) For no propagation conditions and static propagation conditions:

No early fail is calculated from a fractional number of errors (<1).