A.1 General theoretical methodology
3GPP TS 51.021: Base Station System (BSS) equipment specification; Radio aspects (Release 17)
Statistical parameters are measured as a number of error events M within a set of observed events (or samples) N, and the ratio M/N is used as the estimated value. This estimate has a given uncertainty due to the limited statistical material, i.e. the number of samples N. The general methodology to ensure correct PASS / FAIL decisions is outlined in the following.
Given a random variable Xi output from a random process indicating error/no error, the probability of an error is p and, consequently, the probability of no error is 1 − p. The expected value E(Xi) and variance Var(Xi) are given in (Eq 1), according to the binomial probability distribution.
E(Xi) = p (Eq 1a)
Var(Xi) = p − p² (Eq 1b)
If the number of samples of the event is N, the average X of the random variables Xi is of interest, which has the expected value E(X) and variance Var(X) given in (Eq 2), assuming that the random variables Xi are independent.
E(X) = p (Eq 2a)
Var(X) = (p − p²) / N (Eq 2b)
Assuming that the error probability p is small, the formula can be simplified as in (Eq 3).
E(X) = p (Eq 3a)
Var(X) = p / N (Eq 3b)
Furthermore, if the number of samples N is large, the probability density of X may be assumed to be Gaussian, and the confidence intervals needed can easily be found.
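The estimate M/N and its Gaussian confidence interval can be sketched in Python as follows (the error and sample counts below are hypothetical, chosen only for illustration):

```python
import math

def estimate_stats(errors, samples):
    """Estimated error probability p = M/N and its standard deviation,
    using the small-p approximation Var(X) = p / N from (Eq 3b)."""
    p_hat = errors / samples
    sigma = math.sqrt(p_hat / samples)
    return p_hat, sigma

# Hypothetical measurement: M = 50 error events in N = 100000 samples.
p_hat, sigma = estimate_stats(50, 100_000)

# For large N the estimate is approximately Gaussian, so a 95 % two-sided
# confidence interval is roughly p_hat +/- 1.96 * sigma.
lo, hi = p_hat - 1.96 * sigma, p_hat + 1.96 * sigma
```

Increasing N shrinks sigma in proportion to 1/√N, which is the mechanism used below to separate the distributions around Pg and Pb.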
Assuming that a "good" BSS has the real performance Pg when measured over an infinite time and that a "bad" BSS has the corresponding performance Pb, the relationships to the system requirement Ps are the following:
Pg <= Ps (Eq 4a)
Pb > Ps (Eq 4b)
Irrespective of the values of Pg and Pb, the aim would ideally be to guarantee that the probabilities of passing a "good" BSS, P(PASS|Pg) and the probability of failing a "bad" BSS, P(FAIL|Pb) are as high as possible. Given a certain Pg and a certain Pb, this can be done by increasing the number of samples N until the distributions around Pg and Pb are "narrow" enough, i.e. the variances are sufficiently reduced, so that there is sufficient space in between for a test requirement Pt with the required confidence. The principle is illustrated in figure A.1-1 with Pg=Ps.
In practice, the above ideal approach cannot be used, since when Pg or Pb gets very close to Ps, the number of samples needed to reduce the variances would be infinite. What can be done, however, is to represent Pg by the worst case Ps and to have a certain confidence of failing a BSS that is a given amount worse than Ps, i.e. with a fixed Pb. This will, however, give less confidence in failing a "bad" BSS whose performance is closer to Ps. This is the exact principle illustrated in figure A.1-1.
Ps = system requirement
Pt = test requirement
Pg = real performance of a "good" BTS
Pb = real performance of a "bad" BTS
Figure A.1-1: Statistical testing
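The pass/fail confidences P(PASS|Pg) and P(FAIL|Pb) can be evaluated under the Gaussian approximation; the sketch below uses hypothetical values for Ps, Pt, Pb and N, chosen only for illustration:

```python
import math

def q_function(x):
    """Gaussian Q-function: P(Z > x) for a standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def fail_probability(p_true, p_test, n_samples):
    """Probability that a BSS with real performance p_true is measured
    above the test requirement p_test, using the Gaussian approximation
    with mean p_true and variance p_true / N from (Eq 3)."""
    sigma = math.sqrt(p_true / n_samples)
    return q_function((p_test - p_true) / sigma)

# Hypothetical values: Ps = 1e-3, Pt = 1.2e-3, a "bad" BSS with
# Pb = 1.5e-3, and N = 200000 samples.
p_fail_bad = fail_probability(1.5e-3, 1.2e-3, 200_000)        # P(FAIL|Pb)
p_pass_good = 1.0 - fail_probability(1.0e-3, 1.2e-3, 200_000) # P(PASS|Pg=Ps)
```

With these (illustrative) numbers both confidences exceed 99 %, showing how a sufficiently large N leaves room for a test requirement Pt between Ps and Pb.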
The test requirement Pt is then given by (Eq 5) for the overall requirements, depending on Ps and Pb and on the needed number of samples N:
Pt = Ps + G √(Ps/N) (Eq 5a)
Pt = Pb − B √(Pb/N) (Eq 5b)
G and B are the normalized Gaussian ordinates (in fact, values of the inverse Gaussian Q-function) giving the confidence levels required for passing a "good" BSS and failing a "bad" BSS, respectively.
Finally, if the ratio Pb/Ps is fixed, the number of samples is given by the following equations (Eq 6).
N = (G + B √K)² / (Ps (K − 1)²) (Eq 6a)
Pb = K Ps (Eq 6b)
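Equating (Eq 5a) and (Eq 5b) with Pb = K·Ps and solving for N yields (Eq 6a); the Python sketch below verifies this with hypothetical values for Ps, K, G and B:

```python
import math

def required_samples(ps, k, g, b):
    """Number of samples from (Eq 6a), obtained by equating (Eq 5a)
    and (Eq 5b) with Pb = K * Ps:
    N = (G + B * sqrt(K))**2 / (Ps * (K - 1)**2)."""
    return (g + b * math.sqrt(k)) ** 2 / (ps * (k - 1) ** 2)

# Hypothetical values: Ps = 1e-3, K = 1.5, G = B = 1.645.
n = required_samples(1e-3, 1.5, 1.645, 1.645)

# With this N, both expressions for Pt coincide:
pt_a = 1e-3 + 1.645 * math.sqrt(1e-3 / n)       # (Eq 5a)
pt_b = 1.5e-3 - 1.645 * math.sqrt(1.5e-3 / n)   # (Eq 5b)
```

Note that as K approaches 1 (a "bad" BSS barely worse than the requirement), the (K − 1)² denominator drives N towards infinity, which is exactly the limitation discussed above.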