5 Functional description of the encoder
3GPP TS 46.060 Release 17: Enhanced Full Rate (EFR) speech transcoding
In this clause, the different functions of the encoder represented in figure 3 are described.
5.1 Pre‑processing
Two pre‑processing functions are applied prior to the encoding process: high‑pass filtering and signal down‑scaling.
Down‑scaling consists of dividing the input by a factor of 2 to reduce the possibility of overflows in the fixed‑point implementation.
The high‑pass filter serves as a precaution against undesired low frequency components. A filter with a cut off frequency of 80 Hz is used, and it is given by:

H_{h1}(z) = \frac{0.92727435 - 1.8544941 z^{-1} + 0.92727435 z^{-2}}{1 - 1.9059465 z^{-1} + 0.9114024 z^{-2}}    (4)

Down‑scaling and high‑pass filtering are combined by dividing the coefficients of the numerator of H_{h1}(z) by 2.
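As a floating-point illustration of the combined pre-processing (the normative implementation is bit-exact fixed-point; the coefficient values are those of equation (4) with the numerator halved, as the text describes), a minimal sketch:

```python
def preprocess(speech, state=None):
    """High-pass filter (80 Hz cut-off) and down-scale one block of speech.

    The division by 2 is folded into the numerator coefficients, as in
    clause 5.1. `state` carries the direct-form-II-transposed-style memories
    across calls so the filter can run frame by frame.
    """
    # Numerator of H_h1(z) already divided by 2 (combined down-scaling).
    b = [0.92727435 / 2, -1.8544941 / 2, 0.92727435 / 2]
    a = [1.0, -1.9059465, 0.9114024]
    if state is None:
        state = {"x": [0.0, 0.0], "y": [0.0, 0.0]}
    out = []
    for x in speech:
        y = (b[0] * x + b[1] * state["x"][0] + b[2] * state["x"][1]
             - a[1] * state["y"][0] - a[2] * state["y"][1])
        state["x"] = [x, state["x"][0]]
        state["y"] = [y, state["y"][0]]
        out.append(y)
    return out, state
```

Feeding a constant (DC) signal through the filter drives the output toward zero, which is the point of the 80 Hz high-pass stage.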
5.2 Linear prediction analysis and quantization
Short‑term prediction, or linear prediction (LP), analysis is performed twice per speech frame using the auto‑correlation approach with 30 ms asymmetric windows. No lookahead is used in the auto‑correlation computation.
The auto‑correlations of windowed speech are converted to the LP coefficients using the Levinson‑Durbin algorithm. Then the LP coefficients are transformed to the Line Spectral Pair (LSP) domain for quantization and interpolation purposes. The interpolated quantified and unquantized filter coefficients are converted back to the LP filter coefficients (to construct the synthesis and weighting filters at each subframe).
5.2.1 Windowing and auto‑correlation computation
LP analysis is performed twice per frame using two different asymmetric windows. The first window has its weight concentrated at the second subframe and it consists of two halves of Hamming windows with different sizes. The window is given by:

w_I(n) = \begin{cases} 0.54 - 0.46 \cos\left(\frac{2\pi n}{2L_1 - 1}\right), & n = 0,\ldots,L_1 - 1,\\ 0.54 + 0.46 \cos\left(\frac{2\pi (n - L_1)}{2L_2 - 1}\right), & n = L_1,\ldots,L_1 + L_2 - 1. \end{cases}    (5)

The values L_1 = 160 and L_2 = 80 are used. The second window has its weight concentrated at the fourth subframe and it consists of two parts: the first part is half a Hamming window and the second part is a quarter of a cosine function cycle. The window is given by:

w_{II}(n) = \begin{cases} 0.54 - 0.46 \cos\left(\frac{2\pi n}{2L_1 - 1}\right), & n = 0,\ldots,L_1 - 1,\\ \cos\left(\frac{2\pi (n - L_1)}{4L_2 - 1}\right), & n = L_1,\ldots,L_1 + L_2 - 1, \end{cases}    (6)

where the values L_1 = 232 and L_2 = 8 are used.
Note that both LP analyses are performed on the same set of speech samples. The windows are applied to 80 samples from the past speech frame in addition to the 160 samples of the present speech frame. No samples from future frames are used (no lookahead). A diagram of the two LP analysis windows is depicted below.
Figure 1: LP analysis windows
The auto‑correlations of the windowed speech s'(n), n = 0,\ldots,239, are computed by:

r(k) = \sum_{n=k}^{239} s'(n)\, s'(n-k), \quad k = 0,\ldots,10,    (7)

and a 60 Hz bandwidth expansion is used by lag windowing the auto‑correlations using the window:

w_{lag}(i) = \exp\left[-\frac{1}{2}\left(\frac{2\pi f_0 i}{f_s}\right)^2\right], \quad i = 1,\ldots,10,    (8)

where f_0 = 60 Hz is the bandwidth expansion and f_s = 8000 Hz is the sampling frequency. Further, r(0) is multiplied by the white noise correction factor 1.0001, which is equivalent to adding a noise floor at ‑40 dB.
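A floating-point sketch of equations (7) and (8), including the white-noise correction (again, the normative computation is fixed-point):

```python
import math

def autocorr_lag_windowed(sw, p=10, f0=60.0, fs=8000.0):
    """Autocorrelations r(0..p) of the windowed speech, with 60 Hz
    bandwidth expansion (lag window) and the 1.0001 noise-floor factor."""
    n = len(sw)
    # Equation (7): plain autocorrelation of the windowed frame.
    r = [sum(sw[i] * sw[i - k] for i in range(k, n)) for k in range(p + 1)]
    r[0] *= 1.0001  # white-noise correction, about a -40 dB noise floor
    for k in range(1, p + 1):
        # Equation (8): Gaussian lag window for 60 Hz bandwidth expansion.
        r[k] *= math.exp(-0.5 * (2.0 * math.pi * f0 * k / fs) ** 2)
    return r
```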
5.2.2 Levinson‑Durbin algorithm
The modified auto‑correlations r'(0) = 1.0001\, r(0) and r'(k) = r(k)\, w_{lag}(k), k = 1,\ldots,10, are used to obtain the direct form LP filter coefficients a_k, k = 1,\ldots,10, by solving the set of equations:

\sum_{i=1}^{10} a_i\, r'(|i-k|) = -r'(k), \quad k = 1,\ldots,10.    (9)

The set of equations in (9) is solved using the Levinson‑Durbin algorithm. This algorithm uses the following recursion:

E(0) = r'(0)
For i = 1 to 10 do
    k_i = -\left[ r'(i) + \sum_{j=1}^{i-1} a_j^{(i-1)}\, r'(i-j) \right] / E(i-1)
    a_i^{(i)} = k_i
    For j = 1 to i-1 do
        a_j^{(i)} = a_j^{(i-1)} + k_i\, a_{i-j}^{(i-1)}
    E(i) = (1 - k_i^2)\, E(i-1)

The final solution is given as a_j = a_j^{(10)}, j = 1,\ldots,10.
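The recursion above might be sketched in floating point as follows (illustrative only; the spec's reference code is fixed-point):

```python
def levinson_durbin(r):
    """Solve the normal equations (9) for the LP coefficients by the
    Levinson-Durbin recursion.

    Returns ([1, a_1, ..., a_p], E) where E is the final prediction error.
    """
    p = len(r) - 1
    a = [1.0] + [0.0] * p
    e = r[0]
    for i in range(1, p + 1):
        # Reflection coefficient k_i from the current residual correlations.
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)  # prediction error shrinks at each order
    return a, e
```

For an exponentially decaying correlation r(k) = 0.5^k the solver returns the first-order predictor a_1 = -0.5 with all higher coefficients zero, as expected for an AR(1) source.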
The LP filter coefficients are converted to the line spectral pair (LSP) representation for quantization and interpolation purposes. The conversions to the LSP domain and back to the LP filter coefficient domain are described in the next clause.
5.2.3 LP to LSP conversion
The LP filter coefficients a_k, k = 1,\ldots,10, are converted to the line spectral pair (LSP) representation for quantization and interpolation purposes. For a 10th order LP filter, the LSPs are defined as the roots of the sum and difference polynomials:

F_1'(z) = A(z) + z^{-11} A(z^{-1})    (10)

and

F_2'(z) = A(z) - z^{-11} A(z^{-1}),    (11)

respectively. The polynomials F_1'(z) and F_2'(z) are symmetric and anti‑symmetric, respectively. It can be proven that all roots of these polynomials are on the unit circle and they alternate each other. F_1'(z) has a root z = -1 (\omega = \pi) and F_2'(z) has a root z = 1 (\omega = 0). To eliminate these two roots, we define the new polynomials:

F_1(z) = F_1'(z) / (1 + z^{-1})    (12)

and

F_2(z) = F_2'(z) / (1 - z^{-1}).    (13)

Each polynomial has 5 conjugate roots on the unit circle (e^{\pm j\omega_i}), therefore, the polynomials can be written as

F_1(z) = \prod_{i=1,3,\ldots,9} (1 - 2 q_i z^{-1} + z^{-2})    (14)

and

F_2(z) = \prod_{i=2,4,\ldots,10} (1 - 2 q_i z^{-1} + z^{-2}),    (15)

where q_i = \cos(\omega_i), with \omega_i being the line spectral frequencies (LSF), and they satisfy the ordering property 0 < \omega_1 < \omega_2 < \cdots < \omega_{10} < \pi. We refer to q_i as the LSPs in the cosine domain.
Since both polynomials F_1(z) and F_2(z) are symmetric, only the first 5 coefficients of each polynomial need to be computed. The coefficients of these polynomials are found by the recursive relations (for i = 0 to 4):

f_1(i+1) = a_{i+1} + a_{m-i} - f_1(i),
f_2(i+1) = a_{i+1} - a_{m-i} + f_2(i),    (16)

where m = 10 is the predictor order, with initial values f_1(0) = f_2(0) = 1.0.
The LSPs are found by evaluating the polynomials F_1(z) and F_2(z) at 60 points equally spaced between 0 and \pi and checking for sign changes. A sign change signifies the existence of a root and the sign change interval is then divided 4 times to better track the root. The Chebyshev polynomials are used to evaluate F_1(z) and F_2(z). In this method the roots are found directly in the cosine domain \{q_i\}. The polynomials F_1(z) or F_2(z), evaluated at z = e^{j\omega}, can be written as:

F(\omega) = 2 e^{-j5\omega}\, C(x),

with:

C(x) = T_5(x) + f(1) T_4(x) + f(2) T_3(x) + f(3) T_2(x) + f(4) T_1(x) + f(5)/2,    (17)

where T_m(x) = \cos(m\omega) is the mth order Chebyshev polynomial, x = \cos(\omega), and f(i), i = 1,\ldots,5, are the coefficients of either F_1(z) or F_2(z), computed using the equations in (16). The polynomial C(x) is evaluated at a certain value of x = \cos(\omega) using the recursive relation:

for k = 4 down to 1
    b_k = 2x\, b_{k+1} - b_{k+2} + f(5-k)
end
C(x) = x\, b_1 - b_2 + f(5)/2,

with initial values b_5 = 1 and b_6 = 0. The details of the Chebyshev polynomial evaluation method are found in P. Kabal and R.P. Ramachandran [6].
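The Clenshaw-style recursion for C(x) can be sketched as follows (a floating-point illustration, with f(0) = 1 implicit):

```python
def chebyshev_eval(x, f):
    """Evaluate C(x) = T5(x) + f[1]T4(x) + ... + f[4]T1(x) + f[5]/2
    at x = cos(w), using the backward recursion of clause 5.2.3.

    `f` holds the polynomial coefficients f(0..5) with f[0] == 1.
    """
    b2, b1 = 0.0, 1.0                 # b6 = 0, b5 = f(0) = 1
    for k in range(4, 0, -1):
        b1, b2 = 2.0 * x * b1 - b2 + f[5 - k], b1
    return x * b1 - b2 + f[5] / 2.0
```

With all f(1..5) set to zero, C(x) reduces to the pure Chebyshev polynomial T5(x) = cos(5w), which gives a quick sanity check (e.g. T5(cos 60°) = cos 300° = 0.5).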
5.2.4 LSP to LP conversion
Once the LSPs are quantified and interpolated, they are converted back to the LP coefficient domain \{a_k\}. The conversion to the LP domain is done as follows. The coefficients of F_1(z) or F_2(z) are found by expanding equations (14) and (15) knowing the quantified and interpolated LSPs q_i, i = 1,\ldots,10. The following recursive relation is used to compute f_1(i):

for i = 1 to 5
    f_1(i) = -2 q_{2i-1}\, f_1(i-1) + 2 f_1(i-2)
    for j = i-1 down to 1
        f_1(j) = f_1(j) - 2 q_{2i-1}\, f_1(j-1) + f_1(j-2)
    end
end

with initial values f_1(0) = 1 and f_1(-1) = 0. The coefficients f_2(i) are computed similarly by replacing q_{2i-1} by q_{2i}.

Once the coefficients f_1(i) and f_2(i) are found, F_1(z) and F_2(z) are multiplied by 1 + z^{-1} and 1 - z^{-1}, respectively, to obtain F_1'(z) and F_2'(z); that is:

f_1'(i) = f_1(i) + f_1(i-1), \quad i = 1,\ldots,5,
f_2'(i) = f_2(i) - f_2(i-1), \quad i = 1,\ldots,5.    (18)

Finally the LP coefficients are found by:

a_i = \begin{cases} 0.5 f_1'(i) + 0.5 f_2'(i), & i = 1,\ldots,5,\\ 0.5 f_1'(11-i) - 0.5 f_2'(11-i), & i = 6,\ldots,10. \end{cases}    (19)

This is directly derived from the relation A(z) = [F_1'(z) + F_2'(z)]/2, and considering the fact that F_1'(z) and F_2'(z) are symmetric and anti‑symmetric polynomials, respectively.
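The whole conversion might be sketched as follows; `poly_from_lsp` is an illustrative helper name, and the code is a floating-point sketch of the recursion, equation (18), and equation (19), not the fixed-point reference:

```python
def poly_from_lsp(qs):
    """Coefficients f(1..5) of prod_i (1 - 2 q_i z^-1 + z^-2) for the five
    cosine-domain LSPs in `qs`; f(0) = 1 is implicit and only the first
    half is needed because the polynomial is symmetric."""
    f = [1.0] + [0.0] * 5
    for i, q in enumerate(qs, start=1):
        f[i] = -2.0 * q * f[i - 1] + 2.0 * (f[i - 2] if i >= 2 else 0.0)
        for j in range(i - 1, 0, -1):
            f[j] += -2.0 * q * f[j - 1] + (f[j - 2] if j >= 2 else 0.0)
    return f

def lsp_to_lp(q):
    """Convert 10 cosine-domain LSPs (ordered q_1..q_10) to a_1..a_10."""
    f1 = poly_from_lsp(q[0::2])   # odd-indexed LSPs -> F1(z)
    f2 = poly_from_lsp(q[1::2])   # even-indexed LSPs -> F2(z)
    # Equation (18): restore the roots at z = -1 and z = +1.
    f1p = [f1[i] + f1[i - 1] for i in range(1, 6)]
    f2p = [f2[i] - f2[i - 1] for i in range(1, 6)]
    # Equation (19): combine the symmetric and anti-symmetric halves.
    a = [0.5 * (f1p[i] + f2p[i]) for i in range(5)]
    a += [0.5 * (f1p[10 - i] - f2p[10 - i]) for i in range(6, 11)]
    return a
```

A convenient consistency check: since F_2'(1) = 0, A(1) = 1 + \sum a_i must equal F_1'(1)/2 = \prod_{odd\ i}(2 - 2 q_i).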
5.2.5 Quantization of the LSP coefficients
The two sets of LP filter coefficients per frame are quantified using the LSP representation in the frequency domain; that is:

f_i = \frac{f_s}{2\pi} \arccos(q_i), \quad i = 1,\ldots,10,    (20)

where f_i are the line spectral frequencies (LSF) in Hz [0, 4000] and f_s = 8000 is the sampling frequency. The LSF vector is given by f^t = [f_1\; f_2\; \ldots\; f_{10}], with t denoting transpose.
A 1st order MA prediction is applied, and the two residual LSF vectors are jointly quantified using split matrix quantization (SMQ). The prediction and quantization are performed as follows. Let z^{(1)}(n) and z^{(2)}(n) denote the mean‑removed LSF vectors at frame n. The prediction residual vectors r^{(1)}(n) and r^{(2)}(n) are given by:

r^{(1)}(n) = z^{(1)}(n) - p(n),
r^{(2)}(n) = z^{(2)}(n) - p(n),    (21)

where p(n) is the predicted LSF vector at frame n. First order moving‑average (MA) prediction is used where:

p(n) = 0.65\, \hat r^{(2)}(n-1),    (22)

where \hat r^{(2)}(n-1) is the quantified second residual vector at the past frame.
The two LSF residual vectors r^{(1)}(n) and r^{(2)}(n) are jointly quantified using split matrix quantization (SMQ). The matrix [r^{(1)}(n)\; r^{(2)}(n)] is split into 5 submatrices of dimension 2 x 2 (two elements from each vector). For example, the first submatrix consists of the elements r_1^{(1)}(n), r_2^{(1)}(n), r_1^{(2)}(n), and r_2^{(2)}(n). The 5 submatrices are quantified with 7, 8, 8+1, 8, and 6 bits, respectively. The third submatrix uses a 256‑entry signed codebook (8‑bit index plus 1‑bit sign).
A weighted LSP distortion measure is used in the quantization process. In general, for an input LSF vector f and a quantified vector at index k, \hat f^{(k)}, the quantization is performed by finding the index k which minimizes:

E_{LSP} = \sum_{i=1}^{10} \left[ w_i \left( f_i - \hat f_i^{(k)} \right) \right]^2.    (23)

The weighting factors w_i, i = 1,\ldots,10, are given by

w_i = \begin{cases} 3.347 - \frac{1.547}{450}\, d_i, & d_i < 450,\\ 1.8 - \frac{0.8}{1050}\, (d_i - 450), & \text{otherwise}, \end{cases}    (24)

where d_i = f_{i+1} - f_{i-1}, with f_0 = 0 and f_{11} = 4000. Here, two sets of weighting coefficients are computed for the two LSF vectors. In the quantization of each submatrix, two weighting coefficients from each set are used with their corresponding LSFs.
5.2.6 Interpolation of the LSPs
The two sets of quantified (and unquantized) LP parameters are used for the second and fourth subframes, whereas the first and third subframes use a linear interpolation of the parameters in the adjacent subframes. The interpolation is performed on the LSPs in the cosine domain. Let \bar q^{(4)}(n) be the LSP vector at the 4th subframe of the present frame n, \bar q^{(2)}(n) be the LSP vector at the 2nd subframe of the present frame n, and \bar q^{(4)}(n-1) the LSP vector at the 4th subframe of the past frame n-1. The interpolated LSP vectors at the 1st and 3rd subframes are given by:

\bar q^{(1)}(n) = 0.5\, \bar q^{(4)}(n-1) + 0.5\, \bar q^{(2)}(n),
\bar q^{(3)}(n) = 0.5\, \bar q^{(2)}(n) + 0.5\, \bar q^{(4)}(n).    (25)

The interpolated LSP vectors are used to compute a different LP filter at each subframe (both quantified and unquantized coefficients) using the LSP to LP conversion method described in clause 5.2.4.
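The interpolation of equation (25) is a simple per-element average of neighbouring analysis sets, sketched here for illustration:

```python
def interpolate_lsps(q4_prev, q2, q4):
    """Build the four subframe LSP vectors from the past frame's 4th-subframe
    set and the present frame's two analysis sets, per equation (25).
    Subframes 2 and 4 use the analysis sets directly; 1 and 3 are means."""
    q1 = [0.5 * a + 0.5 * b for a, b in zip(q4_prev, q2)]
    q3 = [0.5 * a + 0.5 * b for a, b in zip(q2, q4)]
    return q1, q2, q3, q4
```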
5.3 Open‑loop pitch analysis
Open‑loop pitch analysis is performed twice per frame (each 10 ms) to find two estimates of the pitch lag in each frame. This is done in order to simplify the pitch analysis and confine the closed‑loop pitch search to a small number of lags around the open‑loop estimated lags.
Open‑loop pitch estimation is based on the weighted speech signal s_w(n), which is obtained by filtering the input speech signal through the weighting filter W(z) = A(z/\gamma_1)/A(z/\gamma_2), where \gamma_1 = 0.9 and \gamma_2 = 0.6. That is, in a subframe of size L, the weighted speech is given by:

s_w(n) = s(n) + \sum_{i=1}^{10} a_i \gamma_1^i\, s(n-i) - \sum_{i=1}^{10} a_i \gamma_2^i\, s_w(n-i), \quad n = 0,\ldots,L-1.    (26)
Open‑loop pitch analysis is performed as follows. In the first step, 3 maxima of the correlation:

O_k = \sum_{n=0}^{79} s_w(n)\, s_w(n-k)    (27)

are found in the three ranges:

i = 1: 72,\ldots,143,
i = 2: 36,\ldots,71,
i = 3: 18,\ldots,35.

The retained maxima O_{t_i}, i = 1,\ldots,3, are normalized by dividing by \sqrt{\sum_n s_w^2(n - t_i)}, respectively. The normalized maxima and corresponding delays are denoted by (M_i, t_i), i = 1, 2, 3. The winner, T_{op}, among the three normalized correlations is selected by favouring the delays with the values in the lower range. This is performed by weighting the normalized correlations corresponding to the longer delays. The best open‑loop delay T_{op} is determined as follows:

T_{op} = t_1
M(T_{op}) = M_1
if M_2 > 0.85\, M(T_{op})
    M(T_{op}) = M_2
    T_{op} = t_2
end
if M_3 > 0.85\, M(T_{op})
    M(T_{op}) = M_3
    T_{op} = t_3
end

This procedure of dividing the delay range into 3 sections and favouring the lower sections is used to avoid choosing pitch multiples.
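The selection logic can be sketched as follows. The essential rule: starting from the longest-delay candidate, a shorter-lag candidate takes over whenever its normalized correlation exceeds 0.85 of the current winner's, which effectively weights down the longer delays:

```python
def select_open_loop_lag(candidates):
    """Choose the open-loop pitch lag from (normalized_max, lag) pairs,
    one per delay range, favouring shorter lags as in clause 5.3.

    Candidates are processed from the longest-delay range downward; a
    shorter lag wins if its correlation exceeds 0.85 of the current best.
    """
    candidates = sorted(candidates, key=lambda c: -c[1])  # longest lag first
    best_m, best_t = candidates[0]
    for m, t in candidates[1:]:
        if m > 0.85 * best_m:
            best_m, best_t = m, t
    return best_t
```

Note how a lag of 20 can beat a lag of 100 even with a 10 % smaller correlation, which is exactly what suppresses pitch multiples.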
5.4 Impulse response computation
The impulse response, h(n), of the weighted synthesis filter H(z)W(z) = A(z/\gamma_1)/[\hat A(z) A(z/\gamma_2)] is computed each subframe. This impulse response is needed for the search of adaptive and fixed codebooks. The impulse response h(n) is computed by filtering the vector of coefficients of the filter A(z/\gamma_1), extended by zeros, through the two filters 1/\hat A(z) and 1/A(z/\gamma_2).
5.5 Target signal computation
The target signal x(n) for the adaptive codebook search is usually computed by subtracting the zero input response of the weighted synthesis filter H(z)W(z) = A(z/\gamma_1)/[\hat A(z) A(z/\gamma_2)] from the weighted speech signal s_w(n). This is performed on a subframe basis.
An equivalent procedure for computing the target signal, which is used in the present document, is the filtering of the LP residual signal res(n) through the combination of the synthesis filter 1/\hat A(z) and the weighting filter A(z/\gamma_1)/A(z/\gamma_2). After determining the excitation for the subframe, the initial states of these filters are updated by filtering the difference between the LP residual and the excitation. The memory update of these filters is explained in clause 5.9.
The residual signal res(n), which is needed for finding the target vector, is also used in the adaptive codebook search to extend the past excitation buffer. This simplifies the adaptive codebook search procedure for delays less than the subframe size of 40, as will be explained in the next clause. The LP residual is given by:

res(n) = s(n) + \sum_{i=1}^{10} \hat a_i\, s(n-i).    (28)
5.6 Adaptive codebook search
Adaptive codebook search is performed on a subframe basis. It consists of performing closed‑loop pitch search, and then computing the adaptive codevector by interpolating the past excitation at the selected fractional pitch lag.
The adaptive codebook parameters (or pitch parameters) are the delay and gain of the pitch filter. In the adaptive codebook approach for implementing the pitch filter, the excitation is repeated for delays less than the subframe length. In the search stage, the excitation is extended by the LP residual to simplify the closed‑loop search.
In the first and third subframes, a fractional pitch delay is used with resolutions: 1/6 in the range [17 3/6, 94 3/6] and integers only in the range [95, 143]. For the second and fourth subframes, a pitch resolution of 1/6 is always used in the range [T_1 - 5 3/6, T_1 + 4 3/6], where T_1 is the nearest integer to the fractional pitch lag of the previous (1st or 3rd) subframe, bounded by 18…143.
Closed‑loop pitch analysis is performed around the open‑loop pitch estimates on a subframe basis. In the first (and third) subframe the range T_{op} \pm 3, bounded by 18…143, is searched. For the other subframes, closed‑loop pitch analysis is performed around the integer pitch selected in the previous subframe, as described above. The pitch delay is encoded with 9 bits in the first and third subframes and the relative delay of the other subframes is encoded with 6 bits.
The closed‑loop pitch search is performed by minimizing the mean‑square weighted error between the original and synthesized speech. This is achieved by maximizing the term:

T_k = \frac{\sum_{n=0}^{39} x(n)\, y_k(n)}{\sqrt{\sum_{n=0}^{39} y_k(n)\, y_k(n)}},    (29)

where x(n) is the target signal and y_k(n) is the past filtered excitation at delay k (past excitation convolved with h(n)). Note that the search range is limited around the open‑loop pitch as explained earlier.
The convolution y_k(n) is computed for the first delay t_{min} in the searched range, and for the other delays in the search range k = t_{min}+1,\ldots,t_{max}, it is updated using the recursive relation:

y_k(n) = y_{k-1}(n-1) + u(-k)\, h(n), \quad n = 39,\ldots,0,    (30)

where u(n), n = -143,\ldots,39, is the excitation buffer. Note that in the search stage, the samples u(n), n = 0,\ldots,39, are not known, and they are needed for pitch delays less than 40. To simplify the search, the LP residual is copied to u(n), n = 0,\ldots,39, in order to make the relation in equation (30) valid for all delays.
Once the optimum integer pitch delay is determined, the fractions from -3/6 to 3/6 with a step of 1/6 around that integer are tested. The fractional pitch search is performed by interpolating the normalized correlation in equation (29) and searching for its maximum. The interpolation is performed using an FIR filter b_{24} based on a Hamming windowed sin(x)/x function truncated at \pm 23 and padded with zeros at \pm 24 (b_{24}(24) = 0). The filter has its cut‑off frequency (‑3 dB) at 3 600 Hz in the over‑sampled domain. The interpolated values of R(k) for the fractions -3/6 to 3/6 are obtained using the interpolation formula:

R(k)_t = \sum_{i=0}^{3} R(k-i)\, b_{24}(t + i \cdot 6) + \sum_{i=0}^{3} R(k+1+i)\, b_{24}(6-t + i \cdot 6), \quad t = 0,\ldots,5,    (31)

where t = 0,\ldots,5 corresponds to the fractions 0, 1/6, 2/6, 3/6, 4/6, and 5/6, respectively. Note that it is necessary to compute the correlation terms in equation (29) using a range t_{min} - 4, t_{max} + 4 to allow for the proper interpolation.
Once the fractional pitch lag is determined, the adaptive codebook vector v(n) is computed by interpolating the past excitation signal u(n) at the given integer delay k and phase (fraction) t:

v(n) = \sum_{i=0}^{9} u(n-k+i)\, b_{60}(t + i \cdot 6) + \sum_{i=0}^{9} u(n-k+1+i)\, b_{60}(6-t + i \cdot 6), \quad n = 0,\ldots,39,\; t = 0,\ldots,5.    (32)

The interpolation filter b_{60} is based on a Hamming windowed sin(x)/x function truncated at \pm 59 and padded with zeros at \pm 60 (b_{60}(60) = 0). The filter has a cut‑off frequency (‑3 dB) at 3 600 Hz in the over‑sampled domain.
The adaptive codebook gain is then found by:

g_p = \frac{\sum_{n=0}^{39} x(n)\, y(n)}{\sum_{n=0}^{39} y(n)\, y(n)},    (33)

where y(n) = v(n) * h(n) is the filtered adaptive codebook vector (zero state response of H(z)W(z) to v(n)).
The computed adaptive codebook gain is quantified using 4‑bit non‑uniform scalar quantization in the range [0.0,1.2].
5.7 Algebraic codebook structure and search
The algebraic codebook structure is based on interleaved single‑pulse permutation (ISPP) design. In this codebook, the innovation vector contains 10 non‑zero pulses. All pulses can have the amplitudes +1 or ‑1. The 40 positions in a subframe are divided into 5 tracks, where each track contains two pulses, as shown in table 2.
Table 2: Potential positions of individual pulses in the algebraic codebook
Track | Pulses | Positions
1     | i0, i5 | 0, 5, 10, 15, 20, 25, 30, 35
2     | i1, i6 | 1, 6, 11, 16, 21, 26, 31, 36
3     | i2, i7 | 2, 7, 12, 17, 22, 27, 32, 37
4     | i3, i8 | 3, 8, 13, 18, 23, 28, 33, 38
5     | i4, i9 | 4, 9, 14, 19, 24, 29, 34, 39
Each two pulse positions in one track are encoded with 6 bits (total of 30 bits, 3 bits for the position of every pulse), and the sign of the first pulse in the track is encoded with 1 bit (total of 5 bits).
For two pulses located in the same track, only one sign bit is needed. This sign bit indicates the sign of the first pulse. The sign of the second pulse depends on its position relative to the first pulse. If the position of the second pulse is smaller, then it has the opposite sign; otherwise it has the same sign as the first pulse.
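The position-relative sign rule above can be sketched as a small helper (illustrative, not the reference bitstream packing):

```python
def track_sign_bits(pos1, pos2, sign1):
    """Recover both pulse signs on one track from the single coded sign bit.

    `sign1` (+1 or -1) is the transmitted sign of the first pulse; the
    second pulse gets the opposite sign when its position is smaller than
    the first pulse's, and the same sign otherwise.
    """
    sign2 = -sign1 if pos2 < pos1 else sign1
    return sign1, sign2
```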
All the 3‑bit pulse positions are Gray coded in order to improve robustness against channel errors. This gives a total of 35 bits for the algebraic code.
The algebraic codebook is searched by minimizing the mean square error between the weighted input speech and the weighted synthesized speech. The target signal used in the closed‑loop pitch search is updated by subtracting the adaptive codebook contribution. That is:

x_2(n) = x(n) - \hat g_p\, y(n), \quad n = 0,\ldots,39,    (34)

where y(n) is the filtered adaptive codebook vector and \hat g_p is the quantified adaptive codebook gain. If c_k is the algebraic codevector at index k, then the algebraic codebook is searched by maximizing the term:

A_k = \frac{(C_k)^2}{E_{D_k}} = \frac{(d^t c_k)^2}{c_k^t \Phi\, c_k},    (35)

where d = H^t x_2 is the correlation between the target signal x_2(n) and the impulse response h(n), H is the lower triangular Toeplitz convolution matrix with diagonal h(0) and lower diagonals h(1),\ldots,h(39), and \Phi = H^t H is the matrix of correlations of h(n). The vector d (backward filtered target) and the matrix \Phi are computed prior to the codebook search. The elements of the vector d are computed by

d(n) = \sum_{i=n}^{39} x_2(i)\, h(i-n), \quad n = 0,\ldots,39,    (36)

and the elements of the symmetric matrix \Phi are computed by:

\phi(i,j) = \sum_{n=j}^{39} h(n-i)\, h(n-j), \quad j \ge i.    (37)
The algebraic structure of the codebooks allows for very fast search procedures since the innovation vector contains only a few nonzero pulses. The correlation in the numerator of equation (35) is given by:

C = \sum_{i=0}^{N_p - 1} a_i\, d(m_i),    (38)

where m_i is the position of the ith pulse, a_i is its amplitude, and N_p is the number of pulses (N_p = 10). The energy in the denominator of equation (35) is given by:

E_D = \sum_{i=0}^{N_p - 1} \phi(m_i, m_i) + 2 \sum_{i=0}^{N_p - 2} \sum_{j=i+1}^{N_p - 1} a_i a_j\, \phi(m_i, m_j).    (39)
To simplify the search procedure, the pulse amplitudes are preset by the mere quantization of an appropriate signal. In this case the signal b(n), which is a sum of the normalized d(n) vector and the normalized long‑term prediction residual res_{LTP}(n):

b(n) = \frac{res_{LTP}(n)}{\sqrt{\sum_{i=0}^{39} res_{LTP}(i)\, res_{LTP}(i)}} + 2\, \frac{d(n)}{\sqrt{\sum_{i=0}^{39} d(i)\, d(i)}}, \quad n = 0,\ldots,39,    (40)

is used. This is simply done by setting the amplitude of a pulse at a certain position equal to the sign of b(n) at that position. The simplification proceeds as follows (prior to the codebook search). First, the sign signal s_b(n) = \mathrm{sign}[b(n)] and the signal d'(n) = d(n)\, s_b(n) are computed. Second, the matrix \Phi is modified by including the sign information; that is, \phi'(i,j) = s_b(i)\, s_b(j)\, \phi(i,j). The correlation in equation (38) is now given by:

C = \sum_{i=0}^{9} d'(m_i),    (41)

and the energy in equation (39) is given by:

E_D = \sum_{i=0}^{9} \phi'(m_i, m_i) + 2 \sum_{i=0}^{8} \sum_{j=i+1}^{9} \phi'(m_i, m_j).    (42)
Having preset the pulse amplitudes, as explained above, the optimal pulse positions are determined using an efficient non‑exhaustive analysis‑by‑synthesis search technique. In this technique, the term in equation (35) is tested for a small percentage of position combinations.
First, for each of the five tracks the pulse positions with maximum absolute values of b(n) are searched. From these, the global maximum value over all the pulse positions is selected. The first pulse i0 is always set into the position corresponding to the global maximum value.
Next, four iterations are carried out. During each iteration the position of pulse i1 is set to the local maximum of one track. The rest of the pulses are searched in pairs by sequentially searching each of the pulse pairs {i2,i3}, {i4,i5}, {i6,i7} and {i8,i9} in nested loops. Every pulse has 8 possible positions, i.e., there are four 8×8‑loops, resulting in 256 different combinations of pulse positions for each iteration.
In each iteration all the 9 pulse starting positions are cyclically shifted, so that the pulse pairs are changed and the pulse i1 is placed in a local maximum of a different track. The rest of the pulses are searched also for the other positions in the tracks. At least one pulse is located in a position corresponding to the global maximum and one pulse is located in a position corresponding to one of the 4 local maxima.
A special feature incorporated in the codebook is that the selected codevector is filtered through an adaptive pre‑filter which enhances special spectral components in order to improve the synthesized speech quality. Here the filter F_E(z) = 1/(1 - \beta z^{-T}) is used, where T is the nearest integer pitch lag to the closed‑loop fractional pitch lag of the subframe, and \beta is a pitch gain. In the present document, \beta is given by the quantified pitch gain bounded by [0.0, 1.0]. Note that prior to the codebook search, the impulse response h(n) must include the pre‑filter F_E(z). That is, h(n) \leftarrow h(n) + \beta\, h(n-T), n = T,\ldots,39.
The fixed codebook gain is then found by:

g_c = \frac{\sum_{n=0}^{39} x_2(n)\, z(n)}{\sum_{n=0}^{39} z(n)\, z(n)},    (43)

where x_2(n) is the target vector for the fixed codebook search and z(n) is the fixed codebook vector convolved with h(n):

z(n) = \sum_{i=0}^{n} c(i)\, h(n-i), \quad n = 0,\ldots,39.    (44)
5.8 Quantization of the fixed codebook gain
The fixed codebook gain quantization is performed using MA prediction with fixed coefficients. The 4th order MA prediction is performed on the innovation energy as follows. Let E(n) be the mean‑removed innovation energy (in dB) at subframe n, given by:

E(n) = 10 \log\left[ \frac{1}{N}\, g_c^2 \sum_{i=0}^{N-1} c^2(i) \right] - \bar E,    (45)

where N = 40 is the subframe size, c(i) is the fixed codebook excitation, and \bar E = 36 dB is the mean of the innovation energy. The predicted energy is given by:

\tilde E(n) = \sum_{i=1}^{4} b_i\, \hat R(n-i),    (46)

where [b_1\; b_2\; b_3\; b_4] = [0.68\; 0.58\; 0.34\; 0.19] are the MA prediction coefficients, and \hat R(n-i) is the quantified prediction error at subframe n-i. The predicted energy is used to compute a predicted fixed‑codebook gain g_c' as in equation (45) (by substituting E(n) by \tilde E(n) and g_c by g_c'). This is done as follows. First, the mean innovation energy is found by:

E_I = 10 \log\left[ \frac{1}{N} \sum_{i=0}^{N-1} c^2(i) \right],    (47)

and then the predicted gain g_c' is found by:

g_c' = 10^{0.05 (\tilde E(n) + \bar E - E_I)}.    (48)
A correction factor between the gain g_c and the estimated one g_c' is given by:

\gamma_{gc} = g_c / g_c'.    (49)

Note that the prediction error is given by:

R(n) = E(n) - \tilde E(n) = 20 \log(\gamma_{gc}).    (50)

The correction factor \gamma_{gc} is quantified using a 5‑bit codebook. The quantization table search is performed by minimizing the error:

E_Q = (g_c - \hat\gamma_{gc}\, g_c')^2.    (51)

Once the optimum value \hat\gamma_{gc} is chosen, the quantified fixed codebook gain is given by \hat g_c = \hat\gamma_{gc}\, g_c'.
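The prediction and quantization steps of this clause might be sketched as follows, assuming the MA coefficients and 36 dB mean stated above (a floating-point illustration; the 5-bit codebook contents are taken from the tables of the specification and are represented here by an arbitrary `codebook` argument):

```python
import math

MA_COEFS = [0.68, 0.58, 0.34, 0.19]   # b_1..b_4 of equation (46)
E_MEAN = 36.0                         # mean innovation energy, dB
N = 40                                # subframe size

def predicted_gain(code, r_hat_past):
    """Predicted fixed codebook gain g_c' from the codevector and the four
    past quantified prediction errors R_hat(n-1..n-4), eqs (46)-(48)."""
    e_pred = sum(b * r for b, r in zip(MA_COEFS, r_hat_past))     # eq (46)
    e_innov = 10.0 * math.log10(sum(c * c for c in code) / N)     # eq (47)
    return 10.0 ** (0.05 * (e_pred + E_MEAN - e_innov))           # eq (48)

def quantize_gain(g_c, g_c_pred, codebook):
    """Pick the correction factor minimizing (g_c - gamma * g_c')^2, eq (51).

    Returns the quantified gain and the quantified prediction error
    R(n) = 20 log(gamma) of equation (50), which feeds the MA predictor.
    """
    gamma = min(codebook, key=lambda g: (g_c - g * g_c_pred) ** 2)
    return gamma * g_c_pred, 20.0 * math.log10(gamma)
```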
5.9 Memory update
An update of the states of the synthesis and weighting filters is needed in order to compute the target signal in the next subframe.
After the two gains are quantified, the excitation signal, u(n), in the present subframe is found by:

u(n) = \hat g_p\, v(n) + \hat g_c\, c(n), \quad n = 0,\ldots,39,    (52)

where \hat g_p and \hat g_c are the quantified adaptive and fixed codebook gains, respectively, v(n) is the adaptive codebook vector (interpolated past excitation), and c(n) is the fixed codebook vector (algebraic code including pitch sharpening). The states of the filters can be updated by filtering the signal res(n) - u(n) (difference between residual and excitation) through the filters 1/\hat A(z) and A(z/\gamma_1)/A(z/\gamma_2) for the 40‑sample subframe and saving the states of the filters. This would require 3 filterings. A simpler approach which requires only one filtering is as follows. The local synthesized speech, \hat s(n), is computed by filtering the excitation signal through 1/\hat A(z). The output of the filter due to the input res(n) - u(n) is equivalent to e(n) = s(n) - \hat s(n). So the states of the synthesis filter 1/\hat A(z) are given by e(n), n = 30,\ldots,39. Updating the states of the filter A(z/\gamma_1)/A(z/\gamma_2) can be done by filtering the error signal e(n) through this filter to find the perceptually weighted error e_w(n). However, the signal e_w(n) can be equivalently found by:

e_w(n) = x(n) - \hat g_p\, y(n) - \hat g_c\, z(n).    (53)

Since the signals x(n), y(n) and z(n) are available, the states of the weighting filter are updated by computing e_w(n) as in equation (53) for n = 30,\ldots,39. This saves two filterings.