B.4 Scene Displacement Processing


B.4.1 General

The position of each point source derived from the channels and objects input is represented by a 3-dimensional vector p in a Cartesian coordinate system. The scene displacement information is used to compute an updated version p′ of the position vector as described in clause B.4.2. The position of point sources that result from non-diegetic channel groups with an associated ‘gca_fixedChannelsPosition == 1’ or from non-diegetic objects with an associated ‘goa_fixedPosition == 1’ (see clause B.3.4) is not updated, i.e. p′ = p.
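The following Python sketch is purely illustrative and not part of this specification; it shows how the update could be applied selectively, rotating diegetic point sources by the scene displacement rotation matrix R of clause B.4.2 while point sources flagged via ‘gca_fixedChannelsPosition == 1’ or ‘goa_fixedPosition == 1’ keep their original position. The function and variable names are hypothetical.

import numpy as np

def apply_scene_displacement(positions, is_fixed, R):
    # positions: (N, 3) Cartesian position vectors p of the point sources
    # is_fixed:  (N,) bool, True for point sources from non-diegetic channel
    #            groups/objects with gca_fixedChannelsPosition == 1 or
    #            goa_fixedPosition == 1 (clause B.3.4)
    # R:         (3, 3) rotation matrix derived from the scene displacement data
    rotated = positions @ R.T                                # p' = R * p for each source
    return np.where(is_fixed[:, None], positions, rotated)   # fixed sources keep p' = p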

B.4.2 Applying Scene Displacement Information

The vector representation of a point source is transformed to the listener-relative coordinate system by rotation based on the scene displacement values obtained via the head tracking interface. This is achieved by multiplying the position p with a rotation matrix R calculated from the orientation of the listener:

	p′ = R ⋅ p

The determination of the rotation matrix R is defined in ISO/IEC 23008-3 [19], Annex I.
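As an illustration only, the Python sketch below composes a rotation matrix from scene-displacement yaw, pitch and roll angles and applies it to a position vector. The normative angle conventions and composition order are those of ISO/IEC 23008-3 [19], Annex I; the intrinsic z-y-x (yaw-pitch-roll) composition used here is an assumption made for this example.

import numpy as np

def rotation_from_scene_displacement(yaw, pitch, roll):
    # Angles in radians; composition order assumed as yaw (z), pitch (y), roll (x)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw about z
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch about y
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll about x
    return Rz @ Ry @ Rx

p = np.array([1.0, 0.0, 0.0])                                  # point source on the x-axis
R = rotation_from_scene_displacement(np.deg2rad(90.0), 0.0, 0.0)
p_prime = R @ p                                                # approx. [0, 1, 0] after a 90 degree yaw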

For HOA content, the rotation matrix suited for rotating the spherical harmonic representation is calculated as defined in ISO/IEC 23008-3 [19], Annex I. After the rotation, the HOA coefficients are transformed back into the ESD representation. Each ESD component is then converted to the corresponding point source with its associated positional information. For the point sources derived from the ESD components the position information is fixed, i.e. p′ = p, as the rotation due to scene displacement is already performed in the spherical harmonic representation.
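As a further non-normative illustration, the sketch below rotates first-order HOA content in the spherical harmonic domain: the order-0 component W is invariant, while the three first-order components transform with the same 3x3 rotation matrix R used for point sources. Higher orders require the block-diagonal rotation matrices defined in ISO/IEC 23008-3 [19], Annex I, and the subsequent conversion to the ESD representation is not shown; the channel ordering (X, Y, Z) is an assumption of this example.

import numpy as np

def rotate_first_order_hoa(w, xyz, R):
    # w:   (T,)   order-0 signal, unchanged by the rotation
    # xyz: (T, 3) first-order components, assumed ordered as (X, Y, Z)
    # R:   (3, 3) scene-displacement rotation matrix
    return w, xyz @ R.T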