3Dc
3Dc (also known as DXN, BC5, or Block Compression 5) is a lossy data compression algorithm for normal maps invented and first implemented by ATI. It builds upon the earlier DXT5 algorithm and is an open standard. 3Dc is now implemented by both ATI and NVIDIA.
Target Application
The target application, normal mapping, is an extension of bump mapping that simulates lighting on geometric surfaces by reading surface normals from a rectilinear grid analogous to a texture map, giving simple models the impression of increased complexity. This additional channel, however, increases the load on the graphics system's memory bandwidth. Pre-existing lossy compression algorithms implemented on consumer 3D hardware lacked the precision necessary for reproducing normal maps without excessive visible artefacts, justifying the development of 3Dc.
Algorithm
Surface normals are three-dimensional vectors of unit length. Because of the length constraint, only two elements of any normal need to be stored; the third can be reconstructed from them. The input is therefore an array of two-dimensional values.
Compression is performed in 4x4 blocks. In each block the two components of each value are compressed separately. For each block, each of the two components has a palette of 8 values to choose from. The palettes are generated from two values representing the start and end of a line, with the other six values generated by linear interpolation between the start and end values.
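As a concrete illustration, the sketch below builds the 8-entry palette for one component from its two stored endpoint values and reconstructs the dropped third normal component from the two that are kept. The function names and the evenly spaced palette layout are illustrative assumptions based on the description above, not the exact bit-level ordering used by the format.

```python
import math

def build_palette(lo, hi):
    # Eight evenly spaced values from the stored low endpoint to the
    # stored high endpoint; the six interior entries are the interpolated
    # values mentioned above. (Illustrative ordering only; the real format
    # stores the two endpoints first and interpolates the remaining six.)
    return [lo + (hi - lo) * i // 7 for i in range(8)]

def reconstruct_z(x, y):
    # x and y are the two stored components, remapped to [-1, 1].
    # The unit-length constraint x^2 + y^2 + z^2 = 1 recovers z.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
```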
Compression is naively achieved by finding the lowest and highest values of the 16 pixels to be compressed and storing each of those as an 8-bit quantity. Individual elements within the 4x4 block are then stored with 3 bits each, representing their position on an 8-step linear scale from the lowest value to the highest. Each pixel's 3-bit value (the palette index) is chosen as the palette entry with the minimum distance from the original value.
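A minimal sketch of this naive per-component encoding, reusing the hypothetical build_palette helper above; it returns the two endpoints and the 16 palette indices rather than packed format bits.

```python
def encode_block_component(values):
    # 'values' is a flat list of the 16 samples of one component
    # from a 4x4 block, each an integer in 0..255.
    lo, hi = min(values), max(values)
    palette = build_palette(lo, hi)
    # For every sample, pick the index of the closest palette entry.
    indices = [min(range(8), key=lambda i: abs(palette[i] - v)) for v in values]
    return lo, hi, indices
```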
Total storage is 128 bits per 4x4 block once both source components are factored in: each component requires two 8-bit endpoint values plus 16 three-bit indices, i.e. 64 bits. In an uncompressed scheme with similar 8-bit precision, the source data is 32 8-bit values for the same area, occupying 256 bits. The algorithm therefore produces a 2:1 compression ratio.
The compression ratio is sometimes stated as being "up to 4:1" because it is common to use 16-bit precision for input data rather than 8-bit. This produces compressed output that is one quarter of the size of the input, but it is not of comparable precision.
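The bit budget can be checked directly; the short calculation below simply restates the arithmetic of the two paragraphs above for the 8-bit and 16-bit cases.

```python
def block_bits(components=2, index_bits=3, endpoint_bits=8, samples=16):
    # Per component: two stored endpoints plus one index per sample.
    return components * (2 * endpoint_bits + samples * index_bits)

compressed = block_bits()          # 2 * (16 + 48) = 128 bits per 4x4 block
uncompressed_8bit = 2 * 16 * 8     # 256 bits -> 2:1 ratio
uncompressed_16bit = 2 * 16 * 16   # 512 bits -> the "up to 4:1" figure
print(uncompressed_8bit / compressed, uncompressed_16bit / compressed)
```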
3Dc+
3Dc+ (FourCC: ATI1), also known as BC4 or Block Compression 4, compresses single-channel textures such as light maps, shadow maps, HDR textures, and material properties. 3Dc+ provides a 2:1 compression ratio with single-component (DXT5 alpha) 8-bit integer textures and a 4:1 compression ratio with normal maps and textures consisting of two 8-bit integer components.
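For comparison with the encoder sketched earlier, the following is a rough sketch of decoding one single-channel block back to 16 values, reusing the hypothetical build_palette and encode_block_component helpers above. Real decoders read the two endpoints and the packed 3-bit indices out of a 64-bit block; that bit unpacking and the format's exact index-to-palette mapping are not reproduced here.

```python
def decode_block_component(lo, hi, indices):
    # Inverse of encode_block_component: expand the two endpoints into
    # the 8-entry palette and look each of the 16 indices up in it.
    palette = build_palette(lo, hi)
    return [palette[i] for i in indices]

# Round-trip example for one component of a 4x4 block.
original = [10, 12, 200, 199, 50, 60, 70, 80,
            90, 100, 110, 120, 130, 140, 150, 160]
lo, hi, idx = encode_block_component(original)
approx = decode_block_component(lo, hi, idx)
```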