= Efficient Textures in Crystal Space =
Textures are an essential part of 3D content: they allow surfaces to appear with much greater detail than geometry alone could provide at a comparable performance cost. Everyone knows that they are used to control the colors, patterns etc. that appear on a surface, and lately, with shaders, further aspects such as shininess, surface normals (important for lighting), and much more.
This guide assumes that you know some texture basics and how to attach a texture to a CS material. It contains information on how to create efficient textures and explains how to control aspects of texture rendering, such as the format in which they are uploaded to the graphics hardware.
3D hardware (but also software) can render textures more efficiently if their dimensions are powers of two (abbreviated "PO2"), e.g. 256x256, 512x128, and so on - to the point that hardware and graphics APIs (e.g. OpenGL) require textures to have PO2 dimensions. Crystal Space requires this too, for all textures, 2D and 3D. While you can feed non-PO2 textures into CS, they will be resized internally to a PO2 size (e.g. 640x480 will become 512x512). The resizing isn't very good, though: the texture will end up looking rather ugly when rendered.
"But", you may say now, "doesn't modern hardware support non-PO2 textures?" Well, yes, however:
- Some hardware still doesn't support them,
- They have limitations (no mipmapping, texture coordinates need to be in a different range, ...)
Bottom line: Use power-of-two sized textures.
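To illustrate the resizing, here is a small sketch that rounds a dimension to the nearest power of two, matching the 640x480 → 512x512 example above. (The exact rounding rule CS uses internally is an assumption here; this is illustrative only.)

```python
def nearest_pow2(n: int) -> int:
    """Round n to the nearest power of two (illustrative; CS's exact
    internal rounding rule may differ)."""
    lower = 1 << (n.bit_length() - 1)  # largest PO2 <= n
    upper = lower * 2                  # smallest PO2 > n
    return lower if n - lower <= upper - n else upper

# 640 rounds down to 512, 480 rounds up to 512 -> 640x480 becomes 512x512.
```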
Another facet is what size to actually use; it all depends on the content, of course. Some points to consider:
- The days when certain hardware was limited to 256x256 textures are long gone; nowadays, the limit is 2048x2048 or 4096x4096. Texture compression also allows for performant high-res textures; use them.
- You can easily downsize a texture when you find it is too large in some case. Upsizing a texture that looks ugly or too blurry won't help - you cannot get information that's just not there. So it's better to start off with textures that are too large than too small.
- The OpenGL renderer allows user-configurable use of lower-res versions of textures through the Video.OpenGL.TextureDownsample config setting - you don't need to worry that much about users with low video memory, as they can get a performance increase by changing this setting to something >0.
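For example, a user with little video memory might add something like the following to their configuration. (The file location is application-dependent, and the exact semantics of values above 0 - e.g. how many times the dimensions are halved - are an assumption here; the text above only states that values >0 downsample.)

```
; Hypothetical excerpt from a CS config file.
; 0 = full-size textures (assumed default); values > 0 use
; lower-resolution versions of textures, trading quality for speed.
Video.OpenGL.TextureDownsample = 1
```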
== Texture file format ==
A texture (obviously) has to be physically stored somewhere. CS supports a number of image formats to load textures from: common ones such as PNG, TGA, BMP, JPG, DDS, and GIF, and less common to exotic ones like MNG and JNG. Which format you want depends on considerations like whether you need alpha, whether you can accept lossy compression, and the disk space taken up. The color depth is pretty much unimportant: CS uses truecolor textures when rendering with OpenGL, so e.g. using paletted textures gains you exactly nothing over truecolor images at runtime.
Alpha is supported by PNG, TGA, BMP, MNG, JNG, and DDS. The compression is lossless for PNG, GIF, TGA, and BMP, and can be lossless or lossy for MNG and DDS. MNG is a bit special: it is an animation format and hence is usually used for animated textures.
Commonly, PNG and TGA are used for textures with alpha and JPG for textures without. "Makes sense", you may say, "PNG and JPG cover all my needs and their compression is great." Actually, though, the best format you can use is DDS.
Why? Well, let's take a look at what happens when you load e.g. a PNG into CS:
1. The image data is decompressed.
2. Mipmaps are created.
3. The texture is uploaded to the graphics hardware.
Less obviously, step 3 actually contains a recompression. This is because CS uses texture compression (which has a rather positive effect performance-wise), but the texture data is sent to OpenGL in RGB(A) format, which means the driver has to compress the texture itself - and this costs some time.
How does DDS fit in? In DDS files, (a) the image data can be stored in the same compression format(s) that hardware uses nowadays (DXT1, DXT3, DXT5), and (b) the mipmaps of a texture can be stored, too. That means steps 1 and 2 above are basically not needed, and neither is the recompression in step 3, as the already-compressed data just needs to be uploaded. Getting rid of all that processing greatly improves load time.
What about file size? Without alpha (DXT1 compression), 4 bits per pixel are needed; with alpha (DXT3/DXT5), 8 bpp. That is before any zip compression, though; with it, the gross file size of a DDS can rival that of PNGs and JPGs.
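The numbers above can be checked with a little arithmetic. The sketch below is illustrative only: it ignores that block-compressed formats actually round each mip level up to whole 4x4 blocks, which matters for the smallest levels of a mip chain.

```python
def dxt_payload_bytes(width, height, bpp, mipmaps=True):
    """Approximate raw DXT payload size in bytes (bpp = 4 for DXT1,
    8 for DXT3/DXT5). Simplified: small mip levels really occupy full
    4x4 blocks on the hardware, which is ignored here."""
    total = 0
    while True:
        total += (width * height * bpp) // 8
        if not mipmaps or (width <= 1 and height <= 1):
            break
        width = max(width // 2, 1)
        height = max(height // 2, 1)
    return total

# A 1024x1024 texture without alpha (DXT1) and without mipmaps:
# 1024 * 1024 * 4 / 8 = 524288 bytes (512 KiB); DXT5 doubles that.
# A full mip chain adds roughly a third on top.
```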
== Texture quality control ==
As mentioned above, textures in CS are compressed before being uploaded to the graphics hardware; while compressed textures are fast, they are sometimes undesirable (e.g. for normal maps - see the Nvidia page on bump map compression for an illustration of the problems). CS allows quality control here on a per-texture basis through "texture classes". Basically, a texture class is a collection of settings that control how a texture is uploaded to the graphics hardware. E.g. the lookup, normalmap and nocompress classes cause textures to be stored uncompressed on the hardware. Additionally, texture classes attach some "semantics" to textures - useful for tools or humans reading the raw world file. The class of a texture can be set by adding e.g. <normalmap/> to the <texture> block.
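In a world file this might look like the following sketch (the texture name and file path are hypothetical):

```xml
<texture name="wall_normals">
  <file>/lev/castle/wall_n.dds</file>
  <!-- texture class: stored uncompressed on the hardware -->
  <normalmap/>
</texture>
```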
== Transparent textures ==
CS supports two kinds of transparency: alpha and binary. Alpha allows smooth transparency, with the downside that geometry with such textures needs to be sorted back to front to display correctly; also, it usually can't write to the Z buffer (e.g. you can't use the zuse Z buffer mode), as transparent areas would be written to the Z buffer (and subsequently wrongly occlude other geometry).
Binary transparency does not have these disadvantages; it can be used without any special sorting or Z modes. However, as the name suggests, pixels are either fully transparent or fully opaque.
Usually, for binary transparency, a keycolor is set on the texture. Internally, keycoloring basically works as follows: all keycolored pixels are found, their color value is replaced with the average of all non-keycolored neighbor pixels (or the average of all non-keycolored pixels in the image if no neighbors are opaque), and their alpha is set to 0. (The color replacement is done to avoid color bleeding due to texture filtering - if it weren't done, you could e.g. see a pink halo around transparent areas if the keycolor is pink.)
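The keycolor preprocessing described above can be sketched as follows. This is illustrative only, not CS's actual implementation; images are represented here as lists of rows of (r, g, b) tuples.

```python
def apply_keycolor(pixels, key):
    """Turn keycolored pixels transparent (alpha 0), replacing their
    color with the average of non-keycolored neighbors to avoid halos.
    Sketch of the idea only; CS's real code differs in details."""
    h, w = len(pixels), len(pixels[0])
    # Fallback color: average of all non-keycolored pixels in the image.
    opaque = [p for row in pixels for p in row if p != key]
    fallback = tuple(sum(c) // len(opaque) for c in zip(*opaque))
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if pixels[y][x] != key:
                row.append(pixels[y][x] + (255,))  # opaque pixel, keep it
                continue
            # Average the non-keycolored 8-neighbors, if any exist.
            nbrs = [pixels[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x) and pixels[ny][nx] != key]
            color = (tuple(sum(c) // len(nbrs) for c in zip(*nbrs))
                     if nbrs else fallback)
            row.append(color + (0,))  # transparent pixel
        out.append(row)
    return out
```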
This, however, means that a keycolored image needs to be preprocessed prior to upload to the hardware, i.e. steps 1-3 above are always performed. You can achieve binary transparency much more efficiently if you don't use a keycolor, but instead put the transparency into the alpha channel (and fill the transparent areas with a color that doesn't cause obvious bleeding). Binary transparency can be enabled by adding <alpha> <binary/> </alpha> to the <texture> block.
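A world-file fragment for this might look as follows (texture name and path are hypothetical; the transparency mask lives in the image's alpha channel):

```xml
<texture name="fence">
  <file>/lev/castle/fence.png</file>
  <!-- binary transparency from the alpha channel: no keycolor preprocessing -->
  <alpha> <binary/> </alpha>
</texture>
```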