Eternal Lands Official Forums
Roja

Texture Masking

Recommended Posts

I would really like to have Texture Masking in the game.

 

Here is an example image of what it is:

http://img.photobucket.com/albums/v59/ffle...neshiftmask.jpg

 

Basically, this allows us to mix/match different areas of texture files to have a lot of combinations with a minimum amount of texture files.

e.g. We have 4 texture files for a shirt. Each has a different shirt color AND each has a different skin color(like the neck area showing). With a texture mask, we can make it so that any of those skin colors can be matched with any of those shirt colors. Without the mask, we'd have to make a texture file for each separate combination of shirt/skin.

 

If we have texture masking we will be able to have a lot more different looks such as:

 

-skin showing on torso/arms/legs/feet (so we can have short sleeved shirts, vests, armor with your shirt underneath showing through, nice dresses, etc)

-new races like Minotaurs that require a lot of skin to show.

-tattoos

 

And many other things of course :whistle:

Please let me know if you have any questions, I will provide all art needed to test this and such.


I looked at this topic when it was first posted.

 

The example has a mask image in which the colour values denote which image to read for that pixel. Basically, it is serving as an index into a layered image set.

 

I started by thinking about the best way to implement this, and how it could be used. But it seems rather limited and inflexible, since all your textures have to be coordinated against the mask image.

Wouldn't it be simpler and more flexible to just overlay & merge images with transparency?

 

Using images with alpha channels you can build up a composite texture from layers -- for example, starting with a skin layer, skin decoration (tattoos), clothing layers, etc. There is no need to define what areas come from what layer, as that is held by the layers themselves.

 

Want to implement a surcoat? Then create an image of it with appropriate transparency, autoupdate, and it can be added over any other body texture.

 

Want some coats of arms to decorate the surcoat? Draw a few patterns, with transparency, and combine them as required; a limited heraldry on the fly.

 

Now, how to merge the layers?

 

The regular "over" operator for alpha composition is given by:

A_r = A_f + A_b ( 1 - A_f )
C_r = C_f A_f + C_b ( 1 - A_f )

Where A is the alpha value (0..1), C the colour value, and the subscripts r, f, b signify resultant, foreground, and background respectively. Notice that the background alpha value (A_b) is not used in determining the colour value.
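As a concrete illustration (all the names here are mine, not from the EL client), the "over" operator above, applied per pixel on 8-bit channels, might look like:

```c
/* Sketch of the "over" operator on 8-bit channels (0..255 mapped to 0..1).
   Type and function names are illustrative. */
typedef struct { unsigned char r, g, b, a; } Pixel;

Pixel blend_over(Pixel f, Pixel b)
{
    float af = f.a / 255.0f, ab = b.a / 255.0f;
    float ar = af + ab * (1.0f - af);        /* A_r = A_f + A_b (1 - A_f) */
    Pixel out;
    if (ar == 0.0f) { out.r = out.g = out.b = out.a = 0; return out; }
    /* C_r = C_f A_f + C_b (1 - A_f); note A_b plays no part in the colour */
    out.r = (unsigned char)(f.r * af + b.r * (1.0f - af) + 0.5f);
    out.g = (unsigned char)(f.g * af + b.g * (1.0f - af) + 0.5f);
    out.b = (unsigned char)(f.b * af + b.b * (1.0f - af) + 0.5f);
    out.a = (unsigned char)(ar * 255.0f + 0.5f);
    return out;
}
```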

 

There is a problem with this method: It is not associative!

 

This means that for three layers, j,k,l; the order in which you merge them matters; (j+k)+l is different to j+(k+l). This is a problem if you want to build up sets of layers, such as common costumes, to be cached and used later.

 

Instead, there is an alternative form of alpha composition, not as commonly used, which does associate:

u_f = 1 - A_f
A_r = A_b u_f + A_f
C_r = ( C_b A_b u_f + C_f A_f ) / A_r

(Where u_f is just for convenience).

 

By using this method you are not forced to compose all your layers at one time, but can cache common sequences; which can be a great saving.
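A sketch of this associative blend (names are mine, not from the EL source); because it associates, merging (j+k)+l and j+(k+l) gives the same pixel:

```c
/* Sketch of the associative blend from the formulas above, on floats 0..1.
   Illustrative names only. Takes foreground f over background b. */
typedef struct { float r, g, b, a; } FPixel;

FPixel blend_assoc(FPixel f, FPixel b)
{
    float uf = 1.0f - f.a;            /* u_f = 1 - A_f            */
    FPixel out;
    out.a = b.a * uf + f.a;           /* A_r = A_b u_f + A_f      */
    if (out.a == 0.0f) { out.r = out.g = out.b = 0.0f; return out; }
    /* C_r = (C_b A_b u_f + C_f A_f) / A_r */
    out.r = (b.r * b.a * uf + f.r * f.a) / out.a;
    out.g = (b.g * b.a * uf + f.g * f.a) / out.a;
    out.b = (b.b * b.a * uf + f.b * f.a) / out.a;
    return out;
}
```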

BMP doesn't handle transparency well

 

You may want to consider not using BMP as your image format, especially as it lacks good alpha channel support. Only the client and the graphics developer need to understand the format, so keep it simple, and provide some development tools and converters.

 

Just dump the uint8 array to a file with a simple header to give the size, and you have a custom image format optimised for the client -- we don't care about anyone else!

...personally, I'd also read/write these via something like zlib.
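A minimal sketch of such a dump format, assuming a header of just width and height (the real layout would be whatever the client defines); zlib's gzopen/gzwrite/gzread could replace the stdio calls one-for-one if compression is wanted:

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch: raw-RGBA texture dump -- a small size header followed by the
   uint8 pixel array. Header layout is an assumption for illustration.
   Fields are written in native byte order; fine for a client-only format. */
int write_raw_rgba(const char *path, uint32_t w, uint32_t h,
                   const uint8_t *rgba)
{
    FILE *fp = fopen(path, "wb");
    if (!fp) return 0;
    int ok = fwrite(&w, sizeof w, 1, fp) == 1
          && fwrite(&h, sizeof h, 1, fp) == 1
          && fwrite(rgba, 4, (size_t)w * h, fp) == (size_t)w * h;
    fclose(fp);
    return ok;
}

int read_raw_rgba(const char *path, uint32_t *w, uint32_t *h,
                  uint8_t *rgba, size_t max_bytes)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) return 0;
    int ok = fread(w, sizeof *w, 1, fp) == 1
          && fread(h, sizeof *h, 1, fp) == 1
          && (size_t)*w * *h * 4 <= max_bytes
          && fread(rgba, 4, (size_t)*w * *h, fp) == (size_t)*w * *h;
    fclose(fp);
    return ok;
}
```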

No changes to protocol; filenames become layer specifications

 

Layered images can then be specified by interpreting the texture filename already sent to the client. Allow 2 or 3 characters for each layer ([A-Z0-9] gives 1,296 or 46,656 combinations), then the filename is a concatenation of the layer names: "ABCDEFGHIJ" implies concatenating layers "AB", "CD", "EF", "GH", "IJ".
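A sketch of how the client might split such a filename into layer codes (the fixed two-character width and the buffer sizes are assumptions for illustration):

```c
#include <string.h>

/* Sketch: split a texture name like "ABCDEFGHIJ" into two-character
   layer codes "AB","CD","EF","GH","IJ". Sizes are illustrative. */
#define LAYER_LEN  2
#define MAX_LAYERS 16

int split_layers(const char *name, char out[MAX_LAYERS][LAYER_LEN + 1])
{
    size_t len = strlen(name);
    int n = 0;
    if (len % LAYER_LEN != 0)
        return -1;                  /* not a whole number of layers */
    for (size_t i = 0; i < len && n < MAX_LAYERS; i += LAYER_LEN, n++) {
        memcpy(out[n], name + i, LAYER_LEN);
        out[n][LAYER_LEN] = '\0';
    }
    return n;
}
```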

Edited by trollson


But it seems rather limited and inflexible, since all your textures have to be coordinated against the mask image.

 

Actually there's really no problem with this. All textures are set up in specific places first off, so lining them up with the mask is no problem at all.

 

Secondly, there needs to be more than 1 mask texture.

Examples:

 

-Short sleeve shirt mask

-Sleeveless shirt mask

-Dress mask

-shorts mask

-skirt mask

A mask for each type of outfit.

 

 

 

About your other suggestions, I honestly can't comment because I'm not a programmer.

 

On another note, we were talking a bit about separating all textures to be stand-alone. Right now all the textures that make up a character model are combined into a 256×256 block in the code. If we separated them we wouldn't have to worry about making them fit in that block; they'd each be a power of 2 in size.

However, I don't know if this would be more or less efficient than what we have now in terms of performance. Learner knows more about that stuff; I was talking to him about it a while ago.

 

Also, here is a test program that Mikeman made a while ago for texture masking:

http://www.eternal-lands.com/misc/SkinMask_Demo.zip

Actually there's really no problem with this. All textures are set up in specific places first off, so lining them up with the mask is no problem at all.

That's what I was intending to avoid -- the need for any separate mask at all, and the implicit requirements on how to build up a complete texture.

 

By using alpha channel images, the components carry their own mask. Any set of layers can be merged in any order -- so you can have a character wear a short-sleeved shirt over a long-sleeved one, with no additional work.

 

The alpha channel also allows semi-transparency, good for avoiding the harsh edges from a simple mask, allowing semi-transparent clothes (silks, lace), and especially tattoos, which are then blended with the character's skin tone.

 

In the long term, creating textures this way should be less work, easier to maintain, and give far more flexibility in combinations and additions.


I think the downside to your approach would be that it would require more files overall.

 

Example:

The torso texture. It includes 1. shirt 2. skin

We have many skin colors in the game, and many shirt colors.

Let's say we have 6 shirt colors and 6 skin...this would require ONLY 6 textures with the way I have described + 1 mask file.

 

With yours, would it not require 12?

Let's say we have 6 shirt colors and 6 skin...this would require ONLY 6 textures with the way I have described + 1 mask file.

With yours, would it not require 12?

No, it only requires six files, one per shirt. The shirts could also be different designs, different collar shapes, sleeve lengths, since they don't have to map to the same mask -- they are their own masks.


Considering how low-end many of our players' machines are, anything that can be done to avoid alpha blending or masking during the actual per-frame rendering would be good. If we could do it once per player to get the combination we need, that would be good.

If we could do it once per player to get the combination we need, that would be good.
Do the merge as part of loading the layered texture, and cache the result (hence reinterpret the texture filename as a layering description). The use of associative blending means that any set of layers can be cached as a blendable texture in its own right.


No, it only requires six files, one per shirt. The shirts could also be different designs, different collar shapes, sleeve lengths, since they don't have to map to the same mask -- they are their own masks.

 

But if what you're saying is putting the ALPHA inside the file, like a Targa or PNG, that means there's only 1 mask there. That 1 mask, black & white, can I suppose in fact be 2 masks (all of the white area = shirt color, all of the black = skin color), but what if we needed a 3rd? It's not possible.

Plus all the work in programming a different format, converting all the textures... I think it'd be easier just to make a separate mask file when needed, just like the way we use alpha maps on the 3D objects.


But if what you're saying is putting the ALPHA inside the file, like a Targa or PNG, that means there's only 1 mask...

Huh? No, that's not it at all. There are no masks, so there is no limit to how layers can be combined.

 

First of all, the alpha channel has values 0..1 (represented by 0..255); it's the fourth byte in the RGBA image (32-bit image). The advantage of using an alpha channel, as opposed to a boolean mask, is that edge effects can be smoothed and semi-transparent textures made.

 

The client already uses a full alpha channel; whatever format the texture is in, it is expanded to RGBA when it is loaded.

 

Let's break down an example composition. Consider the following few texture layers; where I say a layer "covers" an area, I mean that it is transparent outside that area (cf. it is the area of the mask):

  1. The skin texture covers the entire body.
  2. The pants texture covers part of the legs and lower torso.
  3. The shirt texture covers the torso and arms.

Now the desired texture is made by layering textures in a desired order:

  • [1] gives a naked character.
  • [1+2] gives a character wearing trousers.
  • [1+2+3] gives a character wearing shirt and trousers, the shirt is over the trousers.
  • [1+3+2] gives the shirt tucked into the trousers.

A different shirt texture could have short sleeves, or a v-neck. A different pair of pants could be short legged. It doesn't matter -- that information is entirely defined by the individual texture.

 

Decorations can be added as additional textures. Since they are responsible for their own alpha channels, there is no need to coordinate masks across multiple textures. So we could add a belt texture, or a waistcoat, and so on. These can all be designed individually, without impacting other textures.

 

Different textures can overlap when layered; that is not a problem. When they are layered together, the topmost opaque texture is visible.

 

There are no new image formats to be concerned with. In fact, the client drops support for BMP and just loads RGBA dumps directly -- after all, the overhead of supporting BMP is redundant, as the format used internally is RGBA. Standard image file formats are useful when you need to exchange files between different systems; in our case this does not apply, and as the only consumer of these images is the EL client, a proprietary or custom format can be preferable.

 

You do need a few tools to support development:

  • To convert between the texture format and a standard RGBA image format (both ways).
  • To merge a set of textures into a new texture (for testing).

The EL client already does the first, apart from dumping the bytes to a file (along with a size header). The same code which would be in the client can then be reused for the second.

Edited by trollson


So the full alpha channel can be the same thing as the texture itself? What happens when there is BLACK in the texture that needs to show? You need a separate alpha file, or an alpha channel in the texture file (which BMP does not support).

 

Also, the game currently does not have support for grayscale alpha, only 2 color alpha. As you can see, there are no semi-transparent blends in the game.

So the full alpha channel can be the same thing as the texture itself? What happens when there is BLACK in the texture that needs to show? You need a separate alpha file, or an alpha channel in the texture file (which BMP does not support).

 

Also, the game currently does not have support for grayscale alpha, only 2 color alpha. As you can see, there are no semi-transparent blends in the game.

Again, no. The alpha channel is separate from the RGB channels; you can have any colour (RGB) and any degree of transparency (A).

 

The client does support a full 8-bit alpha channel; if an alpha.bmp uses greyscale values, and not just black/white, then it will be honoured when the texture is loaded. The internal representation of textures is full RGBA, as used by OpenGL.

see: textures.c: load_alphamap()

There is no new image formats to be concerned with.

But in fact there would be, otherwise how do we have an alpha channel? :P

I mean that the RGBA "format" is already used by the client internally. Since this is a block of bytes, it is safe to write it directly to a file (no endian issues), rather than having the client decode a BMP whenever it loads an image.

 

The client does support full 8bit alpha channel

It may support it, but it's never used it.

http://www.eternal-lands.com/forum/index.php?showtopic=31188

It may not be used in the image files, but it is used by the client, even if the data only ever gives it black/white.

i.e., it's already doing the work.

Okay, since I'm coughing and spluttering at home today :D I'll do a cartoony example of layering textures...

 

We can use four layers for this example:

  1. the skin layer (David with naughty bits blurred).
  2. some fetching pants
  3. a shirt
  4. a medallion for that special night out.

[Attached images: david.png, pants.png, shirt.png, medalion.png]

In this case these are PNG with alpha channel. We can now layer and merge them in any order:

[Attached images: dp.png, dpsm.png, dsp.png, dpm.png]

Here we have:

  1. skin + pants.
  2. skin + pants + shirt + medallion. Notice that the shirt is over the pants, and the medallion on top.
  3. skin + shirt + pants. The shirt is now tucked into the pants, very Rod Stewart.
  4. skin + pants + medallion.

Edited by trollson


OK, well, by image format I meant changing from BMP to PNG or TGA... not something internal that I have no clue about in the first place :D

 

At any rate, if something like layers is feasible and better for the programmers and game performance, then sure, I'm not against it.


Premultiplied Alpha Blending

In an earlier post I described how to perform associative alpha blending:

U_f = 1 - A_f
A_r = A_b U_f + A_f
C_r = ( C_b A_b U_f + C_f A_f ) / A_r

Where C is the RGB values, A the alpha values, and the f, b, r subscripts stand for foreground, background, and resultant.

 

This operation has to be performed for each colour channel of each pixel (roughly); so if the intention is to layer lots of textures, we want to make this operation as quick as possible.

 

This can be done by storing our textures premultiplied by the alpha channel, rather than as straight RGBA quads; instead of

{ red, green, blue, alpha }

we store

{ alpha * red, alpha * green, alpha * blue, alpha }

Then the associative alpha blending is greatly simplified:

U_f = 1 - A_f
A_r = A_b U_f + A_f
C_r = C_b U_f + C_f

Where C is now the colour channel premultiplied by the alpha channel.

This can then be further optimised by branching on the special cases where A_f is either 0 or 1 (byte value 0 or 255); a no-op or a copy respectively. Very useful where images are predominately transparent or opaque.

Not only have we reduced the number of operations by 9 per pixel, but each channel, colour and alpha, is treated exactly the same! This is a real boon for optimisation by SIMD instructions (SSE etc).

...especially if FMA is supported (e.g. PowerPC).

It does mean that our texture format is now different from what we want to pass to OpenGL; so unless there is a mode for premultiplied-alpha textures, we have to convert (divide by alpha) before we pass to the graphics layer -- but even if we only perform one merge we still come out ahead.
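Putting the pieces together, here is a sketch of the premultiplied blend on byte values, with the special-case branches described above (the type and function names are mine, for illustration):

```c
#include <stdint.h>

/* Sketch of the simplified associative blend on premultiplied 8-bit
   pixels. Channels are stored as { a*r, a*g, a*b, a }.
   Illustrative names, not from the EL source. */
typedef struct { uint8_t r, g, b, a; } PPixel;

PPixel blend_premult(PPixel f, PPixel b)
{
    if (f.a == 0)   return b;    /* fully transparent foreground: no-op */
    if (f.a == 255) return f;    /* fully opaque foreground: plain copy */
    uint32_t uf = 255 - f.a;     /* U_f = 1 - A_f, on the byte scale    */
    PPixel out;
    /* C_r = C_b U_f + C_f ; A_r = A_b U_f + A_f
       (each product rescaled by /255, with +127 for rounding) */
    out.r = (uint8_t)((b.r * uf + 127) / 255 + f.r);
    out.g = (uint8_t)((b.g * uf + 127) / 255 + f.g);
    out.b = (uint8_t)((b.b * uf + 127) / 255 + f.b);
    out.a = (uint8_t)((b.a * uf + 127) / 255 + f.a);
    return out;
}
```

Note that every channel, including alpha, goes through the identical multiply-add, which is what makes the SIMD optimisation mentioned above straightforward.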

 

To summarise:
The EL texture format should record alpha-multiplied colour values to best optimise layering.

Update 2007-02-05:

Looking for OpenGL support of premultiplied textures, the use of premultiplied-alpha textures turns out to be much commented on (Google). Not surprising really, and many systems expect this format (window managers).

 

One particular comment which keeps coming up:

 

...the proper (mathematical) way to do blending in OpenGL is to use premultiplied texture and

glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA)

and not

glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

as written in many places.

So does OpenGL correctly handle a premultiplied-alpha RGBA texture that is bound to it? There may be more to tell OpenGL when doing so.

 

However, while this may be useful, it does not eliminate the need to do our own blending. OpenGL doesn't know the context of a layered texture; we can pre-blend and cache layered textures for future repeated use (including caching to temporary files).

Edited by trollson


Quick thought from this morning...

 

If we have multiple textures and compositions for different monsters and creatures, we can have consistent, varied representations -- for example, various faces (built from parts), tattoos and scars, clothing.

 

All this costs in bandwidth is one integer (2 or 4 bytes) per creature, the "seed". So the protocol says:

"you see Orc #234456".

We need to add an explicitly coded pseudo-random number generator (PRNG), so we can create a new PRNG object, initialise it with the seed number, and get a repeatable number sequence.
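For illustration, here is a minimal explicitly coded PRNG (the well-known Park-Miller "minimal standard" generator; any fixed algorithm would do, as long as every client implements it identically):

```c
#include <stdint.h>

/* Sketch of an explicitly coded PRNG: same seed, same sequence, on every
   platform. Names are illustrative, not from the EL source. */
typedef struct { uint32_t state; } Prng;

void prng_seed(Prng *p, uint32_t seed)
{
    p->state = seed ? seed : 1;   /* state must be non-zero */
}

uint32_t prng_next(Prng *p)
{
    /* x <- 16807 * x mod (2^31 - 1), the Park-Miller recurrence */
    p->state = (uint32_t)(((uint64_t)p->state * 16807u) % 2147483647u);
    return p->state;
}
```

Two clients seeding with the same creature number ("Orc #234456") then draw the same layer choices in the same order, which is what makes the composed texture identical everywhere.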

 

The textures for the creature are cached with the seed as a suffix ("orcface#234456"). If the texture is not present, then it is composed from rules, using the seed and a PRNG to determine the layers to merge.

 

Ideally, the composition process will be generic, fed with a "texture family" and composed according to some general set of rules (otherwise we'd need a composition process for each creature/texture type).

 

The PRNG can be used in many other places to create the same experience for all players, from a single integer seed. In effect it is an ultimate de-compression algorithm (there is no corresponding compression algorithm though).

 

This approach is common in simulation software where repeatability is important (as here). It was also used in the original Elite to pack the galaxy onto a floppy.

 

I was intending to tidy up some example code for texture merging, including, critically, converters to/from PNG and BMP; but I have had a little upgrade nightmare this weekend (rebuilding packages since Saturday afternoon, still going Monday morning...)

Update 2007-02-23

This would also work nicely with "named monsters", if these are ever used; we could determine the seed number from the hash of their name.

Edited by trollson

This topic is now closed to further replies.