      Serious Engine Skeletal Animation (SKA) file format and usage description
      =========================================================================





General info on Serious Engine SKA
=================================================


First of all, note that the new SKA system in Serious Engine has nothing to do with the old .mdl models. SKA is designed from the ground up to be much more flexible and powerful than the old models, so the two share no common animation or rendering code. SKA uses a completely different renderer and animator, with all features completely independent. In the engine, both old models and SKA models support the same types of collision and raycasting, can be models and editormodels, etc., but they are rendered separately. 
Therefore you cannot attach one type of model to another (using attachments). Keep that in mind when planning eventual ports from .mdl to SKA.

Note that this is a very advanced system and that some modeling/animation packages might not support some of the features. Lightwave supports all of them and we have plugins available for Lightwave Modeler and Lightwave Layout programs that export meshes, skeletons and animations. 


Model components involved
-------------------------

Each skeletal model (CModelInstance in code) uses one or more of the following components: mesh, texture, skeleton and animset. You can have only one skeleton per model instance, but several meshes and animsets are allowed. Additionally, you can attach child models to it, similarly to the attachment models in the old .mdl format, and child model skeletons may influence parent meshes. That way you can build hierarchical skeletons that control one mesh. To each mesh in a model instance, you can assign several textures.

You can use a single mesh per model, but you may want to split some models into several meshes, so you can exchange parts (like different heads on one body). Multiple meshes can still be continuous and controlled by only one skeleton (if you want). There is no hierarchy between meshes in one model. Each mesh can have several LODs, all completely independent. You don't have to preserve vertices, textures or weight maps between different LODs. Usually, if you use several meshes for one model, it is best that their LOD distances match, so you don't have mixed LODs at the same time, especially if the meshes are supposed to be continuous.

Textures are just added to the texture list, and names are assigned to them when adding them. Shader parameters in a mesh then refer to those textures by their names. So you can put one texture in a mesh, and several different surfaces in the mesh can use it in their shaders. The same old .tex format is used. Note that the SKA renderer completely disregards the texture size in mex/meters. Unit UV coordinates (in the range 0-1) apply, as used in most modern modeling and animation packages. This should simplify the process of UV mapping a model.


Usually, you only have one skeleton for a model. When you add a child model, you specify which bone you want it attached to. For instance, you can have the wings attached to a spinal bone of a model, and use separate animations for the wings, to create a winged character.
Bones in skeletons are matched to the weightmaps in meshes by their names. There is no one-to-one matching between skeletons and meshes. I.e. one skeleton can control multiple meshes, one mesh can be controlled by multiple skeletons, or any combination of that.
Examples: 
- If you put body and legs in different meshes, and create one skeleton for entire model, you can exchange different bodies while still having them all controlled by one skeleton. 
- Or you could use one mesh for both body and legs, and make two skeletons, one for legs and one for body. So you can exchange them and play different anims on them. (However, you don't need separate skeletons to play separate anims, more on that below).
- In a very specific case, you can use a setup where you have separate body and legs meshes with one skeleton to create an odd winged combination: Just replace the body mesh with a body that has wings, and attach a wings skeleton to a spinal bone of the base skeleton. In this case, a part of the body mesh is controlled by the base skeleton, while a part of it is controlled by the wings skeleton.

Each skeleton can have several LODs, all with their own completely independent hierarchy. But, to be able to use animations and meshes properly, you should only drop child bones, toward the leaves of the skeleton tree, or eventually remove some bones in the middle of a chain.


Animation sets are added to the animation set list of each model. Each animation set contains one or more animations. When you add several animsets to one model, all their anims are treated equally. I.e. you don't have to specify an animset when you want to play an animation. Just refer to the animation by name, and it is played. 
One animation consists of one or more envelopes. Each envelope can control a bone or a morph weight. At the level of the animation-playing API, there is no distinction between animations that control vertex morphing and those that control skeletal animations. In fact, one animation can control both several bones and several morphs - all in sync. It is not required that an animation has envelopes for all bones in a skeleton. So you can animate one skeleton by playing several partial animations on it.

Animation envelopes are matched to bones on the skeleton(s) or morphmaps on the mesh(es) using their names. More than one animation can be played at the same time. This way you can have separate animations for different parts of a skeleton. For example, you can have one skeleton for entire model, but play separate animations for legs and body. Or you can overlay body animation over the animation for the whole model, to override current body position with some misc animation.

If more than one animset has an animation with the same name, the one added later has priority. That way you can override some animations.
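As a minimal sketch of this override rule (the class, the lookup helper and the animation names here are hypothetical illustrations, not engine API), the lookup can be pictured as searching the animset list from the most recently added set backwards:

```python
# Hypothetical sketch: later animsets shadow earlier ones by name.
# AnimSet and resolve_animation are illustrative, not engine code.

class AnimSet:
    def __init__(self, anims):
        self.anims = anims  # maps animation name -> animation data

def resolve_animation(animsets, name):
    """Search animsets from last-added to first; the later set wins."""
    for animset in reversed(animsets):
        if name in animset.anims:
            return animset.anims[name]
    return None

base = AnimSet({"Walk": "base walk", "Run": "base run"})
override = AnimSet({"Walk": "limping walk"})  # overrides only "Walk"

sets = [base, override]
```

Playing "Walk" on this model would then pick the limping variant, while "Run" still comes from the base set.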


Additional note:
- With the above concepts, the notion of model attachments is blurred a bit. Just as before, you can add child models with new meshes and skeletons to models, attaching the skeletons to specific bones, and that will have the same effect as old attachments. But this is more flexible, since you can attach models that will be smoothly joined with the base model by sharing weightmap names with them, or you can attach several meshes to one skeleton to animate them together, etc.


Shaders and shader parameters
-----------------------------

Shaders in our SKA format are semi-procedural scripts that define the appearance of a model's surface. You can think of them as 'materials'. A shader can have completely different implementations (and possibly even appearance - especially if some features are not supported on the underlying hardware). Each shader has one or more possible implementations that are compiled into a DLL; the proper one is chosen based on current settings and used during rendering.
A shader usually requires some parameters, like which textures, colors, etc. to use. The parameters are not fixed during compilation, but rather queried during rendering - either from the surface that the shader is applied to, or from the entity that owns the model.
A shader defines the complete rendering - i.e. all layers of textures on the surface - and it can use a different number of passes or hardware texture units in different implementations. It references UV maps from the mesh and parameters from the surface by their names.

This system allows one shader to be used with different textures and colors. It is much friendlier and simpler for the user who has to apply the shaders to the models. The user doesn't need to create a new shader for each texture that needs to be used, but rather just selects a different texture using one of the common shaders. 
On the other hand, the system allows for a simple fallback if the underlying hardware doesn't support some of the features needed. In that case a simpler implementation is used. It can either emulate the same effect using multiple passes, or fall back to more simplistic rendering.

One example of a shader would be a bump shader that uses the base texture of the model and a bump map. Each mesh using the shader would have two UV maps - one for the base texture and one for the bump map. A model instance would define the two texture images as well. 
On top-notch hardware, the shader will use pixel shaders to provide high quality DOT3 bump mapping in a single pass. On a bit older hardware, it can use 2 passes with multitexturing for emboss bump mapping, or 1 pass of multitexturing using the bump map only as a detail texture. On legacy devices, it would probably dump the whole bump idea and just render the base texture.
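The fallback selection for that bump shader can be pictured with a small sketch. The capability names and the ordered best-first selection rule are assumptions for illustration, not the engine's actual mechanism:

```python
# Illustrative sketch of choosing a shader implementation by hardware
# capabilities. Capability names are assumptions, not engine code.

IMPLEMENTATIONS = [
    # (required capabilities, description), ordered best-first
    ({"pixel_shaders"},  "DOT3 bump mapping, single pass"),
    ({"multitexturing"}, "emboss bump mapping, 2 passes"),
    (set(),              "base texture only"),
]

def pick_implementation(hardware_caps):
    # Take the first implementation whose requirements are all present.
    for required, description in IMPLEMENTATIONS:
        if required <= hardware_caps:
            return description
    return None
```

The last entry requires nothing, so some implementation is always available - mirroring the "legacy devices just render the base texture" case.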

The parameters queried from the entity are used for the fully procedural effects, like UVmap scrolling etc.


General name and hierarchy matching
-----------------------------------

We use a global naming pool that allows ASCII descriptive names (like 'LeftLegUpper', 'RunWithGun', etc.) to be mapped to unique IDs. These IDs are not retained across different invocations of the application. The only guarantee is that, within one application run, a call to map a certain string to an ID will always result in the same ID. The lookups are usually done at load time, to prevent string searches at runtime.
A 'name' is considered to be of string type when presented to the user, but is internally managed as an integer ID. Unless you are going to write entity code, you don't need to think about the IDs; just take note that things are matched by names:

Name of each texture in a mesh (used by the shaders) matches name of one texture assigned to the model.
Name of each weightmap in a mesh matches name of a bone in the assigned skeleton.
Name of each bone in a skeleton matches name of a bone in assigned animation set(s).
Name of each morphmap in a mesh matches name of a morphfactor animation in assigned animation set(s).
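The name pool itself can be sketched in a few lines. This is only an illustration of the run-local string-to-ID guarantee described above, not the engine's implementation:

```python
# Minimal sketch of a run-local name pool: the same string always maps
# to the same ID within one run, but IDs are not stable across runs.

class NamePool:
    def __init__(self):
        self._ids = {}

    def id_of(self, name):
        # First lookup assigns the next free ID; later lookups reuse it.
        if name not in self._ids:
            self._ids[name] = len(self._ids)
        return self._ids[name]

pool = NamePool()
```

All the name matching listed above (textures, weightmaps, bones, morphmaps) can then be done with cheap integer comparisons at runtime.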


Animation playing
-----------------

Each instance of a model has a special structure holding information about the currently playing animations. It is laid out as a queue of lists. At each moment, one list is current, while one or more lists might be fading out. Each list holds zero, one, or more currently playing animations. Each list has a fade time, and it gradually becomes more active over time, to allow animation blending.

The animations in one list are blended or added, in order. Each has a flag (blend or add) and a strength. Blend with strength 1 is the default operation - replacing. Since each animation influences only certain bones (or morphmaps), a partial animation (e.g. upper body only) can override a part of a full animation (e.g. both body and legs).

Lists are blended with each other over time, depending on the fade time of the newer list. An older list implicitly falls out of the queue when the list on top of it fades in to the full ratio (1.0).

As each animation is added to the current state, it can implicitly be assigned an ID (usually just a small number like 1, 2, 3...), so that, later, several animations can be removed at once by specifying that ID. (For example, if you use ID=1 whenever you play an animation that animates the legs moving, you can stop them all with one call, without knowing exactly which one is currently playing.) Additionally, the animations in the queue are sorted by IDs, so you can make sure that some overriding animation always comes after the one that it overrides, regardless of the actual order of adding them.

Animations played in a model instance can influence bones and morphs in that instance and in its children. Note that if you specify animations both in a parent model and in a child, and those two anims influence the same bone(s)/morphmap(s), the order of animation blending is undefined. Therefore, it is generally undesirable to do that.

Interface for animation playing allows following operations:

1) NewClonedState
Copies the current topmost list once more into the queue, and gives it a fade time. You can then use the add/remove functions to modify the state. This is used when you want to just add or remove some anim(s), while the rest continues playing uninterrupted. (E.g. add some waving with a hand while walking, and later stop waving, without ever stopping the walking in the meantime.)
2) NewClearState
Adds an empty list to the top of the queue, and gives it a fade time. You then add animations to the state. This is used when you want to completely replace the currently playing animations. (E.g. to change from walking into falling down.)
3) AddAnimation(animation, flags, strength, id=0)
Plays a new animation with given flags (blend or add type, looping or once,...), blending strength and an optional id.
4) RemAnimation(animation)
Removes the given animation from current state.
5) RemAnimsWithID(id)
Does RemAnimation() for all animations in the current state that have the given ID.
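The interface above can be sketched roughly as follows. The method names mirror the list, but the data layout, the default ID and the omitted fade handling are illustrative assumptions, not the engine's code:

```python
# Rough sketch of the queue-of-lists animation state described above.
# Fading, blend flags and strengths are omitted for brevity.

class AnimState:
    def __init__(self):
        self.queue = [[]]  # queue of lists; the last list is current

    def new_cloned_state(self, fade_time):
        # Copy the topmost list; fading over fade_time is not modeled here.
        self.queue.append(list(self.queue[-1]))

    def new_clear_state(self, fade_time):
        self.queue.append([])

    def add_animation(self, anim, anim_id=0):
        self.queue[-1].append((anim_id, anim))
        self.queue[-1].sort(key=lambda pair: pair[0])  # keep sorted by ID

    def rem_animation(self, anim):
        self.queue[-1] = [p for p in self.queue[-1] if p[1] != anim]

    def rem_anims_with_id(self, anim_id):
        self.queue[-1] = [p for p in self.queue[-1] if p[0] != anim_id]
```

For example, cloning the state, adding a waving animation with ID=2, and later removing everything with ID=2 stops the waving while the cloned walking animation keeps playing.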



Skeletal animations and vertex morphing explained
-------------------------------------------------

SKA supports both skeletal animation _and_ vertex morphing, and the two can be combined in any SKA model and animation. 

A skeleton file defines the hierarchy of the bones, their names and their default positions. 
In order for the skeleton to influence a mesh, the mesh must have weightmaps. A weightmap is a list of vertices influenced by a bone; for each vertex it contains an influence factor in the range 0-1. Weightmaps are matched to the bones by their names.
Finally, for the whole thing to start moving, we need an animation. The animation contains bone envelopes. Note that the animation doesn't hold any information about the bone hierarchy, and the animation can be partial, i.e. not containing all bones. Each envelope holds a bone name, its default position in the skeleton and its position in each frame of the animation. 
The renderer then matches different information from skeleton, mesh and animation and deforms the vertices accordingly.

Now, some animations are cumbersome to define using skeletons, and are much more easily expressed with morphing. Good examples of that would be facial expressions. Vertex morphing is defined using only meshes and animations, without the need for skeletons. You deform a part of the object and save that as one morphmap. For example, you can make a character smile and save its face as a morphmap. The important part here is that the morphmap does not contain the entire mesh, but only those vertices that are transformed. The morphmap contains a list of influenced vertices, and for each vertex it holds the deformed position and deformed vertex normal. 
Besides bone envelopes, animations can contain morph envelopes that hold the morphing factor (0-1) for some morphmap in each frame.

Note that one animation can contain envelopes for bones, morphs or both. That means that you can create one animation that, when played, causes a character to wave its hand (using bones) and smile (using morphmaps).

Morphmaps influence the vertices before bones, and can be applied one over another. If more than one morphmap affects one vertex, the influence depends on whether a morphmap has the 'relative' or 'absolute' type. 

If the current vtx coord is 'cur', the original coord in the mesh is 'src', the one in the morphmap is 'dst', the current morph factor is 'f' and the final result is 'new', then we have:

for 'absolute' morphing:  new = (1-f)*cur + f*dst
for 'relative' morphing:  new = cur + f*(dst-src)

Or in other words, absolute morphing moves the current vertex position towards where it is in the morphmap, while relative morphing moves it in the direction in which the morphmap moved it from its original position. It is a subtle distinction that won't matter in most cases, except when doing complicated combinations of several morphmaps. 

Example: You are applying a facial expression to a character that already uses a morphmap to make its face quiver (as if made of jelly). Let's say that the facial expression is put over the quivering. 
If the facial expression morphmap is absolute, as you fade it in, it will look as if that part of the face is becoming fixed and applying the expression, while the rest of the character continues to quiver.
If it is relative, it will continue to quiver while fading in the expression.
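The two blend formulas can be checked numerically with a trivial sketch, using 'cur', 'src', 'dst' and 'f' exactly as defined above (shown for a single coordinate):

```python
# Direct transcription of the two morph blend rules from the text.

def morph_absolute(cur, dst, f):
    # new = (1-f)*cur + f*dst : pull the current position towards 'dst'
    return (1 - f) * cur + f * dst

def morph_relative(cur, src, dst, f):
    # new = cur + f*(dst-src) : add the morphmap's displacement on top
    return cur + f * (dst - src)
```

Note that at f=1, absolute morphing lands exactly on 'dst' regardless of 'cur', while relative morphing keeps whatever offset 'cur' already had - which is precisely the quivering-face difference described above.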


Matching different skeletons
----------------------------

The initial benefit of most SKA systems is that if two models have the same skeleton, they can share animations. You can conserve a lot of memory that way. But with Serious Engine SKA, you can also use the same animations on models that have different skeletons. For example, let's say that one character has much broader shoulders and longer arms. You would use a different skeleton for it, but still the same animations. The renderer adjusts animations relative to the skeleton proportions so that the movements still appear normal. Note that this doesn't work for animations where e.g. the exact distance between hands is important, like when a character is holding a gun with both hands. But it works for usual running, walking, falling, sitting, lying down, etc.
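One plausible way to picture this rescaling is to scale a bone's animated translation by the ratio of the target skeleton's bone length to the bone length the animation was authored for. This proportional rule is an assumption for illustration only, not the engine's exact algorithm:

```python
# Illustration: rescale an animated bone translation for a skeleton
# whose bone is longer than the one the animation was authored for.
# The proportional-scaling rule is an assumption for illustration.

def rescale_translation(translation, anim_bone_length, target_bone_length):
    if anim_bone_length == 0:
        return translation  # degenerate bone; leave the translation as-is
    scale = target_bone_length / anim_bone_length
    return tuple(scale * t for t in translation)
```

A character with arms twice as long would then get arm translations twice as large, so a walk cycle keeps its overall shape instead of compressing the longer limbs.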



Coordinate systems
------------------

Serious Engine uses the standard mathematical right-handed coordinate system with axes going in directions: +x=right=east, +y=up, +z=back=south. All 3D coordinates and normals are laid out in that format. 
Matrices are always dumped in row-by-row order: all elements of one row from left to right, then the next row, etc. For object and bone placements, we use 3x4 matrices. The left 3x3 part of the matrix defines the orientation of the object, while the right-most column defines the translation. This is the same as the layout of an OpenGL matrix, except that the bottom row is not written, as it is assumed to be [0,0,0,1].
Texture coordinates map (0,0) to the upper left corner, (1,0) to the upper right, (0,1) to the lower left and (1,1) to the lower right corner of the texture.
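A small sketch of reading the 12 floats of such a placement back into rotation and translation, purely to illustrate the row-by-row layout described above:

```python
# Unpack a 3x4 placement stored row-by-row: 12 floats, where the left
# 3x3 block is the orientation and the last column the translation.

def unpack_placement(values):
    assert len(values) == 12
    rows = [values[0:4], values[4:8], values[8:12]]
    rotation = [row[:3] for row in rows]      # left 3x3 block
    translation = [row[3] for row in rows]    # right-most column
    return rotation, translation

# Identity orientation translated to (1, 2, 3):
placement = [1, 0, 0, 1,
             0, 1, 0, 2,
             0, 0, 1, 3]
```

The implicit bottom row [0,0,0,1] is simply never stored.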




File formats
=================================================

We use two different formats, ascii and binary, for most types of files. The ascii format is usually exported from a modeling/animation package and then converted in the SKA Studio, or can sometimes be edited manually. The binary format is what SKA Studio saves after importing and optimizing the ascii file. 
You should always keep ascii versions of your content in backups, as the binary formats might change at any moment, requiring you to reconvert the content. It is guaranteed that you won't need to readjust any data manually, since all data that is contained in the binary files is also stored in the ascii files. This simplifies any future optimizations, as anything can be changed in the formats that the actual game uses without influencing the existing content.
Binary formats are not described here, as they are internal to the engine and are subject to change. You should always export ascii files from modeling/animation packages and let SKA Studio convert and optimize them.

Ascii file formats usually have extensions that start with the letter 'a', and binary ones with 'b'.

All ascii files are parsed with a standardized parser that handles names, complex data, properties, blocks, arrays, etc. in a common way, so all the formats look alike. That should simplify understanding of the formats and writing converters. 
It is advised that you examine several of the supplied example files and see what they look like. You will notice that all blocks are surrounded with braces ('{' and '}'). Before any array or complex structure, as well as before most properties, there is a descriptive keyword describing what it is. In the case of an array, the keyword is always followed by the number of elements in the array. Vectors, quaternions, matrices and similar are terminated with a semicolon (';') and have their elements separated with a comma (',').
Note that sometimes you will see #INCLUDE statements. Those work exactly the same as #includes in C/C++: the contents of the mentioned file are included verbatim at that position during parsing.
In the paragraphs below, only the structure and the meaning of the data are discussed. The syntax should be obvious from the examples.


Mesh data format (.am, .aml, .shp, .bm)
---------------------------------------

Extension '.am' stands for 'ascii mesh'. The format allows redundancy (i.e. repeating one normal several times). This should simplify writing export plugins or scripts for 3D modeling packages. 
Note that in the binary representation, the vertex data is unique per vertex. The conversion is handled during importing. The loader optimizes the vertex data when importing the ascii file and converting it to the final binary representation. It merges together any two vertices that have all their vertex data identical. That means not only coords, normal and UVs, but they also need to have the same morphmaps and all the same weights. Also, vertices must be kept separate per surface (so that each surface can use separate buffers for any eventual internal calculations). Data might need to be split internally even more, to satisfy the maximum bone count for hardware rendering, etc. The loader also normalizes weightmaps so that the sum of weights for a vertex always equals 1.0.
This is all internal work of the loader and you don't need to worry about it.
All polygons must be triangles and their vertices must be listed in counter-clockwise order.
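The weight normalization step can be sketched as follows. The data layout (bone name mapping to per-vertex weights) is an assumption for illustration, not the loader's actual structures:

```python
# Sketch of the loader's weightmap normalization: for each vertex,
# scale its weights across all weightmaps so they sum to 1.0.

def normalize_weights(weightmaps):
    """weightmaps: dict bone_name -> dict vertex_index -> weight."""
    # First pass: total weight per vertex across all weightmaps.
    totals = {}
    for weights in weightmaps.values():
        for vtx, w in weights.items():
            totals[vtx] = totals.get(vtx, 0.0) + w
    # Second pass: divide each weight by its vertex's total.
    for weights in weightmaps.values():
        for vtx in weights:
            if totals[vtx] > 0.0:
                weights[vtx] /= totals[vtx]
    return weightmaps
```

So a vertex weighted 0.5 by 'Arm' and 1.5 by 'Torso' ends up with influences 0.25 and 0.75, matching the "sum equals 1.0" rule above.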

Here's rough layout of the data stored in the .am file:

Mesh { // normal and texcoord indices match the vertex indices
  Vertex vertices[]
  Normal normals[]		
  UVmap uvmaps[]
  Surface surfaces[]
  WeightMap weightmaps[]
  MorphMap morphmaps[]  
}

UVmap {
  STRING uvmapname
  TexCoord texcoords[]	// each uvmap array _must_ have same count as vertices
}
Surface {
  STRING surfacename
  INDEX triangles[][3]	// vertex indices for each triangle, in counter clockwise order
				// indices refer to global arrays of vertices for this mesh
}

Vertex {
  FLOAT x,y,z	// in SE coordinate system (+x=right=east, +y=up, +z=back=south)
}

Normal {
  FLOAT nx,ny,nz	// normalized direction vector, same coord system as Vertex
}

TexCoord {
  FLOAT u,v		// texture is mapped from 0-1 in both axis, (0,0) is upper left corner in texture
}


WeightMap {
  STRING name
  VtxWeight weights[]
}

VtxWeight {
  INDEX vtxindex		// absolute index in the 'vertices' array of this mesh
  FLOAT weight		// weight for this bone [0-1]
}

MorphMap {
  STRING name
  BOOL relative		// each morphmap can be either relative or absolute
  VtxMorph morphs[]
}

VtxMorph {
  INDEX vtxindex		// absolute index in the 'vertices' array of this mesh
  FLOAT x,y,z		// same coord system as Vertex
  FLOAT nx,ny,nz		// normalized direction vector, same coord system as Vertex
}
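As an illustration of the ascii conventions described earlier (a descriptive keyword, the element count, a '{ }' block, comma-separated components terminated by ';'), a writer for the vertex array might look like the sketch below. The exact keyword and formatting here are assumptions; take the real syntax from the supplied example files:

```python
# Sketch of emitting a vertex array in the described ascii conventions.
# The "VERTICES" keyword and spacing are assumptions for illustration.

def emit_vertices(vertices):
    lines = ["VERTICES %d {" % len(vertices)]   # keyword + element count
    for x, y, z in vertices:
        # comma-separated components, terminated by a semicolon
        lines.append("  %g, %g, %g;" % (x, y, z))
    lines.append("}")
    return "\n".join(lines)
```

An export script would emit each of the mesh's arrays (normals, texcoords, weightmaps, morphmaps) in the same pattern.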

That is what is saved from your modeling package and imported into the SKA Studio. Besides the geometry data, a mesh has shader parameters for each surface that define how each surface is rendered. Those parameters are edited in the SKA Studio and are kept in a .shp (shader parameters) file, the exact layout of which will not be discussed here, as it is best handled by the SKA Studio. 

Note that one .am and .shp pair in fact defines only one LOD of the mesh. In order to get multiple LODs, you create multiple .am and .shp files and list them in an .aml ('ascii mesh list') file. That is best done from the SKA Studio. So, SKA Studio loads an .aml that references several .am and .shp files and creates one final .bm (binary mesh) file that contains all the info for all LODs of a mesh.


Skeleton data format (.as, .asl, .bs)
-------------------------------------

Extension '.as' stands for 'ascii skeleton'.

Here's rough layout of the data stored in the .as file:

Skeleton {
  Bone bones[]
}

Bone {
  STRING name
  STRING parentname		// (or null for root bone)
  FLOAT bonelength
  MATRIX3x4 placement	// defines position and rotation of the bone
				// in SE coordinate system (+x=right=east, +y=up, +z=back=south)
				// the placement is relative to the parent bone (root bone's parent is the coordinate system)
}

Similar to meshes, skeletons also use .asl (ascii skeleton list) files to define multiple LODs for a skeleton that is then saved as '.bs' (binary skeleton).


Animation data format (.aa, .aal, .ba)
--------------------------------------

Extension '.aa' stands for 'ascii animation'. Again, we have an '.aal' file, which defines which animations go into one animation list; that list is then saved as '.ba' (binary animation).

Animations can handle both bone placement envelopes and morph strength envelopes.
Each ASCII animation file contains only one animation sequence and has no notion of different animations being stored in it.
Each ASCII animation file is saved as a series of keyframes at fixed time deltas. The length of one frame (in seconds) is specified for the entire animation. 
This is all meant to simplify the export process. When the ascii is imported and converted into the internal format, it can be split into different animations, and/or more ascii anim files can be joined together to form one animation set. Also, during the import process, the keyframes are optionally optimized to remove unneeded keyframes (those that are in fact interpolations between neighbouring keyframes). Internally, only linear interpolations are done, both on translations and rotations.
Bone key rotations are stored as rotation matrices in the ascii file, to define the rotation concisely. Internally, rotations are stored as quaternions or compressed quaternions.
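The keyframe optimization can be sketched on a scalar track: a key is dropped when it is (within tolerance) the linear interpolation between the last kept key and the following key. The greedy rule below is an illustrative assumption; the real optimizer works on bone placements and morph factors:

```python
# Sketch of removing redundant keyframes from a scalar track.
# Returns the indices of the keys that must be kept.

def optimize_keys(values, tolerance=1e-6):
    if len(values) <= 2:
        return list(range(len(values)))
    kept = [0]  # the first key is always kept
    for i in range(1, len(values) - 1):
        prev, nxt = kept[-1], i + 1
        # value predicted by interpolating from the last kept key to the next key
        t = (i - prev) / (nxt - prev)
        predicted = values[prev] + t * (values[nxt] - values[prev])
        if abs(values[i] - predicted) > tolerance:
            kept.append(i)  # not reproducible by interpolation; keep it
    kept.append(len(values) - 1)  # the last key is always kept
    return kept
```

A perfectly linear track collapses to just its first and last keys, while any key that breaks the line survives.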

An animation does not define any hierarchy among the bones that it animates. The default positions of the bones are used to calculate the bone parameters used to rescale animations at runtime to fit characters of different sizes.

Here's rough layout of the data stored in the .aa file:

Animation {
  FLOAT secperframe		// speed of animation in seconds per one frame
  INDEX frames		// number of frames in the animation
  STRING animname
  BoneEnvelope boneenvs[]
  MorphEnvelope morphenvs[]
}

BoneEnvelope {
  STRING bonename
  MATRIX3x4 defaultpos	// placement of the bone relative to its parent in the default pose
				// this implicitly defines the size of the bones
  MATRIX3x4 positions[]	// placements are relative to the parent bone
				// must have exact frame count as specified in the animation
}

MorphEnvelope {
  STRING morphmapname
  FLOAT factors[]		// must have exact frame count as specified in the animation
}


Skeletal Model Configuration file (.smc)
----------------------------------------

A simple SKA model has a mesh, a texture, a skeleton and an animset, but usually you will need more complicated setups with several children, multiple meshes, more than one texture and animset, etc. While it is possible to set up any model layout directly from code, it is much more convenient to use .smc files. When you put together a model, you set up all the meshes, skeletons, animsets, textures, children, etc., and the SKA Studio saves it in a .smc file. It is an ascii file that will also not be explained here, as it can simply be created in the SKA Studio and saved there. Later, it can be loaded from code.

