No takers?
Ok, try this on for size: before an object is fully rendered to the display output it must undergo a transformation from model space (i.e. its vertex coordinates as authored in XSI or 3DS Max) to world space (i.e. its new coordinates within the game environment, relative to other objects). My very limited understanding of the graphics pipeline is that this is the role of the vertex shader.
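To make the model-space to world-space step concrete, here is a rough CPU-side sketch in Python. The matrix values and function names are mine, purely for illustration; a real engine would do this (and the view/projection steps) on the GPU:

```python
# Minimal sketch of a model-space -> world-space transform.
# A vertex is a 4-component position (x, y, z, w); the world matrix
# here is a simple translation by (10, 0, 5) -- illustrative values only.

def mat_vec_mul(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

world = [
    [1, 0, 0, 10],  # translate x by 10
    [0, 1, 0, 0],
    [0, 0, 1, 5],   # translate z by 5
    [0, 0, 0, 1],
]

model_space = [1.0, 2.0, 3.0, 1.0]   # vertex as exported from the modeller
world_space = mat_vec_mul(world, model_space)
print(world_space)  # [11.0, 2.0, 8.0, 1.0]
```

The point is just that the transform is a plain matrix multiply, so anything you do to the vertex before that multiply happens in model space.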
Now, we cannot manipulate a model or any of its sub-mesh objects in model space without using the particle system, and based on experiments to date that manipulation seems limited to a fixed transformation matrix; in other words, we cannot apply much in the way of logic or calculation to the matrix.
But what about performing that calculation during the transformation into world space, using a vertex shader?
Let me explain:
If, for example, the vertex shader receives an input from the game engine as float4 Pos : POSITION; and then multiplies it by a WorldViewProjection transformation matrix to get the position in world space, why not change the POSITION before it gets transformed?
These are just chunks of HLSL code to illustrate:
float4x4 matWorldViewProj : WORLDVIEWPROJECTION;
float time : TIME; // elapsed time supplied by the engine each frame

struct VS_OUTPUT
{
    float4 Pos : POSITION;
};

VS_OUTPUT VS(float4 Pos : POSITION)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;

    // Displace z in model space, before the transform.
    // Note: % is not defined for floats in HLSL, so use fmod.
    float angle = fmod(time, 360.0f) * 2.0f;
    Pos.z = sin(Pos.x + angle);

    Out.Pos = mul(Pos, matWorldViewProj);
    return Out;
}
Now, I might be barking up the wrong tree here, but haven't I just altered the z component of the position prior to its multiplication into world space? Hmm, now there is food for thought.
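The displacement maths itself is easy to sanity-check on the CPU before trying it in a shader. A quick Python sketch of the same formula (function name and sample values are mine):

```python
import math

def displace_z(x, time_value):
    """Same maths as the shader body: z = sin(x + angle),
    where angle = fmod(time, 360) * 2."""
    angle = math.fmod(time_value, 360.0) * 2.0
    return math.sin(x + angle)

# A row of vertices along x ripples as time advances:
for t in (0.0, 0.5, 1.0):
    row = [round(displace_z(x * 0.5, t), 3) for x in range(4)]
    print(t, row)
```

Running it shows each vertex getting a different z offset that shifts over time, which is exactly the per-vertex wave effect the shader would produce.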