Low-poly Tips 3 – Game Art Asset Optimization

These are tips for 3D art asset modeling, rigging, UV-mapping and texturing. They are not only for low-poly work, though that is where they are needed the most. See also the other tip collections: the first and second set.

Minimize number of Draw Calls the Asset generates

For a game engine, draw calls correspond to the number of separate objects, materials and textures that have to be loaded and submitted for rendering. The fewer draw calls, the better the game can run. Here are some ways to lower the number:
Multi-object texture optimizing art assets

  • Have each character as one single mesh. Characters assembled from separate pieces in-game cost extra draw calls.
  • Combine separate static meshes into one. If you can have a collection of objects as one object, one file (the meshes can be unconnected), it is better than having several files. But don’t combine a whole village into one object, as the whole thing would get loaded into memory even when you may not need it. This trick is best for a moderate collection of objects, say all the items inside a shop interior.
  • Use only one material and texture per object. Or, better still…
  • Have several objects all use the same texture and material. This means each has the same UV-map but uses only a portion of the whole – the UV-map collects all the textures together. See the picture. Even though it is not shown in the picture (for clarity), the sections different objects use can overlap. A rough scripted sketch of this atlas idea follows below.
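As a minimal illustration of the shared-atlas idea (not tied to any particular 3D package – the prop names and UV data below are made up), here is a small Python sketch that scales and offsets each object's UVs into its own quarter of one shared texture:

```python
# Minimal sketch of a shared texture atlas: several meshes keep their own
# 0..1 UV layouts but get scaled and offset into separate quarters of one
# shared map, so they can all use a single material and texture.
# 'prop_uvs' is made-up example data, not from any particular 3D package.

TILES = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # lower-left corner of each quarter
TILE_SCALE = 0.5  # each object gets one quarter of the atlas

def remap_uvs(uvs, tile_index):
    """Scale a mesh's UVs down and shift them into the chosen atlas tile."""
    offset_u, offset_v = TILES[tile_index]
    return [(offset_u + u * TILE_SCALE, offset_v + v * TILE_SCALE) for u, v in uvs]

# Four props end up sharing one texture and one material.
prop_uvs = {
    "crate":  [(0.1, 0.2), (0.9, 0.8)],
    "barrel": [(0.0, 0.0), (1.0, 1.0)],
    "lamp":   [(0.2, 0.1), (0.7, 0.9)],
    "sign":   [(0.3, 0.3), (0.6, 0.6)],
}
packed = {name: remap_uvs(uvs, i) for i, (name, uvs) in enumerate(prop_uvs.items())}
print(packed["barrel"])  # -> [(0.5, 0.0), (1.0, 0.5)]
```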
[divider_line]

Optimize the character rig: use two rigs – one for animation and one for export

The fewer bones your character has, the lighter it is to run. And fewer resources used for one character means more to use elsewhere – maybe even allowing more characters.

But very few bones makes animating difficult and prevents many motions. Of course we would rather animate with the optimal amount – and with control objects as well, to make the work easier. Sure, you can keep control objects in your game rig and just make sure not to export them to the game, but having bones in a rig that you don’t export, sitting between bones that you do? That is asking for trouble.

The solution is two rigs, one for animation and one for exporting to the game. The game rig is linked to follow the animation rig – you animate only with the animation rig and export only the game rig.
[clearboth] Character Rig optimization
[clearboth] [one_half]The Animation Rig is the rig you build first. It has the bones and control objects you want to animate with. The rig can even have details, like fingers, which you can animate and later decide whether to use or not (via the game rig). Build your animation rig, then consider which parts of it are essential for moving the character. Any bone that only supports other bones and does not really affect the mesh itself is a bone the game character does not need. So, do you really need the neck bone if the head and chest bones working together can offer the same result, or close enough?

[/one_half] [one_half_last]The Game Rig is a collection of helper objects (any type, also called nulls), one for every important part of the character. The reason to use nulls instead of bones is that bones are meant for building hierarchies, which you don’t need and should not have here. Create these objects, then align and parent them to follow the relevant bones of the Animation Rig – they should sit at the pivot points of the animation-rig bones. Then skin your character mesh to these helper objects (nulls). In the end you animate with the Animation Rig, and the Game Rig follows and deforms the mesh. A rough scripted sketch of this setup follows below.[/one_half_last] [clearboth]
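To make the setup concrete: if your toolset happened to include Blender, a rough bpy sketch of the game-rig nulls could look like the following. The armature name and bone list are placeholders, and a Copy Transforms constraint stands in for direct parenting – the point is only that each null follows one animation-rig bone.

```python
# Rough sketch of the game-rig idea in Blender's Python API (bpy).
# The armature name and bone list are placeholders; a Copy Transforms
# constraint stands in for parenting -- the point is only that each
# null follows one animation-rig bone, pivot and all.
import bpy

anim_rig = bpy.data.objects["AnimationRig"]                   # hypothetical armature name
export_bones = ["hips", "chest", "head", "hand_L", "hand_R"]  # bones worth exporting

for bone_name in export_bones:
    # One empty ("null") per important bone; the mesh gets skinned to these later.
    null = bpy.data.objects.new("GAME_" + bone_name, None)
    bpy.context.scene.collection.objects.link(null)

    # Make the null follow the matching animation-rig bone.
    con = null.constraints.new('COPY_TRANSFORMS')
    con.target = anim_rig
    con.subtarget = bone_name
```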

That was the 3rd set of little tips for improving 3D (game) art assets. Cheers!

Modo, Zbrush and Messiah for Fast Production


This mini-tutorial covers a workflow using the three software packages above for high-detail character creation, preparing for animation and combining the results. The idea here is to show how to make these applications work together – not how to model, sculpt or animate.

Method Limitations

  • You will need a base mesh to begin with. I’m giving away the one I used here (see male ver2).
  • While a base mesh like this is good for extensive sculpting, it works best for naked characters. Some clothes can simply be sculpted on, but for better results I would either add polygons to the base mesh to ‘set a base’ for them too, or make separate cloth meshes.
  • While the autorig is fine for many types of animation, for more advanced stuff (lip-sync, better deformations, muscle bulges, etc.) you will obviously need to improve the rig.

Real world example – a rushed task

To give an example, I had a brief like this: make a group of zombie-like dead people standing around in place and moving only a little. I had five or so days to do it, which is quite quick considering my old, slow computer takes 2-3 days just for rendering.
Because the figures will be in the background of the scene, out of focus and behind effects, and because of the rush, I opted for very simple texturing, rigging and animation – and no facial animation. I ended up with 3 variations of one character and 5 simple looping animations. The dead mainly sway in place. The result is passable, but some zombies really don’t look the way I want. See the breakdown for my problem.

Production Breakdown

Shape a rough base mesh in Modo and save out an OBJ.

Not much to explain – either model your own mesh or pick one up somewhere. You can grab my human male base mesh (ver2) here.

Import to Zbrush and sculpt to your heart’s content.

zombie sculpted in Zbrush
Have fun sculpting. I went up to subdivision level 6 (see pic). I also used the surface noise found in the latest Zbrush (4.5) for extra skin detail. And herein lies my mistake: overdoing the noise strength pushes the shape out in a uniform manner that, especially when the whole sculpt is later applied as a displacement map in Modo, makes the characters lose detail and look bloated. I have this problem with 2 of my zombies (see the video below). Somehow I noticed it too late – I didn’t have time to re-render.

Add UV-mapping.

Zbrush AUVtiles
I chose AUVtiles because it is automatic and good for a mesh like this with quite evenly sized polygons. The default settings give each polygon the same amount of space in the UV-map. One thing to remember with AUVtiles, GUVtiles or similar, though, is that if you use them you should paint your textures in Zbrush. At least in Modo (302), painting on an AUVtiles-mapped mesh led to paint in one place ‘leaking’ all over the mesh. Zbrush doesn’t have this issue – especially since you can paint the model first and UV-map it after.

Export the mesh from the lowest subdivision level, then create and export a displacement map.

Zbrush displacement map export
Note that sculpting on any level changes the bottom level (base mesh) as well, so you should export your base only when finished. The final thing is to export a displacement map, which again you do at the lowest subdivision level. For my settings, see the picture.

Import the base mesh into Messiah and use the autorig to get it rigged.

  • Load the mesh in. Select it in the Setup-tab and hit Autorig (see Setup>Items). Next, in Setup>Effect, find the Character Face Camera under Bone Deform, select that and your character mesh, and hit Replace (see pic).
  • Next, in the Setup-tab, move the Character Root to where your figure’s hip center is. Then, starting from the hips and going out bone by bone, scale and rotate the bones to fit the character. Work on only one side, right or left.
  • When done placing bones, go to Setup-tab>Items>Drop-down>Fix Symmetry. Root could be your FK_Spine or similar base bone. FixSymmetry will use it as a starting point and go down the hierarchy looking for items with the suffix you typed. For Source, type the suffix of your working side, like _R, and for Destination the other one (see pic). Then run the FixAll or FixSides command.
  • Now you can test your rig in the Animate-tab by rotating bones. Remember to undo your tests. When happy, save your scene as ‘charactername_rigging’ or similar for backup purposes. Then hit Autorig in the animation view and wait. Save the scene as ‘charactername_rigged’.

Messiah, Fix Symmetry
Messiah Autorig, set to use model
There is also a video tutorial on Autorig (not mine).

Animate and export animation as MDD

Messiah, export MDD
I leave the animation to you. MDD export happens in the Customize-tab>Drop-down Menu>SaveMorphSequence. Select your mesh, type in which frames to export, give a filename and hit Generate Current Morph Sequence.
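If you want to sanity-check the exported file, the MDD point cache is, as commonly documented, a simple big-endian format: frame count, point count, one time value per frame, then x/y/z floats for every point of every frame. Here is a small Python sketch that reads just the header (the file name is only an example):

```python
# Quick sanity check of an exported MDD point cache.
# Common MDD layout (big-endian): int32 frame count, int32 point count,
# float32 time per frame, then float32 x/y/z for every point of every frame.
import struct

def read_mdd_header(path):
    with open(path, "rb") as f:
        num_frames, num_points = struct.unpack(">ii", f.read(8))
        times = struct.unpack(">%df" % num_frames, f.read(4 * num_frames))
    return num_frames, num_points, times

frames, points, times = read_mdd_header("zombie_sway.mdd")  # example file name
print(frames, "frames,", points, "points, first frame at", times[0], "s")
```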

Mesh to Modo and apply MDD

Modo, apply MDD
Import or load the mesh and right-click it – in the menu, find the MDD deformer and load the file. Now your mesh will be animated (see the Animate-tab).

Set up lights, camera, texture and displacement, and render to an image series.

Modo, set displacement
You can see my lighting tutorial here. For texturing, do what you want. I used a simple procedural (Cellular) for both diffuse and bump, and painted a mask for the specular and diffuse amounts – mainly just adding black to the eye sockets, because the director wanted all-black eye sockets, no highlights or anything.

For displacement I have yet to find the optimum solution, since the Zbrush displacement export workflow has changed since ver. 3 and I just started with 4.5. However, here is something I find functional: bring in your displacement map and make one instance of it. Then set the instance to Invert and its Blend Mode to Subtract (see pic). Now the regular displacement pushes polygons both out (positive) and in (negative), and the second displacement adds to the negative. Play with the instance’s opacity and your material’s displacement amount for the final result.
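One way to read the math of that layer stack – assuming the layers combine as a plain subtraction of 0–1 map values – is that the inverted instance re-centres the map around zero, so mid-gray stops displacing at all. A quick sketch of the numbers:

```python
# Rough numbers only: if the instance layer holds the inverted map (1 - d)
# and its blend mode subtracts it from the original, the combined value is
# d - (1 - d) = 2*d - 1. Mid-gray (0.5) then displaces nothing, darker
# values push in and lighter values push out.
for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    combined = d - (1.0 - d)
    print("map value %.2f -> combined displacement %+.2f" % (d, combined))
```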

Remember that Modo defaults to 24 frames per second (film frame rate), so that’s your render output unless you change it. Finally, when you have a series of images, take them to your favourite video editing software and make a video.
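As a quick sanity check on why the frame rate matters, the same rendered frames play back at different lengths depending on what rate your editor assumes:

```python
# The same 240 rendered frames give a different clip length at each rate.
frames = 240
for fps in (24, 25, 30):
    print("%d frames at %d fps = %.1f seconds" % (frames, fps, frames / fps))
```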

My result with this method

The actual movie effect has more of these figures standing in the background of a scene (hidden behind effects and out of focus). This video is just so I can show them to you. It was a rushed job with problems and lackluster animation, but there you go.


See it in HD at Vimeo.

Do you have comments or insights? Please share.

Personal Animation Production Hell

This is a story about how my animation production came about, and it wasn’t the way I recommend. Read the following brief journal and see why. This was done on the side of occasional freelance work and other ongoing projects (movie and game). I didn’t sleep much for half a year.

You can view clips of this animation production at the start of my 2009 demoreel (HD).

January 2009

Fishman, old design from 2007
My HD demoreel needed some current-generation game characters, animated. I decided to go with a fishman I had earlier modeled a preliminary head for. For his nemesis I chose a nasty-looking deep sea fish (enlarged many times over). The plan was low-poly game models with Zbrush-sculpted details applied as normal maps.

Fish low-poly
I didn’t spend much time on design, just went ahead modeling animation-ready base meshes in Modo. Polycount (triangle faces): fishman 7532 (including eyes, teeth, clothes and equipment), fish 4572.

This large image shows the fishman construction, simplified.

Fishman new design
After 3 or so weeks I had both characters modeled, sculpted, textured and rigged. Rigging was the slowest step, for it is the most technical and not my favourite. The last days of the month went to finding a way to make Messiah animation work in Lightwave with Zbrush-based displacement. I’ve since done a tutorial on this.
The plan had changed: the game character showcase now had a short high-detail animation production added to it. Oh boy.

February 2009

The action takes place by an ocean coast, underwater. The fishman escapes towards the light and the demonic fish chases. Environment creation was next.

I modeled an underwater bay with massive roots coming down from above. The more I built, the more the story wanted to grow. Dangerous thing, that. Suddenly I was doing particle effects, great mats of flowing seaweed and water caustics, colours, shadows and light projected from the world above. It was slow work, endless testing. Early February was also when I started production rendering, my one computer laboring 24 hours a day – with limited power, of course, while I worked.

scene modeling
scene particles

scene layout, polygons
scene layout, textured

After that I could finally begin animating and, of course, discovered issues in the rigs and meshes that needed tweaking.
animation production scene example, final look

The tiny animation production had grown to an inestimable size. And silly me went ahead, optimistic. I knew it would take some time, though.

March – June 2009

Each of these 4 months was divided somewhat like this: 1 week for animating, 2 for trying to make renders happen, and 1 for other technical problems. My ambition was too much for my computer, or, better said, my goals were all wrong – high detail and HD instead of good story and animation. I had to drop many cool features, optimize the scenes and renders, find workarounds and segment the workflow as much as possible to render at least one layer at a time. This in turn caused problems when things separated into several scenes had to interact with each other (shadows and more).

Production Hell Crash screens
In short, most of the production was spent fighting limited resources, trying to make the render possible at all, and then rendering and re-rendering because it crashed over and over. I count that my computer rendered for 5 months (!) around the clock, giving me 12+ gigabytes of hd720p animation frames: characters, scene and effects all on separate layers. Combined, it is 5-6 minutes of animation.

July – September 2009

I spent a week or so combining the animation frames into video clips in Vegas. Doing this, it crashed 9 times out of 10 – HD editing with more than 2 layers was again too much for my computer. The rendered clips revealed many faults in the animation, but there was no way I would go through the test-and-crash hell again to fix them.
I edited the animation down to three and a half minutes. The following removed scene was an easy cut: it doesn’t fit the overall story pacing, and both continuity and animation are lacking. In the clip the fish looks for the fishman but finds his discarded lamp instead.

A sound-savvy friend did the sound effects in August. I also had a musician working on the music, but our sensibilities didn’t meet this time. In September I found another musician. One of his compositions was almost a perfect match for the film’s pacing and length. So, on September 29th the final movie was complete.

Results and things learned

The movie is now going to festivals. The first it was accepted to is the Short Film Festival of Los Angeles. So even though it wasn’t a sensible story-based production, it has some merits – people like it. I’m glad 🙂 This festival tour is why I’m not sharing the film online, yet.

So what did I learn? I knew this was not the way to do an animation production but couldn’t help myself. It was a technical challenge I set myself to finish, no matter what. I learned never to do a production this way again. The process also taught me many practical things – some of which I’ve been sharing as tips. And finally, I learned that doing a production the hard way doesn’t necessarily mean the result is bad. But doing it ‘right’ would improve the end result a lot and make the whole process a great deal easier.

Please don’t get carried away with some half-baked project like I did.  Be a realist and plan well to get the most out of your story and animation.

What about you, what’s your story?  Have you made your own production(s) or tried and crashed & burned?  I’d love to hear about it.

8 Animation Production Tips – Modeling and Animation

I wish to encourage the lunacy that is Personal Animation Production.
This is Animation Production Tips collection 1. These tips were born from problems I’ve faced, from neurons burnt. Read them and save yourself a great deal of trouble.

Note that these are tips. Many could be expanded into full tutorials. You may find further info on some of them elsewhere – maybe even here, later. What’s important for now is to get the ideas across.

Tips for animation production

  1. Use each software to its strengths. Build a ‘pipeline’. This may sound like a costly solution but it doesn’t have to be (Wings for modeling + Blender for animation, effects and video & audio editing = all free). You can build an affordable pipeline even with commercial software and have it all under the price of one Max or Maya licence. One example of such a combo would be Silo, 3D Coat, Messiah and Vegas Pro.
  2. Model your characters for animation – use edge loops to create surface flow that deforms well in animation. See the above picture? Your model has to be good to get that range of motion without problems. This is crucial especially in the joint and face areas. In short, your polygons should mimic the major muscle flows under the skin. Surface flow is a major topic by itself. If it is a new concept for you, I suggest you start with the following classic modeling document. http://www.theminters.com/misc/articles/derived-surfaces/index.htm
  3. Don’t go super low-poly with your character models. I’m very familiar with the obsession to optimize, but if you go exceedingly low in polys your character deformations become too coarse – no longer in your control. A few more polygons are better for displacement too – it displaces with more reliable results.
  4. Use displacement for detailing. Sculpt or model the detail in a software that lets you bake it into a displacement map. In production, use less detailed models and use displacement maps to bring the detail out at render time. The advantages are much lighter animated models and scenes, meaning generally better animating conditions, faster manipulation and hopefully fewer crashes too. You also get faster overall rendering, as detail is generated only where and when it is seen. Most software should allow linking displacement to, say, camera distance (see the sketch after this list). Or you can set the amount of subdivision happening per pixel – meaning only the area that shows well in your current camera frame is subdivided for detail.
  5. Use as few bones in your rig as possible. Unless you’re creating the ultimate in realistic muscle deformation, you can get by with very few bones. The fewer you have, the smoother the deformation they create can be. You know, organic. Conversely, the more bones you add, the more you have to adjust bone influence, use muscle bones between them, add corrective morphs or what have you – all to get rid of the overly sharp deformations many bones bring.
  6. Transfer animation from one software to another with MDDs. MDD is a universal way to transfer Mesh Deformation Data. It transfers every deformation of the mesh in your animation software, even morphs – meaning all animation – to another software. This way you can animate in software specialized for animation and do the rest in whatever software you like. MDD support should be common.
  7. Break your animation into sequences. Don’t try to animate everything in one project file and don’t try to export long animation MDDs. The files can get corrupted and then you lose everything at once. And long animations, especially with complex meshes, become huge as MDD files.
  8. Set your character rig up so that you can do mesh or rig revisions with ease during production. Let’s say you find, right in the middle of production, that you have to change geometry in your character’s shoulder area. It will be an absolute pain if, to get the changed model moving again, you have to re-weight it and set your mesh-based tricks (morphs and such) up again. Instead, use an animation software that gets by with bones, weight fields and such – so that everything is in the rig and not tied to the mesh in any way. Then you can change the mesh around the rig as much as you like, even swap in other characters. Messiah works like this. Your software, if different, might not, but it may have some other way to save you from the re-weighting hassle. Find it out and test it before you start animating.
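As a rough illustration of the camera-distance idea in tip 4 – this is only an illustrative heuristic, not any particular renderer's setting – here is a small Python sketch that drops the displacement subdivision level as an object moves away from the camera:

```python
# Illustrative heuristic only (not any particular renderer's setting):
# drop the displacement subdivision level as camera distance grows,
# roughly one level per doubling of the distance.
import math

def displacement_subd_level(distance, ref_distance=2.0, max_level=6, min_level=0):
    """Pick a subdivision level that falls off as the object moves away from camera."""
    if distance <= 0:
        return max_level
    level = max_level - math.log2(max(distance / ref_distance, 1.0))
    return int(max(min_level, min(max_level, round(level))))

for d in (1, 2, 4, 8, 16, 32):
    print("camera distance", d, "-> subdivision level", displacement_subd_level(d))
```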

Do you use these tricks in your productions?  What would you change?  What would you add?  What tip would you like to see expanded to a tutorial?

Illusion of Life in 3D Character Animation. Part 2.

What are the most common mistakes new animators make, and why? Could it be that the mistakes come from ignoring the classic rules of animation? In part 1 we examined what makes good animation. Now let’s check what animation-helper tools are available that novices may use. Then let’s see, via a case study, whether novice animators make mistakes and, if so, where (with some insight into why as well).

Animation helpers in software
Most major animation packages have some helpers included. New animators are likely to use these to get their productions up and going. Note that I refer back to my research from 2003 here, so things may have changed. There are probably a lot more helpers overall now, which can be good or bad – it depends how you use them.

  • Autorigging. The software autorigger lets you sketch out the driving skeleton with a few drags and clicks, and one may think that should be it. In truth every character has problem areas, usually at joints, where you will need to either add bones or corrective morphs, or adjust bone weighting.
  • Walk/Run with routes and steps. You draw a route of footsteps to tell the character where to go, and you can adjust the walk with variables like speed or step length. The problem is that the generated motion is just a sketch and should be treated as such.
  • Character dynamics. The character rig can automatically maintain balance. The hips twist and tilt in the walk and the upper part of the character balances this out. But again this gives only a sketch, lacks personality and doesn’t know things like how heavy or asymmetrical your character may be.

Analyzing animation – a case study

Let’s look at how a novice-level animation meets the criteria of lifelike animation.
Moriar Ubi Sum screenshot, copyright Moriar Ubi Sum creators
Considering I had no group to test with nor the resources for several test subjects, I chose just one animation and only the main character to analyse. My selection was Moriar Ubi Sum, a short movie made by a team of 3 people in 2002 using 3DS Max 4. Luckily it is still online. It would be bad manners to leech their stream here, so please go to the site and see the animation there. The image is from Moriar Ubi Sum.
http://moriarubisum.free.fr/
It is a curious short because pretty much everything else is of good or even better quality except for the main character and his animation. That contrast is the reason I chose this particular animation.
The most obvious problems and their connections are as follows.

[styled_table]
| IMPRESSION | PROBLEM AREA DEFINED |
| Movements are too precise and the guy moves like he is on tracks. | Variance or chaos. May have used footstep routes to direct the character and did not edit the resulting animation sketch. |
| Poses and moves are too stiff and mechanical, and not all parts of the body move. | Arcs, Squash & Stretch, Tilt & Twist |
| The main character looks off balance and doesn’t shift his weight when doing something. | Balance, Tilt & Twist, Anticipation |
| Movements don’t seem to require an effort and stop or start too abruptly. The character often acts like a marionette doll rather than doing the action himself. | Dynamics, Squash & Stretch, Anticipation |
| The character’s limbs deform badly at knees, elbows and shoulders. Clothes or hair don’t react to movement. | No Overlapping action. May have used an autorigger and left it at that, and/or didn’t bother / have time / know how to fix the joints. |
| His eyes are not alive, and hands and fingers have a ‘frozen in death’ look to them – they are in a rigid pose and rarely move. | Overlapping + Subconscious action |
| Movement in general is too lazy or lacks ‘punch’. | Overall Timing, Dynamics |
[/styled_table]

For the Moriar Ubi Sum creators reading this, I wish to say: I don’t mean to offend; rather, I wish to make a point. Would you agree with my critique now, 8 years after your animation’s release? Rest assured I will be equally harsh when examining my own work. That’s coming later. Edit: watch my first animated short and read the critique here.

Results

The example animation did lack ‘life’ and showed the symptoms of tool reliance. We should conclude that novice animators often don’t pay enough attention to animation principles and may use the software helpers as a crutch. You can find more examples of this in large animation archives. Look beyond the most popular animations, as popular clips usually have little to fault in this regard. And forget the ‘first’ works put out by Animation/CG school graduates – there is nothing novice about work supervised by professionals. Really, this study applies most to beginner animators’ (often self-taught) first productions.

Does this mean any animation with these issues sucks? Definitely not. If the animation is entertaining (story, acting, etc.), people usually like it. What you can take from this is simply that applying the principles, and remembering that software helpers don’t do all the work for you, makes for better animation.

Does this seem pointless or useful?  Does knowing the principles  help you see the problems and how to fix or avoid them?  What is your experience?

Postscript
So what happened with the study these articles are based on? It was rated 2.5/3. The critique is what I really like.
[blockquote]”Has practical approach, like learning material, which is positive but also negative as it takes away from the merit as a research paper.”[/blockquote] I hope to keep this blog just like that: plain but practical.