Future of 3D modeling looks organic – why care about polygons?

3D modeling programs and devices advance in leaps and bounds. New tools make sculpting accessible and ever more organic. Can an artist skip learning the 'old-school' skills and just embrace the new?

3D Tools of the Future are here

Coming up is motion-based creation with a new device, Leap Motion – think Kinect on steroids. The developers say it was originally developed with 3D modeling in mind. See some collected videos on the tube: http://www.youtube.com/watch?v=mQkKyOOyLSs&list=PL867A53645EDDD94C

PlayStation 4 developers are showcasing a motion-based solution that offers freedom for modeling, amongst other applications. See here from 1:50: http://www.youtube.com/watch?v=KI4nn9uDFGE

Another new device takes the user into 3D – Virtual Reality, that is. Oculus Rift is a revolution in the VR-headset space, built with games in mind. While it is not meant for 3D modeling, I can't help thinking what it would be like to work in a blank, Tron-esque virtual space that you could populate with whatever references or other material you need. Combine that with a motion sensor like Leap Motion and wow. http://www.oculusvr.com/

3D scanners are also coming to home users, in time. Surely there would be no approach more organic than the original, clay? The Gnomon School blog speculates on those possibilities.
And you don't have to wait for dedicated scanners – you can already scan objects with a Kinect controller. http://www.youtube.com/watch?v=of6d7C_ZWwc

All in all, the technology barrier is getting lower and lower. Which is great.

So if sculpting gets so easy and fun…

Why care about polygons or polygon modeling anymore?

If you only do sculpture, then you don't need to.
But otherwise the issue areas are:

  1. A scanned or sculpted model is not readily usable for anything other than a sculpture.
  2. Many types of 3D models are best realized with polygon modeling tools.

1. A sculpture or scanned model is good for a sculpture only – unless you apply modeling skills

A scanned or sculpted 3D model, by default, does not have construction that makes sense for anything other than what it is – a sculpture. For animation, games or any other practical use the sculpture is too dense and has no useful topology, aka directed polygon surface flow. (One COULD pose a sculpted figure in a sculpting program and sculpt to fix the issues in the new pose, or do the same steps in real clay and scan each pose into 3D, and then render those poses for stop motion-like animation, but that would be painstaking.)

A too-dense model consists of too many polygons and is simply too heavy for game or animation use. Fortunately there are tools to slim it down, like Zbrush's Decimation Master. However, decimation does not fix the topology.

No uv-map means the model is not mapped for texturing. Software like Zbrush allows texturing without a uv-map, but it works only within that software. Zbrush also has great auto-uv-mapping, but it is not the same as a map planned and made by a person. Games in particular can demand very creative tricks in this area. And the catch is that a person can't reasonably uv-map something that doesn't have decent topology.

It comes down to (good) topology – without it a model…

  1. deforms badly in animation or posing
  2. shades oddly
  3. displaces poorly
  4. can’t be sensibly uv-mapped
  5. and is pretty much in every way more difficult to read and work with

For more on some of the above points, please see Why Surface Flow Matters, Modeling For Animation and Testing Models for Animation.

To make good topology one has to understand polygon modeling. And if it is characters, then some knowledge of anatomy is required as well. It is also beneficial to understand how models are rigged and what happens when they animate.

What about automatic topology?
The ideal would be a fully automatic, perfect topology-creation tool, so that the artist could focus only on the fun parts: shaping and painting.

There are tools that help a lot. 3D Coat and Zbrush, for example, offer auto-topology tools. This 3D Coat video shows the idea well: http://www.youtube.com/watch?v=vEnwxnNMPk4
However, the video sticks to larger elements for a reason. In areas of detail, like the face, the tools need a lot more guidelines to produce something usable. Again the user needs to understand polygon modeling. Also, the 'automatic' tools are by definition not as precise as modeling tools (after all, the idea is NOT to work with polygons). Hence a polygon or two out of place may become a pain to remove. Well, until the retopology tools advance to a level resembling artificial intelligence.

2. Many 3D model purposes and styles are best realized with polygon modeling

The second major defence of polygon modeling is that many platforms require low-polygon models, which are impossible or far too laborious to make with the organic future tools. Sculpting can't compete with polygon modeling in low-poly simply because that is what polygon modeling tools were made for (in the beginning, all modeling was low-poly).

Low-poly modeling is also a skill and style (or styles) of its own and is used in a great variety of platforms and media, including many types of games, multimedia, web and visualization, to name a few.

Future of 3D is bright

While I want to remind people that new tools don’t change modeling altogether, I am still excited by them. The progress is wonderful and very welcome! The work becomes ever more fun. I for one can’t wait to play and create with some of these tools.

BTW I didn’t mention alternatives to polygons in this article, like voxels, as I don’t think they are yet solid enough to compete with polygons in everyday use.

Low-poly Tips 3 – Game Art Asset Optimization

These are 3D art asset modeling, rigging, uv-mapping and texturing tips. And not only for low-poly, though that is where they are needed the most. See also the other tip collections, the first and second set.

Minimize the number of Draw Calls the Asset generates

For a game engine, draw calls are the number of separate objects, materials and textures that get loaded. The fewer draw calls, the better the game can run. Here are some ways to lower the number:

  • Have each character as one single mesh. Characters that are built from separate pieces in-game cost extra draw calls.
  • Combine separate static meshes into one. If you can have a collection of objects as one object, one file (the meshes can be unconnected), it is better than several files. But don't combine a whole village into one object, as the whole thing would get loaded into memory even though you may not need it. This trick is best for a moderate collection of objects, say all the items inside a shop interior.
  • Use only one material and texture per object. Or even…
  • Have several objects all use the same texture and material. This means each has the same uv-map but uses only a portion of the whole – the uv-map collects all the textures together. See picture. Even though not shown in the picture (for clarity), the sections different objects use can well overlap. A small sketch of the idea follows below.
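To make the last two points concrete, here is a minimal sketch in plain Python (a hypothetical data layout, not any particular engine's or DCC's API) that merges a handful of static meshes into one and remaps each mesh's uv-map into its own quadrant of a shared texture atlas, so the result can be drawn with one material in one draw call:

```python
# Minimal sketch, plain Python with a hypothetical mesh layout: merge up to
# four static meshes into one and squeeze each mesh's uvs into its own
# quadrant of a shared 2x2 texture atlas.

def merge_into_atlas(meshes):
    """meshes: list of dicts with 'verts' [(x, y, z), ...], 'faces'
    [(i, j, k), ...] and 'uvs' [(u, v), ...] (one uv per vertex, 0..1)."""
    merged = {"verts": [], "faces": [], "uvs": []}
    quadrants = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
    for mesh, (du, dv) in zip(meshes, quadrants):
        offset = len(merged["verts"])        # re-index faces after appending
        merged["verts"].extend(mesh["verts"])
        merged["faces"].extend(tuple(i + offset for i in face) for face in mesh["faces"])
        # scale the uvs to half size and shift them into this mesh's quadrant
        merged["uvs"].extend((u * 0.5 + du, v * 0.5 + dv) for u, v in mesh["uvs"])
    return merged
```

It is the same thing you do by hand when several props share one texture sheet; the code just shows why their uv-islands must share (or deliberately overlap) the one map.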

Optimize character rig, use 2 rigs – one for animation and one for export

The fewer bones your character has, the lighter it is to run. And fewer resources used on one character means more to use elsewhere – maybe even allowing more characters.

But very few bones makes animating difficult and prevents many motions. Of course we would rather animate with the optimum amount – and with control objects as well, to make the work easier. Sure, you can have control objects in your game rig and just make sure not to export them to the game, but having bones in a rig that you don't export, say between one bone and another? That is asking for trouble.

The solution is two rigs, one for animation and one for exporting to the game. The game rig is linked to follow the animation rig – you animate only with the animation rig and export only the game rig.
Animation Rig is the rig you build first. It has the bones and control objects you want to animate with. The rig can even have details, like fingers, which you can animate and later decide to use or not (via the game rig). Build your animation rig and then consider what parts of it are essential for moving the character. Every bone in a character that only supports other bones and does not really affect the mesh itself is a bone the game character does not need. So, do you really need the neck bone if the head and chest bones playing together can offer the same result, or close enough?

Game Rig is a collection of helper objects (any type, also called nulls), one for every important part of the character. The reason to use nulls instead of bones is that creating bones builds hierarchies you don't need and should not have here. Create these objects, then align and parent them to follow the relevant bones of the Animation Rig. They should relocate to the pivot points of the animation-rig bones. Then skin your character mesh to these helper objects (nulls). In the end you animate with the Animation Rig and the Game Rig follows and deforms the mesh. A tiny sketch of the idea follows below.
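As a rough, tool-agnostic illustration (the scene accessors and the bone/null names below are hypothetical stand-ins, not any specific application's API): the game rig is just one null per important animation-rig bone, and for export you bake each bone's world transform onto its null every frame.

```python
# Minimal sketch, hypothetical API: bake the animation rig onto the game-rig
# nulls. The mesh is skinned to the nulls only, so the animation rig itself
# never has to be exported.

ANIM_TO_GAME = {                 # animation-rig bone -> game-rig null
    "chest_bone": "chest_null",
    "head_bone": "head_null",
    "upperarm_L_bone": "upperarm_L_null",
}

def bake_game_rig(scene, frame_range):
    """'scene' is assumed to expose world_matrix(name, frame) and
    set_world_matrix(name, frame, matrix) - stand-ins for your DCC's API."""
    for frame in frame_range:
        for bone, null in ANIM_TO_GAME.items():
            scene.set_world_matrix(null, frame, scene.world_matrix(bone, frame))
```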

That was the 3rd set of little tips for improving 3D (game) art assets. Cheers!

Low-poly Tips 2 – Game Art Asset Optimization


These modeling, uv-mapping and texturing tips apply to 3D art asset work for games and similar media. While they are best matched with low-poly 3D, they are definitely not limited to it. See also the previous collection here and the 3rd one here.

Middle edgeloop optimization & UV-mapping a character

It is common to model an edgeloop running around the middle of a character. It allows mirror-copying the torso – you uv-map and texture only half and duplicate to get both halves with the same detail (see Low-poly Tips vol. 1 for further explanation). However, there are a number of reasons why a full middle loop and mirroring everything is often not the best choice.

Mirror-uv-mapping everything on a character makes it look more generic. For visual interest you want variation in at least the texture, if not the shape, and for this you can't mirror everything. In a humanoid figure the places seen the most are where you want variation – usually the top half of the torso, the shoulders and the face.

A middle cut running all through your character model means more polygons. There are places where you have to have it, namely the crotch/hips area for humanoids, because this area receives a lot of stretching – you need to separate the legs. But there are also many places where you don't need it. See the image for an example of middle edgeloop use.

For four (or more)-legged characters like dogs you can often forgo the middle loop at the hips, too. Sure, the area will bend and break in animation, but if it doesn't show, does it matter?

Fake roundness with just 4 polygons – optimize asset polycount

A square can be made to look rounded in game. The trick is to use one smoothing group and turn the square so that its polygons are not aligned to the world axes but at an angle to them. This places the corners closer to where a round object's surface would be and away from where a square object's corners would sit. That and the smoothing group fool your eye. It is mostly the smoothing group – I don't know the exact technicalities, just that it works. See the same trick used on character legs.

Of course this only applies to the sides, the 4 polygons we are talking about. Looking at the top and bottom, the object's square nature shows, but when you hide them it is another story.
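For the curious, the smoothing group is likely doing the heavy lifting: with one smoothing group, each corner vertex gets the average of its two neighbouring side-face normals, which points straight out from the centre – the same normal a true cylinder would have at that spot – so shading interpolates across the flat sides as if they were curved. A small sketch of that averaging:

```python
# Minimal sketch: average the two side-face normals that meet at each corner
# of a square prism (viewed from above). With one smoothing group this is the
# vertex normal the renderer interpolates - and it points radially outward,
# just like a cylinder's normal would.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

face_normals = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # the four flat sides

for a, b in zip(face_normals, face_normals[1:] + face_normals[:1]):
    corner_normal = normalize((a[0] + b[0], a[1] + b[1]))
    print(corner_normal)        # e.g. (0.707..., 0.707...) - a radial direction
```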

Fake complex shapes with bitmap and alpha channel – optimize polycount

Any object with a mostly flat top, especially shapes like a barrel and similar where the top is equally proportioned to or larger than the parts below it, can have a faked top: a single polygon with the shape of the top mapped on it with a bitmap and alpha channel. This can save numerous polygons. However, the top with alpha does take space from your UV-map, since it needs some size to have enough detail not to blur and reveal its faked nature. So judge for yourself which is more important for your object: texture/uv-space or a lower number of polygons.

Texturing with seamless textures – re-using textures

Re-using textures is a core part of low-poly work. Characters don't allow it that much, but props such as houses do. Say for a medieval building you might just have a texture with 1/4 stone, 1/4 wood, 1/4 roof and 1/4 window – see the image used to texture a well; the idea is the same.

The trick is to place almost every polygon in your uv-map separately so that each grabs the maximum texture area – AND also to vary polygon sizes, rotation and mirroring to add variation to the way the texture is displayed on your model.

Unlike 'standard' uv-mapping, where you map first and texture after, for this you had better do the reverse. Make the texture first – lay out the different material areas (preferably each tileable). Think about what you need and what shapes you need, like longer varied strips of material, and add those bits to your texture. Then uv-map polygon by polygon, or a few at a time, to get all you can out of it.

Texturing by re-using textures does become a balancing act: do you use more uv-space for one particular area, or more polygons? Say you have a long continuous wall. Covering it all with a single unique texture would take a large amount of uv-space. On the other hand, repeating one seamless texture over and over would require more polygons. So you weigh the pros and cons and perhaps go the middle way. Usually my take is that a few extra polygons do less harm than needing larger or more textures.

Remember MipMapping and Antialiasing when texturing – stop texture bleed

When the game creates a MipMap from your texture, or when the texture gets antialiased, it gets blurred. This is a problem at the edges of the uv-islands in your uv-map. Either the background colour of your texture bleeds in or the alpha channel does (usually as black). As a result the uv-edges become visible on your model in game.
To prevent the texture-bleeding problem, push the texture colours themselves well over the uv-seams. Then, when the blurring happens, you still have the correct colours at the seams. A small padding sketch follows below.
http://en.wikipedia.org/wiki/Mipmapping
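This padding can also be done programmatically. A minimal sketch, assuming numpy and a boolean mask that marks which texture pixels lie inside uv-islands (both are assumptions for illustration, not part of the tip itself): it grows island colours outward a few pixels so the mip/antialias blur picks up the correct colour instead of the background.

```python
# Minimal sketch: simple edge padding / dilation of uv-island colours.
import numpy as np

def pad_edges(texture, mask, passes=8):
    """texture: HxWx3 float array, mask: HxW bool array, True inside islands."""
    tex, filled = texture.copy(), mask.copy()
    for _ in range(passes):
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbour_filled = np.roll(filled, shift, axis=(0, 1))
            grow = neighbour_filled & ~filled       # empty pixels next to filled ones
            tex[grow] = np.roll(tex, shift, axis=(0, 1))[grow]
            filled |= grow
    return tex
```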

Acknowledge uv-area repeating – optimize texturing

If a part of your uv-map goes over the uv-area, it will come out at the opposite end. This is not displayed visually in your program (not in Max or Modo at least), but knowing it you can use it to texture uv-parts that do not fit in your uv-space. Mind you, this works only with a seamless texture.

Have fewer seams in the UV-map

Models that have their UV-map split into numerous parts count as having more vertices as far as the game engine is concerned – each split means more vertices and so a heavier load. To minimize vertex count you should have your uv-map as continuous as possible – say, a character skin could be one big open pelt, like an animal skin. Of course uv-mapping and texturing poly by poly, as described above for seamless texturing, does the exact opposite. A small counting sketch follows below.
Do note that going for fewer UV-seams is a fine-tuning type of optimization – it is best used in addition to the other tricks, where possible, not to replace them.
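Why the splits cost vertices: a vertex sitting on a uv-seam carries two or more different uv coordinates, so the exporter has to duplicate it. A minimal counting sketch, assuming a hypothetical flat face/uv data layout:

```python
# Minimal sketch: count the vertices a mesh actually sends to the GPU.
# Every (vertex index, uv) combination needs its own exported vertex, so
# each uv-split adds duplicates.

def exported_vertex_count(faces, uvs_per_corner):
    """faces: list of tuples of vertex indices; uvs_per_corner: matching
    list of tuples of (u, v) for each face corner."""
    unique = set()
    for face, face_uvs in zip(faces, uvs_per_corner):
        for vert_index, uv in zip(face, face_uvs):
            unique.add((vert_index, uv))    # same vertex + same uv = shared
    return len(unique)
```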

Make texture details fit the size displayed in-game – optimize textures

The size that an object appears on screen in game, be it because of the optimum camera distance or whatever, defines the maximum texture detail you need. Say you have a character face that is 85×85 pixels on screen in game. You need no more than that for it in the texture map. Of course, if your game offers a free camera, modifiable resolutions and similar tools for the player, things get more complicated. But even in free-camera games there has to be an optimum to aim for – what is the size of the texture detail at the camera distance the game is designed to be played at? A small back-of-the-envelope sketch follows below.
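A quick back-of-the-envelope version of that (the 25% uv coverage is a made-up example figure): if the face fills roughly 85 screen pixels and takes up about a quarter of the texture's width in the uv-map, the whole map only needs to be about 85 / 0.25 = 340 pixels wide, so a 512 map is already plenty.

```python
# Minimal sketch: texture size needed for a given on-screen size, rounded
# up to the next power-of-two map size.

def needed_texture_size(on_screen_pixels, uv_fraction):
    raw = on_screen_pixels / uv_fraction    # e.g. 85 / 0.25 = 340
    size = 1
    while size < raw:
        size *= 2
    return size

print(needed_texture_size(85, 0.25))        # -> 512
```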

This ends the second collection of art asset tips, especially useful when working with low-poly 3D assets. I hope some of these come in handy in your projects.

How to keep Modeling fun?

I've written bits about polygon flow and modeling for animation and a comparison of a model built for animation with another that's not. What about modeling technique? What do you use? Have you weighed the pros and cons? Note that this is only about polygon modeling, not about NURBS or sculpting.

I think modeling should be fun. To be fun it needs to be fast and without fear of making mistakes, of getting stuck.  Fun modeling is safe.

The first way of making things easier is designing with a pen – polygons can't beat drawing for planning. Second, having the option of displacement and normal maps, I would do very fine detail with those, not with polygons.

In modeling, the fly in my soup has been keeping polygons 4-sided and relocating 'poles', aka points where 5 or more edges meet, to where I want them. I have spent endless hours on these two things.

Why 4-sided, aka quads? Quad polygons are what many programs prefer and also what displaces (i.e. sculpted detail coming out via a displacement map) and deforms (in animation) in the most reliable way.

And why move poles?  Areas with poles don’t deform well in animation and may produce render artifacts.  Push them where they are unnoticeable, to places that don’t deform much.

So, fun modeling would be a process that keeps polygons as quads and lets you control pole placement.  And ideally it would all happen without having to think about it.

Modeling methods

1. edge-out / detail-out / poly-by-poly method

Starts from a quad polygon or a strip of such polys and extends more quads out from their edges. Often in this style you start from detail areas such as the eyes or mouth and then draw polygons to connect them. Everything stays as quads by default as long as you know where the extended polygon strips should go and connect. The same goes for the poles – you need to know where and how to place them. This style requires a design drawing to follow. Also, it takes some skill to either have the polygon flow worked out in your head or to plan it ahead of time and draw it on the design drawing.
pros: polygons stay as quads, not much clean-up work, good for details and fast to build when you know what you’re doing
cons: need to know what works where and what connects to what beforehand

One very nice example of poly-by-poly modeling is base mesh creation for this Yeti.

2. detail-in / box-modeling / sub-division modeling

You begin with a box or other base shape in your 3D software, shape it into the overall figure and start to carve detail in. You work more with polygons than edges.

This style is often connected to subdivision modeling, where you model just like above but view the subdivided version of your model instead of (or beside) your actual work-model. The work-model stays low-poly (easier to animate) while the final rendered result is the subdivision surface.
pros: can go 'freeform' – model with little planning, can still conceptualize in mid-process, easy to start with, easy to make major changes, fast workflow when done right
cons: detailing is more difficult than with no. 1, can be hard to keep polys as quads unless done 'right', can be difficult to direct the edgeloops when you are dealing with the overall shape rather than just the loops themselves

Some tutorials:
Wiro’s tutorials
Southern’s Minotaur series

Which to use? You can use both.  Box-modeling is best for big things, poly-by-poly does well in detailing.

Fun modeling

This solution is all box-modeling: a way that keeps to quads and allows moving poles around.

Limit your tools to the following (in addition to standard move, rotate and scale). This pretty much ensures you create only quads. Tie the commands to hotkeys for a speedy workflow.

  • bevel/extrude
  • collapse
  • merge (to clean up after collapse)
  • turn polygons

Create areas and edgeloops by beveling a group of polygons. This creates loops around them and keeps everything as quads. Go as far as you can with bevel – it is the easiest tool to use. See the beveling around the mouth and the nose-loop in the image.
Add one polygon: select 2 or 3 polys, bevel and collapse. Remove the offending edges / merge polygons and you have one new polygon.
Remove one polygon: turn 2 polygons as shown and merge to remove one polygon.
Turn polygon/edge (or a similar tool) to direct polygon flow. This is also how you can move poles around (to where they do the least harm) and in some cases even remove them. See 'Remove one polygon' above for how the geometry changes.

Some of you may describe this as Taron-style modeling. It is very much the same, but I don't often model with subdivision on. My end result is frequently for games, where a subdivision surface is not an option (yet), so I stick to regular polygons.

That's it. Box-modeling with certain tools used in a certain manner gives you just quads. This is a way to stop worrying, relax and have fun. Of course the style is not completely trouble-free – it can get confusing with polygon turning – but it is still highly recommended. If you still end up with a triangle somewhere and it does no harm there, leave it in. I'm not an advocate for quads only – I just like to keep mostly to quads.

BTW, the above method is also shown in brief in the latter half of this video: Animation Character Creation Tutorial – Modeling Tools and Method. I will go further into the workflow logic of it later.

What type of modeling feels natural to you?  What do you think of the ‘fun modeling’ style?

Low-poly Tips

These are basic tips for optimizing and improving low-polygon work in the areas of modeling and texturing.

The core of low-polygon work is to do a lot with little: do something good with a limited number of polygons and with small textures. The limitations are set by the game engine and the platform the content is for – these days low-poly is usually for mobiles and other handheld devices. One very good way of getting the most out of your limited resources is doing stylised designs. I recommend this: start by designing for low-poly. One of the best examples of such styling is the look of World of Warcraft.

I learned low-poly working on the Ultima 6 Project. U6P uses the Dungeon Siege (1) game engine and creates a large world filled with low-poly models. You can view some of my U6P work here.

  • Make use of every triangle. Sounds simple, but it is easy to overlook when you are used to modeling with quads or n-gons (polys with more than 4 sides). Since every polygon is triangulated (divided into triangles) anyway when exported to the game engine (or rendered), you might as well divide the polygons yourself. That gives you one more edge to define the shape with. The example shows how shape is created by placing the edges that divide the quad polygons, and what the result would be if the edges were misaligned – something you may get if you let the software triangulate for you. A tiny numeric illustration of this follows after this list.
  • Model volume on the outside of the joints. This way, when the limb bends, the outside preserves its shape even when bent 'open'. The example shows a setup you could use for a knee or elbow, and some others for fingers.
  • Focus on modeling the profile and the main shape landmarks. Polygons not used for better joint deformation or for defining the general shape are extra – something you can do without. Create that extra detail with the texture map instead.
  • Add shape with texture by drawing some shadows and highlights into the textures. But do it sparingly – strong, always-present shadows or highlights look false.
  • Make holes with texture. Say you need a grate with lots of holes. Modeling them would mean a lot of polygons. Why not make a plane and texture it with a transparent texture (if your game engine supports it)? You could even have two planes, one see-through grate above and another below it with a well or whatever deeper place painted on it. The simplest solution is of course 'holes' painted in the main texture.
  • Use multiple textures on your object. Game engines (and other software) allow textures only up to a certain size, but if they allow multiple textures you can get around the limitation. Of course, don't go adding textures beyond what the target device can comfortably handle. One main reason to use multiple textures is when your game engine allows replacing parts of the geometry in game. Say you have a character with skin (texture 1) and clothes (texture 2). The latter texture and the geometry it covers could be swapped in-game to another version when the character changes set of clothes or armor. Even if your engine doesn't allow interchanging parts, using multiple textures is a good way to make many variations of one model.
  • For more texture detail somewhere on your model, make that part bigger in your UV-map. Sure, this leaves less space for other parts, but some areas are more important than others (the character face for one). The example has the gargoyle skin texture with the uvw-map overlaid.
  • Paint your texture in 3D-painting software and detail further in 2D software. Sure, you could paint it all in 2D software like Photoshop, but there is no comparison to 3D painting. A simple thing like making a straight line around a character becomes a pain if you have only 2D paint to work with. Here are several 3D-paint applications listed. Tattoo, for one, is free for personal non-commercial use.
  • Colour your textures by hand. Painting in shades of grey, black and white, and then overlaying colour might be easier, but if you instead both choose and paint the colours by hand, the end result is more vibrant and alive. The same applies to gradients: they tend to be too mechanical, too perfect. Paint the colour shift yourself.
  • Use duplicate parts in your character/object. Hands can often be mirror copies of each other, the same with legs. This is how you save UV-space and hence make all parts bigger in the UV-map and so more detailed. This is even more true with objects like buildings, where you can use the same textures over and over. A clever and creative UV-worker can create many variations from just one texture map.
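On the first tip, a tiny worked example (with made-up corner heights) of why placing the dividing edge yourself matters: a non-planar quad ends up with a different silhouette depending on which diagonal the triangulation uses, because the quad's centre lies on that diagonal.

```python
# Minimal sketch: heights (z) at the four corners of a ridge-like quad a-b-c-d.
a, b, c, d = 0.0, 1.0, 0.0, 1.0

# Height at the quad centre depends on which dividing edge you place:
centre_if_ac = (a + c) / 2.0    # -> 0.0, a valley across the middle
centre_if_bd = (b + d) / 2.0    # -> 1.0, a ridge across the middle
print(centre_if_ac, centre_if_bd)
```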

Tutorial – Model an Animation Ready Male Body

This tutorial is for those wanting to learn character modeling or modeling for animation. It shows how to model a male body, a base mesh. The end result is good for both animation and sculpting. For more info, read the article about modeling for animation or see another where I test the tutorial mesh against another mesh.

The model doesn't have a proper head, nor does the guide show how to make one. That is a topic big enough for another tutorial.

I'm also giving away the final mesh in OBJ format. You are free to change and use it in any project you like. But selling, either the model or the tutorial, is not allowed. This is free learning material.

Grab both the tutorial and 3 variations of the mesh at Files.
If you like this tutorial or have critique, please leave a word.

Modeling for animation – Test

Earlier I wrote why surface flow matters and a bit about why model for animation. Here I wish to show the benefits with visual examples.

I will compare how two character meshes deform in animation. To make this comparison mean something, I have selected one of the best base meshes I could find without directed edgeflow. This mesh is made by an unknown person. It has a nice, even division of polygons – good for sculpting. The second mesh, seen on the right, is mine and built for both sculpting and animation. It is almost exactly the same size and shape as the first. I have rigged both meshes in Messiah with one rig – they both do exactly the same motions. I haven't done any weighting of bones to the mesh – Messiah bones have a good effect on the mesh by default. The point is that with this setup the only difference you can see comes from the meshes.

Modeling for animation Test - the meshes we test with

Here are the meshes in rigging pose. My edgelooped mesh has a different head and no toes, as I was lazy and many characters will be wearing shoes anyway. The edgelooped mesh has about the same number of polygons in the body as the 'normal' mesh, but more definition, because the flows define the shape. The flow also helps maintain shape in extreme motions, as seen in the stretching example. Observe the general form, especially the upper shoulder and chest area and the hip. See how the edgeflow helps to keep the shape and deforms a bit better? The difference is not notable everywhere, but it is there and it is important.

Modeling for animation Test - stretching

Modeling for animation Test - arms

You may argue the first mesh would show the same definition if we just pushed points around and added a few polygons. But that's just it – unless you add those polygons as carefully placed loops, you will have to add a lot more than 'a few' to get the same definition the edgelooped mesh has with fewer polygons.

Some might also say that the flows don't matter that much in animation production, because when the final mesh is subdivided into a gazillion polygons at render time there will be more than enough for the joints and to keep the definition. I disagree. The base mesh is the one that gets animated; it sets the base grid for the final – any problems in the base are still present in the final. And I dare say they become more visible in a highly detailed mesh.

The last examples show how the edgeloops help in joint areas. With the knee I'm using the loop shown here (see the image with lots of loops). The same works at the elbow and at the top of the shoulder. The loop 'binds' the parts together and provides material for both sides of the outward-bending limb – it keeps the volume.

Modeling for animation Test - finger loops

With fingers I'm using a simpler 'loop' to keep the polycount low. It adds one more edge on the outward-bending part and helps to keep the volume. It also introduces triangle polys. If triangles are a problem, add a full loop around the finger instead.

Conclusion

A mesh with evenly spaced polygons does well in animation, and a mesh with planned edgeflow does even better. No surprise there, but I needed to test it anyway.

What’s your take on this?  Is edgeflow really that important?  What is your preferred flow – care to show it?  I know mine is just one way to do it.

Modeling for animation – Body

A character modeled for animation is modeled to deform well in animation. This article describes the benefits of edgeflow and edgeloops and the general ways to use them on a human/humanoid torso and limbs.

To begin, check out my article on surface flow, if you haven't already. And then, to go further, we need to understand edgeloops. Quoting guru Bay Raitt:

An Edge Loop is an interlocking series of continuous mesh edges used to accurately control the smoothed form of an animated subdivision surface.

Edgeflow and Edgeloop are essentially the same thing.  Flow as a loop separates areas, defines shapes and directs edgeflow. You use loops in places where major deformation happens in animation.

Overall edgeflow within each looped area matters, too. The idea of a good flow, as I see it, is that when stretching or compression happens, the polygons are already aligned towards the change. Then the deformation is usually problem-free.

The general advice is to model the flows following the main muscles under the skin. That can get needlessly complex, considering how many muscles move a human body. Some people have been obsessing over edgeloops for years. Luckily you only need loops for the main muscle groups or body masses. Think about what main masses/muscles move in a character and how. Direct the surface to flow along them and form loops around them. The loops may interlock, and that's all good – then they mix and crease together better.

What if you want superb muscle definition? Much of the missing detail can be added with displacement maps (from Zbrush, Mudbox and the like), and when the base moves correctly, the displaced 'muscles' on it move mostly alright, too. Though if you want realistic flexing muscles, you need a heavier setup – but this gives a good base to start from.

What are the Main Masses moving in a human/humanoid?

  • The back of a character bends below the chest. You need loops going around the torso. The chest bends too, and even the rib cage below deforms; however, the muscle masses on the chest, shoulder and back area have more effect on the shape change, the shoulder having the clearest effect. You should direct flow from the chest over the shoulder and to the back. Please note my optimization here has led to a pole, a 6-edge intersection, in the middle of the back. It is not troubling me, but if you build a similar mesh you may want to add a few polygons in that region to get rid of the pole.
  • The arm-mass connects to the shoulder and the problem area is over the arm-socket.
  • The head and neck moving about affect the area around the neck and some way down the back.
  • Legs move the buttocks as well. The mass movement is limited to the top of the hip bone, pretty much.
  • With arms and legs the problematic bits are the knee and the elbow. The masses of the upper and lower limb segments come together and separate there. The loops help to give more mass to the outside part of the bending limb, providing material for both parts that bend away and a centerline that stays more or less in place. You may get by with a tube-like structure here for cartoon characters, where things aren't that exact, OR by having many loops that you carefully weight to the bones to deform just right.

model for animation - x-loop

Highlights and edges show the flows I like to use for this model. Disclaimer: please note this is definitely not the only way to do this, just A way. You can get by with fewer loops, more loops or different loops. Whatever works, works – I find these work for me.

The loops above, in the limbs and at the top of the shoulder, are variations of a very useful x-loop. See the diagram of a more complete version of it.

Also, here is another version, a knee with more 'mass' left in. You can do varied X-loops.

Do you find this helpful?  Please let me know.  And if you don’t think the shown flows help animation, please give critique.

Read more about edgeloops here: http://web.archive.org/web/20120422232142/http://www.3darts.com.br/tutoriais/edge_loop.php. The original article has expired but can still be found in the archive (thanks, Terry). It is in Portuguese. Here is a translation to English.

Surface flow – why it matters

Often when someone new to 3D shows their first model online, the form lacks definition and the whole thing looks a bit play-doughy. And if they show the model's wireframe, you can see the construction may have lots of polygons, but the distribution is neither even nor does it flow with the shape. It's often because they construct the model without directing the surface flow.
Surface flow is directed with either the edges between the polygons or the polygons themselves – the same thing, really. It is usually called edgeflow.

Benefits of good edgeflow

  • more definition with fewer polygons, an optimized mesh
  • better deformation in animation (if built with anatomy in mind)
  • better shading

surf.flow good flows

I've prepared some examples, an organic shape done in two ways:
1.  with edges directing the flow
2.  with no flow direction applied

The first image is our example shape, a cute nose.

surf.flow nosepolys

Here it is as a basemesh and the same with subdivision (control cage showing). The nose has 42 polygons. Now let's try to achieve the same shape and definition without any flow direction – starting with 42 polys.

no surface flow, examples

As you can see, the results don't get much better with more polygons – somewhat worse, in fact. The model becomes difficult to work with. Yet I'd still have to increase the polycount to achieve the same definition we get with directed surface flow. You CAN get good results this way in a sculpting program, working with thousands or millions of polygons, but to get anything out that you can use in any other program, or animate for that matter, you have to reconstruct the model's surface flow.

Animated models benefit from good surface flow when the flow is correct in places where the deformation happens, like around the mouth. See the pic. And sorry, but I shan't torture myself by building a poor mesh for comparison. Just imagine a rough head-shaped tube here with a polygon or a few pulled in for the mouth. If you think that's bad, imagine what happens when it is animated.

Good surface flow also improves shading, simply because with a flow that defines the shape nicely from the ground up, every big and small part of the mesh is aligned along the shape and not against it. Also, with good flow you do not have mesh issues (poles, etc.). When virtual light rays hit those parts, they reflect and refract as they should. The shading – highlights, shadows and all – looks like it should. I have no solid proof of this, but I can say from experience that models with good edgeflow render better.

To conclude: building without edgeflow is harder, needs more polygons and renders worse. Our nose is not the best example, but the difference should be apparent in the render below. The left nose is with good edgeflow, the right without.

comparing model with good surface-flow and other without

Besides the bit about animation, all of the above also applies to hard-surface models, not just character models. So model with directed flows – model well. We'll go into edgeloops and modeling for animation later.

This whole article, the concept, is one of those things I wish somebody had explained to me a long long time ago.  So here it is, in my words.  I hope somebody out there benefits from it.  If you found the article useful, or if you think it’s all nonsense, please comment – let me know.

8 Animation Production Tips – Modeling and Animation

I wish to encourage the lunacy that is Personal Animation Production.
This is Animation Production Tips collection 1. These were born from problems I've faced, from the neurons burnt. Read on and save yourself a great deal of trouble.

Note that these are tips. Many could be expanded into full tutorials. You may find further info on some of them somewhere – maybe even here, later. The important thing for now is to get the ideas across.

Tips for animation production

  1. Use each software to its strengths. Build a 'pipeline'. It may sound like a costly solution but it doesn't have to be (Wings for modeling + Blender for animation, effects and video & audio editing = all free). You can build an affordable pipeline even with commercial software and have it all under the price of one Max or Maya licence. One example of such a combo would be Silo, 3D Coat, Messiah and Vegas Pro.
  2. Model your characters for animation – use edgeloops to create surface flow that deforms well in animation. See the above picture? Your model has to be good to get that range of motion without problems. This is crucial especially in the joint and face areas. In short, your polygons should mimic the major muscle flows under the skin. Surface flow is a major topic by itself. If it is a new concept for you, I suggest you start with the following classic modeling document: http://www.theminters.com/misc/articles/derived-surfaces/index.htm
  3. Don't go super low-poly with your character models. I'm very familiar with the obsession to optimize, but if you go exceedingly low in polys, your character deformations become too large – no longer in your control. A few more polygons are better for displacement too – it displaces with more reliable results.
  4. Use displacement for detailing. Sculpt or model the detail in software that lets you bake it into a displacement map. In production, use less detailed models and use displacement maps to bring the detail out at render time. The advantages are much lighter animated models and scenes, meaning generally better animating conditions, faster manipulation and hopefully fewer crashes too. You also get faster overall rendering, as detail is generated only where and when it is seen. Most software should allow linking displacement to, say, camera distance. Or you can set the amount of subdivision happening per pixel – meaning only the area that shows well in your current camera frame is subdivided for detail.
  5. Use as few bones in your rig as possible. Unless you're creating the ultimate in realistic muscle deformation, you can get by with very few bones. The fewer you have, the smoother the deformation created by them can be. You know, organic. In reverse, the more bones you add, the more you have to adjust bone influence, add muscle bones between them, corrective morphs or what have you – all to get rid of the too-sharp deformations many bones bring.
  6. Transfer animation from one software to another with MDDs. MDD is a universal way to transfer Mesh Deformation Data. It transfers every deformation of the mesh in your animation software, even morphs – meaning all animation – to another software. This way you can animate in animation-specialized software and do the rest in whatever software you like. MDD support should be common. (A small sketch of the file layout follows after this list.)
  7. Break your animation into sequences. Don't try to animate everything in one project file and don't try to export long animation MDDs. The files can get corrupted and then you lose everything at once. And long animations, especially with complex meshes, become huge as MDD files.
  8. Set your character rig up so that you can do mesh or rig revisions with ease during production. Let's say you find, right in the middle of production, that you have to change geometry in your character's shoulder area. It will be an absolute pain if, to get the changed model moving again, you have to re-weight it and set your mesh-based tricks (morphs and such) up again. Instead, use animation software that gets by with bones, weight fields and such – so that everything is in the rig and not tied to the mesh in any way. Then you can change the mesh around the rig as much as you like, even change to other characters. Messiah works like this. Your software, if different, might not, but it may have some other way to save you from the re-weighting hassle. Find it out and test it before you start animating.
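On tip 6: as far as I know, the MDD point cache is a simple big-endian binary file – frame count, point count, one time stamp per frame, then xyz positions for every point of every frame. Here is a minimal write-sketch under that assumption (verify against your own tools' importer before relying on it):

```python
# Minimal sketch: write an MDD point cache by hand (assumed layout, verify!).
import struct

def write_mdd(path, frames, fps=24.0):
    """frames: list of frames, each a list of (x, y, z) vertex positions,
    with the same vertex order in every frame."""
    num_frames, num_points = len(frames), len(frames[0])
    with open(path, "wb") as f:
        f.write(struct.pack(">2i", num_frames, num_points))
        f.write(struct.pack(f">{num_frames}f",
                            *(i / fps for i in range(num_frames))))
        for frame in frames:
            for x, y, z in frame:
                f.write(struct.pack(">3f", x, y, z))
```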

Do you use these tricks in your productions?  What would you change?  What would you add?  What tip would you like to see expanded to a tutorial?