Vector input node for Blender

I recently described how to render a complete animation in the compositor. To achieve this effect, I used a texture which was modified in the compositor so that the scene didn’t need to be rerendered. The texture node has two inputs: scale and offset. Their sockets are violet, which means they expect vector coordinates as input – but there is no vector input node available.

While I wished I had a vector input node, I found out that it is possible to connect an RGB node to those inputs. This worked, but with the limitation that the values are restricted to the range from 0 to 1. That was just enough for my purposes, but it might not be for yours.

Luckily, I have now figured out how to simulate a vector input node using a compositor node group. The first thing to know is that R (red) maps to X, G (green) maps to Y and B (blue) maps to Z. Next, colors in Blender can actually exceed the range from 0 to 1; anything greater than 1 is simply displayed as 1. For this vector input node we don’t need to care about that, because the value will never be used as a color – but if you ever need an oversaturated color, you can use the same approach. The pure R, G and B colors are multiplied by the input values of the compositor group and added together to form the final vector output.
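For readers who prefer scripting, here is a minimal Python sketch of the same idea, built directly in a scene’s compositor tree (the node names and layout are my own, not taken from the downloadable group; wrap the nodes in a group yourself for reuse):

```python
import bpy

# Pure R, G and B are multiplied by three Value nodes and summed, so the
# resulting "color" carries an (X, Y, Z) vector.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

axis_colors = {  # R -> X, G -> Y, B -> Z
    "X": (1.0, 0.0, 0.0, 1.0),
    "Y": (0.0, 1.0, 0.0, 1.0),
    "Z": (0.0, 0.0, 1.0, 1.0),
}

products = []
for axis, rgba in axis_colors.items():
    value = tree.nodes.new('CompositorNodeValue')       # the X/Y/Z input
    value.label = axis

    color = tree.nodes.new('CompositorNodeRGB')         # pure R, G or B
    color.outputs[0].default_value = rgba

    multiply = tree.nodes.new('CompositorNodeMixRGB')   # color * value
    multiply.blend_type = 'MULTIPLY'
    multiply.inputs['Fac'].default_value = 1.0
    tree.links.new(color.outputs[0], multiply.inputs[1])
    tree.links.new(value.outputs[0], multiply.inputs[2])
    products.append(multiply)

# Add the three products together: the result is (X, Y, Z).
add_xy = tree.nodes.new('CompositorNodeMixRGB')
add_xy.blend_type = 'ADD'
add_xy.inputs['Fac'].default_value = 1.0
tree.links.new(products[0].outputs[0], add_xy.inputs[1])
tree.links.new(products[1].outputs[0], add_xy.inputs[2])

add_xyz = tree.nodes.new('CompositorNodeMixRGB')
add_xyz.blend_type = 'ADD'
add_xyz.inputs['Fac'].default_value = 1.0
tree.links.new(add_xy.outputs[0], add_xyz.inputs[1])
tree.links.new(products[2].outputs[0], add_xyz.inputs[2])
# add_xyz.outputs[0] can now be plugged into a vector (violet) socket such
# as the Texture node's Offset or Scale.
```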

With this set up, you have a reusable vector input node group which you can save in a separate file and link or append into your scenes. The main benefit is that you can now set keyframes on the X, Y and Z values, which is not possible directly on the texture node’s offset or scale sockets. So now the texture can be animated in the compositor. Another use for this node group could be the Vector Blur node, which has a Speed input socket.
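As a hypothetical scripting example – assuming the linked group is available under the name “VectorInput” and exposes X, Y and Z group inputs – the keyframes can also be inserted from Python:

```python
import bpy

tree = bpy.context.scene.node_tree
group_node = tree.nodes.new('CompositorNodeGroup')
group_node.node_tree = bpy.data.node_groups["VectorInput"]

keyframes = {1: (0.0, 0.0, 0.0), 250: (0.5, 0.2, 0.0)}  # frame -> (X, Y, Z)
for frame, vector in keyframes.items():
    for name, value in zip("XYZ", vector):
        group_node.inputs[name].default_value = value
        group_node.inputs[name].keyframe_insert('default_value', frame=frame)
```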

  Download vector input node group for Blender

Creating a reusable vignette node

One of Andrew Price’s favorite effects in the Nature Academy is the vignette – almost every final render has one. The effect consists of five compositor nodes which are not really intuitive to remember. The next tutorial, about snowy mountains, is coming up, so it is time to construct a reusable compositor node group.

The vignette effect I am constructing here is reusable and configurable at the same time, which means that the amount of the effect, the sharpness of the vignette and the color can be customized if needed.

The internal setup of the group and example usages with the default scene look like this:

How to use it: go to File/Link (Ctrl+Alt+O), select the Blender file, go to NodeTree and select the vignette node group.

The group has four inputs. The “Image” input is the picture you want to apply the vignette effect to. The “Amount” input is the blend amount and should be a value between 0.0 (transparent) and 1.0 (opaque); unfortunately it is not possible to restrict input values at this time. The third input, “Edge”, controls the sharpness of the vignette and also takes values between 0.0 (sharp) and 1.0 (soft). The last input is the vignette color, which is typically black, but can also be set to any other color, e.g. red as in the example. You could even use an image or texture as the vignette color.
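If you prefer to do the linking from Python, here is a hedged sketch; the file path and the “Color” socket name are assumptions, while “vignette”, “Amount” and “Edge” come from the description above:

```python
import bpy

path = "/path/to/vignette.blend"  # adjust to where you saved the file
with bpy.data.libraries.load(path, link=True) as (data_from, data_to):
    data_to.node_groups = [name for name in data_from.node_groups
                           if name == "vignette"]

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

vignette = tree.nodes.new('CompositorNodeGroup')
vignette.node_tree = bpy.data.node_groups["vignette"]

render_layers = tree.nodes.new('CompositorNodeRLayers')
composite = tree.nodes.new('CompositorNodeComposite')
tree.links.new(render_layers.outputs['Image'], vignette.inputs['Image'])
tree.links.new(vignette.outputs[0], composite.inputs['Image'])

vignette.inputs['Amount'].default_value = 1.0            # 0.0 .. 1.0
vignette.inputs['Edge'].default_value = 0.5              # 0.0 sharp .. 1.0 soft
vignette.inputs['Color'].default_value = (0.0, 0.0, 0.0, 1.0)  # black vignette
```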

Download vignette compositor node (60 kb, ZIP)

What do you think? Is this node setup useful?

Animation for the poor: 1000 HD frames in 1 hour

When I finished the plants tutorial, I wondered what I could animate in this scene. But at 7 minutes of render time per frame, I realized that I would likely never finish even a 3-second animation.

A relative of mine commented on my light streaks and pointed out that the fog creating the light streaks is very homogeneous. So I went into the compositor and fixed it by mixing a cloud texture into the fog. By now this hardly needs explaining, because Andrew uses the same approach in the lakes tutorial as well, but for the sake of completeness, here is the node setup:
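Since the screenshot is not reproduced here, take the following Python snippet only as one plausible reconstruction of the idea (the exact wiring in the original file may differ): a Clouds texture is multiplied into the fog factor so the fog is no longer uniform.

```python
import bpy

clouds = bpy.data.textures.new("FogClouds", type='CLOUDS')

tree = bpy.context.scene.node_tree
tex_node = tree.nodes.new('CompositorNodeTexture')
tex_node.texture = clouds

modulate = tree.nodes.new('CompositorNodeMixRGB')
modulate.blend_type = 'MULTIPLY'
modulate.inputs['Fac'].default_value = 1.0
tree.links.new(tex_node.outputs['Value'], modulate.inputs[1])
# modulate.inputs[2] takes the existing fog factor (for example the Map
# Value output derived from the Z buffer); modulate.outputs[0] then replaces
# the fog factor in the mix that blends the fog into the scene.
```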

With this change I got the following result (it shows aliasing artifacts because I didn’t know that the scene needs to be rerendered; Andrew has since explained this in the lakes tutorial):

Plants - enhanced fog

While doing this, an idea came to mind: if it were possible to animate the texture, I could render an animation in which the fog appears to move. The only thing I didn’t want to do was rerender the scene for every frame. That’s why I saved intermediate results from the compositor using File Output nodes. I can’t go into much detail here: simply save the direct output of the streaks, and then the scene including the background, into separate files (I used EXR files).
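A small sketch of that saving step in Python (the folder names are placeholders):

```python
import bpy

# Two File Output nodes that write the streaks pass and the composed scene
# as EXR sequences.
tree = bpy.context.scene.node_tree

for name in ("streaks", "scene"):
    out = tree.nodes.new('CompositorNodeOutputFile')
    out.label = name
    out.base_path = "//" + name + "/"        # relative to the .blend file
    out.format.file_format = 'OPEN_EXR'
    # connect the matching intermediate output to out.inputs[0] in the editor
```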

Next, open a new Blender file and delete the default cube. In the render settings, uncheck all passes and all includes (like sky, solid, halo) and simply set up the compositor nodes like this:

One odd thing I found is that Blender does not support vector input nodes, only value input nodes. While a value node holds only one value, a vector holds X, Y and Z values. To fake this, I am using an RGB node: the RGB values map to XYZ, with the limitation that the range only goes from 0.0 to 1.0. This doesn’t matter for this scene, as we need very low values anyway.
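The following is only a hedged skeleton of what that setup might look like in Python – the saved EXRs come back in as Image nodes, and two RGB nodes stand in for vector inputs on the cloud texture’s Offset and Scale sockets (file names and the texture name are placeholders):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

streaks = tree.nodes.new('CompositorNodeImage')
streaks.image = bpy.data.images.load("//streaks/0001.exr")

background = tree.nodes.new('CompositorNodeImage')
background.image = bpy.data.images.load("//scene/0001.exr")

tex = tree.nodes.new('CompositorNodeTexture')
tex.texture = bpy.data.textures["FogClouds"]      # append or recreate the texture

offset_rgb = tree.nodes.new('CompositorNodeRGB')  # stands in for the Offset vector
scale_rgb = tree.nodes.new('CompositorNodeRGB')   # stands in for the Scale vector
tree.links.new(offset_rgb.outputs[0], tex.inputs['Offset'])
tree.links.new(scale_rgb.outputs[0], tex.inputs['Scale'])
# The texture output then modulates the streaks before they are mixed over
# the background and sent to a Composite node, as in the screenshot.
```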

Now, extend your scene to 1000 frames and go to frame 0. Make the color of the RGB node connected to the Offset input pure white (1, 1, 1), then hit the I key to insert a keyframe on it. Do the same for the RGB node connected to the Scale input. Move to frame 1000, set the offset color to pink (1, 0, 1) and the scale color to bright pink (1, 0.7, 1), and make sure to set keyframes on both of them again.
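The same keyframes can be set from a script. The node names below are whatever Blender assigned when the RGB nodes were added, so adjust them to match your file:

```python
import bpy

tree = bpy.context.scene.node_tree
offset_rgb = tree.nodes['RGB']        # RGB node wired to Offset
scale_rgb = tree.nodes['RGB.001']     # RGB node wired to Scale

def key_color(node, frame, color):
    node.outputs[0].default_value = color
    node.outputs[0].keyframe_insert('default_value', frame=frame)

key_color(offset_rgb, 0, (1.0, 1.0, 1.0, 1.0))     # white at frame 0
key_color(scale_rgb, 0, (1.0, 1.0, 1.0, 1.0))
key_color(offset_rgb, 1000, (1.0, 0.0, 1.0, 1.0))  # pink at frame 1000
key_color(scale_rgb, 1000, (1.0, 0.7, 1.0, 1.0))   # bright pink
```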

Jump over to the Graph Editor and make sure that the curves are linear: press Home to view everything and make the curves linear using Shift+E. Now render a single frame (F12); it should not take longer than 3 seconds.
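If you prefer a script, the following is roughly the scripted equivalent of selecting everything and making it linear, by setting the interpolation of all compositor keyframes:

```python
import bpy

tree = bpy.context.scene.node_tree
for fcurve in tree.animation_data.action.fcurves:
    for point in fcurve.keyframe_points:
        point.interpolation = 'LINEAR'
```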

Once this is set up, render the animation to individual images (full HD PNGs will need about 2 GB of disk space) and later combine them into a video as described by Blenderguru before. With this, I get a full HD video in less than one hour, which looks similar to this (note that this early version has non-linear keyframes, which looks a bit odd):
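For reference, the render settings can also be set from Python; the resolution and output path here are just examples:

```python
import bpy

scene = bpy.context.scene
scene.frame_start = 0
scene.frame_end = 1000
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//frames/fog_"
bpy.ops.render.render(animation=True)   # writes the PNG sequence
```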

Download fog animation in full HD (26 MB) (please right-click and save; my server’s bandwidth is probably too low for direct streaming)

I’m sorry I could not provide a video tutorial about this, but I’m going on holiday and wanted to share it with you before I leave. Please post any suggestions or improvements in the comments.

Foggy mountains

For those who have seen the sneak preview of the Nature Academy tutorial called “Create Realistic Snowy Mountains”, I’d like to critique the resulting render a little bit. So that you know what I’m talking about, please have a look at week eight (“Mountains”). I want to show you how fog and mountains behave together in reality, because I’m looking forward to more realistic scenes – I’m no artist myself.

The first thing to know is that fog stays near the ground. When it is up in the sky, fog is called “clouds” :-) . Maybe you have looked straight up into the sky on a foggy day and noticed that the sky is actually blue, while at the same time you could see almost nothing when looking horizontally in front of you (due to the fog).

Let me show an example photo taken in Munich, Germany. Munich lies near the Alps, which have exactly the kind of mountains Andrew is creating in his tutorial. In that photo you will clearly recognize that there is more fog at the bottom of the mountains than at the top. And of course there is less fog in front of nearby things like the houses – for nearby objects close to the ground the Z buffer can be used as is.

Back to Andrew’s tutorial: he is basically using the compositor and the Z buffer to create dust or fog. For those who downloaded the video and still have it, it is at position 1:01:21. He actually announced that he would add atmospheric glow (at 0:59:44), but anyway…

Using the Z buffer in an unmodified way means that the further away things are, the more fog there will be. It does not account for the fact that looking up towards the sky reduces the amount of fog. Therefore I recommend modifying the Z buffer before turning it into fog.

How can we achieve this in Blender? Obviously the problem comes from the compositor, so we will also fix it there. But first we need an additional input for the fade-out, and it will be a texture. The texture settings are as follows (a small script version follows the list):

  • Type: Blend
  • Blend: Vertical
  • Check Ramp
  • Use white at the left
  • Use black at whatever position matches your scene best
  • Select Ease
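
The same texture can be created from Python; the name “FogFade” and the position of the black stop are my own choices:

```python
import bpy

tex = bpy.data.textures.new("FogFade", type='BLEND')
tex.progression = 'EASING'            # "Ease"
tex.use_flip_axis = 'VERTICAL'        # vertical blend
tex.use_color_ramp = True             # "Ramp"
tex.color_ramp.elements[0].position = 0.0
tex.color_ramp.elements[0].color = (1.0, 1.0, 1.0, 1.0)  # white at the left
tex.color_ramp.elements[1].position = 0.7                # match your scene
tex.color_ramp.elements[1].color = (0.0, 0.0, 0.0, 1.0)  # black
```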

With this texture set up, we will modify the Z buffer in the compositor as in the screenshot below. To highlight where the fog will end up in the final render, I used a red fog color here. The nodes are, from left to right (a script sketch follows the list):

  • Render Layer (always needed)
  • Map Value (always needed for Z buffer to create fog)
  • new: Texture (an input node; select the texture we just created)
  • new: Multiply (a color mix node)
  • Mix (always needed to mix the fog into the scene)
  • Composite or Viewer node at the end
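
Here is a script sketch of that chain; the Map Value size and the red fog color are example values, not taken from the original screenshot:

```python
import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True   # older Blender: scene.render.layers[0].use_pass_z
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render_layer = tree.nodes.new('CompositorNodeRLayers')
z_socket = render_layer.outputs.get('Depth') or render_layer.outputs['Z']

map_value = tree.nodes.new('CompositorNodeMapValue')
map_value.size[0] = 0.01                 # scale the Z buffer into a usable range
map_value.use_min = True
map_value.min[0] = 0.0
map_value.use_max = True
map_value.max[0] = 1.0

fade = tree.nodes.new('CompositorNodeTexture')
fade.texture = bpy.data.textures["FogFade"]   # the blend texture from above

multiply = tree.nodes.new('CompositorNodeMixRGB')   # fade-out * Z-based fog
multiply.blend_type = 'MULTIPLY'
multiply.inputs['Fac'].default_value = 1.0

mix = tree.nodes.new('CompositorNodeMixRGB')        # blend the fog into the scene
mix.inputs[2].default_value = (1.0, 0.0, 0.0, 1.0)  # red fog, easy to spot

composite = tree.nodes.new('CompositorNodeComposite')

tree.links.new(z_socket, map_value.inputs[0])
tree.links.new(map_value.outputs[0], multiply.inputs[1])
tree.links.new(fade.outputs['Value'], multiply.inputs[2])
tree.links.new(render_layer.outputs['Image'], mix.inputs[1])
tree.links.new(multiply.outputs[0], mix.inputs['Fac'])
tree.links.new(mix.outputs[0], composite.inputs['Image'])
```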

Thank you for reading. Please share your opinions using the comment function.