How to Create VR Video in Blender

Making 360° 3D video for fun and profit, with free software!

If you want to turn your own 3D scene into a VR video and share it on YouTube, you’re in the right place. Follow this guide, and you’ll end up with an immersive 3D video your viewers will want to reach out and touch.

Note that when I wrote this article, Blender 2.79 was the newest version — that’s no longer the case! If you’re using a newer version, you’ll likely need to adjust these instructions. Hopefully I’ll be able to spend the time on an updated article soon.

For those who are already experienced with Blender and have a scene ready to go, here’s what you need to know. There’s a lot more detail below, so if you’re confused, keep reading.

Blender ships with two render engines: Blender Render (also called Blender Internal) and Cycles. Of the two, only Cycles can handle the spherical video tricks used in this guide, so whatever scene you choose must be set up for Cycles. If you’re using the seaport scene — or many other scenes designed for photorealism — you’re all set. Scenes originally intended for the Blender Render engine, though, might require some effort to convert. The render engine selector is at the top of the UI, near the logo and version number, like so:

Render Engine Selector

Once you have a scene that’s textured and lit for Cycles, you’ll want to add some motion to make the video interesting. For example, this seaport has an arch, so why not take our viewers on a trip through it?

To do that, we’ll need to create two “key frames” in Blender’s animation timeline: one key frame for the start location and another for the end location. Once those keys are set, Blender handles smoothly moving the camera between them over time.

In a nutshell, the steps are:

1. Move the timeline cursor to the first frame of the animation.
2. Position the camera where you want the shot to start.
3. With the camera selected, press I and choose LocRot to insert a key frame.
4. Move the timeline cursor to the last frame.
5. Move the camera to its ending position.
6. Press I and choose LocRot again to insert the second key frame.

Check your work by peeking through the camera (press 0 on the numpad or browse the 3D Viewport menu: View > Cameras > Active Camera), then watching the animation (Alt+A) and tweaking as necessary. You can add more key frames if you wish, but keep things as subtle and smooth as possible. Your viewers will feel uncomfortable or sick if the camera motion changes sharply in VR.

Since you’re not doing a typical flat image render, you’ll need to dive into some nooks and crannies of Blender’s settings where you’ve probably never had a reason to wander before. Our first stop is the Render Layers properties, where you’ll need to enable the Views section and make sure Stereo 3D is selected.

Render Layers Views, after following this guide

Next up is Render properties. In the Dimensions section, select a render resolution that is twice as wide as it is tall and a power of 2 in each dimension. For example, 2048 by 1024 would work, since 2048 = 2¹¹ and 1024 = 2¹⁰ (and half of 2048). YouTube will accept up to 8192 by 4096, but I’d suggest starting out small and quick for testing unless you’ve got either a free render farm or superhuman patience. For simplicity’s sake, make sure the percentage resolution scale slider is set to 100%, 50%, or 25%, since other values will violate the power-of-two rule. As an example, for my test render I set a base resolution of 2048 by 1024 and set the resolution scale slider to 25%, for a final resolution of 512 pixels wide by 256 pixels tall. This is tiny but fast, perfect for a test render.
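If you’d rather not eyeball the arithmetic, a few lines of Python can sanity-check a candidate resolution against both rules. The function names here are just for illustration:

```python
def is_power_of_two(n):
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and n & (n - 1) == 0

def valid_vr_resolution(width, height):
    # The two rules from above: a 2:1 aspect ratio, and each dimension
    # must be a power of two.
    return width == 2 * height and is_power_of_two(width) and is_power_of_two(height)

print(valid_vr_resolution(2048, 1024))            # the base resolution -> True
print(valid_vr_resolution(2048 // 4, 1024 // 4))  # the 25% test render (512 x 256) -> True
print(valid_vr_resolution(1920, 1080))            # ordinary HD -> False
```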

In the Output section of the Render properties, you have a choice to make: tell Blender to produce a single video file, or tell Blender to produce a folder full of individual frame images. Having Blender build the video file for you right away is simplest, but risky — you’ll lose the whole render if something goes wrong halfway through. With individual image files, even the worst crash will cost you at most a single frame, but the trade-off is that you’ll have to do slightly more legwork to assemble the images into a video file. In short: video is easier, images are safer.

No matter which option you go with, set the Views Format to Stereo 3D, set the Stereo Mode to Top-Bottom, and choose a destination folder you’ll be able to find easily later.

For the easier route, choose output type “FFmpeg video.” Then in the Encoding section’s Presets dropdown, choose “h264 in MP4.” For your final render you’ll probably want to increase the quality settings, but for our low-resolution test render there’s no need.

Render Output and Encoding settings, if using FFmpeg video.

If you want the safer route, choose output type “PNG.”

Render Output settings, if using individual frames.

Later, when Blender’s done rendering all of the individual frames as .png images, you’ll need to run a separate program to turn the images into a video. In my case, I ran FFmpeg myself instead of having Blender run it for me.
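As a rough guide, the snippet below assembles a typical FFmpeg invocation. The frame-name pattern, frame rate, and output name are all placeholders, so substitute the values from your own render settings, and keep the whole command on a single line when you run it:

```python
# Assemble a typical FFmpeg command for stitching numbered PNG frames
# into an H.264 MP4. Every concrete value below is a placeholder.
frames = "frames/%04d.png"   # Blender writes zero-padded frame numbers
fps = 24                     # match your scene's frame rate

command = " ".join([
    "ffmpeg",
    "-framerate", str(fps),  # input frame rate
    "-i", frames,            # the numbered input images
    "-c:v", "libx264",       # H.264, matching Blender's "h264 in MP4" preset
    "-pix_fmt", "yuv420p",   # pixel format most players can handle
    "output.mp4",
])
print(command)
```

Paste the printed command into a terminal (with FFmpeg installed), run from the folder above your frames directory.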

In the Camera properties, expand the Lens section and choose Panoramic, then in the Type drop-down, choose Equirectangular. If you don’t see Equirectangular in the list of types, double check that your rendering engine is Cycles and not Blender Internal. In the Stereoscopy section, choose Off-Axis, and check the box for Spherical Stereo.

This next part is going to require us to take a break from tedious box-checking and do some arithmetic. To make the 3D effect in our final video both comfortable and realistic, we need to supply Blender with correct values for Convergence Plane Distance and Interocular Distance. If we set the numbers too large, the depth will be over-exaggerated. Too small, and it will be too subtle, making our scene feel flat.

Blender has its internal measurements set to meters (unless you’ve fiddled with the settings), so any default numbers are based on the assumption that 1 Blender unit in your scene corresponds to 1 meter in real life. For example, if our scene is scaled down to half the scale of real life, then we’ll need to divide Blender’s defaults by 2 to match.

So how do we determine our scene’s scale? In the seaport scene, we have 3D models of buildings with doors, and we can compare the height of these doors with the height of a real door, which should be around 80 inches (2.032 meters). Selecting a door model in the scene, I see that it’s 0.323 meters (just over a foot!) tall, meaning our scene is scaled to roughly 0.323 / 2.032, or 1 / 6.291, and that we should divide Blender’s default values by 6.291 to make them realistic. Blender’s defaults were 1.95 meters for Convergence Plane Distance and 0.065 meters for Interocular Distance, so after dividing by 6.291 they become 0.310 and 0.010 respectively.
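The same arithmetic in Python, using the numbers from the door example:

```python
# Work out the scene's scale from a known reference object, then rescale
# Blender's default stereo settings to match.
scene_door = 0.323   # door height in the scene, in Blender units (meters)
real_door = 2.032    # a typical real-world door: ~80 inches in meters

divisor = real_door / scene_door   # how many times smaller than life the scene is

convergence = 1.95 / divisor       # Convergence Plane Distance default, rescaled
interocular = 0.065 / divisor      # Interocular Distance default, rescaled

print(round(divisor, 3))           # ~6.291
print(round(convergence, 3))       # ~0.31
print(round(interocular, 3))       # ~0.01
```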

The keyboard shortcut to render a single frame is F12. Don’t be alarmed when it starts to render in a dim red color — that’s Blender’s way of indicating that you’re only seeing one eye’s perspective (the other eye will be blue, à la old-school anaglyph 3D glasses). If your settings are correct, you should see something like this within the Blender UI:

If so, then it’s time to render your low resolution test video. Click that Animation button at the top of the Render properties panel, and sit back and wait. Depending on your computer hardware, the Blender settings you chose, and the complexity of your scene, this could take anywhere from minutes to weeks.

When the rendering and encoding are all finished, you should have an odd-looking square video with one eye’s perspective positioned above the other. It should look something like this:

Launch the Metadata Injector, then click Open and select your video. Check the boxes for “My video is spherical (360)” and “My video is stereoscopic 3D (top/bottom layout),” and then click “Inject metadata” to choose where the modified video will be saved.

The settings to use in Metadata Injector.

Now for the big moment! Your low-resolution test video is ready for prime time, so upload it to YouTube just like any other video. When processing is complete, you’ll be able to view it in 360° like this:

Besides the low resolution, it looks good! If you view it on a mobile device from within the YouTube app, you’ll see a Google Cardboard icon.

Google Cardboard icon, in the lower right corner of the YouTube mobile app.

Tap that icon, and you’ll get immersive 3D, like this:

Take a test drive, and make sure everything looks as expected. If so, it’s time to crank up the resolution in Blender (YouTube accepts a max resolution of 8192 by 4096, if you want to go all-out) and then repeat the last few steps. Specifically, you’ll want to increase the video encoding quality settings, render & encode a new video, inject the metadata, and upload.

That’s it — you now have everything you need to create beautiful video experiences in immersive 3D! Let your creativity run wild, and share your hard work with the world. My own full-resolution render is at the top of this page as an example. Have fun!

Thanks for making it this far. If you found this useful or interesting, please give the clap icon a few clicks below. Not only does that help other readers find this more easily, but also it tells me that I was able to help someone out, which motivates me to write more guides like this one. Thank you!
