VIRTUAnimator 0.2a – Saving and Loading

It’s finally here! Saving and loading are now possible inside VIRTUAnimator, so you can save your animations and share them with others. You can also continue editing saved or loaded animations, extending them further.
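
Under the hood, this boils down to serializing each timeline’s keyframes to disk. Here’s a rough sketch of what that might look like using Unity’s JsonUtility; the type and field names are hypothetical, just to illustrate, not the app’s actual format:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Hypothetical keyframe data; the real save format may differ.
[Serializable]
public class KeyframeData
{
    public float time;
    public Vector3 position;
    public Quaternion rotation;
}

[Serializable]
public class AnimationSave
{
    public string objectName;
    public List<KeyframeData> keyframes = new List<KeyframeData>();
}

public static class AnimationIO
{
    // Serialize an animation to JSON and write it to disk.
    public static void Save(AnimationSave anim, string path)
    {
        File.WriteAllText(path, JsonUtility.ToJson(anim));
    }

    // Read a saved animation back so it can be edited further.
    public static AnimationSave Load(string path)
    {
        return JsonUtility.FromJson<AnimationSave>(File.ReadAllText(path));
    }
}
```

A JSON-style format like this also keeps saved animations human-readable, which makes sharing them easier.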

UI interaction is also in, and works kind of like a pointer: aim the controller at a UI element and press a button to interact. This is how you create new frames per object and scroll through the timelines.
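
In Unity terms, a pointer like this is usually just a raycast from the controller’s transform. Here’s a hedged sketch; the “UI” layer usage, the button name, and the OnPointerActivate message are placeholders, not the app’s actual setup:

```csharp
using UnityEngine;

// Hypothetical pointer: raycast from the controller and notify whatever
// UI element it hits when the interact button is pressed.
public class ControllerPointer : MonoBehaviour
{
    public float maxDistance = 10f;

    void Update()
    {
        // Assumes world-space UI elements carry colliders on the "UI" layer.
        Ray ray = new Ray(transform.position, transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, LayerMask.GetMask("UI")))
        {
            if (Input.GetButtonDown("Fire1"))
            {
                // SendMessage keeps the sketch decoupled from concrete UI scripts.
                hit.collider.SendMessage("OnPointerActivate", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```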

Character objects now have a flattened red cylinder at their base that lets you control their root position (previously, the body acted funny when moved away from the root position, so this remedies that). The demo character has also had all of its ragdoll components removed, since they were getting picked up by the save system, meaning 30 extra objects’ positions and rotations were being saved with every keyframe and interpolated on every animated frame!
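
One way to keep helper objects like that out of saves is to record only transforms that are explicitly marked as animatable. A minimal sketch of the idea; the Recordable component is hypothetical:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical marker component: only transforms carrying this get recorded,
// so helper objects (ragdoll colliders, joints, etc.) stay out of save files.
public class Recordable : MonoBehaviour { }

public static class PoseRecorder
{
    // Collect just the marked transforms under a character's root.
    public static List<Transform> GetRecordableTransforms(GameObject root)
    {
        var result = new List<Transform>();
        foreach (var r in root.GetComponentsInChildren<Recordable>())
            result.Add(r.transform);
        return result;
    }
}
```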

Here’s a video demonstrating it:

There were also a few fundamental changes to how keyframes are saved, but I won’t bore you with the details.

So that’s it for now! Next up, I’ll be working on letting you re-edit existing keyframes, as well as insert and delete keyframes as needed. We’ll see how that goes!

Until next time!

VIRTUAnimator Update Alpha 0.1

Update so soon!

This update includes a lot of new and fixed things, the biggest of which is proper per-object timelines being completely (functionally) implemented. Timelines now appear above their object, allowing you to work on multiple timelines at once if you want.
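
Keeping a timeline hovering above its object is straightforward with a world-space canvas that follows the object and billboards toward the camera. A rough sketch, with a made-up TimelineFollower component:

```csharp
using UnityEngine;

// Hypothetical sketch: keep a world-space timeline canvas hovering above
// its object and facing the camera, so several can be worked on at once.
public class TimelineFollower : MonoBehaviour
{
    public Transform target;        // the object this timeline belongs to
    public float heightOffset = 1.5f;

    void LateUpdate()
    {
        transform.position = target.position + Vector3.up * heightOffset;
        // Billboard toward the main camera so the timeline stays readable.
        transform.rotation = Quaternion.LookRotation(
            transform.position - Camera.main.transform.position);
    }
}
```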

Content can now be loaded in at runtime, though lots of work still needs to be done here. Currently, you’re required to export your model/rig/etc. from the Unity3D editor as an AssetBundle. If this remains a requirement, I’ll see what else I can let players do beyond just exporting (like adding lights or particles). So far, rigged characters can be imported and automatically IK-ified, and other types of objects should be even easier to import.
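
The import itself would go through Unity’s AssetBundle API (exact method names have varied between Unity versions). A minimal sketch of loading and instantiating a bundled prefab; the path and asset name are placeholders, and the IK-ification step is only hinted at in a comment:

```csharp
using UnityEngine;

// Minimal runtime-import sketch using the AssetBundle API.
public static class RuntimeImporter
{
    public static GameObject LoadImportedObject(string bundlePath, string assetName)
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        if (bundle == null)
        {
            Debug.LogError("Failed to load AssetBundle: " + bundlePath);
            return null;
        }

        GameObject prefab = bundle.LoadAsset<GameObject>(assetName);
        // A real importer would walk the rig here and attach IK handles.
        return Object.Instantiate(prefab);
    }
}
```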

I’ve made attempts at rendering an image sequence on a separate thread (to prevent the app from stuttering), but Unity’s methods don’t work outside the main thread. So, I can think of two options:
First, and most likely, is to just have a separate desktop app that exports animations. That way it can stutter all it wants, and it doesn’t matter since it’s not in VR.
Second, and something I don’t really want to get into, is to integrate a graphics library and have that handle exporting each ‘frame’. While it’d probably let you export properly, it might be excessive for a VR app. I’ll have a better idea once I have some VR hardware to test with. (See the sketch of the first option below.)
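
Here’s roughly what that first option could look like: capture frames on the main thread, as Unity requires, and accept the stutter since the exporter isn’t running in VR. The calls (ReadPixels, EncodeToPNG, etc.) are real Unity APIs, but the class itself is just a sketch:

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Run with StartCoroutine(Export(folder, frameCount)) from another script.
public class SequenceExporter : MonoBehaviour
{
    public Camera exportCamera;
    public int width = 1920, height = 1080;

    public IEnumerator Export(string folder, int frameCount)
    {
        var rt = new RenderTexture(width, height, 24);
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);

        for (int i = 0; i < frameCount; i++)
        {
            yield return new WaitForEndOfFrame();   // let rendering finish first

            exportCamera.targetTexture = rt;
            exportCamera.Render();
            RenderTexture.active = rt;
            tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);  // main-thread only
            exportCamera.targetTexture = null;
            RenderTexture.active = null;

            File.WriteAllBytes(Path.Combine(folder, "frame_" + i + ".png"),
                               tex.EncodeToPNG());
        }
    }
}
```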

HERE’S A VIDEO:

VIRTUAnimator

VIRTUAnimator! That’s now the name of the VR Animator app.

Some decent progress has been made, with motion-controller support added…ish. Without actual controllers to test with, I can’t be certain, but the scripts have been written to be pretty flexible. As long as the controller can move in 3D space, you should be able to move objects easily. Each controller can select a single object, and movement is mapped directly to the controller’s movement.

My original plans for timeline display were thrown out the window when I realized world-space UI can literally be as big as you can see, which makes those plans feel pretty confined by comparison. I need to come up with UI that takes better advantage of VR, though without having ever used VR, it’s hard to comprehend the possibilities.

Which brings me to progress #2: moving objects is now implemented! After experimenting with a Physics/Rigidbodies setup, I’ve decided to stick with explicit transform control: objects won’t move unless you move them, and characters will be IK-based. I tried getting ragdolls to work, along with physics objects for props and the like, but physics wouldn’t cooperate well with a keyframe-based system, especially if you want to go from animated to physics-based and back to animated.
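
To give an idea of what explicit transform control means in practice, here’s a hedged sketch: while a grab button is held, the selected object follows the controller’s frame-to-frame delta one-to-one. The “Grab” input and the selection helper are placeholders:

```csharp
using UnityEngine;

// Sketch of explicit transform control: no physics, just direct movement.
public class ControllerGrab : MonoBehaviour
{
    Transform grabbed;
    Vector3 lastControllerPos;

    void Update()
    {
        if (Input.GetButtonDown("Grab") && grabbed == null)   // "Grab" is a placeholder axis
        {
            grabbed = FindNearestSelectable();
            lastControllerPos = transform.position;
        }
        else if (Input.GetButtonUp("Grab"))
        {
            grabbed = null;
        }

        if (grabbed != null)
        {
            // Apply the controller's frame-to-frame delta directly.
            grabbed.position += transform.position - lastControllerPos;
            lastControllerPos = transform.position;
        }
    }

    Transform FindNearestSelectable()
    {
        // Placeholder selection: grab whatever collider is near the controller.
        Collider[] hits = Physics.OverlapSphere(transform.position, 0.1f);
        return hits.Length > 0 ? hits[0].transform : null;
    }
}
```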

So, progress is going well! Next on the agenda is work on the timeline system, followed by an object/character spawn system. Integration will depend on what I can come up with for UI, but the functionality should be doable for now.

Until next update!

VR Animation Tool v0.1

I’ve recently made huge progress on my runtime animation tool for VR. In a 20-day dev sprint, I managed to hammer out the timeline/keyframe system. It’s pretty basic at the moment and doesn’t make a per-object timeline yet, but the core functionality is there. Still no access to actual VR hardware, so it uses a more traditional setup for now. The idea is that, using the motion controls inside of VR space, you can pose a model and save that pose as a ‘Keyframe’ (currently represented by big brown boxes, but soon to be a screenshot of the keyframed pose). In between each pair of keyframes is a ‘Settings’ box, which currently displays the duration between the two keyframes. Tapping it brings up a window to edit all appropriate settings, such as ease-in/out timing (so that the animation isn’t entirely linear).
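
For the curious, ease-in/out between two keyframes can be as simple as a smoothstep curve applied to the interpolation parameter. A minimal sketch, not the tool’s actual code:

```csharp
using UnityEngine;

// Eased interpolation between two keyframes; the 'Settings' box between
// keyframes would supply the duration (and eventually the easing choice).
public static class KeyframeInterp
{
    // Smoothstep-style ease in/out so motion isn't entirely linear.
    static float EaseInOut(float t)
    {
        return t * t * (3f - 2f * t);
    }

    public static Vector3 Evaluate(Vector3 from, Vector3 to, float elapsed, float duration)
    {
        float t = Mathf.Clamp01(elapsed / duration);
        return Vector3.Lerp(from, to, EaseInOut(t));
    }
}
```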

It utilizes an IK-based model, so you don’t need to worry about manually rotating each joint: just move one of the big red nodes, and the character moves how you’d expect. I’ve also got the basics of a ragdoll-based model, but I won’t try getting it to work with the keyframes until I’ve seen how each feels to use in VR. The biggest benefit of a ragdoll-based model is that I could probably allow hot-loading of custom models and generate a ragdoll at runtime. The keyframes would simply store each simulated part’s data, and everything would work mostly the same, except you wouldn’t be limited to biped models!

I’ll try to post here about each milestone (most likely every time I make a video about it), since there’s only so much I can say in a video description or on Twitter.