Misaki

Spine Vtuber Prototype 1.0.4 works fine! This time I did not see any of the errors I saw yesterday. I also feel that faces can be detected well even at a distance.

I also tried changing the strength settings. Some of them (e.g. "face pitch" and "face yaw") could be adjusted for better results, but it was difficult to get good results for the eyes because they sometimes behaved poorly depending on the angle of the face. For example, turning the face down can cause the eyes to open wide.
As for the mouth, my skeleton setup is not good (it opens toward the top, which is not right), and I would like to fix it when I have time.

Anyway, it is wonderful how much this tool has improved in such a short period of time :yes:
Misaki
  • Posts: 842

SilverStraw

I think the eye issue could be resolved by editing the animation in Spine.
Misaki wrote:Anyway, it is wonderful how much this tool has improved in such a short period of time :yes:
Good thing they were simple fixes :wounded:.
SilverStraw
  • Posts: 76

cydoni

This is really cool! I haven't tried anything VTuber related until now so this was a fun experience for me. :grinteeth:


Dev stuff isn't my area of expertise so I'm not exactly sure what you can do, but here are all the thoughts I had while I was using it:
- It would be cool to see which files have been added to the drag-and-drop zone when they're dropped in, so you know they've been uploaded successfully without pressing the button below to check.
- I'm not sure what happened, but when I imported my latest rig, the import wouldn't load all of my assets. It worked when I used an older file, though, so it may be an issue with my file and not the prototype (the same thing happened recently when I used that file with Rhubarb as well). But I thought I'd at least mention it (see bad import jpg).
- Being able to adjust the strength of individual parameters was really helpful here. Before I recorded, my eyelids wouldn't open to their full open position so I adjusted that a bit.
- I made a rough test of some eye blinks, which I show in the video. I experimented with turning alpha to 0 on the lower lids so they would disappear when the eyes opened. It would be cool if there was a way to make the lids disappear just as the eyes open all the way.
- It would be even cooler if there was a way to use an entire animation as the motion rather than a single key (unless I set mine up incorrectly). For example, one animation could control the entire left-eye blink: the first frame would be the eye completely closed, and the last frame the eye completely open. The frames in between could then be refined to allow custom deformation rather than a straight linear shot from one position to the next. This could also help in choosing the best time to adjust draw order. I'm thinking of the Moho rigging process as inspiration here.



Overall really cool to use and I'm excited to see how you take this further!
You do not have the required permissions to view the files attached to this post.
cydoni
  • Posts: 3

SilverStraw

cydoni wrote:It would be cool to see which files have been added to the drag-and-drop zone when they're dropped in, so you know they've been uploaded successfully without pressing the button below to check.
Yes, I can do something about that.
cydoni wrote:I'm not sure what happened, but when I imported my latest rig, the import wouldn't load all of my assets. It worked when I used an older file, though, so it may be an issue with my file and not the prototype (the same thing happened recently when I used that file with Rhubarb as well). But I thought I'd at least mention it (see bad import jpg).
I do not know what caused the bad import either. It would be hard to figure out without the files or an error log.
cydoni wrote:Being able to adjust the strength of individual parameters was really helpful here. Before I recorded, my eyelids wouldn't open to their full open position, so I adjusted that a bit.
Yeah, not everyone is comfortable or able to stretch their facial features to the extreme. Even if they could, their face would fatigue eventually.
cydoni wrote:I made a rough test of some eye blinks, which I show in the video. I experimented with turning alpha to 0 on the lower lids so they would disappear when the eyes opened. It would be cool if there was a way to make the lids disappear just as the eyes open all the way.
It would be even cooler if there was a way to use an entire animation as the motion rather than a single key (unless I set mine up incorrectly). For example, one animation could control the entire left-eye blink: the first frame would be the eye completely closed, and the last frame the eye completely open. The frames in between could then be refined to allow custom deformation rather than a straight linear shot from one position to the next. This could also help in choosing the best time to adjust draw order. I'm thinking of the Moho rigging process as inspiration here.
The problem is that the live timeline (from face tracking) would conflict with the animation timeline in the animation mix alpha setup I have now. I think it is still possible, but it would require the calculations to be applied to the animation track's time itself, with the animation mix alpha held constant at one. This would require me to create a separate copy of the application to experiment with. Wait for the next update. :nerd:
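For the curious, driving a track by its time rather than its alpha could look something like this minimal sketch. It assumes a spine-ts-style TrackEntry (with trackTime, timeScale, animationEnd, and alpha properties) and is not the Prototype's actual code:

```javascript
// Pose a skeleton from a 0..1 face-tracking signal by moving the track's
// playhead directly, keeping the mix alpha constant at one.
// Hypothetical sketch; "entry" stands for a spine-ts TrackEntry.
function driveTrackByTime(entry, value) {
  entry.timeScale = 0; // freeze normal playback so only we move the playhead
  entry.alpha = 1;     // mix alpha stays constant at one
  var v = Math.min(Math.max(value, 0), 1);  // clamp the tracking signal
  entry.trackTime = v * entry.animationEnd; // map 0..1 onto the duration
  return entry.trackTime;
}
```

With a 1-second "face yaw" animation, a signal of 0.5 would pose the skeleton at the 0.5 s mark.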

---



I tested with only one eye, but your request to be able to use more than one keyframe in Spine Vtuber Prototype is possible! I keyframed the green eye at the halfway mark on the timeline and the red eye at the very end. I had set the FPS to 100, so each frame is 1% of the movement range. You can set the FPS to any value, but you have to keyframe within the one second. I still need to make the changes for the rest of the animation tracks and update it on itch.io :cooldoge:.

---

Spine Vtuber Prototype 1.0.5 update

  • Moved the names of uploaded files from the bottom of the page into the Drag and Drop Zone area. This makes it easier to see which files are loaded.
  • Changed how calculations are applied. This allows animators to use multiple keyframes on the timeline for each animation track. Animating within one second is recommended. Previously, each track only allowed one keyframe on the timeline.
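The first change could be wired up roughly like this; the element id and wiring are illustrative, not the Prototype's actual code:

```javascript
// Show the names of dropped files inside the drag-and-drop zone itself.
// "dropzone" is a hypothetical element id.
function fileNameList(files) {
  // pure helper: turn a FileList (or array of {name}) into display text
  return Array.from(files).map(function (f) { return f.name; }).join(", ");
}

function wireDropZone(zone) {
  zone.addEventListener("dragover", function (e) { e.preventDefault(); });
  zone.addEventListener("drop", function (e) {
    e.preventDefault();
    zone.textContent = "Loaded: " + fileNameList(e.dataTransfer.files);
  });
}
// wireDropZone(document.getElementById("dropzone"));
```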

https://silverstraw.itch.io/spine-vtuber-prototype

I have also updated the Spine Vtube Test Model to reflect the change in 1.0.5

https://silverstraw.itch.io/spine-vtube-test-model

Misaki

The new animation system looks really interesting! However, I tried v1.0.5 with my skeleton, which has not changed since the previous test, and it did not work. Nothing happens when I turn the camera on, and the model has been in a weird state from the start. Here is the screenshot:
Screen Shot 2022-08-02 at 9.33.24.png

spine_vtuber.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'length')
at l (spine_vtuber.js:1:12224)
at t.ondrop (spine_vtuber.js:1:12560)
Does the error log above seem to indicate the cause?

SilverStraw

I was able to recreate your error and patched it. I forgot that drag and drop uses a slightly different code than clicking to upload files.

The weird state of your model is normal and is separate from the error. Before, you were only allowed to keyframe the end state of the animation movement. Now that you can key more than one frame, you have to move the previous keyframe to where 1 second is. For example, if your Spine file is set to 60 FPS, 1 second is the 60th frame; if it is set to 30 FPS, then the 30th frame is the 1-second mark. Then you need to add a keyframe at frame 0. It will take some time to make the changes to the Spine file, but you are free to keyframe whatever you want within that 1-second timeline. Note: I set my FPS to 100 so that each frame is 1% of the movement range.

Misaki

Thank you for your quick response! I understand the new specification of the animations. I fixed my skeleton and now it is working very well! :D

It seems to be smoother than before; is this also thanks to the update? Also, when testing multiple face tracking before, opening the mouth as wide as possible made the model's mouth open wider than specified in the "mouth height" animation, but this problem has been fixed and the model no longer moves beyond the created animation. This is really awesome.

SilverStraw

Excellent. You got your model to work now.

I also noticed from the video that your model's animation got smoother. I think the update definitely played a role. As to why, I cannot give a concrete answer. I speculate it could be better in-betweening when there is more than one keyframe. It could also be that changing the track time performs better than changing the animation track alpha. Nate would probably know more about it.

The "the mouth would open wider than specified in the 'mouth height' animation" issue is a number capping problem. Out of the box, the Spine runtime caps the track time at the track's end time, while the animation track alpha is uncapped.
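To illustrate the difference, here is a hedged sketch (not runtime code; "entry" stands for a spine-ts-style TrackEntry):

```javascript
// Track time is clamped to the animation's end, so the pose can never
// overshoot; a value applied through alpha has no such cap.
function applyAsTrackTime(entry, value) {
  entry.trackTime = Math.min(value * entry.animationEnd, entry.animationEnd);
  return entry.trackTime;
}
function applyAsAlpha(entry, value) {
  entry.alpha = value; // nothing stops this from going past 1
  return entry.alpha;
}
```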

Misaki

Thank you for explaining! It seems like a lot of things work better by using the track time instead of the track alpha to change animations, right? That’s an interesting finding.

By the way, this change has also made creating the model much easier. It used to be difficult to make sure that the eye mesh deformed cleanly in the middle of the eye-open animation, but now it is easy to check and modify.
I think the animation of turning the face sideways could be improved, and I would like to modify it eventually.

SilverStraw

Misaki wrote:Thank you for explaining! It seems like a lot of things work better by using the track time instead of the track alpha to change animations, right? That’s an interesting finding.
The benefit of using the track time is much more noticeable on your character model than on my test model. Weighted meshes behave better when they are animated along a timeline as opposed to being posed by a single keyframe.

I started using the track alpha because I saw it in a demonstration code and mimicked the usage. I had to research the Spine runtime documentation to find information about track time.
Misaki wrote:By the way, this change has also made creating the model much easier. It used to be difficult to make sure that the eye mesh deformed cleanly in the middle of the eye-open animation, but now it is easy to check and modify.
Awesome! I always wanted the creation process to be easy.
Misaki wrote:I think the animation of turning the face sideways could be improved, and I would like to modify it eventually.
Are you speaking about your character model or the animation track setup in the Spine Vtuber Prototype?

Misaki

SilverStraw wrote:Are you speaking about your character model or the animation track setup in the Spine Vtuber Prototype?
Oops, sorry for not being clear about it. I meant improvements to my model. My current model can only turn left or right within a narrow angle, but thanks to the ability to move the character using the track time, I expect I should be able to switch attachments at a specific time and have it face completely sideways. I'm hoping to get that done at some point!

SilverStraw

Misaki wrote:My current model can only turn left or right within a narrow angle, but thanks to the ability to move the character using the track time, I expect I should be able to switch attachments at a specific time and have it face completely sideways. I'm hoping to get that done at some point!
I am not sure if the face tracking can turn all the way to the side and behave correctly. You can increase the strength of the face yawing signal so you do not have to turn your head completely sideways.
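That strength setting amounts to a gain on the tracked angle. A minimal sketch of the idea (the function name, strength value, and clamp limit are illustrative):

```javascript
// Amplify the raw head yaw so a partial real turn drives a full model turn,
// then clamp to the model's supported range.
function scaledYaw(rawDegrees, strength, limitDegrees) {
  var out = rawDegrees * strength;
  return Math.max(-limitDegrees, Math.min(limitDegrees, out));
}
// e.g. with strength 1.5 and a 60-degree limit, a 40-degree real turn
// already reaches the model's full sideways pose
```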

For the next update, I am working on adding index numbers to the face mesh to help sort out which tracked face corresponds to which Spine model when tracking multiple faces. I also want to work on saving and loading JSON files for the Prototype's settings.

Misaki

SilverStraw wrote:I am not sure if the face tracking can turn all the way to the side and behave correctly.
Probably it is no problem, because what I'm planning is to make the character move more than the real face: for example, turning sideways about 60 degrees while the real face only turns about 40 degrees.
SilverStraw wrote:For the next update, I am working on adding index numbers to the face mesh to help sort out which tracked face corresponds to which Spine model when tracking multiple faces. I also want to work on saving and loading JSON files for the Prototype's settings.
Those things would be very useful! I'm looking forward to the next update :D

SilverStraw

I have a working saving and loading mechanism added to the project. The downloadable JSON will have a ( .svp ) extension to avoid confusion with the Spine exported JSON. I will update https://silverstraw.itch.io/spine-vtuber-prototype in a few days. Hopefully there are no bugs to fix, so I can start working on additional animations and the skins implementation.
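Since a ( .svp ) file is just JSON under another extension, the save/load round trip could look roughly like this; the field names and browser wiring are illustrative, not the Prototype's actual code:

```javascript
// Serialize settings to ( .svp ) text and parse them back; a .svp file is
// plain JSON under a different extension. Field names are hypothetical.
function serializeSettings(settings) {
  return JSON.stringify(settings, null, 2);
}
function parseSvp(text) {
  return JSON.parse(text);
}

// Browser-side wiring (not invoked here): offer the settings as a download.
function downloadSvp(settings, filename) {
  var blob = new Blob([serializeSettings(settings)], { type: "application/json" });
  var a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = filename; // e.g. "model1-settings.svp"
  a.click();
  URL.revokeObjectURL(a.href);
}
```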

---

1.0.6

  • Added a button, located next to the property drop-down menu, to save the model's settings. The model number determines which model's settings the button saves. The downloadable file is JSON with a ( .svp ) file extension to avoid confusion with Spine's exported JSON file.
  • The drag-and-drop zone and the upload file button now read ( .svp ) files as JSON text.

https://silverstraw.itch.io/spine-vtuber-prototype

Misaki

I have updated my skeleton and deleted the old one.
chara-for-Spine-Vtuber-Prototype.zip


@SilverStraw I'm surprised the v1.0.6 release came sooner than I expected! I confirmed that it works fine, but personally I wanted to save the skin setting, so it was a bit sad that the skin is not included in what is saved in the .svp file. (In my case, my skeleton has green and blue eye skins, and it's a bit of a hassle to have to set the skin each time I load it.)

SilverStraw

Misaki, I fixed the ( .svp ) file to include the skin, using your character model for debugging. I had thought about adding the skin to ( .svp ) after implementing mix-and-match skins.

I could have updated sooner but the hot and humid weather delayed me.

Later in the day, after work, I experimented with injecting recorded video into the camera feed. I thought it would be interesting to share. Also, Misaki, I used your model for my test.

My test video is on DeviantArt because the stock video license didn't allow YouTube: https://wixmp-ed30a86b8c4ca887773594c2.wixmp.com/v/mp4/1e351697-997f-420a-b339-55dc604511c0/dfayw2p-8bac9980-e12a-4d89-b384-0978995dbef1.1080p.0ddcac221fef40d48b9618a24a4f0d73.mp4

Stock video from: https://mixkit.co/free-stock-video/closeup-of-young-woman-thinking-about-decision-15776/

Mixkit Restricted License

Misaki

SilverStraw wrote:Misaki, I fixed the ( .svp ) file to include the skin, using your character model for debugging. I had thought about adding the skin to ( .svp ) after implementing mix-and-match skins.
Great! This is really helpful. Thank you! :D

I uploaded another video of the current model, which I updated yesterday with a few more modifications (I made part of the outline of the face and the side hair disappear when the face turns left or right):


Regarding your new experiment, it is really interesting! It would make it possible to record facial footage outdoors and then, at home, replace the face in that video with a Vtuber model. Or, more simply, recorded video would be easy and helpful for testing. (I always had to call my husband to help when testing multiple face tracking, so it would be useful if I could record our faces and test multiple face tracking whenever I wanted.)

The heat is really severe, so please be careful of heat stroke!

SilverStraw

Misaki wrote:The heat is really severe, so please be careful of heat stroke!
Yes I am staying cool. Thank you for your concern.
Misaki wrote:Great! This is really helpful. Thank you! :D
You are welcome!
Misaki wrote:Regarding your new experiment, it is really interesting! It would make it possible to record facial footage outdoors and then, at home, replace the face in that video with a Vtuber model. Or, more simply, recorded video would be easy and helpful for testing. (I always had to call my husband to help when testing multiple face tracking, so it would be useful if I could record our faces and test multiple face tracking whenever I wanted.)
Yes, I plan to add uploading a video file to the Prototype. I still need more time to work on this feature.
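A rough sketch of the idea, assuming the tracker reads frames from an HTMLVideoElement (as most browser face trackers do); names and wiring are illustrative, not the Prototype's actual code:

```javascript
// Choose the tracking source: prefer an uploaded clip over the live camera.
function pickTrackingSource(cameraStream, videoFile) {
  return videoFile ? { kind: "file", source: videoFile }
                   : { kind: "camera", source: cameraStream };
}

// Browser-side wiring (not invoked here): point the tracker's <video> element
// at the uploaded file instead of the webcam stream.
function useVideoFile(videoEl, file) {
  videoEl.srcObject = null;                // detach the camera stream
  videoEl.src = URL.createObjectURL(file); // play the recorded clip instead
  videoEl.loop = true;
  return videoEl.play();                   // tracker keeps reading frames as usual
}
```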

-- Erika on Discord --
If only the yes jitter could be fixed, this could be usable! 😄 maybe with an average slider that the user can self adjust?
Animation smoothing function test: I removed the low-pass jitter filter and replaced it with unweighted moving-average smoothing.
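An unweighted moving average over the last N samples could be as simple as the sketch below; the window size would map naturally to Erika's "average slider" idea (the value 5 is just an example, not the Prototype's actual setting):

```javascript
// Returns a smoother: feed it raw tracking samples, get back the unweighted
// average of the last `windowSize` samples.
function movingAverage(windowSize) {
  var samples = [];
  return function (value) {
    samples.push(value);
    if (samples.length > windowSize) samples.shift(); // drop the oldest sample
    var sum = samples.reduce(function (a, b) { return a + b; }, 0);
    return sum / samples.length;
  };
}
// var smoothYaw = movingAverage(5); // then call smoothYaw(rawYaw) each frame
```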

https://wixmp-ed30a86b8c4ca887773594c2.wixmp.com/v/mp4/1e351697-997f-420a-b339-55dc604511c0/dfb3li1-a1190425-4803-495e-943d-7e5e6fdda611.1080p.351b7ec9d4a04555bb1f5979b35b00c7.mp4
SilverStraw
  • Posts: 76

Nate

FWIW, I like this smoothing function (based on ola.js):
function Smooth (f, t) {
    var p = performance, b = p.now(), o = f, s = 0, m, n, x, d, k = f;
    return function (v) {
        n = p.now() - b;
        m = t * t / n / n;
        d = o - f;
        x = s * t + d + d;
        if (n > 0) k = n < t ? o - x / m / t * n + (x + x - d) / m - s * n - d : o;
        if (v != void 0 && v != o) {
            f = k;
            b += n;
            o = v;
            s = n > 0 && n < t ? s + 3 * x / m / t - (4 * x - d - d) / m / n : 0;
        }
        return k;
    };
}
It is very easy to use:
// Setup: initial value, milliseconds over which to smooth
var smooth = new Smooth(0, 350);
// Then later, use the smoothed value returned by this instead of the raw value:
var smoothedValue = smooth(rawValue);
It gives a smoothed value that moves to the raw value over time. It's better than the usual interpolation between a start and end value because it dynamically adjusts the curve smoothly when the raw value is changing a lot. You can see it in action here. Try changing the time from 1000 to 1500 (click Run afterward) for a more interesting demo, showing what happens when the raw value hasn't been reached yet and the raw value changes.
Nate
  • Posts: 12016

cydoni

The updates since I last tried are awesome! Very excited about the timeline animation addition for getting inbetweens looking nice. Switching attachments, mesh deformation on eyelids, color changes, all that seems to work great. The skins selector is also nice to have. It does seem smoother too after the smoothing code was added, despite the fact that my lighting is very dim right now and my camera isn't picking up my face too well. :upsidedown: I'm going to make a more complex character rig and see how refined I can get with this. Hopefully I can get that done sometime this weekend. Great work!

Misaki

I have updated my model, and uploaded a new video using it:

(I noticed after I finished recording that an attachment for the mouth had scaled weirdly and was visible in front of the back hair… This is just my fault.)
The new model just replaces the hair and modifies the shape of the face to look better, but while I was at it, I did another test of multiple face tracking. At 0:58 in the video the face indices get swapped, but otherwise it was fine and fun :D

SilverStraw

1.0.7

  • Removed the low-pass filter for face tracking.
  • Added unweighted moving average smoothing for face tracking.

https://silverstraw.itch.io/spine-vtuber-prototype

https://itch.io/t/2255285/change-log
Nate wrote:FWIW, I like this smoothing function (based on ola.js):

It is very easy to use:

It gives a smoothed value that moves to the raw value over time. It's better than the usual interpolation between a start and end value because it dynamically adjusts the curve smoothly when the raw value is changing a lot. You can see it in action here. Try changing the time from 1000 to 1500 (click Run afterward) for a more interesting demo, showing what happens when the raw value hasn't been reached yet and the raw value changes.
Thank you for the information, Nate.
cydoni wrote:The updates since I last tried are awesome! Very excited about the timeline animation addition for getting inbetweens looking nice. Switching attachments, mesh deformation on eyelids, color changes, all that seems to work great. The skins selector is also nice to have. It does seem smoother too after the smoothing code was added, despite the fact that my lighting is very dim right now and my camera isn't picking up my face too well. :upsidedown: I'm going to make a more complex character rig and see how refined I can get with this. Hopefully I can get that done sometime this weekend. Great work!
Thank you, Cydoni. Keep me posted on your progress with your character rig. I hope you get some time and aren't distracted ;) . Also, congrats on being able to have your wedding ceremony.
Misaki wrote:(I noticed after I finished recording that an attachment for the mouth had scaled weirdly and was visible in front of the back hair… This is just my fault.)
The new model just replaces the hair and modifies the shape of the face to look better, but while I was at it, I did another test of multiple face tracking. At 0:58 in the video the face indices get swapped, but otherwise it was fine and fun :D
The model came out awesome. The hair shows detailed depth when the head turns. I noticed the mouth part sticking out at the back of the neck :D . I don't know if I can fix the face index when the face tracking loses focus on one of the faces. There might be a workaround, but I haven't figured it out yet. I am still working on integrating video upload for face tracking. Prototype 1.0.7 has face tracking smoothing, so the models should animate more smoothly. The jitter filter was not working that well.

---

Can a monkey vtube?

https://wixmp-ed30a86b8c4ca887773594c2.wixmp.com/v/mp4/1e351697-997f-420a-b339-55dc604511c0/dfba7l6-b9406265-ba8f-441f-a105-9f8949b3282a.1080p.c7f990c5b4274defbf0bdd3b27c4d2e3.mp4

Misaki

Wow, ha ha, your tool can even track the monkey's facial movements, awesome! :D :grinteeth:
I am glad you are using my new model so soon!

By the way, I was going to try 1.0.7, but I got an error and a resource failed to load:
Failed to load resource: the server responded with a status of 403 ()
Screen Shot 2022-08-15 at 16.31.18.png

Can you think of what might be causing it?

SilverStraw

Misaki wrote:Wow, ha ha, your tool can even track the monkey's facial movements, awesome! :D :grinteeth:
I tried a cat before, but it didn't work too well. Primate facial characteristics are similar to a human's, so I thought the face tracking AI model might be able to classify a monkey's face.
Misaki wrote:I am glad you are using my new model so soon!
I was researching constraints for a future physics constraint feature. My test model doesn't use any constraints, but your model does.
Misaki wrote:By the way, I was going to try 1.0.7, but I got an error and failed to load the resource:
That error is from my experimentation with loading a recorded video for face tracking in a future update; I didn't include the mp4 file in the folder. The Prototype should work without breaking.

Misaki

I tried 1.0.7 this morning and the skeleton loaded fine. I never tried to upload a video yesterday, just a skeleton file as usual, so perhaps the problem I encountered yesterday was on the server side. Anyway, it certainly seems to move more smoothly!


I have also made some minor modifications and uploaded the latest model here:
chara-for-Spine-Vtuber-Prototype.zip

The main modification this time was to separate the eye parts into smaller pieces and adjust the weights.

