Pretty cool, I think. http://www.tv-tokyo.co.jp/wbs/2005/08/2 ... ma/tt.html
This technology is called 'Motion Portrait' and was developed by the Sony-Kihara Research Center, which designed the Graphics Synthesizer for the PS2. It generates realtime (30 fps) 3D facial animation from a single 2D picture, and it runs on an ordinary PC (you don't have to have a massive render farm). As you can see in the movie, you can add facial animation to anything, including pineapples and anime characters. The researcher says in the movie that its application to games is interesting, for example animating your own face or an anime character's face in a game.
How much will mp be of use in AMV's?
- giga_d
- Joined: Wed Oct 20, 2004 4:09 pm
giga_d wrote: How much will mp be of use in AMV's?
- Zarxrax
- Joined: Sun Apr 01, 2001 6:37 pm
- Contact:
- Otohiko
- Joined: Mon May 05, 2003 8:32 pm
Zarxrax wrote: Looks awesome, but I'll believe it when I see the software.
Yeah, the demos can be a little deceiving.
I can believe it's possible though; just depends on the level of quality.
That said, I've always thought that writing an auto-lip-sync program based on rudimentary phonetics/phonology principles wouldn't be too complicated.
The Birds are using humanity in order to throw something terrifying at this green pig. And then what happens to us all later, that’s simply not important to them…
- ssj4lonewolf
- Joined: Fri Jan 28, 2005 11:24 am
- Location: Stuck in Hell, i mean Phoenix....
Simply amazing; however, your comp would probably have to be pimp shit in order to run it. On my comp I can barely run Vegas full time...
Oh god, that black dude with the afro is always making those damn trash ass music hip hop amvs...he needs to do something with techno or rock....
.......as if I would do something like that.
おおかみなく (the wolf cries)
- Otohiko
- Joined: Mon May 05, 2003 8:32 pm
ssj4lonewolf wrote: Simply amazing; however, your comp would probably have to be pimp shit in order to run it. On my comp I can barely run Vegas full time...
I don't think it necessarily has to be. Even look at Half-Life 2 - they managed to put out some fairly impressive facial animation there, in real time, with very modest hardware requirements.
Naturalistic speech production has been studied in a lot of detail for a while. If I were a little less lazy, I could probably apply some of the phonetics/phonology I've learned in the last few years to making better lip-sync in AMVs (but eh...).
Personally though... I have suspicions that, even if it works as advertised, it may end up producing a lot of lazy lip-sync in AMVs. People will naively decide that the program will do everything for them, not really pay attention to nuances, and as a result we may just end up with much of what we have now, only with 50x more laziness
(on the bright side, those who use this intelligently may end up with some very neat stuff)
-
trythil
- is
- Joined: Tue Jul 23, 2002 5:54 am
- Status: N͋̀͒̆ͣ͋ͤ̍ͮ͌ͭ̔̊͒ͧ̿
- Location: N????????????????
Otohiko wrote: That said, I've always thought that writing an auto-lip-sync program based on rudimentary phonetics/phonology principles wouldn't be too complicated.
It's been done to an extent by Square Pictures, at least; I'd bet that other animation houses do something similar.
Basically, create a library mapping phonemes to meshes; then, given a line of dialogue, select where each mesh should appear in line with its phoneme. The mesh is then tweened by the computer.
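As a rough sketch of that idea (everything here is made up for illustration: the phoneme names, the tiny 2-D vertex lists standing in for real 3D meshes, and the plain linear tween), the phoneme-library-plus-tweening approach might look like:

```python
# Sketch of phoneme-driven lip-sync: map phonemes to mouth "meshes"
# (here just lists of (x, y) vertices) and linearly tween between
# keyframed phonemes along a timeline. Illustrative only.

from bisect import bisect_right

# Tiny viseme library: phoneme -> mouth shape.
VISEMES = {
    "sil": [(0.0, 0.0), (1.0, 0.0)],   # mouth closed
    "aa":  [(0.0, -0.5), (1.0, 0.5)],  # wide open
    "oo":  [(0.2, -0.3), (0.8, 0.3)],  # rounded
}

def lerp_mesh(a, b, t):
    """Linearly interpolate two meshes with matching vertex counts."""
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(a, b)]

def mouth_at(keyframes, time):
    """keyframes: sorted list of (time, phoneme). Return the tweened mesh."""
    times = [t for t, _ in keyframes]
    i = bisect_right(times, time) - 1
    if i < 0:
        return VISEMES[keyframes[0][1]]
    if i >= len(keyframes) - 1:
        return VISEMES[keyframes[-1][1]]
    t0, p0 = keyframes[i]
    t1, p1 = keyframes[i + 1]
    frac = (time - t0) / (t1 - t0)
    return lerp_mesh(VISEMES[p0], VISEMES[p1], frac)

# A phoneme timeline for a short utterance; sample it at any playback time.
line = [(0.0, "sil"), (0.2, "aa"), (0.5, "oo"), (0.8, "sil")]
mesh = mouth_at(line, 0.35)  # halfway between "aa" and "oo"
```

In a real pipeline the keyframe times would come from a forced-alignment pass over the audio rather than being typed in by hand, and the meshes would be full blend-shape targets, but the control flow is the same.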
- Castor Troy
- Ryan Molina, A.C.E
- Joined: Tue Jan 16, 2001 8:45 pm
- Status: Retired from AMVs
- Location: California
- Contact:
- DeinReich
- Joined: Sun Mar 27, 2005 10:40 am
- Location: College
- Otohiko
- Joined: Mon May 05, 2003 8:32 pm
trythil wrote: Basically, create a library mapping phonemes to meshes; then, given a line of dialogue, select where each mesh should appear in line with its phoneme. The mesh is then tweened by the computer.
Well, sure. In industry, I'm fairly sure that'd be standard by now; it'd be pretty ridiculous to do all that stuff by hand.
Anime is generally way the hell simpler than this. In fact, I think it'd feel pretty awkward if Pikachu's mouth suddenly had full-fledged 30-fps animation for his, um... expressionisms
It's really a BIG exception in conventional 2-D anime when a character speaks (unless there's a deliberate close-up on the lips) and you see more than a few actual frames for mouth position (closed vs. partially open vs. fully open, rounded/unrounded). It's rare to have anything beyond that; luckily, most of us usually don't pay attention as long as the mouths are flapping in rhythm. That's why I think some of this may well end up as overkill.
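The handful-of-mouth-frames approach described above is simple enough to sketch as a trivial loudness-to-cel quantizer (the thresholds, frame names, and envelope values below are all made up for illustration):

```python
# Sketch of anime-style "mouth flap" lip-sync: pick one of three stock
# mouth drawings per video frame based only on the audio's loudness.
# Thresholds and names are illustrative, not from any real tool.

def mouth_frame(amplitude):
    """Map a 0..1 loudness value to one of three stock mouth cels."""
    if amplitude < 0.1:
        return "closed"
    if amplitude < 0.5:
        return "half_open"
    return "open"

# A fake per-frame loudness envelope (one value per 1/30 s video frame).
envelope = [0.02, 0.3, 0.7, 0.6, 0.2, 0.05]
flaps = [mouth_frame(a) for a in envelope]
```

Three cels flapping in rhythm is roughly what limited animation gets away with; anything phoneme-aware, as discussed above, is already a step beyond most TV anime.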
- SarahtheBoring
- Joined: Sun Apr 07, 2002 11:45 am
- Location: PA, USA
- Contact:
You'll need to stock a lot of Ethers, but...
...oh.
As far as overuse goes, I think the community will learn very fast what sloppy use of a program like that looks like, and opinionate accordingly.
...and as for full mo-cap precisely articulated Pikachu, I'm getting mental images of that what's-it-called animation that they use on Conan. O_o Now that's disturbing. Heh.
