A new take on video algorithms
- Village Idiot
- Joined: Fri May 03, 2002 12:17 am
- Location: Denver, CO
- Banned: Several times!
A new take on video algorithms
I was just mulling over my object-describing vector algorithm today, and I thought of another way to indirectly "recycle" data in a compression algorithm, although it IS very difficult:
Express it as a solid
Express the colour as a dimension, akin to density. The two dimensions of a frame become two dimensions of the index, and the stream of two-dimensional frames becomes a three-dimensional solid, with the colour as the data.
You may even extend this to more dimensions. Do you want one stream just for light intensity and another for colour, as in the UV split off from the Y in YUV? Light intensity is a great way to define and delineate an object.
Physically represented, this would look like printing out every frame of the video stream and putting the pages in one huge stack.
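A minimal sketch of the "solid" in Python, assuming frames arrive as floating-point RGB arrays and using the standard BT.601 weights for the Y/UV split (everything here is illustrative, not a real codec):

```python
import numpy as np

def frames_to_solid(frames):
    """Stack 2-D frames along a new time axis: the result is a 3-D 'solid'
    indexed by (time, row, column), with colour as the stored data."""
    return np.stack(frames, axis=0)

def split_luma_chroma(solid):
    """Split an RGB solid into a light-intensity (Y) solid and two colour
    (U, V) solids, using the BT.601 conversion weights."""
    r, g, b = solid[..., 0], solid[..., 1], solid[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # blue-difference chroma
    v = 0.877 * (r - y)  # red-difference chroma
    return y, u, v

# Stand-in video: 90 frames of 64x64 RGB noise.
frames = [np.random.rand(64, 64, 3) for _ in range(90)]
solid = frames_to_solid(frames)      # shape (90, 64, 64, 3)
y, u, v = split_luma_chroma(solid)   # three (90, 64, 64) solids
```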
Of course, you'd apply the usual DCT and wavelet transforms from there, though we'd run into a whole lot of problems once we expressed the solid as an assembly of parametric (rationally defined) objects.
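As a hedged sketch of that transform step: scipy's dctn generalises the DCT to any number of dimensions, so it can be applied to the whole solid at once (a real codec would split into blocks and quantise; a random array stands in for the luma solid above):

```python
import numpy as np
from scipy.fft import dctn, idctn

y = np.random.rand(90, 64, 64)  # stand-in for the luma "solid" above

# 3-D DCT across time, rows and columns at once.
coeffs = dctn(y, norm="ortho")

# Crude compression: keep only the largest 1% of coefficients.
threshold = np.quantile(np.abs(coeffs), 0.99)
coeffs[np.abs(coeffs) < threshold] = 0.0

# Invert to see what survives the truncation.
y_approx = idctn(coeffs, norm="ortho")
print(np.abs(y - y_approx).max())
```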
You'd have to store the video-stream "solid" as an assembly of vector-based gradient "regions": solids which may overlap one another. That would allow complex, live-action, high-motion movement to be stored, as well as anything involving blurs, soft focus and, of course, transparency (see the sketch below).
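One toy reading of those overlapping "regions", with 3-D Gaussian blobs standing in for whatever parametric primitive a real encoder would use: each region is just a centre, a radius and an intensity, and overlaps simply add, which is what makes blurs and transparency cheap (all names and parameters here are made up for illustration):

```python
import numpy as np

def render_regions(shape, regions):
    """Rebuild a (time, row, col) solid from overlapping Gaussian 'regions'.
    regions is a list of (centre, radius, intensity) tuples."""
    t, r, c = np.mgrid[0:shape[0], 0:shape[1], 0:shape[2]].astype(float)
    solid = np.zeros(shape)
    for (ct, cr, cc), radius, intensity in regions:
        d2 = (t - ct) ** 2 + (r - cr) ** 2 + (c - cc) ** 2
        solid += intensity * np.exp(-d2 / (2.0 * radius ** 2))
    return solid

# Two overlapping regions in a tiny 30x64x64 solid.
regions = [((10.0, 32.0, 20.0), 8.0, 1.0),
           ((20.0, 32.0, 44.0), 12.0, 0.5)]
solid = render_regions((30, 64, 64), regions)
```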
It's so complex that if I proposed this "idea", patented of course, to a good institute such as Fraunhofer IIS, they'd laugh me out. Infeasible, they'd say.
But the benefits would be tremendous. I don't even have to say how much discrete mathematics would benefit if Fraunhofer did develop this concept. But in terms of anime compression? You'd be able to scale without a hitch. Interpolate frames to slow down or speed up motion (sketched below). Applying the principles of vector-based compression, you could separate out a character deterministically, not just in one frame but across a whole set. With enough work, the player could, say, draw a blue outline around a character while you're watching the vid.
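Frame interpolation is the part that falls out almost for free: a frame is just a slice of the solid at integer time, so sampling at fractional time retimes the motion. A minimal sketch with plain linear interpolation (a real player would want something smarter):

```python
import numpy as np

def sample_frame(solid, t):
    """Sample the solid at a fractional time t by blending the two
    neighbouring integer-time slices."""
    t0 = int(np.floor(t))
    t1 = min(t0 + 1, solid.shape[0] - 1)
    frac = t - t0
    return (1.0 - frac) * solid[t0] + frac * solid[t1]

solid = np.random.rand(90, 64, 64)  # stand-in luma solid

# Retime to half speed: sample every 0.5 frames instead of every 1.0.
times = np.arange(0.0, solid.shape[0] - 1, 0.5)
slow = np.stack([sample_frame(solid, t) for t in times])
print(slow.shape)  # (178, 64, 64)
```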
As for audio compression, my theory of representing sound as a multidimensional "solid" of data stands. One small, impossible idea to add to this one, though. In anime, you could have the studios release the source code to the anime, including the digital files, the unrendered parametric data and scans of their cels. You could do that with a symphony, too. Have the players play the music together, with headphones over each player's ears, in separate silent rooms. Record independently, compress independently. Render in real time on the client, allowing the listener to, say, turn up the Uilleann pipes, delete that cymbal clash or mute the strings.
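The client-side render at the end is just a per-stem mix-down. A minimal sketch, assuming each instrument arrives as its own decoded sample array at a shared rate (the stem names and gains are made up):

```python
import numpy as np

def render_mix(stems, gains):
    """Mix independently recorded stems on the client: scale each stem by a
    user-chosen gain (0.0 mutes, >1.0 boosts) and sum the results."""
    return sum(gains.get(name, 1.0) * audio for name, audio in stems.items())

# Stand-in stems: one second of audio each at 44.1 kHz.
rate = 44100
stems = {name: 0.1 * np.random.randn(rate)
         for name in ("uilleann_pipes", "cymbals", "strings")}

# Turn up the pipes, delete that cymbal clash, mute the strings.
mix = render_mix(stems, {"uilleann_pipes": 1.5,
                         "cymbals": 0.0,
                         "strings": 0.0})
```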
<a href="http://www.animetheory.com/" title="AnimeTheory" class="gensmall">AnimeTheory.</a>
<a href="http://www.animemusicvideos.org/search/ ... %20park%22" title="Seach videos NOT by danielwang" class="gen">Make sure you don't download videos that suck!</a>
<a href="http://www.animemusicvideos.org/search/ ... %20park%22" title="Seach videos NOT by danielwang" class="gen">Make sure you don't download videos that suck!</a>
- jonmartensen
- Joined: Sat Aug 31, 2002 11:50 pm
- Location: Gimmickville USA