
My first solution involves taking the rendered image, saving it to a single tmp.png, then appending it to an ffmpeg mp4 at the end of every frame.

First we need to bootstrap the video:

```
ffmpeg -r 24 -f concat -safe 0 -i video-input-list.txt -pix_fmt yuv420p -crf 23 -r 24 -shortest -y video-from-frames.mp4
```

video-input-list.txt looks something like this:

```
file 'tmp.png'
```

Next we need to copy the frame's bootstrapped video into a temp video with do_ffmpeg_with_tmp.sh:

```
ffmpeg -i input1.mp4 -r 24 -f concat -safe 0 -i video-input-list.txt -filter_complex "concat=n=2:v=1:a=0" -pix_fmt yuv420p -crf 23 -r 24 -shortest -y video-from-frames.mp4
```

This ends up copying the video to a temp as well, since the input cannot be the same as the output in ffmpeg.
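The script body itself isn't shown, so here is a minimal sketch of what do_ffmpeg_with_tmp.sh amounts to, written in Python with subprocess so it can later be driven from Blender; the function name `append_frame` is mine, and only the ffmpeg flags come from the command above.

```
import shutil
import subprocess

def append_frame(video="video-from-frames.mp4"):
    # ffmpeg cannot read and write the same file, so copy the running
    # video to a temp input first (this is the "temp video" above).
    shutil.copy(video, "input1.mp4")
    # Concat the temp copy with the freshly rendered tmp.png (read via
    # the concat demuxer and video-input-list.txt) back into the video.
    subprocess.run([
        "ffmpeg", "-i", "input1.mp4",
        "-r", "24", "-f", "concat", "-safe", "0", "-i", "video-input-list.txt",
        "-filter_complex", "concat=n=2:v=1:a=0",
        "-pix_fmt", "yuv420p", "-crf", "23", "-r", "24",
        "-shortest", "-y", video,
    ], check=True)
```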

Next we need to call all this in Blender:

```
import bpy

scene = bpy.context.scene
while scene.frame_current <= scene.frame_end:
    # render the current frame to tmp.png and run do_ffmpeg_with_tmp.sh here
    scene.frame_set(scene.frame_current + 1)
```

This is all to say that doing it with a video and image buffer that ffmpeg concats at the end of every frame is doable, but image sequences are better. If you're worried about space, you could write a script similar to this one that changes the location of the storage after every 1000 frames or so, then compile the files together later through image sequences and video concatenations.
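As a sketch of that space-saving variant (the chunk size, directory layout, and names here are assumptions, not from the post): point Blender's render output at a fresh directory every 1000 frames, then stitch the per-chunk image sequences together afterwards.

```
import bpy

CHUNK = 1000  # assumed rotation interval

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    # Redirect storage to a per-chunk directory, e.g. //chunks/0002/02123
    scene.render.filepath = f"//chunks/{frame // CHUNK:04d}/{frame:05d}"
    bpy.ops.render.render(write_still=True)
# Later: encode each chunk's image sequence to a video, then concat the
# chunk videos with ffmpeg.
```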

I also looked into getting a pipe out from Blender. The main issue is that Blender is noisy, and piping a clean image stream out of it is awkward; on top of that, ffmpeg cannot read a piped-in image without image2pipe. If you want Blender to pipe, you need to have it return a PNG image and flush stdout: we convert the image to PNG data and send it across the noisy wire.
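For illustration, a hedged sketch of what that Blender side could look like: render each frame to a PNG, write the raw bytes to stdout's binary buffer, and flush so ffmpeg (reading via image2pipe) sees complete images. The temp path and the exact ffmpeg invocation in the comment are assumptions.

```
import sys
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "/tmp/pipe_frame.png"  # assumed temp path

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    bpy.ops.render.render(write_still=True)
    # Convert the frame to PNG data and send it across the noisy wire;
    # note Blender's own logs also land on stdout, which is the noise problem.
    with open("/tmp/pipe_frame.png", "rb") as f:
        sys.stdout.buffer.write(f.read())
    sys.stdout.buffer.flush()

# Receiving end (assumed invocation):
#   blender -b scene.blend -P pipe_frames.py | \
#       ffmpeg -f image2pipe -framerate 24 -i - -pix_fmt yuv420p -y piped.mp4
```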
