In my previous post, I demonstrated how FFmpeg can be used to pipe raw audio samples into and out of a simple C program, reading from or writing to media files such as WAV files (based on something similar for Python that I found on Zulko’s blog). The same idea can be used to perform video processing, as shown in the program below.
Reading and writing a video file
In this example I use two pipes, each connected to its own instance of FFmpeg. Basically, I read frames one at a time from the input pipe, invert the colour of every pixel, and then write the modified frames to the output pipe. The input video I’m using is teapot.mp4, which I recorded on my phone. The modified video is saved to a second file, output.mp4. The video resolution is 1280×720, which I checked in advance using the ffprobe utility that comes with FFmpeg.
The full code is below, but first let’s see the original and modified videos:
The original and modified MP4 video files can be downloaded here:
The program I wrote to convert the original video into the modified version is shown below.
//
// Video processing example using FFmpeg
// Written by Ted Burke - last updated 12-2-2017
//

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

void main()
{
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");

    // Process video frames
    while(1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H*W*3, pipein);

        // If we didn't get a frame of video, we're probably at the end
        if (count != H*W*3) break;

        // Process this frame
        for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
        {
            // Invert each colour component in every pixel
            frame[y][x][0] = 255 - frame[y][x][0]; // red
            frame[y][x][1] = 255 - frame[y][x][1]; // green
            frame[y][x][2] = 255 - frame[y][x][2]; // blue
        }

        // Write this frame to the output pipe
        fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
}
The FFmpeg options used for the input pipe are as follows.
FFmpeg option | Explanation |
---|---|
-i teapot.mp4 | Selects teapot.mp4 as the input file. |
-f image2pipe | Tells FFmpeg to convert the video into a sequence of frame images (I think!). |
-vcodec rawvideo | Tells FFmpeg to output raw video data (i.e. plain unencoded pixels). |
-pix_fmt rgb24 | Sets the pixel format of the raw data produced by FFmpeg to 3 bytes per pixel – one byte each for red, green and blue. |
- | This final “-” tells FFmpeg to write to stdout, which in this case will send it into our C program via the input pipe. |
The FFmpeg options used for the output pipe are as follows.
FFmpeg option | Explanation |
---|---|
-y | Tells FFmpeg to overwrite the output file if it already exists. |
-f rawvideo | Sets the input format as raw video data. I’m not too sure about the relationship between this option and the next one! |
-vcodec rawvideo | Tells FFmpeg to interpret its input as raw video data (i.e. unencoded frames of plain pixels). |
-pix_fmt rgb24 | Sets the input pixel format to 3-byte RGB pixels – one byte each for red, green and blue. |
-s 1280x720 | Sets the frame size to 1280×720 pixels. FFmpeg will form the incoming pixel data into frames of this size. |
-r 25 | Sets the frame rate of the incoming data to 25 frames per second. |
-i - | Tells FFmpeg to read its input from stdin, which means it will be reading the data our C program writes to its output pipe. |
-f mp4 | Sets the output file format to MP4. |
-q:v 5 | This controls the quality of the encoded MP4 file. The numerical range for this option is from 1 (highest quality, biggest file size) to 32 (lowest quality, smallest file size). I arrived at a value of 5 by trial and error. Subjectively, it seemed to me to give roughly the best trade-off between file size and quality. |
-an | Specifies no audio stream in the output file. |
-vcodec mpeg4 | Tells FFmpeg to use its “mpeg4” encoder. I didn’t try any others. |
output.mp4 | Specifies output.mp4 as the output file. |
Epilogue: concatenating videos and images
As an interesting aside, the video I embedded above showing the original and modified teapot videos was spliced together using a slightly modified version of the example program above. Of course, it’s possible to concatenate (splice together) multiple files of different formats using ffmpeg on its own, but I couldn’t quite figure out the correct command line, so I just wrote my own little program to do it.
The files being joined together are:
1. title_original.png – a title image shown for 50 frames before the original video,
2. teapot.mp4 – the original video,
3. title_modified.png – a title image shown for 50 frames before the modified video,
4. output.mp4 – the modified (colour-inverted) video.
The full source code is shown below.
//
// combine.c - Join multiple MP4 videos and PNG images into one video
// Written by Ted Burke - last updated 12-2-2017
//
// To compile:
//
//     gcc combine.c
//

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

void main()
{
    int count, n;
    FILE *pipein;
    FILE *pipeout;

    // Open output pipe
    pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 combined.mp4", "w");

    // Write first 50 frames using original video title image from title_original.png
    pipein = popen("ffmpeg -i title_original.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n=0 ; n<50 ; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Copy all frames from teapot.mp4 to output pipe
    pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while(1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Write next 50 frames using modified video title image from title_modified.png
    pipein = popen("ffmpeg -i title_modified.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n=0 ; n<50 ; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Copy all frames from output.mp4 to output pipe
    pipein = popen("ffmpeg -i output.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while(1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Flush and close output pipe
    fflush(pipeout);
    pclose(pipeout);
}
Note: I used Inkscape to create the video title images. Click here to download the editable Inkscape SVG file.
Hi
I have tried the code above using the project setting Console (/SUBSYSTEM:CONSOLE) under Linker -> System. It compiles fine. All files are in the same directory. I am using Visual C 2010 on Windows 7. Unfortunately, I get bad pointers for both FILE pointers (pipein and pipeout) and no bytes are read by fread (count==0). Can you make some suggestions? I have lots of old MOV files (1000’s of them) that I need to convert to mp4 automatically, which is one reason why this piece of code is EXACTLY what I was looking for.
Dan
dsmail@att.net
I am referring to the teapot.mp4 to output.mp4 program in its simple form not the more complex final program.
Hi Dan,
I compiled and ran this in Linux, so it’s possible that your problem is related to running it in Windows.
A few things to check:
Ted
I was able to get this to almost work in Pure Basic under Windows 10. It reads and displays the frames OK and seems to write them to the output file, but there is no video when trying to play back those files using VLC or Windows Media Player. Thoughts?
Does the video file size look big enough to be video?
Ted
On the C version it is 1 Kb. On the PureBasic version it is 3073 Kb.
Here are the messages I’m getting from ffmpeg:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000020ed8489bc0] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000020ed8489bc0] moov atom not found
teapot.mp4: Invalid data found when processing input
and
Finishing stream 0:0 without any data written to it.
The teapot.mp4 file had become mysteriously corrupt so I re-downloaded it. Now here is the error from ffmpeg on the new copy:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
Clearly the program/pipe is communicating with ffmpeg.
The file teapot.mp4 should be about 3.5MB. Is it possible that your program is inadvertently writing the output to the same filename as the input? If so, that would explain why it’s not working and also why teapot.mp4 became corrupted. The input pipe shouldn’t modify teapot.mp4 at all, even if it doesn’t work. That’s why I’m wondering about the output pipe.
Ted
I double checked the code and the output file is named output.mp4
I have run the program and teapot.mp4 is still intact so it is not corrupting teapot.mp4.
Some progress. I got this line of code to run directly in ffmpeg as a batch file and it works OK.
ffmpeg -y -i teapot.mp4 -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 25 -q:v 5 -an -vcodec mpeg4 testout.mp4
Here are the modifications I made:
eliminated -f rawvideo
changed to pix_fmt yuv420p
Hi Chris,
Oh, that’s interesting. I’m not sure why the yuv420p pixel format would work when rgb24 doesn’t, but there must be something going on that I’m not aware of. Unfortunately, the sample C code outputs the pixels in rgb24 as it’s currently written. If you write the data to the pipe in rgb24, but tell ffmpeg that it’s in yuv420p format, the video will presumably get completely mangled, although you’ll probably just about be able to make out something recognisable. The size in bytes of an rgb24 frame is considerably larger than a yuv420p frame (twice as big, in fact: 3 bytes per pixel versus 1.5), so you might expect the output video to somehow alternate back and forth (frame by frame) between two corrupted versions of the video.
Anyway, if you really can’t get ffmpeg (at the output pipe) to accept rgb24 pixels, then in principle you could convert your frame to that format before writing it to the pipe. Alternatively, you could modify the pixel format in the input pipe’s ffmpeg command and then do all your processing on the yuv420p frames.
I’ll be interested to hear how you get on.
Ted
Pingback: FFmpeg « jponsoftware
Now it works when writing rgb24 pixels! It plays back OK in VLC. This line is run directly from a batch file.
ffmpeg -y -i teapot.mp4 -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -q:v 5 -an -vcodec mpeg4 testout.mp4
Phew! Glad to hear you got it working!
So ultimately, what was the change that did it? Was it the removal of “-f rawvideo” that made the difference?
Ted
The C program doesn’t work under Windows; only the batch file with the command line I gave. I can only conclude that the Windows version of ffmpeg is not working.
I transferred everything to Linux and now the C program runs just fine — all is ducky. This puts a crimp in my workflow as all of my other video tools are written for windows, but I’ll manage 🙂
Best wishes and I’ll keep you updated.
Hi Chris,
A distant memory is slowly coming back to me. In Windows, I think the popen function works differently. I haven’t really used it in Windows, so I can’t say for sure, but from a quick investigation online it looks like each instance of “popen” in the sample code should be replaced with either “_popen” or “_wpopen” (the second one is just the wide character version of the same function).
Did you already do that?
The resulting lines would be:
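(the same two lines as in the example above, just with _popen substituted; I haven’t tested these in Windows myself:)

FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
FILE *pipeout = _popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");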
…or maybe…
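(the wide-character version, which takes L"..." string literals; again, untested on my end:)

FILE *pipein = _wpopen(L"ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", L"r");
FILE *pipeout = _wpopen(L"ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", L"w");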
I’m not sure about the corresponding pclose function calls – maybe they need to be replaced with _pclose or something like that too?
Oh, and apparently _popen only works in a console application. I guess that’s what your program is though.
Ted
Hi Ted –
I use the pelles C compiler and it doesn’t recognize wpopen.
I’m going to try running Linux virtually on Windows.
Ah, I see. I’ve heard of Pelles C, but I’ve never used it.
Best of luck.
Ted
Here is a question for you, Ted. Do you know how to address the pixels in a planar YUV bitmap? Thanks.
There are several planar YUV formats, and the method of addressing an individual pixel depends on which one is being used. If you can tell me the exact format you’re working with, I’ll try to show you an example.
See here for a long list of formats, including a whole section on planar YUV:
https://www.fourcc.org/yuv.php
Ted
Here’s a quick example for the yuv420p format that shows how to address individual pixels in a frame. In this case, I’m using ffmpeg to capture a raw yuv420p frame straight from my webcam. I read it into my C program through a pipe, then convert it to a raw rgb24 frame (pixel by pixel) and write the frame out to a PNM file.
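Something like this, as a sketch (the 640x480 frame size, the /dev/video0 device name and the BT.601 integer conversion constants are all assumptions here):

```
// Grab one raw yuv420p frame from a webcam via ffmpeg, convert it to rgb24
// pixel by pixel, and write it out as a PNM (PPM) image.
#include <stdio.h>

#define W 640
#define H 480

unsigned char yuv[W*H*3/2];   // one yuv420p frame: Y plane, then U, then V
unsigned char rgb[H][W][3];   // one rgb24 frame

int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

int main(void)
{
    int x, y;

    // Capture a single raw yuv420p frame from the webcam through a pipe
    FILE *pipein = popen("ffmpeg -f v4l2 -video_size 640x480 -i /dev/video0 "
                         "-frames:v 1 -f rawvideo -pix_fmt yuv420p -", "r");
    if (fread(yuv, 1, sizeof(yuv), pipein) != sizeof(yuv))
    {
        fprintf(stderr, "Failed to read a full frame\n");
        return 1;
    }
    pclose(pipein);

    // Pointers to the three planes within the frame buffer
    unsigned char *Y = yuv;
    unsigned char *U = yuv + W*H;
    unsigned char *V = yuv + W*H + (W/2)*(H/2);

    for (y = 0 ; y < H ; ++y) for (x = 0 ; x < W ; ++x)
    {
        // Y at full resolution, U and V at half resolution in x and y
        int c = Y[y*W + x] - 16;
        int d = U[(y/2)*(W/2) + (x/2)] - 128;
        int e = V[(y/2)*(W/2) + (x/2)] - 128;
        rgb[y][x][0] = clamp((298*c + 409*e + 128) >> 8);           // red
        rgb[y][x][1] = clamp((298*c - 100*d - 208*e + 128) >> 8);   // green
        rgb[y][x][2] = clamp((298*c + 516*d + 128) >> 8);           // blue
    }

    // Write the rgb24 frame out as a binary PNM (P6) file
    FILE *f = fopen("frame.pnm", "wb");
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    fwrite(rgb, 1, sizeof(rgb), f);
    fclose(f);
    return 0;
}
```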
I subsequently converted to jpg using ImageMagick, as follows:
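(assuming the frame was written to frame.pnm, something like this:)

convert frame.pnm frame.jpg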
As you can see, the image is intact (and it’s cold here!):

Anyway, hopefully that illustrates how to address the pixels? If you’re using a different planar YUV format, the details could be different though.
Ted
Hi Ted –
It looks like this is the one that corresponds to ffmpeg’s yuv420p pixel format:
https://www.fourcc.org/pixel-format/yuv-i420/
Here is what I’ve come up with so far. I realize it will process some “empty” pixels due to the 4:2:0 sampling, unless the u and v samples are sited adjacent to each other.
My quandary is how to get these pixels into their arrays and how to discover the base address of each array.
// Video processing example using FFmpeg
// Written by Ted Burke - last updated 12-2-2017
// Now works in YUV space
// Frame rate = 29.97
// To compile: gcc clipper.c -o clipper

#include <stdio.h>

//#define Kr 0.2126
//#define Kg 0.7152
//#define Kb 0.0722 //REC.709

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};
unsigned char lum[H][W] = {0};
unsigned char u[H][W] = {0};
unsigned char v[H][W] = {0};

int main(void)
{
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");
    //-pix_fmt yuv420p

    // Process video frames
    while(1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H*W*3, pipein);

        // If we didn't get a frame of video, we're probably at the end
        if (count != H*W*3) break;

        // Process this frame
        for (y=0 ; y<H ; y++)
        {
            for (x=0 ; x<W ; x++)
            {
                if(lum[y][x]>235) {lum[y][x]=235;}
                if(u[y][x]>240) {u[y][x]=240;}
                if(v[y][x]>240) {v[y][x]=240;}
                if(lum[y][x]<16) {lum[y][x]=16;}
                if(u[y][x]<16) {u[y][x]=16;}
                if(v[y][x]<16) {v[y][x]=16;}
            }
        }

        // Write this frame to the output pipe
        fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
}
Hi Chris,
Try this:
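A sketch of what I mean (one flat buffer holding the complete yuv420p frame, with a pointer into each of the three planes; the input filename, frame rate and output filename are taken from your code above):

```
// Clamp Y to 16-235 and U,V to 16-240 in a yuv420p video.
// A yuv420p frame is W*H*3/2 bytes: a full-resolution Y plane,
// then quarter-resolution U and V planes.
#include <stdio.h>

#define W 1280
#define H 720

unsigned char frame[W*H*3/2];   // one complete yuv420p frame

int main(void)
{
    int n, count;

    // Pointers to the start of each plane within the frame buffer
    unsigned char *Y = frame;
    unsigned char *U = frame + W*H;
    unsigned char *V = frame + W*H + (W/2)*(H/2);

    FILE *pipein  = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");

    while(1)
    {
        // Read one complete yuv420p frame
        count = fread(frame, 1, sizeof(frame), pipein);
        if (count != sizeof(frame)) break;

        // Clamp the luma plane to 16-235
        for (n = 0 ; n < W*H ; ++n)
        {
            if (Y[n] > 235) Y[n] = 235;
            if (Y[n] < 16)  Y[n] = 16;
        }

        // Clamp the two chroma planes to 16-240
        for (n = 0 ; n < (W/2)*(H/2) ; ++n)
        {
            if (U[n] > 240) U[n] = 240;
            if (U[n] < 16)  U[n] = 16;
            if (V[n] > 240) V[n] = 240;
            if (V[n] < 16)  V[n] = 16;
        }

        // Write the modified frame to the output pipe
        fwrite(frame, 1, sizeof(frame), pipeout);
    }

    fflush(pipeout);
    pclose(pipein);
    pclose(pipeout);
    return 0;
}
```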
Hi Ted –
I didn’t see your most recent post until after I had posted mine.
Here is another approach. It skips alternate chroma samples in the x and y dimensions, as does 4:2:0. I’m not sure which if either will work. This one assumes the U and V pixels are not adjacent.
Ultimately the video must play back in VLC.
=======================================================================
// Video processing example using FFmpeg
// Written by Ted Burke - last updated 12-2-2017
// Now works in YUV space
// Frame rate = 29.97
// To compile: gcc clipper.c -o clipper

#include <stdio.h>

//#define Kr 0.2126
//#define Kg 0.7152
//#define Kb 0.0722 //REC.709

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};
unsigned char lum[H][W] = {0};
unsigned char u[H][W] = {0};
unsigned char v[H][W] = {0};

int main(void)
{
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");
    //-pix_fmt yuv420p

    // Process video frames
    while(1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H*W*3, pipein);

        // If we didn't get a frame of video, we're probably at the end
        if (count != H*W*3) break;

        // Process this frame
        for (y=0 ; y<H ; y++) //process every luma pixel
        {
            for (x=0 ; x<W ; x++)
            {
                if(lum[y][x]>235) {lum[y][x]=235;}
                if(lum[y][x]<16) {lum[y][x]=16;}
            }
        }
        for (y=0 ; y<H ; y+2) //process alternate chroma pixels in x and y in 4:2:0 format
        {
            for (x=0 ; x<W ; x+2)
            {
                if(u[y][x]>240) {u[y][x]=240;}
                if(v[y][x]>240) {v[y][x]=240;}
                if(u[y][x]<16) {u[y][x]=16;}
                if(v[y][x]<16) {v[y][x]=16;}
            }
        }

        // Write this frame to the output pipe
        fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
}
Hi Chris,
Yeah, there may be a bit of a delay in the comments appearing, so you probably posted this before I posted the last example, but I didn’t see your message until now. Anyway, the example I posted above works perfectly on my computer (Linux) and the output file plays without any problem in VLC.
In the yuv420p format, each frame consists of three separate chunks:
1. The first is for Y at full resolution (W x H),
2. The second is for U at half resolution (W/2 x H/2),
3. The third is for V at half resolution (W/2 x H/2).
Hence the size of the complete frame is (W*H*3) / 2.
In my previous example, I create a 1-dimensional array of unsigned chars (i.e. bytes) to store the complete frame, then assign a pointer to access each of the three individual chunks. Hence, for pixel x,y you would access the three components as follows:
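(a sketch, using Y, U and V as names for the pointers to the three chunks):

```
// With Y pointing to the start of the frame buffer, U to frame + W*H,
// and V to frame + W*H + (W/2)*(H/2):
unsigned char luma = Y[ y*W + x ];              // full resolution
unsigned char cb   = U[ (y/2)*(W/2) + (x/2) ];  // half resolution in both x and y
unsigned char cr   = V[ (y/2)*(W/2) + (x/2) ];
```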
Hopefully that makes sense?
Ted
I’m trying to convert your code to output a 420p mp4 file rather than a pnm file. It is not necessary to convert to RGB. VLC can play back a 420p file. An 8-bit value cannot be 255 so I’m eliminating that code.
Hi Chris,
This example I posted above already does output an mp4 file and doesn’t convert to RGB. If you can’t see it, reload the page.
By the way, 8-bit values can be anything between 0 and 255 inclusive. So 255 is no problem.
Ted
I think the system skipped a post as I didn’t see your mp4 code until you posted the link to it. I will give it a try now.
It works perfectly now! Thank you very much for your help — it has been invaluable.
Now it is up to me to get the video to clip the way I need it to 🙂
Thanks again!
Great! Best of luck getting everything else working.
Ted
Here is what I’m trying to do: clip off everything above 235 (on the DIG. scale). So far it isn’t going well. I added a video filter to the output pipe and it isn’t helping: scale=out_range=tv
Hi Chris,
I don’t really understand what I’m looking at here or exactly what you’re trying to do. I mean, I can see that the image you attached includes what looks like a video of a colour panel on the left and on the right is some kind of graph of the distribution of pixel luminance (or something like that) as a function of horizontal position in the video. But when you say “clip off everything above 235” what exactly do you mean by that?
Presumably you mean to modify the video signal (i.e. the part on the left that shows the colour panel) rather than the graph image?
And by “clip off”, do you mean to limit pixel luminance to 235 (i.e. areas that are very bright become slightly less bright)? Or do you mean to actually remove regions above that luminance from the image by resizing the frame?
If you just mean to limit pixel luminance to 235 across the entire image, the example I gave you earlier already does that. It also limits the range of U and V, because that’s what you had been doing in the code you posted, but perhaps you don’t need to do that? Y (the luminance) affects the brightness of the pixel, whereas U and V determine its colour. If you’re just trying to limit the brightness, then you probably don’t need to modify U and V at all.
Ted
For example, here’s pixel luminance distribution as a function of horizontal position for the last frame in the test video I used (the teapot one). The white line marks the level Y = 235. You can see all the pixels have been limited to below that level of luminance.
It is not a distribution. It is simply a time series — video amplitude (0 – 255 on Y axis) vs X coordinate. Here is the unconditional rule used to process this video: lum[y*W+x] = 235;. You can see that the level goes all the way to 255.
Now here is another which behaves predictably. The unconditional rule used: lum[y*W+x] = 110;
It tells me that my “scope” is correctly calibrated as all of the levels are (around) digital 110.
Clip level at 190:

So something is messing with my video levels despite having “tv” levels (16 – 235) enabled on both input and output. Perhaps ffmpeg?
Hi Chris,
Hmmm. I suppose it’s possible that ffmpeg is applying some color range scaling or something like that. This is new to me, so I’ll have to look into it to figure out what’s going on. I’ll let you know if I work it out. If it is the ffmpeg encoding at the output that’s scaling the upper end of the range back up to 255, then hopefully the solution will be as simple as finding the right command line switch to disable that behaviour.
Ted
I may join the ffmpeg mailing list and report this as a bug.
I can’t say for certain that a bug isn’t responsible, but it seems more likely that it’s just a matter of figuring out the correct command line to elicit the behaviour you require. ffmpeg is a complex and powerful piece of software with countless configuration options. Personally, I have only scratched the surface. If you go to the ffmpeg mailing list seeking guidance, unless you’re absolutely certain that it’s a bug, I suggest treading carefully!
To test what was happening on my computer, I stretched out the Y values of a video to the range 0-255, then processed the video through my program to clamp all pixel Y values to the range 16-235. It seems to be clamping the values mostly to the range 16-235, but some pixels do fall outside that range as one would expect due to the mp4 encoding process being lossy. It’s nothing like how your distribution appears though, so I suspect something is different in the way you’re encoding the mp4 output file with ffmpeg.
These are the exact lines that open the input and output pipes’ instances of ffmpeg in my program:
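(shown here as a sketch rather than a verbatim copy: essentially the same as yours but without the scale filters, using the teapot clip and a placeholder output filename)

FILE *pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 clamped.mp4", "w");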
Ted
Here are the exact lines I am using. I will scrutinize them to see what might cause a shift in video levels.
FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -vf scale=in_range=tv -pix_fmt yuv420p -", "r");
FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 -vf scale=out_range=tv cliptest.mp4", "w");
Other than the range filters I don’t see anything that would explicitly alter the video levels. I have tried it with and without the range filters and get the same result.
Here are video levels at 235 with a scope graticule line at 235. This isn’t right.

Sir, I have
$ ffmpeg -f alsa -i hw:0,0 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2> log.fifo
as input and
$ tail -f log.fifo | grep -i RMS_level
as output. I need to program this in C to measure the RMS level. Can you please help me out?
Hi Omy,
Try this:
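A sketch along these lines (it keeps your alsa device hw:0,0, and simply looks for "RMS_level=" in each line it reads from the pipe):

```
// Read RMS level values from ffmpeg's astats/ametadata output via a pipe
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[1024];
    double rms;

    // The trailing "2>&1" redirects ffmpeg's stderr (where the metadata
    // lines appear) into the stream we read from the pipe.
    FILE *pipein = popen(
        "ffmpeg -f alsa -i hw:0,0 "
        "-af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level "
        "-f null - 2>&1", "r");

    while (fgets(line, sizeof(line), pipein) != NULL)
    {
        // Lines of interest look something like:
        //   lavfi.astats.Overall.RMS_level=-23.45
        char *p = strstr(line, "RMS_level=");
        if (p != NULL)
        {
            rms = atof(p + strlen("RMS_level="));
            printf("RMS level: %f dB\n", rms);
        }
    }

    pclose(pipein);
    return 0;
}
```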
That code above is working correctly for me. It opens a pipe from alsa, correctly parses the RMS value from each incoming line (if it is present), converts the value to a double and prints it in the terminal.
At the end of the ffmpeg command I use when opening the input pipe, you’ll see this: “- 2>&1”. The hyphen directs the ffmpeg output to stdout. The “2>&1” redirects ffmpeg’s stderr to the stdout stream. This seems to be necessary because, as I understand it, the lines of text that include the RMS_level metadata seem to be sent to stderr by default.
Anyway, I hope that helps!
Ted
By the way, if you want to do the same thing without using C at all, here’s a single command line which pipes the output of ffmpeg through grep to parse the RMS_level value directly from any lines that contain it:
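(a sketch, reusing your original command:)

ffmpeg -f alsa -i hw:0,0 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2>&1 | grep RMS_level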
Thank You so much. It’s working for me
Great, best of luck with whatever you’re doing!
Hi Ted –
Do you know of a way to pass video to a file using ffmpeg without altering the video levels at all? I have tried using -vf null with no success.
https://ffmpeg.org/ffmpeg-filters.html#null
Hi Chris,
Sorry, I can’t really advise on this. Although I use ffmpeg all the time, I just don’t have a very deep understanding of how it sets up its video pipeline internally. I recommend trying the mailing list on the ffmpeg website:
https://www.ffmpeg.org/contact.html#MailingLists
Ted
Hi Ted
I need help converting MPTS to SPTS.
Actually, I have a single UDP stream with multiple channels. I need to separate them and send them to different machines.
I am using Linux. Is it possible to do this with ffmpeg or DVBlast?
Sorry Omy, I don’t know anything about that.
Ted
Hello,
thank you for the nice example! Do you plan to do a third part, where you combine video and audio? It would be interesting to see a concat example with both.
Have a good day!
Jonathan
Hi Ted –
Do you have a snippet of code for reading 4:2:2 pixels?
Many thanks in advance.
Chris
I’ve done this before a few times, but it’s been a while. I think it depends on how the 422 pixels are packed into the frame. You can find the formulae for converting YUV to RGB here:
https://en.wikipedia.org/wiki/YUV#Y%E2%80%B2UV422_to_RGB888_conversion
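For what it’s worth, here’s a rough sketch assuming ffmpeg’s planar yuv422p format (a full-resolution Y plane followed by U and V planes at half width but full height); a packed format like yuyv422 would be addressed differently, and the input filename here is just a placeholder:

```
// Sketch: read planar yuv422p frames through a pipe and address individual pixels
#include <stdio.h>

#define W 1280
#define H 720

unsigned char frame[W*H*2];   // one complete yuv422p frame (2 bytes per pixel on average)

int main(void)
{
    unsigned char *Y = frame;                  // W x H
    unsigned char *U = frame + W*H;            // (W/2) x H
    unsigned char *V = frame + W*H + (W/2)*H;  // (W/2) x H
    int x, y, count;

    FILE *pipein = popen("ffmpeg -i input.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv422p -", "r");

    while (1)
    {
        count = fread(frame, 1, sizeof(frame), pipein);
        if (count != sizeof(frame)) break;

        long ysum = 0, usum = 0, vsum = 0;
        for (y = 0 ; y < H ; ++y) for (x = 0 ; x < W ; ++x)
        {
            // For pixel (x, y): luma at full resolution, chroma shared by
            // horizontal pairs of pixels (hence the x/2 in the U and V planes)
            ysum += Y[ y*W + x ];
            usum += U[ y*(W/2) + (x/2) ];
            vsum += V[ y*(W/2) + (x/2) ];
        }
        printf("mean Y=%ld U=%ld V=%ld\n", ysum/(W*H), usum/(W*H), vsum/(W*H));
    }

    pclose(pipein);
    return 0;
}
```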
Ted
Hi Ted
I am posting my question again because I don’t know if the previous one was published, as it doesn’t appear in this listing.
I am capturing a stream from a camera and, after playing with the frame, I output the video frame by frame with a modification (I transform the frames to sepia).
I also am sending the error output to try to deal with it using another pipe but can’t find a way to do so.
The error output is handled in this part of the code:
-f hls \"/mnt/sdd/html/hls/live/CAM1/video240/stream.m3u8\" 2>&1 >> - | grep \"error\"
Is this possible?
Can You help me?
Thank you very much.
The code is below:
Hi Ted
Just in case, this is the rest of the above code:
Great post!
I did exactly as in your code and it runs (the file is opened), but unfortunately fread() always returns 0. I checked with feof() and it reaches the end of file (without error) without ever reading anything. Any ideas? I tried with several mp4 files. I’d appreciate your reply, thank you.
I forgot to mention that I tried to use this on Android native code.
Thanks for sharing your code.
I want to use that code to read a video frame by frame, convert the RGB to HSV and do some other post-processing like colour quantization, etc. My current problem is that I can’t access the RGB data of each frame. When I try to read a frame from the input pipe, the “count” variable is 607, which means no frame was read. I would appreciate it if you can help me through this.
++ I read the above comments; my subsystem has been set to Console, “Console (/SUBSYSTEM:CONSOLE)”, and I have already changed to “_popen”.
To get it to work in windows I had to use:
FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "rb");
to tell it to read the pipe as binary. Otherwise I get a Broken Pipe error.
Thanks for the useful tip Thomas!
Ted
Thanks Thomas! It’s really helpful!
Using Windows 11, to make it work we use “rb+” and “wb+” in both _popen calls.
```
FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "rb+");
FILE *pipeout = _popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "wb+");
```
Hi Chranium,
I haven’t seen the plus character used in the _popen mode like that before. What does it do when used like this? I can’t see anything about it in the documentation for _popen. I know what it would mean in an fopen function call (basically, open for reading and writing), but that doesn’t seem applicable to these _popen calls.
Ted
Nice tutorial, but if you want to concatenate videos with the same encoding really quickly you should use the built-in concat demuxer. It’s lightning fast because it doesn’t need to re-encode frames; it works almost as fast as your hard drive can copy the files.
Example from here https://stackoverflow.com/a/49373401/2116716
Create a text file named vidlist.txt in the following format:
file '/path/to/clip1'
file '/path/to/clip2'
file '/path/to/clip3'
Note that these can be either relative or absolute paths.
Then issue the command:
ffmpeg -f concat -safe 0 -i vidlist.txt -c copy output.mp4
Thanks Daniel!
Ted
I need to write and read audio/video files to an SD card using embedded C.
Is your code suitable for me?
Hi to all, I have checked the teapot program and it works fine. Thanks for demonstrating the pipe concept with ffmpeg. My doubt is how to pipe audio and video at the same time in the teapot example. Can you please help me find a solution?
Hi Ted –
How could your code be modified to read 10-bit pixels? I’m having trouble with the pixel addressing and am able to get a black-and-white image from 10-bit Y values but am having trouble with the U and V pixels.
Many thanks.
Hi Chris,
I’m not sure, but I’m curious to try it. Could you provide a link to an example video file?
Ted
Here is a link to a 10-bit video file.
http://jell.yfish.us/media/jellyfish-40-mbps-hd-hevc-10bit.mkv
I am taking into account the fact that I will have to shift off the lowest 2 bits to display it on my 8-bit hardware, but I will be able to read and process it.
Hi Chris.
I’ve had a go at reading the 10-bit pixels, but have not succeeded so far. I’ll try again, hopefully tomorrow, but in the meantime here’s my most recent attempt (pasted in below).
Ted
————–
Hi Ted –
Thanks for taking a stab at this. It’s trickier than it seems 🙂
Someone said that ffmpeg’s yuv420p10le is not the same as P010, so one wonders if this documentation is valid:
https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-formats#420-formats
A good video player can play that 10-bit video with no problem, so a conversion is being made somewhere.
Hi Chris,
Can I just double check: You need to access the full 10-bit samples, right? I was able to read the 10-bit video as rgb24 no problem (i.e. 1 red byte, 1 green byte, 1 blue byte). It’s just reading the rgb components in 10-bit resolution that I didn’t figure out.
In fact, do you even want RGB or does it suit you better to get the pixels in YUV?
Ted
Hi Ted –
I imagine a typical video file will have YUV samples in it, either 4:2:0 or 4:2:2. Converting YUV to RGB is no problem for me. I know I will have to lose 2 bits from the 10-bit sample to display it, as I don’t own a 10-bit monitor. I wonder if 10-bit video will ever catch on with the public and services like YouTube. It does have its place in high-end video, though.
Here is how I convert 8-bit YUV to RGB. It took a lot of trial and error and testing to settle on this. There are many such YUV-to-RGB formulae out there, but not all of them handle the colors accurately, and we are meticulous about color accuracy. The color space is BT.709. Kr, Kg and Kb are the luma coefficients; rf, gf, bf, yf, uf and vf are all floats.
http://avisynth.nl/index.php/Color_conversions
It occurs to me that these constants will change for 10-bits.
BT.709 COEFFICIENTS
Kr = 0.2126: Kg = 0.7152: Kb = 0.0722
rf = (255/219)*yf + (255/112)*vf*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))
gf = (255/219)*yf - (255/112)*uf*(1-Kb)*Kb/Kg - (255/112)*vf*(1-Kr)*Kr/Kg - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)
bf = (255/219)*yf + (255/112)*uf*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
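In C, those formulae might be written something like this (floats throughout, so there is no integer-division surprise with terms like 255/219; the results still need clamping to the 0-255 range before use):

```
// BT.709 limited-range YCbCr to full-range RGB, following the formulae above.
// yf, uf, vf are the raw 8-bit Y, Cb, Cr code values (Y in 16-235, Cb/Cr in 16-240)
// passed in as floats; rf, gf, bf come back unclamped.
static const float Kr = 0.2126f, Kg = 0.7152f, Kb = 0.0722f;

void yuv_to_rgb(float yf, float uf, float vf, float *rf, float *gf, float *bf)
{
    *rf = (255.0f/219)*yf + (255.0f/112)*vf*(1-Kr)
        - (255.0f*16/219 + 255.0f*128/112*(1-Kr));

    *gf = (255.0f/219)*yf - (255.0f/112)*uf*(1-Kb)*Kb/Kg - (255.0f/112)*vf*(1-Kr)*Kr/Kg
        - (255.0f*16/219 - 255.0f/112*128*(1-Kb)*Kb/Kg - 255.0f/112*128*(1-Kr)*Kr/Kg);

    *bf = (255.0f/219)*yf + (255.0f/112)*uf*(1-Kb)
        - (255.0f*16/219 + 255.0f*128/112*(1-Kb));
}
```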
This did not work for me on Windows using Visual Studio, so I’m trying to debug it first with ffplay; otherwise I just get a file with a gray image. Any idea how this can be used with ffplay to display the video?