A simple way to read and write audio and video files in C using FFmpeg (part 2: video)

In my previous post, I demonstrated how FFmpeg can be used to pipe raw audio samples into and out of a simple C program, reading from or writing to media files such as WAV files (based on something similar for Python I found on Zulko’s blog). The same idea can be used to perform video processing, as shown in the program below.

Reading and writing a video file

In this example I use two pipes, each connected to its own instance of FFmpeg. Basically, I read frames one at a time from the input pipe, invert the colour of every pixel, and then write the modified frames to the output pipe. The input video I’m using is teapot.mp4, which I recorded on my phone. The modified video is saved to a second file, output.mp4. The video resolution is 1280×720, which I checked in advance using the ffprobe utility that comes with FFmpeg.
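
For reference, the frame size of a video file can be checked with an ffprobe command something like this (these are standard ffprobe options; it prints the width and height of the first video stream):

ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 teapot.mp4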

The full code is below, but first let’s see the original and modified videos:

The original and modified MP4 video files can be downloaded here:

The program I wrote to convert the original video into the modified version is shown below.

//
// Video processing example using FFmpeg
// Written by Ted Burke - last updated 12-2-2017
//

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

void main()
{
    int x, y, count;
    
    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
    
    // Process video frames
    while(1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H*W*3, pipein);
        
        // If we didn't get a frame of video, we're probably at the end
        if (count != H*W*3) break;
        
        // Process this frame
        for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
        {
            // Invert each colour component in every pixel
            frame[y][x][0] = 255 - frame[y][x][0]; // red
            frame[y][x][1] = 255 - frame[y][x][1]; // green
            frame[y][x][2] = 255 - frame[y][x][2]; // blue
        }
        
        // Write this frame to the output pipe
        fwrite(frame, 1, H*W*3, pipeout);
    }
    
    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
}

The FFmpeg options used for the input pipe are as follows.

FFmpeg option Explanation
-i teapot.mp4 Selects teapot.mp4 as the input file.
-f image2pipe Tells FFmpeg to convert the video into a sequence of frame images (I think!).
-vcodec rawvideo Tells FFmpeg to output raw video data (i.e. plain unencoded pixels).
-pix_fmt rgb24 Sets the pixel format of the raw data produced by FFmpeg to 3 bytes per pixel – one byte for red, one for green and one for blue.
- This final “-” tells FFmpeg to write to stdout, which in this case will send it into our C program via the input pipe.

The FFmpeg options used for the output pipe are as follows.

FFmpeg option Explanation
-y Tells FFmpeg to overwrite the output file if it already exists.
-f rawvideo Sets the input format as raw video data. I’m not too sure about the relationship between this option and the next one!
-vcodec rawvideo Tells FFmpeg to interpret its input as raw video data (i.e. unencoded frames of plain pixels).
-pix_fmt rgb24 Sets the input pixel format to 3-byte RGB pixels – one byte for red, one for green and one for blue.
-s 1280x720 Sets the frame size to 1280×720 pixels. FFmpeg will form the incoming pixel data into frames of this size.
-r 25 Sets the frame rate of the incoming data to 25 frames per second.
-i - Tells FFmpeg to read its input from stdin, which means it will be reading the data our C program writes to its output pipe.
-f mp4 Sets the output file format to MP4.
-q:v 5 This controls the quality of the encoded MP4 file. The numerical range for this option is from 1 (highest quality, biggest file size) to 32 (lowest quality, smallest file size). I arrived at a value of 5 by trial and error. Subjectively, it seemed to me to give roughly the best trade-off between file size and quality.
-an Specifies no audio stream in the output file.
-vcodec mpeg4 Tells FFmpeg to use its “mpeg4” encoder. I didn’t try any others.
output.mp4 Specifies output.mp4 as the output file.
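
Incidentally, none of these ffmpeg options need to change if you want to do a different kind of processing; only the loop in the middle of the program does. For example, replacing the inversion loop with something like the following (just a quick sketch, using the common Rec. 601 luma weights) would turn the output video greyscale instead:

// Alternative processing loop (sketch): convert each pixel to greyscale
// using the common Rec. 601 luma weights (0.299, 0.587, 0.114)
for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
{
    unsigned char grey = (unsigned char)(0.299*frame[y][x][0]
                                       + 0.587*frame[y][x][1]
                                       + 0.114*frame[y][x][2]);
    frame[y][x][0] = grey; // red
    frame[y][x][1] = grey; // green
    frame[y][x][2] = grey; // blue
}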

Epilogue: concatenating videos and images

As an interesting aside, the video I embedded above showing the original and modified teapot videos was spliced together using a slightly modified version of the example program above. Of course, it’s possible to concatenate (splice together) multiple files of different formats using ffmpeg on its own, but I couldn’t quite figure out the correct command line, so I just wrote my own little program to do it.

The files being joined together are:

The full source code is shown below.

//
// combine.c - Join multiple MP4 videos and PNG images into one video
// Written by Ted Burke - last updated 12-2-2017
//
// To compile:
// 
//    gcc combine.c
// 

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

void main()
{
    int count, n;
    FILE *pipein;
    FILE *pipeout;
    
    // Open output pipe
    pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 combined.mp4", "w");
    
    // Write first 50 frames using original video title image from title_original.png
    pipein = popen("ffmpeg -i title_original.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n=0 ; n<50 ; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);
    
    // Copy all frames from teapot.mp4 to output pipe
    pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while(1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Write next 50 frames using modified video title image from title_modified.png
    pipein = popen("ffmpeg -i title_modified.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n=0 ; n<50 ; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);
    
    // Copy all frames from output.mp4 to output pipe
    pipein = popen("ffmpeg -i output.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while(1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);
    
    // Flush and close output pipe
    fflush(pipeout);
    pclose(pipeout);
}

Note: I used Inkscape to create the video title images. Click here to download the editable Inkscape SVG file.


82 Responses to A simple way to read and write audio and video files in C using FFmpeg (part 2: video)

  1. Dan says:

    Hi
    I have tried the code above using project setting Console (/SUBSYSTEM:CONSOLE) under LINKER–>System. It compiles fine. All files are in the same directory. I am using Visual C 2010 on Windows 7. Unfortunately, I get bad pointers for both pipein and pipeout FILE pointers and no bytes read after fread (count==0). Can you make some suggestions? I have lots of old MOV files (1000’s) whose conversion to mp4 I need to automate, amongst other reasons why this piece of code is EXACTLY what I was looking for.
    Dan
    dsmail@att.net

    • Dan says:

      I am referring to the teapot.mp4 to output.mp4 program in its simple form not the more complex final program.

    • batchloaf says:

      Hi Dan,

      I compiled and ran this in Linux, so it’s possible that your problem is related to running it in Windows.

      A few things to check:

      • Have you got ffmpeg installed?
      • Have you checked that ffmpeg is working correctly? Specifically, I would try running the ffmpeg commands embedded inside the popen function calls on lines 20 and 21 of the example. If those ffmpeg commands produce errors then that’s likely to be why the popen calls are not returning valid pointers.
      • Assuming those ffmpeg commands are working correctly, is the ffmpeg installation folder in your PATH? i.e. if you just type ffmpeg in a command window, can Windows actually find the ffmpeg executable?

      Ted

      • chris319 says:

        I was able to get this to almost work in Pure Basic under Windows 10. It reads and displays the frames OK and seems to write them to the output file, but there is no video when trying to play back those files using VLC or Windows Media Player. Thoughts?

      • batchloaf says:

        Does the video file size look big enough to be video?

        Ted

  2. chris319 says:

    On the C version it is 1 Kb. On the PureBasic version it is 3073 Kb.

  3. chris319 says:

    Here are the messages I’m getting from ffmpeg:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000020ed8489bc0] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000020ed8489bc0] moov atom not found
    teapot.mp4: Invalid data found when processing input

    and

    Finishing stream 0:0 without any data written to it.

  4. chris319 says:

    The teapot.mp4 file had become mysteriously corrupt so I re-downloaded it. Now here is the error from ffmpeg on the new copy:

    av_interleaved_write_frame(): Broken pipe
    Error writing trailer of pipe:: Broken pipe

  5. chris319 says:

    Clearly the program/pipe is communicating with ffmpeg.

  6. batchloaf says:

    The file teapot.mp4 should be about 3.5MB. Is it possible that your program is inadvertently writing the output to the same filename as the input? If so, that would explain why it’s not working and also why teapot.mp4 became corrupted. The input pipe shouldn’t modify teapot.mp4 at all, even if it doesn’t work. That’s why I’m wondering about the output pipe.

    Ted

  7. chris319 says:

    I double checked the code and the output file is named output.mp4

    I have run the program and teapot.mp4 is still intact so it is not corrupting teapot.mp4.

  8. chris319 says:

    Some progress. I got this line of code to run directly in ffmpeg as a batch file and it works OK.

    ffmpeg -y -i teapot.mp4 -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 25 -q:v 5 -an -vcodec mpeg4 testout.mp4

    Here are the modifications I made:

    eliminated -f rawvideo
    changed to pix_fmt yuv420p

    • batchloaf says:

      Hi Chris,

      Oh, that’s interesting. I’m not sure why the yuv420p pixel format would work when rgb24 doesn’t, but there must be something going on that I’m not aware of. Unfortunately, the sample C code outputs the pixels in rgb24 as it’s currently written. If you write the data to the pipe in rgb24, but tell ffmpeg that it’s in yuv420p format, the video will presumably get completely mangled, although you’ll probably be just about able to make out something recognisable. The size in bytes of an rgb24 frame will be considerably larger than a yuv420p frame (twice as big, in fact), so you might expect the output video to somehow alternate back and forth (frame by frame) between two corrupted versions of the video.

      Anyway, if you really can’t get ffmpeg (at the output pipe) to accept rgb24 pixels, then in principle you could convert your frame to that format before writing it to the pipe. Alternatively, you could modify the pixel format at the input pipe’s ffmpeg command and then do all your processing with the yuv420p frames.
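
      For what it’s worth, a rough (untested) sketch of that kind of conversion, from the rgb24 frame buffer used in the example program to a yuv420p buffer, might look something like this. It uses the common BT.601 limited-range formulae and simply takes the chroma from the top-left pixel of each 2x2 block:

      // Rough sketch (untested): convert an rgb24 frame, as stored in the
      // frame[H][W][3] buffer from the example program, into a yuv420p buffer
      // of size (W*H*3)/2, using the common BT.601 limited-range formulae
      void rgb24_to_yuv420p(unsigned char rgb[H][W][3], unsigned char *yuv)
      {
          int x, y;
          unsigned char *Y = yuv;                 // full-resolution luma plane
          unsigned char *U = yuv + W*H;           // quarter-size U plane
          unsigned char *V = yuv + W*H + (W*H)/4; // quarter-size V plane

          for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
          {
              int r = rgb[y][x][0], g = rgb[y][x][1], b = rgb[y][x][2];

              Y[y*W + x] = (unsigned char)(16 + 0.257*r + 0.504*g + 0.098*b);

              // Take the chroma from the top-left pixel of each 2x2 block
              if ((y % 2 == 0) && (x % 2 == 0))
              {
                  U[(y/2)*(W/2) + (x/2)] = (unsigned char)(128 - 0.148*r - 0.291*g + 0.439*b);
                  V[(y/2)*(W/2) + (x/2)] = (unsigned char)(128 + 0.439*r - 0.368*g - 0.071*b);
              }
          }
      }

      The resulting buffer could then be written to the output pipe instead of the rgb24 frame, with -pix_fmt yuv420p in the output ffmpeg command and (W*H*3)/2 as the number of bytes written.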

      I’ll be interested to hear how you get on.

      Ted

  9. Pingback: FFmpeg « jponsoftware

  10. chris319 says:

    Now it works when writing rgb24 pixels! It plays back OK in VLC. This line is run directly from a batch file.

    ffmpeg -y -i teapot.mp4 -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -q:v 5 -an -vcodec mpeg4 testout.mp4

  11. chris319 says:

    The C program doesn’t work under Windows; only the batch file with the command line I gave. I can only conclude that the Windows version of ffmpeg is not working.

    I transferred everything to Linux and now the C program runs just fine — all is ducky. This puts a crimp in my workflow as all of my other video tools are written for windows, but I’ll manage 🙂

    Best wishes and I’ll keep you updated.

    • batchloaf says:

      Hi Chris,

      A distant memory is slowly coming back to me. In Windows, I think the popen function works differently. I haven’t really used it in Windows, so I can’t say for sure, but from a quick investigation online it looks like each instance of “popen” in the sample code should be replaced with either “_popen” or “_wpopen” (the second one is just the wide character version of the same function).

      Did you already do that?

      The resulting lines would be:

          FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
          FILE *pipeout = _popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
      

      …or maybe…

          FILE *pipein = _wpopen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
          FILE *pipeout = _wpopen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
      

      I’m not sure about the corresponding pclose function calls – maybe they need to be replaced with _pclose or something like that too?

      Oh, and apparently _popen only works in a console application. I guess that’s what your program is though.

      Ted

  12. chris319 says:

    Hi Ted –
    I use the pelles C compiler and it doesn’t recognize wpopen.

    I’m going to try running Linux virtually on Windows.

  13. chris319 says:

    Here is a question for you, Ted. Do you know how to address the pixels in a planar YUV bitmap? Thanks.

    • batchloaf says:

      There are several planar YUV formats, and the method of addressing an individual pixel depends on which one is being used. If you can tell me the exact format you’re working with, I’ll try to show you an example.

      See here for a long list of formats, including a whole section on planar YUV:

      https://www.fourcc.org/yuv.php

      Ted

      • batchloaf says:

        Here’s a quick example for yuv420p format, that shows how to address individual pixels in a frame. In this case, I’m using ffmpeg to capture a raw yuv420p frame straight from my webcam. I read it into my C program through a pipe, then convert to a raw rgb24 frame (pixel by pixel) and write the frame out to a PNM file.

        //
        // Example of using FFmpeg to capture a yuv420p frame from a cam using
        // ffmpeg, converting to rgb24 and then writing to a PNM file
        // 
        // Written by Ted Burke - last updated 17-12-2017
        //
        
        #include <stdio.h>
         
        // Video resolution
        #define W 640
        #define H 480
         
        // Allocate buffers to store one frame in yuv420p ("I420") and rgb24 formats
        unsigned char yuv_frame[(W*H*3)/2];
        unsigned char rgb_frame[H][W][3] = {0};
         
        void main()
        {
            int i, j, count;
             
            // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
            FILE *pipein = popen("ffmpeg -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -vframes 1 -", "r");
            
            // Read a frame from the input pipe into the buffer
            count = fread(yuv_frame, 1, (H*W*3)/2, pipein);
            fprintf(stderr, "frame %d, bytes read: %d\n", n, count);
            
            // Flush and close input and output pipes
            fflush(pipein);
            pclose(pipein);
            
            // Variables to store all components of a pixel in both formats
            int R, G, B;
            int Y, U, V;
            
            // Convert frame from YUV to RGB
            for (i=0 ; i<H ; ++i) for (j=0 ; j<W ; ++j)
            {
                // Get YUV components from yuv420p frame, as described here:
                // https://www.fourcc.org/pixel-format/yuv-i420/
                Y = yuv_frame[i*W+j];
                U = yuv_frame[W*H + (i/2)*(W/2) + (j/2)];
                V = yuv_frame[W*H + (W*H/4) + (i/2)*(W/2) + (j/2)];
                
                // Calculate RGB values from YUV values using formulae from here:
                // https://www.fourcc.org/fccyvrgb.php
                B = 1.164*(Y - 16)                   + 2.018*(U - 128);
                G = 1.164*(Y - 16) - 0.813*(V - 128) - 0.391*(U - 128);
                R = 1.164*(Y - 16) + 1.596*(V - 128);
                
                // Ensure RGB values are in legal range
                // (Not sure if this is necessary!)
                if (R < 0) R = 0; if (R > 255) R = 255;
                if (G < 0) G = 0; if (G > 255) G = 255;
                if (B < 0) B = 0; if (B > 255) B = 255;
        
                // Copy RGB values into rgb_frame buffer
                rgb_frame[i][j][0] = R; // red
                rgb_frame[i][j][1] = G; // green
                rgb_frame[i][j][2] = B; // blue
            }
            
            // Write the RGB frame to a PNM file
            FILE *fpgm = fopen("image.pnm", "w");
            fprintf(fpgm, "P6\n%d %d\n255\n", W, H);
            fwrite(rgb_frame, 1, H*W*3, fpgm);
            fclose(fpgm);
        }
        

        I subsequently converted to jpg using ImageMagick, as follows:

        convert image.pnm image.jpg
        

        As you can see, the image is intact (and it’s cold here!):

        Anyway, hopefully that illustrates how to address the pixels? If you’re using a different planar YUV format, the details could be different though.

        Ted

  14. chris319 says:

    Hi Ted –

    It looks like this is the one that corresponds to ffmpeg’s yuv420p pixel format:
    https://www.fourcc.org/pixel-format/yuv-i420/
    Here is what I’ve come up with so far. I realize it will process some “empty” pixels due to the 4:2:0 sampling, unless the u and v samples are sited adjacent to each other.
    My quandary is how to get these pixels into their arrays and how to discover the base address of each array.

    // Video processing example using FFmpeg
    // Written by Ted Burke – last updated 12-2-2017
    // Now works in YUV space
    // Frame rate = 29.97
    // To compile: gcc clipper.c -o clipper

    #include <stdio.h>

    //#define Kr 0.2126
    //#define Kg 0.7152
    //#define Kb 0.0722 //REC.709

    // Video resolution
    #define W 1280
    #define H 720

    // Allocate a buffer to store one frame
    unsigned char frame[H][W][3] = {0};

    unsigned char lum[H][W] = {0};
    unsigned char u[H][W] = {0};
    unsigned char v[H][W] = {0};

    int main(void)
    {
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");
    //-pix_fmt yuv420p

    // Process video frames
    while(1)
    {
    // Read a frame from the input pipe into the buffer
    count = fread(frame, 1, H*W*3, pipein);
    // If we didn’t get a frame of video, we’re probably at the end
    if (count != H*W*3) break;

    // Process this frame
    for (y=0 ; y<H ; y++)
    {
    for (x=0 ; x<W ; x++)
    {
    if(lum[y][x]>235) {lum[y][x]=235;}
    if(u[y][x]>240) {u[y][x]=240;}
    if(v[y][x]>240) {v[y][x]=240;}

    if(lum[y][x]<16) {lum[y][x]=16;}
    if(u[y][x]<16) {u[y][x]=16;}
    if(v[y][x]<16) {v[y][x]=16;}
    }
    }

    // Write this frame to the output pipe
    fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
    }

    • batchloaf says:

      Hi Chris,

      Try this:

      // Video processing example using FFmpeg
      // Written by Ted Burke – last updated 12-2-2017
      // Now works in YUV space
      // Frame rate = 29.97
      // To compile: gcc clipper.c -o clipper
      
      #include <stdio.h>
      
      //#define Kr 0.2126
      //#define Kg 0.7152
      //#define Kb 0.0722 //REC.709
      
      // Video resolution
      #define W 1280
      #define H 720
      
      // Allocate a buffer to store one frame
      unsigned char frame[(H*W*3)/2];
      
      int main(void)
      {
          int x, y, count;
      
          // Create a pointer for each component's chunk within the frame
          // Note that the size of the Y chunk is W*H, but the size of both
          // the U and V chunks is (W/2)*(H/2). i.e. the resolution is halved
          // in the vertical and horizontal directions for U and V.
          unsigned char *lum, *u, *v;
          lum = frame;
          u = frame + H*W;
          v = u + (H*W/4);
      
      
          // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
          FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
          FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");
      
          // Process video frames
          while(1)
          {
              // Read a frame from the input pipe into the buffer
              // Note that the full frame size (in bytes) for yuv420p
              // is (W*H*3)/2. i.e. 1.5 bytes per pixel. This is due
              // to the U and V components being stored at lower resolution.
              count = fread(frame, 1, (H*W*3)/2, pipein);
              
              // If we didn’t get a frame of video, we’re probably at the end
              if (count != (H*W*3)/2) break;
      
              // Process this frame
              for (y=0 ; y<H ; y++)
              {
                  for (x=0 ; x<W ; ++x)
                  {
                      if (lum[y*W+x] > 235) lum[y*W+x] = 235;
                      if (u[(y/2)*(W/2) + x/2] > 240) u[(y/2)*(W/2) + x/2] = 240;
                      if (v[(y/2)*(W/2) + x/2] > 240) v[(y/2)*(W/2) + x/2] = 240;
           
                      if (lum[y*W+x] < 16) lum[y*W+x] = 16;
                      if (u[(y/2)*(W/2) + x/2] < 16) u[(y/2)*(W/2) + x/2] = 16;
                      if (v[(y/2)*(W/2) + x/2] < 16) v[(y/2)*(W/2) + x/2] = 16;
                  }
              }
      
              // Write this frame to the output pipe
              fwrite(frame, 1, (H*W*3)/2, pipeout);
          }
      
          // Flush and close input and output pipes
          fflush(pipein);
          pclose(pipein);
          fflush(pipeout);
          pclose(pipeout);
      }
      
  15. chris319 says:

    Hi Ted –

    I didn’t see your most recent post until after I had posted mine.

    Here is another approach. It skips alternate chroma samples in the x and y dimensions, as does 4:2:0. I’m not sure which if either will work. This one assumes the U and V pixels are not adjacent.
    Ultimately the video must play back in VLC.
    =======================================================================
    // Video processing example using FFmpeg
    // Written by Ted Burke – last updated 12-2-2017
    // Now works in YUV space
    // Frame rate = 29.97
    // To compile: gcc clipper.c -o clipper

    #include <stdio.h>

    //#define Kr 0.2126
    //#define Kg 0.7152
    //#define Kb 0.0722 //REC.709

    // Video resolution
    #define W 1280
    #define H 720

    // Allocate a buffer to store one frame
    unsigned char frame[H][W][3] = {0};

    unsigned char lum[H][W] = {0};
    unsigned char u[H][W] = {0};
    unsigned char v[H][W] = {0};

    int main(void)
    {
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 intermediate.mp4", "w");
    //-pix_fmt yuv420p

    // Process video frames
    while(1)
    {
    // Read a frame from the input pipe into the buffer
    count = fread(frame, 1, H*W*3, pipein);

    // If we didn’t get a frame of video, we’re probably at the end
    if (count != H*W*3) break;

    // Process this frame
    for (y=0 ; y<H ; y++) //process every luma pixel
    {
    for (x=0 ; x<W ; x++)
    {
    if(lum[y][x]>235) {lum[y][x]=235;}
    if(lum[y][x]<16) {lum[y][x]=16;}
    }
    }

    for (y=0 ; y<H ; y+2) //process alternate chroma pixels in x and y in 4:2:0 format
    {
    for (x=0 ; x<W ; x+=2)
    {
    if(u[y][x]>240) {u[y][x]=240;}
    if(v[y][x]>240) {v[y][x]=240;}

    if(u[y][x]<16) {u[y][x]=16;}
    if(v[y][x]<16) {v[y][x]=16;}
    }
    }

    // Write this frame to the output pipe
    fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);
    }

    • batchloaf says:

      Hi Chris,

      Yeah, there may be a bit of a delay in the comments appearing, so you probably posted this before I posted the last example, but I didn’t see your message until now. Anyway, the example I posted above works perfectly on my computer (Linux) and the output file plays without any problem in VLC.

      In the yuv420p format, each frame consists of three separate chunks:

      1. The first is for Y at full resolution (W x H),
      2. The second is for U at half resolution (W/2 x H/2),
      3. The third is for V at half resolution (W/2 x H/2).

      Hence the size of the complete frame is (W*H*3) / 2.
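      (For the 1280x720 video in the earlier examples, that works out to 1280*720*3/2 = 1,382,400 bytes per frame.)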

      In my previous example, I create a 1-dimensional array of unsigned chars (i.e. bytes) to store the complete frame, then assign a pointer to access each of the three individual chunks. Hence, for pixel x,y you would access the three components as follows:

      lum[y*W + x]
      u[(y/2)*(W/2) + (x/2)]
      v[(y/2)*(W/2) + (x/2)]
      

      Hopefully that makes sense?

      Ted

  16. chris319 says:

    I’m trying to convert your code to output a 420p mp4 file rather than a pnm file. It is not necessary to convert to RGB. VLC can play back a 420p file. An 8-bit value cannot be 255 so I’m eliminating that code.

  17. chris319 says:

    I think the system skipped a post as I didn’t see your mp4 code until you posted the link to it. I will give it a try now.

  18. chris319 says:

    It works perfectly now! Thank you very much for your help — it has been invaluable.

    Now it is up to me to get the video to clip the way I need it to 🙂

    Thanks again!

    • batchloaf says:

      Great! Best of luck getting everything else working.

      Ted

      • chris319 says:

        Here is what I’m trying to do: clip off everything above 235 (on the DIG. scale). So far it isn’t going well. I added a video filter to the output pipe and it isn’t helping: scale=out_range=tv

      • batchloaf says:

        Hi Chris,

        I don’t really understand what I’m looking at here or exactly what you’re trying to do. I mean, I can see that the image you attached includes what looks like a video of a colour panel on the left and on the right is some kind of graph of the distribution of pixel luminance (or something like that) as a function of horizontal position in the video. But when you say “clip off everything above 235” what exactly do you mean by that?

        Presumably you mean to modify the video signal (i.e. the part on the left that shows the colour panel) rather than the graph image?

        And by “clip off”, do you mean to limit pixel luminance to 235 (i.e. areas that are very bright become slightly less bright)? Or do you mean to actually remove regions above that luminance from the image by resizing the frame?

        If you just mean to limit pixel luminance to 235 across the entire image, the example I gave you earlier already does that. It also limits the range of U and V, because that’s what you had been doing in the code you posted, but perhaps you don’t need to do that? Y (the luminance) affects the brightness of the pixel, whereas U and V determine its colour. If you’re just trying to limit the brightness, then you probably don’t need to modify U and V at all.

        Ted

      • batchloaf says:

        For example, here’s pixel luminance distribution as a function of horizontal position for the last frame in the test video I used (the teapot one). The white line marks the level Y = 235. You can see all the pixels have been limited to below that level of luminance.

  19. chris319 says:

    It is not a distribution. It is simply a time series — video amplitude (0 – 255 on Y axis) vs X coordinate. Here is the unconditional rule used to process this video: lum[y*W+x] = 235;. You can see that the level goes all the way to 255.

    Now here is another which behaves predictably. The unconditional rule used: lum[y*W+x] = 110;

    It tells me that my “scope” is correctly calibrated as all of the levels are (around) digital 110.

    Clip level at 190:

    So something is messing with my video levels despite having “tv” levels (16 – 235) enabled on both input and output. Perhaps ffmpeg?

    • batchloaf says:

      Hi Chris,

      Hmmm. I suppose it’s possible that ffmpeg is applying some color range scaling or something like that. This is new to me, so I’ll have to look into it to figure out what’s going on. I’ll let you know if I work it out. If it is the ffmpeg encoding at the output that’s scaling the upper end of the range back up to 255, then hopefully the solution will be as simple as finding the right command line switch to disable that behaviour.

      Ted

      • chris319 says:

        I may join the ffmpeg mailing list and report this as a bug.

      • batchloaf says:

        I can’t say for certain that a bug isn’t responsible, but it seems more likely that it’s just a matter of figuring out the correct command line to elicit the behaviour you require. ffmpeg is a complex and powerful piece of software with countless configuration options. Personally, I have only scratched the surface. If you go to the ffmpeg mailing list seeking guidance, unless you’re absolutely certain that it’s a bug, I suggest treading carefully!

      • batchloaf says:

        To test what was happening on my computer, I stretched out the Y values of a video to the range 0-255, then processed the video through my program to clamp all pixel Y values to the range 16-235. It seems to be clamping the values mostly to the range 16-235, but some pixels do fall outside that range as one would expect due to the mp4 encoding process being lossy. It’s nothing like how your distribution appears though, so I suspect something is different in the way you’re encoding the mp4 output file with ffmpeg.

        These are the exact lines that open the input and output pipes’ instances of ffmpeg in my program:

        FILE *pipein = popen("ffmpeg -i input.mp4 -f image2pipe -vcodec rawvideo -pix_fmt yuv420p -", "r");
        FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 output.mp4", "w");
        

        Ted

  20. chris319 says:

    Here are the exact lines I am using. I will scrutinize them to see what might cause a shift in video levels.

    FILE *pipein = popen("ffmpeg -i KodakChart.mp4 -f image2pipe -vcodec rawvideo -vf scale=in_range=tv -pix_fmt yuv420p -", "r");

    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 -i - -f mp4 -q:v 5 -vf scale=out_range=tv cliptest.mp4", "w");

    Other than the range filters I don’t see anything that would explicitly alter the video levels. I have tried it with and without the range filters and get the same result.

    Here are video levels at 235 with a scope graticule line at 235. This isn’t right.

  21. Omy says:

    sir I have
    $ ffmpeg -f alsa -i hw:0,0 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2> log.fifo
    as input and

    $ tail -f log.fifo |grep -i RMS_level
    as output. I need to program this in C to measure the RMS level. Can you please help me out?

    • batchloaf says:

      Hi Omy,

      Try this:

      //
      // audiorms.c - Track RMS level of audio in real time
      // Written by Ted Burke - 15-1-2018
      //
      // To compile:
      //
      //    gcc audiorms.c -o audiorms
      //
       
      #include <stdio.h>
      #include <string.h>
      #include <stdlib.h>
       
      void main()
      {
          FILE *pipein;
          pipein  = popen("ffmpeg -f alsa -i hw:0,0 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2>&1", "r");
          
          char line[1024]; // longer than required - only stores one line of text at a time!
          char *s;
          double rms_value;
          
          while(1)
          {
              // Read a line of text from the input pipe
              fscanf(pipein, "%[^\n]\n", line);
              
              // Find the substring "RMS_level" if it is present in this line
              s = strstr(line, "RMS_level");
              if (s)
              {
                  // Substring "RMS_level" was found, so jump to beginning of value
                  s += 10;
                  
                  // Convert the value from a string (the rest of the line) to a double
                  rms_value = atof(s);
                  
                  // Print the RMS value
                  fprintf(stderr, "%lf\n", rms_value);
              }        
          }
      }
      
      • batchloaf says:

        That code above is working correctly for me. It opens a pipe from alsa, correctly parses the RMS value from each incoming line (if it is present), converts the value to a double and prints it in the terminal.

        At the end of the ffmpeg command I use when opening the input pipe, you’ll see this: “- 2>&1”. The hyphen directs the ffmpeg output to stdout. The “2>&1” redirects ffmpeg’s stderr to the stdout stream. This seems to be necessary because, as I understand it, the lines of text that include the RMS_level metadata seem to be sent to stderr by default.

        Anyway, I hope that helps!

        Ted

      • batchloaf says:

        By the way, if you want to do the same thing without using C at all, here’s a single command line which pipes the output of ffmpeg through grep to parse the RMS_level value directly from any lines that contain it:

        
        ffmpeg -f alsa -i hw:0,0 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2>&1 | grep -oP '(?<=RMS_level=).*$'
        
        
      • Omy says:

        Thank You so much. It’s working for me

      • batchloaf says:

        Great, best of luck with whatever you’re doing!

  22. chris319 says:

    Hi Ted –

    Do you know of a way to pass video to a file using ffmpeg without altering the video levels at all? I have tried using -vf null with no success.

    https://ffmpeg.org/ffmpeg-filters.html#null

  23. Omy says:

    Hi Ted
    I need help converting MPTS to SPTS.
    Actually I have a single UDP stream with multiple channels. I need to separate it and send each channel to a different machine.
    I am using Linux. Is it possible to use ffmpeg or DVBlast?

  24. jbalvarado says:

    Hello,
    thank you for the nice example! Do you plan to do a third part, where you combine video and audio? Would be interesting to see a concat example with both.

    Have a good day!
    Jonathan

  25. chris319 says:

    Hi Ted –

    Do you have a snippet of code for reading 4:2:2 pixels?

    Many thanks in advance.

    Chris

  26. ssaguiar says:

    Hi Ted

    I am posting my question again because I don’t know if the previous one was published, as it doesn’t appear in this listing.

    I am capturing a stream from a camera and, after playing with the frame, I output the video frame by frame with a modification (I transform the frames to sepia).
    I also am sending the error output to try to deal with it using another pipe but can’t find a way to do so.

    The errors output is in the part of the code as:

    -f hls \”/mnt/sdd/html/hls/live/CAM1/video240/stream.m3u8\” 2>&1 >> – | grep \”error\”

    Is this possible?

    Can You help me?

    Thank you very much.

    The code is below:

    
    void main()
    {
        memset(pipeoutcommand, 0, sizeof(pipeoutcommand));
        //sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "ffmpeg -y -report -loglevel debug -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 15 -i - -c:v libx264 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 15 -i - -c:v libx264 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-b:v 500k -minrate 450k -maxrate 550k -bufsize 1000k -g 30 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-keyint_min 30 -sc_threshold 0 -filter:v \"scale='trunc(oh*a/2)*2:240'\" ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-pix_fmt yuvj420p -hls_time 5 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-f hls -use_localtime 1 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-hls_time 6 -hls_list_size 10 -hls_allow_cache 0 -start_number 0 ");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-hls_segment_filename \"/mnt/sdd/html/hls/live/CAM1/video240/_%%Y%%m_%%H%%M%%S.ts\" ");
        //sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-f hls \"/mnt/sdd/html/hls/live/CAM1/video240/stream.m3u8\" 2>&1 >> /opt/iptv/LOGS/ffmpeg_pipe.log | grep \"error\"");
        sprintf(&pipeoutcommand[strlen(pipeoutcommand)], "-f hls \"/mnt/sdd/html/hls/live/CAM1/video240/stream.m3u8\" 2>&1 >> - | grep \"error\"");
         
        // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
        pipein = popen("ffmpeg -hide_banner -nostats -loglevel 0 -i rtsp://192.168.1.100:554/live/ch00_0 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    
        //FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 15 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
        pipeout = popen(pipeoutcommand, "w");
         
        // Process video frames
        while(1)
        {
            // Read a frame from the input pipe into the buffer
            count = fread(frameIn, 1, H*W*3, pipein);
            //unsigned char buffer[count][3];
    
            // If we didn't get a frame of video, we're probably at the end
            if (count != H*W*3) 
            {
                printf("\nEmpty frame detected...\n");
                break;
            }
    
            sepiaFrame();
    
            // Write this frame to the output pipe
            fwrite(frameOut, 1, H*W*3, pipeout);
        }
         
        // Flush and close input and output pipes
        fflush(pipein);
        pclose(pipein);
        fflush(pipeout);
        pclose(pipeout);
    }
    
    
  27. ssaguiar says:

    Hi Ted

    Just in case, this is the rest of the above code:

    
    void invertFrame()
    {
        // Process this frame
        for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
        {
            // Invert each colour component in every pixel
            frameOut[y][x][0] = 255 - frameIn[y][x][0]; // red
            frameOut[y][x][1] = 255 - frameIn[y][x][1]; // green
            frameOut[y][x][2] = 255 - frameIn[y][x][2]; // blue
        }
    }
    
    void sepiaFrame()
    {
        r = 0;
        g = 0;
        b = 0;
    
        for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
        {
            buffer[i][2] = frameIn[y][x][2];							//blue
            buffer[i][1] = frameIn[y][x][1];							//green
            buffer[i][0] = frameIn[y][x][0];							//red
    
            //conversion formula of rgb to sepia
            r = (buffer[i][0]*0.393) + (buffer[i][1]*0.769)	+ (buffer[i][2]*0.189);
            g = (buffer[i][0]*0.349) + (buffer[i][1]*0.686)	+ (buffer[i][2]*0.168);
            b = (buffer[i][0]*0.272) + (buffer[i][1]*0.534)	+ (buffer[i][2]*0.131);
    
            if(r > MAX_VALUE){											//if value exceeds
                r = MAX_VALUE;
            }
            if(g > MAX_VALUE){
                g = MAX_VALUE;
            }
            if(b > MAX_VALUE){
                b = MAX_VALUE;
            }
    
            frameOut[y][x][0] = r;
            frameOut[y][x][1] = g;
            frameOut[y][x][2] = b;
        }
    }
    
  28. Orit Malki says:

    Great post!
    I did exactly as in your code and it runs – the file is opened but unfortunately fread() always returns 0. I checked with feof() and it reaches end of file (without error) and never reads it. Any ideas? I tried with several mp4 files. I appreciate your reply, thank you.

  29. Orit Malki says:

    I forgot to mention that I tried to use this on Android native code.

  30. afarinnote says:

    Thanks for sharing your code.
    I want to use that code for reading video frame by frame and then convert the RGB to HSV and do some other post-processing like colour quantization, etc. My current problem is that I can’t access the RGB data of each frame. When I want to read a frame from the input pipe, it shows the “count” variable as “607”, which means no frame. I would appreciate it if you can help me through this.
    ++ I read the above comments and my subsystem has been set to the Console, “Console (/SUBSYSTEM:CONSOLE)”. And I already did the change to “_popen”.

  31. Thomas Zaugg says:

    To get it to work in windows I had to use:

    FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "rb");

    to tell it to read the pipe as binary. Otherwise I get a Broken Pipe error.

    • batchloaf says:

      Thanks for the useful tip Thomas!

      Ted

    • naivemog says:

      Thanks Thomas! It’s really helpful!

    • chranium says:

      Using Windows 11, to make it work we use “rb+” and “wb+” in both _popen calls:

      FILE *pipein = _popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "rb+");
      FILE *pipeout = _popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "wb+");

      • batchloaf says:

        Hi Chranium,
        I haven’t seen the plus character used in the _popen mode like that before. What does it do when used like this? I can’t see anything about it in the documentation for _popen. I know what it would mean in an fopen function call (basically, open for reading and writing), but that doesn’t seem applicable to these _popen calls.
        Ted

  32. Daniel Arnett says:

    Nice tutorial, but if you want to concatenate videos with the same encoding really quickly you should use the built-in concat demuxer. It’s lightning fast because it doesn’t need to re-encode frames; it works almost as fast as your hard drive can copy the files.

    Example from here https://stackoverflow.com/a/49373401/2116716
    Create a text file named vidlist.txt in the following format:

    file ‘/path/to/clip1’
    file ‘/path/to/clip2’
    file ‘/path/to/clip3’
    Note that these can be either relative or absolute paths.

    Then issue the command:

    ffmpeg -f concat -safe 0 -i vidlist.txt -c copy output.mp4

  33. pratap reddy says:

    I need to write and read audio/video files to an SD card using embedded C.
    Is your code suitable for me??

  34. niranjan n says:

    Hi to all, I have checked the tea-pot program and it works fine. Thanks for demonstrating the pipe concept with ffmpeg. But my doubt is how to pipe audio and video at the same time in the tea-pot example. Can you please help me find a solution?

  35. Chris says:

    Hi Ted –
    How could your code be modified to read 10-bit pixels? I’m having trouble with the pixel addressing and am able to get a black-and-white image from 10-bit Y values but am having trouble with the U and V pixels.

    Many thanks.

  36. chrisnology says:

    I am taking into account the fact that I will have to shift off the lowest 2 bits to display it on my 8-bit hardware, but I will be able to read and process it.

    • batchloaf says:

      Hi Chris.

      I’ve had a go at reading the 10-bit pixels, but have not succeeded so far. I’ll try again, hopefully tomorrow, but in the meantime here’s my most recent attempt (pasted in below).

      Ted

      ————–

      //
      // Video processing example using FFmpeg
      // Written by Ted Burke - last updated 3-3-2020
      //
       
      #include <stdio.h>
      #include <stdint.h>
       
      // Video resolution
      #define W 1920
      #define H 1080
       
      // Allocate a buffer to store one frame
      uint16_t frame_in[H][W][3] = {0};
      uint8_t frame_out[H][W][3] = {0};
       
      void main()
      {
          int x, y, count;
           
          // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
          FILE *pipein = popen("ffmpeg -i jellyfish-40-mbps-hd-hevc-10bit.mkv -f image2pipe -vcodec rawvideo -pix_fmt gbrp10le -", "r");
          FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");
          
          // Process video frames
          while(1)
          {
              // Read a frame from the input pipe into the buffer
              count = fread(frame_in, 2, H*W*3, pipein);
               
              // If we didn't get a frame of video, we're probably at the end
              if (count != H*W*3*2) break;
               
              // Copy pixels from 10-bit input frame into 8-bit output frame
              for (y=0 ; y<H ; ++y) for (x=0 ; x<W ; ++x)
              {
                frame_out[y][x][0] = frame_in[y][x][2] >> 2; // red
                  frame_out[y][x][1] = frame_in[y][x][0] >> 2; // green
                  frame_out[y][x][2] = frame_in[y][x][1] >> 2; // blue
              }
               
              // Write this frame to the output pipe
              fwrite(frame_out, 1, H*W*3, pipeout);
          }
           
          // Flush and close input and output pipes
          fflush(pipein);
          pclose(pipein);
          fflush(pipeout);
          pclose(pipeout);
      }
      
  37. chrisnology says:

    Hi Ted –
    Thanks for taking a stab at this. It’s trickier than it seems 🙂
    Someone said that ffmpeg’s yuv420p10le is not the same as P010, so one wonders if this documentation is valid:
    https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-formats#420-formats
    A good video player can play that 10-bit video with no problem, so a conversion is being made somewhere.

    • batchloaf says:

      Hi Chris,

      Can I just double check: You need to access the full 10-bit samples, right? I was able to read the 10-bit video as rgb24 no problem (i.e. 1 red byte, 1 green byte, 1 blue byte). It’s just reading the rgb components in 10-bit resolution that I didn’t figure out.

      In fact, do you even want RGB or does it suit you better to get the pixels in YUV?

      Ted

  38. chrisnology says:

    Hi Ted –
    I imagine a typical video file will have YUV samples in it, either 4:2:0 or 4:2:2. Converting YUV to RGB is no problem for me. I know I will have to lose 2 bits from the 10-bit sample to display it, as I don’t own a 10-bit monitor. I wonder if 10-bit video will ever catch on with the public and services like YouTube. It does have its place in high-end video, though.

    Here is how I convert 8-bit YUV to RGB. It took a lot of trial and error and testing to settle on this. There are many such YUV-to-RGB formulae out there, but not all of them handle the colors accurately, and we are meticulous about color accuracy. Color space is BT.709. Kr, Kg and Kb are the luma coefficients. rf, gf, bf, yf, uf and vf are all floats.

    http://avisynth.nl/index.php/Color_conversions

    It occurs to me that these constants will change for 10-bits.

    BT.709 COEFFICIENTS
    Kr = 0.2126: Kg = 0.7152: Kb = 0.0722

    rf = (255/219)*yf + (255/112)*vf*(1-Kr) - (255*16/219 + 255*128/112*(1-Kr))

    gf = (255/219)*yf - (255/112)*uf*(1-Kb)*Kb/Kg - (255/112)*vf*(1-Kr)*Kr/Kg - (255*16/219 - 255/112*128*(1-Kb)*Kb/Kg - 255/112*128*(1-Kr)*Kr/Kg)

    bf = (255/219)*yf + (255/112)*uf*(1-Kb) - (255*16/219 + 255*128/112*(1-Kb))
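
    In C, a direct transcription of those formulae might look something like this (just a sketch; yf, uf and vf are the raw 8-bit limited-range Y, U and V values as floats, and the results are clamped to 0-255):

    // Sketch: convert one limited-range BT.709 YUV pixel to 8-bit RGB,
    // transcribing the formulae above and clamping the results to 0-255
    #define Kr 0.2126
    #define Kg 0.7152
    #define Kb 0.0722

    void yuv709_to_rgb(float yf, float uf, float vf,
                       unsigned char *r, unsigned char *g, unsigned char *b)
    {
        float rf, gf, bf;

        rf = (255.0/219)*yf + (255.0/112)*vf*(1-Kr) - (255.0*16/219 + 255.0*128/112*(1-Kr));
        gf = (255.0/219)*yf - (255.0/112)*uf*(1-Kb)*Kb/Kg - (255.0/112)*vf*(1-Kr)*Kr/Kg
             - (255.0*16/219 - 255.0/112*128*(1-Kb)*Kb/Kg - 255.0/112*128*(1-Kr)*Kr/Kg);
        bf = (255.0/219)*yf + (255.0/112)*uf*(1-Kb) - (255.0*16/219 + 255.0*128/112*(1-Kb));

        if (rf < 0) rf = 0; if (rf > 255) rf = 255;
        if (gf < 0) gf = 0; if (gf > 255) gf = 255;
        if (bf < 0) bf = 0; if (bf > 255) bf = 255;

        *r = (unsigned char)rf;
        *g = (unsigned char)gf;
        *b = (unsigned char)bf;
    }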

  39. Anmol says:

    This did not work for me on Windows using Visual Studio, so I’m trying to debug it first with ffplay – otherwise I just get a file with a gray image. Any idea how this can be used with ffplay to display the video?

  40. Johann Weber says:

    Hallo Mr. Burke,

    I am currently working on a Fortran port of the GD graphics library (LGPL-licensed, https://github.com/johandweber/fortran-gdlib).
    In this context I also looked for possibilities to manipulate videos. Here your blog post proved to be extremely useful.

    I’d like to add a Fortran example to my language binding that closely resembles the C code in your source code.

    Do you agree? If so, in what way should I quote you? (Of course, I will make clear, that all bugs are my own…)

    Yours,
    Johann

    • batchloaf says:

      Hi Johann. Yes, I agree. You have my permission to use my code in any way you want, and for any purpose commercial or non-commercial. If you wish to give credit, just use my name (Ted Burke) and link to the blog post, but you’re 100% welcome to use it without attribution too. Maybe have a look at Zulko’s blog that I linked in the post, just to check whether he/she should be credited?

      Best of luck completing your GD port!

      Ted

      • Johann Weber says:

        Hallo Ted,

        thanks a lot for your permission! I will add you in the Acknowledgement section of my documentation (as soon as I have cleaned up my code enough to upload it to Github…) and also look at Zulko’s blog.

        Understanding the options of ffmpeg is not trivial, so I might never have been able to adapt the binding to the output format required by ffmpeg (a “raw interface” is also useful for other applications anyway) and find the correct ffmpeg options on my own. The reason is that the port is a pure hobbyist project, so I guess I would have lost patience researching it purely from the FFmpeg documentation.
        So I believe that acknowledgements are very well justified.

        With the help of your blog, I was able to create a first (still somewhat clumsy) prototype within two days (including GDlib and Fortran specific code), where I turned an entire film clip into greyscale – except the reddish parts – and added a caption.

        A comparison of the modified and unmodified clips can be found at
        https://www.youtube.com/watch?v=QVt_o6hwdp4 (contains some flickering).

  41. batchloaf says:

    Hi Johann,

    You’re welcome. And yes, I agree that the ffmpeg options are a bit overwhelming! Best of luck completing your GDlib port.

    Ted

  42. Johann Weber says:

    I have now pushed my video modifications (along with demo program and documentation) to Github.
