Secrets of Cinelerra


ABOUT CINELERRA


There are two types of moviegoers: producers who create new content, going back over their content at future points for further refinement, and consumers who want to acquire the content and watch it. Cinelerra is not intended for consumers. Cinelerra has many features for uncompressed content, high resolution processing, and compositing, with very few shortcuts. Producers need these features because of the need to retouch many generations of footage with alterations to the format, which makes Cinelerra very complex. There are many more standard tools for consumers like MainActor, Kino, or Moxy, which you should consider before using Cinelerra.

In 1996 our first editor came out: Broadcast 1.0. It was just a window with a waveform in it that could cut and paste stereo audio waveforms on a UNIX box. Unlike other audio editors, though, it could handle files up to 2 gigabytes with only 64 megabytes of RAM, a feature normally accessible only to the highest end professional audio houses.

In 1997 Broadcast 1.0 was replaced by Broadcast 2.0. This time the window had a menubar, patchbay, console, and transport control. Broadcast 2.0 still only handled audio, but now it handled unlimited tracks, and it could perform effects on audio and save the resulting waveform to disk. More notably, a few effects could be performed in realtime as the audio played back. A user could mix unlimited numbers of tracks, adjust fade, pan, and EQ, and hear the result instantly. Amazingly, this realtime tweaking is still unavailable in most audio programs.

But Broadcast 2.0 still didn't handle video, and it wasn't very graceful at audio either. In 1999 video broke into the story with Broadcast 2000. This iteration of the Broadcast series could do wonders with audio and offered a pretty good video feature set. It could edit video files up to 64 terabytes. It could do everything Broadcast 2.1 did with audio, except now all effects for video and audio could be chained and performed on the fly, with instant feedback as a user tweaked parameters during playback. Broadcast 2000 made it very easy to do a lot of processing and editing on video and audio that would otherwise involve many hours setting up command line sequences and writing to disk. For a time it seemed as if the original dream of immersive movie making for everyone, regardless of income level, had arrived.

Later on, Broadcast 2000 began to fall short. Its audio and video were graceful if you knew how to use them efficiently, but quality issues and new user interface techniques were emerging. Broadcast 2000 kept the audio interface from its ancestors, which didn't apply well to video. Users likewise were maturing. No longer would it be sufficient to just edit video on a UNIX box. Most users expected on UNIX the same thing they got on Windows or Mac. In mid 2000, designs for a Broadcast 2000 replacement were drafted. The Broadcast name was officially retired from the series and the software would now be called Cinelerra. Cinelerra would allow users to configure certain effects in much less time than Broadcast 2000 required. It would begin to emulate some of the features found in Windows and Mac software without attempting to become a clone. Its interface would be designed for video from the ground up, supplemented by the Broadcast audio interface. As always, quality improvements would continue.



After many years of searching for the perfect documentation format, we've arrived at Texinfo. This format can be converted to HTML, printed, and automatically indexed, but most importantly it is not bound to any commercial word processor. Documents written in Texinfo will be readable as long as there's a C compiler.

There are no screenshots in this manual. Screenshots become obsolete quickly and as a result confuse users. What looks one way in a screenshot will always look different in the real program, because the real program and the manual are always evolving, never perfectly synchronized. It is true that manuals should have screenshots, but we omit them to keep the cost of the software minimal, including the additional labor of keeping the manual synchronized with the software, so you don't have to pay for it.

In addition to telling you the basic editing features of Cinelerra this manual covers tricks that won't be described anywhere else. We're going to try to come up with certain things you can do with Cinelerra that you wouldn't think of on your own.



The Cinelerra package contains Cinelerra and most of the libraries needed to run it. We try to include all the dependencies because of the difficulty in tracking down the right versions. Also included are some utilities for handling files.



Cinelerra is best installed by downloading an RPM and running

rpm -U --force --nodeps hvirtual*.rpm

on a RedHat system.

On systems which don't support RPM look for a utility called rpm2cpio. Download a Cinelerra RPM and from the / directory run

rpm2cpio hvirtual*.rpm | cpio -i --make-directories



It should be noted that Cinelerra binaries are built with the free GNU compiler and very conservative optimization flags. You can try different compilers and optimization flags by compiling the source.

The compilation is verified on a vanilla RedHat 9.0 installation in workstation mode. RedHat 9.0 doesn't install nasm, which has to be installed manually for the compilation to succeed. Compiling the source is hard and there's no warranty if the source code fails to compile, but the method for compiling starts by downloading the source code and decompressing it.

tar jxf hvirtual*.tar.bz2

Enter the hvirtual directory

cd hvirtual

and set the CFLAGS environment variable. The flags for the GCC compiler are constantly changing. These are our most recent flags. For Pentium II use:

export CFLAGS='-O3 -march=i686 -fmessage-length=0 -funroll-all-loops -fomit-frame-pointer -falign-loops=2 -falign-jumps=2 -falign-functions=2'

For Pentium I and old AMD's use:

export CFLAGS='-O3 -fmessage-length=0 -funroll-all-loops -fomit-frame-pointer -falign-loops=2 -falign-jumps=2 -falign-functions=2'

For new AMD's use:

export CFLAGS='-O3 -march=athlon -fmessage-length=0 -funroll-all-loops -fomit-frame-pointer -falign-loops=2 -falign-jumps=2 -falign-functions=2'
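The per-CPU variants above can be captured in a small helper. This is only a sketch for convenience, not part of Cinelerra's build system, and the family names (pentium2, athlon) are labels chosen here:

```shell
#!/bin/sh
# flags_for: return the CFLAGS set described above for a given CPU family.
flags_for() {
    tail='-fmessage-length=0 -funroll-all-loops -fomit-frame-pointer -falign-loops=2 -falign-jumps=2 -falign-functions=2'
    case "$1" in
        pentium2) echo "-O3 -march=i686 $tail" ;;
        athlon)   echo "-O3 -march=athlon $tail" ;;
        *)        echo "-O3 $tail" ;;   # Pentium I and old AMDs
    esac
}

export CFLAGS="$(flags_for pentium2)"
```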

Then run

make
The make procedure should run through all the directories and put binaries in the i686 directories. When we originally supported Alpha it was convenient to compile Alpha and i686 binaries simultaneously, in different directories, so all the binaries are put in subdirectories.

Once finished run

make install

to install the binaries. The output is put in the following directories:

The main binaries are /usr/bin/cinelerra and several utilities for reading MPEG transport streams.

Run Cinelerra by running

cinelerra


Because of the variety of uses, Cinelerra cannot be run optimally without some intimate configuration for your specific needs. Very few parameters are adjustable at compile time. Because of the multitude of parameters, runtime configuration is the only option for most of them.

Go to settings->preferences and run through the options.





These determine what happens when you play sound from the timeline.



These determine what happens when you play video from the timeline.





These determine what happens when you record audio.



These determine what happens when you record video.



You'll spend most of your time configuring this section. The performance section mainly focuses on rendering parameters not available in the rendering dialog.



Background rendering was originally conceived to allow HDTV effects to be displayed in realtime. It causes temporary output to be rendered constantly while the timeline is being modified. The temporary output is played during playback whenever possible. It's very useful for transitions and for previewing effects that are too slow to display in a reasonable amount of time. If renderfarm is enabled, the renderfarm is used for background rendering, giving you the potential for realtime effects if enough network bandwidth and CPU nodes exist.



To use the renderfarm, set these options. Ignore them for a standalone system.



These parameters affect purely how the user interface works.



When Cinelerra first starts, you'll get four main windows. Hitting CTRL-w in any window closes it.

Under the Window menu you'll find options affecting the main windows. default positions repositions all the windows to a 4 screen editing configuration. On dual headed displays, the default positions operation fills only one monitor with windows.

An additional window, the levels window can be brought up from the Window menu. The levels window displays the output audio levels after all mixing is done.





All data that you work with in Cinelerra is acquired either by recording from a device or by loading from disk. This section describes loading.

The loading and playing of files is just as you would expect. Just go to file->Load, select a file for loading, and hit ok. Hit the forward play button and it should start playing, regardless of whether a progress bar has popped up.

Another way to load files is to pass the filenames as arguments on the command line. This creates new tracks for every file and starts the program with all the arguments loaded.

If the file is a still image, the project's attributes are not changed and the first frame of the track becomes the image. If the file has audio, Cinelerra may build an index file for it to speed up drawing. You can edit and play the file while the index file is being built.



The format of the file affects what Cinelerra does with it. Some formats replace all the project settings. Others just insert data with the existing project settings. If your project sample rate is 48 kHz and you load a sound file at 96 kHz, you'll still be playing it at 48 kHz. XML files, however, replace the project settings. If you load an XML file at 96 kHz and the current project sample rate is 48 kHz, the project changes to 96 kHz. Supported file formats are currently:



Usually three things happen when you load a file. First the existing project is cleared from the screen, second the project's attributes are changed to match the file's, and finally the new file's tracks are created in the timeline.

But Cinelerra lets you change what happens when you load a file.

In the file selection box go to the Insertion strategy box and select it. Each of these options loads the file a different way.

The insertion strategy is a recurring option in many of Cinelerra's functions. In each place the options do the same thing. With these options you can almost do all your editing by loading files.

If you load files by passing command line arguments to Cinelerra, the files are loaded with Replace current project rules.



In the file selection box go to the list of files. Select a file. Go to another file and select it while holding down CTRL. This selects one additional file. Go to another file and select it while holding down SHIFT. This selects every intervening file. This behavior is available in most every list box.

To create a song playlist, select a number of mp3 files, then select Replace current project and concatenate tracks as the insertion strategy.



There is one special XML file on disk at all times. After every editing operation Cinelerra saves the current project to a backup in $HOME/.bcast/backup.xml. In the event of a crash go to file->load backup to load the backup. It is important after a crash to reboot Cinelerra without performing any editing operations. Loading the backup should be the first operation or you'll overwrite the backup.
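Since any editing operation overwrites the backup, you may want to preserve it under another name before relaunching. A minimal sketch, assuming the default $HOME/.bcast location described above:

```shell
#!/bin/sh
# preserve_backup: copy Cinelerra's crash backup to a timestamped file so
# a stray editing operation can't overwrite it.
preserve_backup() {
    backup="$HOME/.bcast/backup.xml"
    if [ -f "$backup" ]; then
        cp "$backup" "$HOME/backup-$(date +%Y%m%d-%H%M%S).xml"
    fi
}
```

Run preserve_backup before starting Cinelerra again, then use file->load backup as usual.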



When Cinelerra saves a file it saves an edit decision list of the current project but doesn't save any media. Go to File->save as.... Select a file to overwrite or enter a new file. Cinelerra automatically concatenates .xml to the filename if no .xml extension is given.

The saved file contains all the project settings and locations of every edit but instead of media it contains pointers to the original media files on disk.

For each media file the XML file stores either an absolute path or just the relative path. If the media is in the same directory as the XML file a relative path is saved. If it's in a different directory an absolute path is saved.

In order to move XML files around without breaking the media linkages, you either need to keep the media in the same directory as the XML file forever, or save the XML file in a different directory than the media and never move the media again.

If you want to create an audio playlist and burn it on CD-ROM, save the XML file in the same directory as the audio files and burn the entire directory. This keeps the media paths relative.
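The directory layout described above can be prepared with a small helper before saving the XML. A sketch only; the directory and file names are illustrative:

```shell
#!/bin/sh
# gather_playlist: copy audio files into the directory that will hold the
# project XML, so Cinelerra records relative media paths.
gather_playlist() {          # usage: gather_playlist destdir file...
    dest="$1"; shift
    mkdir -p "$dest"
    cp "$@" "$dest"/
}
# e.g.  gather_playlist album song1.wav song2.wav
# then save the project as album/album.xml and burn the whole directory
```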

XML files are useful for saving the current state before going to sleep and saving audio playlists but they're limited in that they're specific to Cinelerra. You can't play XML files in a dedicated movie player. Realtime effects in an XML file have to be resynthesized every time you play it back. The XML file also requires you to maintain copies of all the source assets on hard drives, which can take up space and cost a lot of electricity to spin. For a more persistent storage of the output there's rendering.



Rendering takes a section of the timeline, performs all the editing, effects and compositing, and stores it in a pure movie file. You can then delete all the source assets, play the rendered file in a movie player, or bring it back into Cinelerra for more editing. It's very difficult to retouch any editing decisions in the pure movie file, however, so keep the original assets and XML file around several days after you render it.

To begin a render operation you need to define a region of the timeline to render. The navigation section describes methods of defining regions. See NAVIGATING THE PROJECT. When a region is highlighted or in/out points are set, the affected region is rendered. When no region is highlighted, everything after the insertion point is rendered.

Go to File->render to bring up the render dialog. Select the magnifying glass to bring up a file selection dialog. This determines the filename to write the rendered file to.

In the render dialog select a format from the File Format menu. The format of the file determines whether you can render audio or video or both. Select Render audio tracks to generate audio tracks and Render video tracks to generate video tracks. Select the wrench next to each toggle to set compression parameters. If the file format can't store audio or video the compression parameters will be blank. If Render audio tracks or Render video tracks is selected and the file format doesn't support it, trying to render will pop up an error.

The Create new file at each label option causes a new file to be created when every label in the timeline is encountered. This is useful for dividing long audio recordings into individual tracks. When using the renderfarm, Create new file at each label causes one renderfarm job to be created at every label instead of using the internal load balancing algorithm to space jobs.

When Create new file at each label is selected, a new filename is created for every output file. If the filename given in the render dialog has a 2 digit number in it, the 2 digit number is overwritten with a different incremental number for every output file. If no 2 digit number is given, Cinelerra automatically concatenates a number to the end of the given filename for every output file.

In the filename /hmov/track01.wav the 01 would be overwritten for every output file. The filename /hmov/track.wav, however, would become /hmov/track.wav001 and so on. Filename regeneration is only used when either renderfarm mode is active or creating new files for every label is active.
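The naming rule can be modeled with a small shell function. The substitution itself happens inside Cinelerra; this only mimics the behavior for illustration:

```shell
#!/bin/sh
# number_output: model of Cinelerra's output-file numbering.  If the base
# name contains a two-digit number, it is replaced with a zero-padded
# index; otherwise a three-digit counter is appended.
number_output() {            # $1 = base filename, $2 = file index
    case "$1" in
        *[0-9][0-9]*)
            printf '%s\n' "$1" | sed "s/[0-9][0-9]/$(printf '%02d' "$2")/"
            ;;
        *)
            printf '%s%03d\n' "$1" "$2"
            ;;
    esac
}
```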

Finally the render dialog lets you select an insertion mode. The insertion modes are the same as with loading files. In this case if you select insert nothing the file will be written out to disk without changing the current project. For other insertion strategies be sure to prepare the timeline to have the output inserted at the right position before the rendering operation is finished. See EDITING. Editing describes how to cause output to be inserted at the right position.

It should be noted that even if you only have audio or only have video rendered, a paste insertion strategy will behave like a normal paste operation, erasing any selected region of the timeline and pasting just the data that was rendered. If you render only audio and have some video tracks armed, the video tracks will get truncated while the audio output is pasted into the audio tracks.



When bicubic interpolation and HDTV were first done in Cinelerra, the time needed to produce the simplest output became unbearable even on the fastest dual 1.7 GHz Xeon of the time. Renderfarm support, even in its simplest form, brings HDTV rendering times back in line with SD, while making SD faster than realtime.

While the renderfarm interface isn't spectacular, it's simple enough to use inside an editing suite with less than a dozen nodes without going through the same amount of hassle you would with a several hundred node farm. Renderfarm is invoked transparently for all file->render operations when it is enabled in the preferences.

It should be noted that Create new file at each label causes a new renderfarm job to be created at each label instead of the default load balancing. If this option is selected when no labels exist, only one job will be created.

A Cinelerra renderfarm is organized into a master node and any number of slave nodes. The master node is the computer which is running the GUI. The slave nodes are anywhere else on the network and are run from the command line.

Cinelerra divides the selected region of the timeline into a certain number of jobs which are then dispatched to the different nodes depending on the load balance. The nodes process the jobs and write their output to individual files on the filesystem. The output files are not concatenated. It's important for all the nodes and the master node to use the same filesystem for assets, mounted over the network.

Since most of the time you'll want to bring in the rendered output and fine tune it on the timeline, the jobs are left in individual files. You can load these using concatenate mode and render them again with renderfarm disabled. If the track and output dimensions equal the asset dimensions, Cinelerra will do a direct copy of all the jobs into a single file. Note that direct copying doesn't work for MPEG Video. MPEG has the distinction that you can concatenate the subfiles with the UNIX cat utility.
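Assuming the renderfarm wrote numbered MPEG video subfiles, joining them is plain concatenation; the filenames here are placeholders:

```shell
#!/bin/sh
# join_segments: concatenate MPEG video subfiles, in order, into a single
# stream.  This works for MPEG because the stream can simply be appended.
join_segments() {            # usage: join_segments out.m2v seg1 seg2 ...
    out="$1"; shift
    cat "$@" > "$out"
}
# e.g.  join_segments complete.m2v output01.m2v output02.m2v output03.m2v
```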

Configuration of the renderfarm is described in the configuration chapter See RENDERFARM. The slave nodes traditionally read and write data to a common filesystem over a network, thus they don't need hard drives.

Ideally all the nodes on the renderfarm have similar CPU performance. Cinelerra load balances on a first come first serve basis. If the last segment is dispatched to the slowest node, all the fastest nodes may end up waiting for the slowest node to finish while they themselves could have rendered it faster.



The thing you want to do most of the time is get to a certain time and place in the media. Internally the media is organized into tracks. Each track extends across time. Navigation involves both getting to a track and getting to a certain time in the track.



The program window contains many features for navigation and displays the timeline as it is structured in memory: tracks stacked vertically and extending across time. The horizontal scroll bar allows you to scan across time. The vertical scroll bar allows you to scan across tracks.

Below the timeline you'll find the zoom panel. The zoom panel contains values for sample zoom, amplitude, and track zoom. These values in addition to the scrollbars are all that's needed to position the timeline.


Changing the sample zoom causes the amount of time visible to change. If your mouse has a wheel and it works in X11, hover over the tumblers and use the wheel to zoom in and out.

The amplitude only affects audio. It determines how big the waveform is if the waveform is drawn.

The track zoom affects all tracks. It determines the height of each track. If you change the track zoom the amplitude zoom compensates so audio waveforms look proportional.

In addition to the graphical tools, you'll probably more often use the keyboard to navigate. Use PAGE UP and PAGE DOWN to scroll up and down the tracks.

Use the LEFT and RIGHT arrows to move across time. You'll often need to scroll beyond the end of the timeline, but scrollbars won't let you do it. Instead, use the RIGHT arrow to scroll past the end of the timeline.

Use the UP and DOWN arrows to change the sample zoom by a power of 2.

CTRL-UP and CTRL-DOWN cause the amplitude zoom to change.

CTRL-PGUP and CTRL-PGDOWN cause the track zoom to change.



By default you'll see a flashing insertion point in the program window the first time you boot it up. This is where new media is pasted onto the timeline. It's also the starting point of all playback operations. When rendering it defines the region of the timeline to be rendered.

The insertion point is normally moved by clicking inside the timebar. Any region of the timebar not obscured by labels and in/out points is a hotspot for repositioning the insertion point.

main_timebar.png The main timebar

The insertion point also can be moved by clicking in the timeline itself, but not always. The insertion point has two modes of operation:

The mode of operation is determined by selecting the arrow or the i-beam in the buttonbar.

editing_mode.png The editing mode buttons

If the arrow is highlighted it enables drag and drop mode. In drag and drop mode, clicking in the timeline doesn't reposition the insertion point. Instead it selects an entire edit. Dragging in the timeline repositions the edit, snapping it to other edit boundaries. This is normally useful for reordering audio playlists and moving effects around.

If the i-beam is highlighted it enables cut and paste mode. In cut and paste mode clicking in the timeline repositions the insertion point. Dragging in the timeline highlights a region. The highlighted region becomes the playback range during the next playback operation, the rendered range during the next render operation, and the region affected by cut and paste operations.

Shift-clicking in the timeline extends the highlighted region.

Double-clicking in the timeline selects the entire edit the cursor is over.

It should be noted that when moving the insertion point and selecting regions, the positions are either aligned to frames or aligned to samples. When editing video you'll want to align to frames. When editing audio you'll want to align to samples. This is set in settings->align cursor on frames.

If the highlighted region is the region affected by cut and paste operations, how do I cut and paste in drag and drop mode? In this case you need to set in/out points to define an affected region.



In both editing modes you can set in/out points. The in/out points define the affected region. In drag and drop mode they are the only way to define an affected region. In both cut and paste mode and drag and drop mode they override the highlighted area. If a highlighted area and in/out points are set, the highlighted area affects playback while the in/out points affect editing operations. To avoid confusion it's best to use either highlighting or in/out points but not both simultaneously.

To set in/out points go to the timebar and position the insertion point somewhere. Hit the in_point_button.png in point button. Go to a position after the in point and hit the out_point_button.png out point button.

inout_points.png Timebar with in/out points set.

Select either the in point or the out point and the insertion point jumps to that location. After selecting an in point, if you hit the in point button the in point will be deleted. After selecting an out point, if you hit the out point button the out point will be deleted.

If you select a region somewhere else while in/out points already exist, the existing points will be repositioned when you hit the in/out buttons.

Shift-clicking on an in/out point extends the highlighted region to that point.

Instead of using the button bar you can use the [ and ] keys to toggle in/out points.

The insertion point and the in/out points allow you to define an affected region but they don't let you jump to exact points on the timeline very easily. For this purpose there are labels.



Labels are an easy way to set exact locations on the timeline you want to jump to. When you position the insertion point somewhere and hit the label_button.png label button a new label appears on the timeline.

timebar_label.png Timebar with a label on it

No matter what the zoom settings are, clicking on the label positions the insertion point exactly where you set it. Hitting the label button again when a label is selected deletes it.

Shift-clicking on a label extends the highlighted region.

Double-clicking between two labels highlights the region between the labels.

Hitting the l key has the same effect as the label button.

If you hit the label button when a region is highlighted, two labels are toggled at each end of the highlighted region. If one end already has a label, then the existing label is deleted and a label is created at the opposite end.

Labels can reposition the insertion point when they are selected but they can also be traversed with the label_traversal.png label traversal buttons. When a label is out of view, the label traversal buttons reposition the timeline so the label is visible. There are keyboard shortcuts for label traversal, too.

CTRL-LEFT repositions the insertion point on the previous label.

CTRL-RIGHT repositions the insertion point on the next label.

With label traversal you can quickly seek back and forth on the timeline but you can also select regions.

SHIFT-CTRL-LEFT extends the highlighted region to the previous label.

SHIFT-CTRL-RIGHT extends the highlighted region to the next label.

Manually hitting the label button or l key over and over again to delete a series of labels can get tedious. For deleting a set of labels, first highlight a region and second use the Edit->Clear labels function. If in/out points exist, the labels between the in/out points are cleared and the highlighted region ignored.



The navigation features of the Viewer and Compositor behave very similarly. Each has a timebar and slider below the video output. The timebar and slider are critical for navigation.


The timebar represents the entire time covered by the program. When you define labels and in/out points, the timebar displays them, too. Finally, the timebar defines a region known as the preview region.

The preview region is the region of the timeline which the slider affects. The slider only covers the time covered by the preview region. By using a preview region inside the entire program and using the slider inside the preview region, you can quickly and precisely seek in the compositor and viewer.

When you replace the current project with a file, the preview region automatically resizes to cover the entire file. When you append data or change the size of the current project, the preview region stays the same size and covers a shrinking fraction of the project. Therefore, you need to resize the preview region.

Load a file and then slide around it using the compositor slider. The insertion point in the main window follows the compositor. Move the pointer over the compositor's timebar until it turns into a left resize pointer. Then click and drag right. The preview region should have changed and the slider resized proportionally.

Go to the right of the timebar until a right resize pointer appears. Drag left so the preview region shrinks.

Go to the center of the preview region in the timebar and drag it around to convince yourself that it can be moved.


Preview region in compositor

If you go to the slider and slide it around with the preview region shrunk, you'll see the slider only affects the preview region. The timebar and slider in the viewer window work exactly the same.

Labels and in/out points are fully supported in the viewer and compositor. The only difference between the viewer and compositor is the compositor reflects the state of the program while the viewer reflects the state of a clip but not the program.

When you hit the label button in the compositor, the label appears both in the compositor timebar and the program timebar.

When you select a label or in/out point in the compositor, the program window jumps to that position.

viewer_labels.png Labels and in/out points in the viewer.

In the viewer and compositor, labels and in/out points are displayed in the timebar. Instead of displaying just a region of the program, the timebar displays the entire program here.

Like the Program window, the Compositor has a zoom capability. First, the pulldown menu on the bottom of the compositor window has a number of zoom options. When set to Auto, the video is zoomed to match the compositor window size as closely as possible. When set to any other percentage, the video is zoomed a power of 2 and scrollbars can be used to scroll around the output. When the video is zoomed bigger than the window size, not only do the scrollbars scan around it, but dragging with the middle mouse button in the video output also scans around it. This is exactly what The Gimp does.

Furthermore, the zoom magnify.png toggle causes the Compositor window to enter zoom mode. In zoom mode, clicking in the video output zooms in while ctrl-clicking in the video output zooms out. If you have a wheel mouse, rotating the wheel zooms in or out too.

Zooming in or out with the zoom tool does not change the rendered output, mind you. It's merely for scrutinizing video or fitting it in the desktop.



The resource window is divided into two areas. One area lists folders and another area lists folder contents. Going into the folder list and clicking on a folder updates the contents area with the contents of that folder.

The folder and contents can be displayed as icons or text.

Right clicking in the folder or contents area brings up a menu containing formatting options. Select Display text to display a text listing. Select Sort items to sort the contents of the folder alphabetically.



Transport controls are just as useful in navigation as they are in playing back footage, hence they are described here. Each of the Viewer, Compositor, and Program windows has a transport panel.

transport_panel.png The transport panel.

The transport panel is controlled by the keyboard as well as the graphical interface. For each of the operations it performs, the starting position is the position of the insertion point or slider. The ending position is either the end or start of the timeline or the end or start of the selected region if there is one.

The orientation of the end or start depends on the direction of playback. If it's forward the end position is the end of the selected region. If it's backward the end position is the start of the selected region.

The insertion point moves to track playback. When playback stops it leaves the insertion point where it stopped. Thus, by playing back you change the position of the insertion point.

The keyboard interface is usually the fastest and has more speeds. The transport keys are arranged in a T on the number pad.

Hitting any transport key twice pauses playback.

When using frame advance functions the behavior may seem odd. If you frame advance forward and then frame advance backward, the displayed frame doesn't change. This is because the playback position isn't the frame but the time between two frames. The rendered frame is the area that the playback position crosses. When you increment the time between two frames by one and decrement it by one, you cross the same frame both times and so the same frame is displayed.



Background rendering allows impossibly slow effects to play back in realtime shortly after the effect is pasted in the timeline. It continuously renders temporary output. When renderfarm is enabled, background rendering uses the renderfarm continuously. This way, any size video can be seen in realtime merely by creating a fast enough network with enough nodes.

Background rendering is enabled in settings->preferences->performance. It has one interactive function: settings->set background render. This sets the point where background rendering begins to where the in point is. If any video exists, a red bar appears in the time bar showing what has been background rendered.

It's often useful to insert an effect or a transition and then select settings->set background render just before the effect to preview it at full framerate.



Editing comprises both the time domain and the track domain. Since the timeline consists of a stack of tracks, you need to worry about how to sort and create tracks in addition to what time certain media appears on a track.

In the time domain, Cinelerra offers many ways to approach the editing process. The three main methods are two screen editing, drag and drop editing, and cut and paste editing.

There are several concepts Cinelerra uses when editing which apply to all the methods. The timeline is where all editing decisions are represented. Every track on the timeline has a set of attributes on the left, the most important of which is the arm track attribute.

track_attributes.png Track attributes

Only the armed tracks are affected by editing operations. Make sure you have enough armed destination tracks when you paste or splice material or some tracks in the material will get left out.

The other attributes affect the output of the track.

There are two ways to set the same attribute on multiple tracks very quickly. Hold down shift while clicking a track's attribute to match the same attribute in all the other tracks. If you don't want to affect all the other tracks, click on an attribute and drag across other tracks to have the same attribute set in them.

In addition to restricting editing operations, the armed tracks in combination with the active region determine where material is inserted when loading files. If the files are loaded with one of the insertion strategies which doesn't delete the existing project, the armed tracks will be used as destination tracks.

The active region is the range of time affected by editing operations on the timeline. The active region is determined first by the presence of in/out points in the timeline. If those don't exist, the highlighted region is used. If no highlighted region exists, the insertion point is used as the active region and the active length is 0.
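The precedence order above can be sketched as a small function. This is a hypothetical illustration of the rule, not Cinelerra's implementation; the names and the `(start, end)` tuple representation are assumptions for clarity:

```python
# Hypothetical sketch of active-region precedence: in/out points win,
# then the highlighted region, then a zero-length insertion point.

def active_region(in_out=None, highlight=None, insertion=0.0):
    """Each region is a (start, end) tuple in seconds, or None."""
    if in_out is not None:
        return in_out
    if highlight is not None:
        return highlight
    return (insertion, insertion)   # zero-length region at the insertion point

print(active_region(in_out=(1.0, 2.0), highlight=(3.0, 4.0)))  # in/out wins
print(active_region(highlight=(3.0, 4.0)))                     # highlight used
print(active_region(insertion=5.0))                            # zero-length fallback
```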

Finally, editing decisions never affect source material. Editing only affects pointers to source material, so if you want to have a media file at the end of your editing session which represents the editing decisions, you need to render it. See RENDERING FILES.



Tracks in Cinelerra contain either audio or video. There is no special designation for tracks other than the type of media they contain. When you create a new project, it contains a certain number of default tracks. You can add or delete tracks from a number of menus. The Tracks menu contains a number of options for dealing with multiple tracks simultaneously. Each track also has a popup menu which affects only that track.

Bring up the popup menu by moving over a track and right clicking. The popup menu affects the track whether it's armed or not.

Move up and move down moves the one track up or down in the stack. Delete track deletes the track.

Operations in the Tracks menu affect only tracks which are armed.

Move tracks up and Move tracks down shift all the armed tracks up or down the stack.

Delete tracks deletes the armed tracks.

Delete last track deletes the last track, whether it's armed or not. Holding down the d key quickly deletes all the tracks.

Concatenate tracks is more complicated. It takes every playable track and concatenates it to the end of the first armed tracks. If there are two armed tracks followed by two playable tracks, the concatenate operation puts the two playable tracks after the two armed tracks. If there are three playable tracks instead, two tracks are put after the armed tracks and a third track is put on the end of the first armed track. The destination track wraps around until all the playable tracks are concatenated.
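The wrap-around behavior described above can be sketched in a few lines. This is a hypothetical illustration of the rule, not Cinelerra's code; tracks are modeled as plain lists of edits:

```python
# Hypothetical sketch of Concatenate tracks: each playable track is
# appended to an armed track in order, and the destination index
# wraps around when playable tracks outnumber armed tracks.

def concatenate(armed, playable):
    """armed, playable: lists of tracks, each track a list of edits."""
    for i, src in enumerate(playable):
        armed[i % len(armed)].extend(src)
    return armed

# Two armed tracks, three playable tracks: the third playable track
# wraps back onto the first armed track.
result = concatenate([["a1"], ["a2"]], [["p1"], ["p2"], ["p3"]])
print(result)  # [['a1', 'p1', 'p3'], ['a2', 'p2']]
```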

Finally, you'll want to create new tracks. The Audio and Video menus each contain an option to add a track of their specific type. In the case of audio, the new track is put on the bottom of the timeline and the output channel of the audio track is incremented by one. In the case of video, the new track is put on the top of the timeline. This way, video has a natural compositing order. New video tracks are overlayed on top of old tracks.



This is the fastest way to construct a program out of movie files. The idea consists of viewing a movie file in one window and viewing the program in another window. Sections of the movie file are defined in one window and transferred to the end of the program in the other window.

The way to begin a two screen editing session is to load some resources. In file->load, load some movies with the insertion mode set to Create new resources. You want the timeline to stay unchanged while the new resources are brought in. Go to the Resource Window and select the media folder. The newly loaded resources should appear. Drag a resource from the media side of the window over the Viewer window.

There should be enough armed tracks on the timeline to put the sections of source material that you want. If there aren't, create new tracks or arm more tracks.

In the viewer window seek to the starting point of a clip you want to use. Use either the slider or the transport controls. Use the preview region to narrow down the search. Set the starting point with the in_point_button.png in point button.

Seek to the ending point of the clip you want to use. Set the ending point with the out_point_button.png out point button. The two points should now appear on the timebar and define a clip.

There are several things you can do with the clip now.

Two screen editing can be done purely with keyboard shortcuts. When you move the pointer over any button, a tooltip should appear, showing what key is bound to that button. In the Viewer window, the number pad keys control the transport and the [ ] v keys set in/out points and perform splicing.



The answer is yes, you can create a bunch of clips and drag them onto the timeline. You can also drag edits around the timeline.

Load some files using file->load. Set the insertion mode to Create new resources. This loads the files into the Resource Window. Create some audio and video tracks on the timeline using the video and audio menus.

Open the Media folder in the resource window. Drag a media file from the resource window to the timeline. If the media has video, drag it onto a video track. If the media is pure audio, drag it onto an audio track.

Cinelerra fills out the audio and video tracks below the dragging cursor with data from the file. This affects what tracks you should create initially and which track to drag the media onto. If the media has one video track and two audio tracks, you'll need one video track and two audio tracks on the timeline and the media should be dragged over the first video track. If the media has audio only you'll need one audio track on the timeline for every audio track in the media and the media should be dragged over the first audio track.

When dragging, the media snaps to the start of the track if the track is empty. If there are edits on the track, the media snaps to the nearest edit boundary.

You can also drag multiple files from the resource window. Either draw a box around the files, use SHIFT, or use CTRL when selecting files. When you drop the files on the timeline, they are concatenated. The behavior of SHIFT and CTRL changes depending on whether the resources are displayed as text or icons.

To display the resources as text or icons, right click inside the media list. Select either display icons or display text to change the list format.

When displaying text in the resource window SHIFT-clicking on media files extends the number of highlighted selections. CTRL-clicking on media files in text mode selects additional files one at a time.

When displaying icons in the resource window SHIFT-clicking or CTRL-clicking selects media files one at a time.

In addition to dragging media files, if you create clips and open the clip folder you can drag clips on the timeline.

In the timeline there is further dragging functionality. To enable the dragging functionality of the timeline, select the arrow toggle arrow.png. Move over an edit and drag it. If more than one track is armed, Cinelerra will drag any edits which start on the same position as the edit the cursor is currently over. During a dragging operation the edit snaps to the nearest boundary.

Dragging edits around the timeline allows you to sort music playlists, sort movie scenes, and give better NAB demos but not much else.



This is the traditional method of editing in audio editors. In the case of Cinelerra, you either need to start a second copy of Cinelerra and copy from one copy to the other, copy from different tracks in the same copy, or load a media file into the Viewer and copy from there.

Load some files onto the timeline. To perform cut and paste editing select the ibeam.png i-beam toggle. Select a region of the timeline and select the cut.png cut button to cut it. Move the insertion point to another point in the timeline and select the paste.png paste button. Assuming no in/out points are defined on the timeline this performs a cut and paste operation.

If in/out points are defined, the insertion point and highlighted region are overridden by the in/out points for clipboard operations. Thus, with in/out points you can perform cut and paste in drag and drop mode as well as cut and paste mode.

When editing audio, it is customary to cut from one part of a waveform into the same part of another waveform. The start and stop points of the cut are identical in each waveform and might be offset slightly, while the wave data is different. It would be very hard to highlight one waveform to cut it and highlight the second waveform to paste it without changing the relative start and stop positions.

One option for simplifying this is to open a second copy of Cinelerra, cutting and pasting to transport media between the two copies. This way two highlighted regions can exist simultaneously.

Another option is to set in/out points for the source region of the source waveform and set labels for the destination region of the destination waveform. Perform a cut, clear the in/out points, select the region between the labels, and perform a paste.

A final operation in cut and paste editing is the edit->clear operation. If a region is highlighted or in/out points exist, the affected region is cleared by edit->clear. But if the insertion point is over an edit boundary and the edits on each side of the boundary come from the same resource, the edits are combined into a single edit of that resource. The start of this edit is the start of the first edit and its end is the end of the second edit. This either results in the edit expanding or shrinking.
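The merge case can be sketched as follows. This is a hypothetical model of the rule just described, not Cinelerra's code; edits are represented as plain dictionaries for illustration:

```python
# Hypothetical sketch of edit->clear at a boundary: two adjacent edits
# referencing the same resource are combined into one spanning both.

def clear_at_boundary(left, right):
    """Each edit is a dict with 'resource', 'start', 'end' (timeline secs)."""
    if left["resource"] == right["resource"]:
        return [{"resource": left["resource"],
                 "start": left["start"], "end": right["end"]}]
    return [left, right]   # different resources: nothing to merge

merged = clear_at_boundary({"resource": "a.mov", "start": 0.0, "end": 4.0},
                           {"resource": "a.mov", "start": 4.0, "end": 9.0})
print(merged)  # one edit from 0.0 to 9.0
```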



With some edits on the timeline it's possible to do trimming. By trimming you shrink or grow the edit boundaries by dragging them. In either drag and drop mode or cut and paste mode, move the cursor over an edit boundary until it changes shape. The cursor will either be an expand left or an expand right. If the cursor is an expand left, the dragging operation affects the beginning of the edit. If the cursor is an expand right, the dragging operation affects the end of the edit.

When you click on an edit boundary to start dragging, the mouse button number determines which dragging behavior is going to be followed. Three possible behaviors are bound to mouse buttons in the interface preferences. See INTERFACE.

The effect of each drag operation not only depends on the behavior button but whether the beginning or end of the edit is being dragged. When you release the mouse button, the trimming operation is performed.

In a Drag all following edits operation, the beginning of the edit either cuts data from the edit if you move it forward or pastes new data from before the edit if you move it backward. The end of the edit pastes data into the edit if you move it forward or cuts data from the end of the edit if you move it backward. All the edits thereafter shift. Finally, if you drag the end of the edit past the start of the edit, the edit is deleted.

In a Drag only one edit operation, the behavior is the same when you drag the beginning or end of an edit. The only difference is none of the other edits in the track shift. Instead, anything adjacent to the current edit expands or shrinks to fill gaps left by the drag operation.

In a Drag source only operation, nothing is cut or pasted. If you move the beginning or end of the edit forward, the source reference in the edit shifts forward. If you move the beginning or end of the edit backward, the source reference shifts backward. Where the edit appears in the timeline remains the same but the source shifts.

For all file formats besides still images, the extent of the trimming operation is clamped to the source file length. Attempting to drag the start of the edit beyond the start of the source clamps it to the source start.
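The clamping behavior amounts to a simple bound on the source position. This is a hypothetical sketch of the rule, not Cinelerra's implementation:

```python
# Hypothetical sketch of trim clamping: the requested source position
# can't go below 0 or beyond the source length.

def clamp_trim(requested_source_pos, source_length):
    return max(0.0, min(requested_source_pos, source_length))

print(clamp_trim(-2.5, 10.0))  # dragging before the source start clamps to 0.0
print(clamp_trim(12.0, 10.0))  # dragging past the source end clamps to 10.0
print(clamp_trim(5.0, 10.0))   # positions inside the source pass through: 5.0
```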

In all trimming operations, all edits which start on the same position as the cursor when the drag operation begins are affected. Unarm tracks to prevent edits from getting affected.

Node:USING EFFECTS, Next:, Previous:EDITING, Up:Top


It would be sufficient to perform all changes to the timeline using editing operations, but this isn't very extensible. Certain timeline changes should produce a different effect in the output without involving a unique procedure to apply each change. This is why we have effects.

Effects fall into three categories, and each effect in a category is applied using the same procedure.



These are layered under the track they apply to. They process the track when the track is played back, with no permanent storage of the output except when the project is rendered.

All the realtime effects are listed in the resource window, divided into two groups: audio effects and video effects. Audio effects should be dragged from the resource window onto audio tracks. Video effects should be dragged onto video tracks.

If there is data on the destination track, the effect is applied to the entire track. If there is no data on the track the effect is deleted. Finally, if a region of the track is selected the effect is pasted into the region, regardless of whether there is data.

Some of the effects don't process data but synthesize data. In the case of a synthesis effect, you'll want to select a region of the track so the dragging operation pastes it without deleting it.

When dragging more than one effect onto a track, you'll see the effects layering from top to bottom, on the bottom of the track. When the track is played back, effects are processed from top to bottom. The output of the top effect becomes the input of the bottom effect and so on and so forth.
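The top-to-bottom chaining can be sketched as a fold over the effect stack. This is a hypothetical illustration of the processing order, not Cinelerra's plugin API; the example effects are made up:

```python
# Hypothetical sketch of effect stacking: each effect's output becomes
# the next effect's input, processed top to bottom.

def process_track(samples, effects):
    """effects: list of callables applied in stack order."""
    for effect in effects:
        samples = effect(samples)
    return samples

gain = lambda xs: [x * 2.0 for x in xs]          # top effect: double the level
clip = lambda xs: [min(x, 1.0) for x in xs]      # bottom effect: hard limit

chained = process_track([0.25, 0.75], [gain, clip])
print(chained)  # [0.5, 1.0] -- gain doubles, then the limiter clips 1.5 to 1.0
```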

In addition to dragging from the resource window, effects may be applied to a track by a popup menu. Right click on a track and select Attach effect from the popup. The attach effect dialog gives you more control than pure dragging and dropping. For one thing, the attach effect dialog lets you attach two more types of effects: shared effects and shared tracks. Select a plugin from the Plugins column and hit Attach under the plugins column to attach it. The effect is the same as if the effect was dragged from the resource window.

When an effect exists under a track, it most often needs to be configured. Go to the effect and right click on it to bring up the effect popup. In the effect popup is a show option. The show option causes the GUI for the effect to appear under the cursor. Most effects have GUIs but some don't. If the effect doesn't have a GUI, nothing pops up when the show option is selected. When you tweak parameters in the effect GUI, the parameters normally affect the entire duration of the effect.



The two other effect types supported by the Attach Effect dialog are recycled effects. In order to use a recycled effect, three requirements must be met:

In the case of a shared effect, these conditions must be true. In the case of a shared track, there merely must be another track on the timeline of the same type as the track you're applying an effect to. If you right clicked on a video track to attach an effect, there won't be anything in the shared tracks column if no other video track exists. If you right clicked on an audio track there won't be anything in the shared track column if no other audio track exists.

If shared effects or shared tracks are available, they appear in the shared effects and shared tracks columns. The attach button under each column causes anything highlighted in the column to be attached under the current track.

Shared effects and shared tracks allow very unique things to be done. In the case of a shared effect, the shared effect is treated like a copy of the original effect except in the shared effect the GUI can't be brought up. All configuration of the shared effect is determined by the GUI of the original effect and only the GUI of the original effect can be brought up.

When a shared effect is played back, it's processed just like a normal effect except the configuration is copied from the original effect. Some effects detect when they are being shared, like the reverb effects and the compressor. These effects determine what tracks are sharing them and either mix the two tracks together or use one track to stage some value. The reverb mixes tracks together to simulate ambience. The compressor uses one of the sharing tracks as the trigger.

When an original track has a shared track as one of its effects, the shared track itself is used as a realtime effect. This is more commonly known as bouncing tracks but Cinelerra achieves the same operation by attaching shared tracks. The fade and any effects in the shared track are applied to the original track. Once the shared track has processed the data, the original track performs any effects which come below the shared track and then composites it on the output.

In addition, once the shared track has processed the output of the original track like a realtime effect, the shared track mixes itself into the output with its settings for pan, mode, and projector. Thus, two tracks are mixing the same data on the output. Most of the time you don't want the shared track to mix the same data as the original track on the output. You want it to stop right before the mixing stage and give the data back to the original track. Do this by enabling the mutepatch_up.png mute toggle next to each track you don't want mixed on the output.

Suppose you were making video and you did want the shared track to composite the original track's data on the output a second time. In the case of video, the video from the shared track would always appear under the video from the original track, regardless of whether it was on top of the original track. This is because shared tracks are composited in order of their attachment. Since it's part of the original track it has to be composited before the original track is composited.



Many operations exist for manipulating effects once they are in the timeline. Because mixing effects and media is such complex business, the methods used in editing effects aren't as concise as cutting and pasting. Some of the editing happens by dragging in/out points, some of the editing happens through popup menus, and some of it happens by dragging effects.

Normally when you edit tracks, the effects follow the editing decisions. If you cut from a track, the effect shrinks. If you drag edit in/out points, the effect changes length. This behavior can be disabled by selecting Settings->edit effects in the project window. This decouples effects from editing operations, but what if you just want to edit the effects?

Move the timeline cursor over the effect borders until it changes to a resize left or resize right icon. In this state, if you drag the end of the effect, it performs an edit just like dragging the end of a track does.

The three editing behaviors of track trimming apply to effect trimming and they are bound to the mouse buttons that you set in interface preferences. See INTERFACE. When you perform a trim edit on an effect, the effect boundary is moved by dragging on it. Unlike track editing, the effect has no source length. You can extend the end of an effect as much as desired without being limited.

Also unlike track editing, the starting position of the drag operation doesn't bind the edit decision to media. The media the effect is bound to doesn't follow effect edits. Other effects, however, do follow editing decisions made on an effect. If you drag the end of an effect which is lined up with effects on other tracks, the effects on the other tracks will be edited while the media stays the same.

Realtime effects are organized into rows under the track, and each row can have multiple effects. So what happens if you trim the end of an effect in, leaving a lot of unaffected time near the end of the track? When you drag an effect in from the Resource Window, you can insert it in the portion of the row left unoccupied by the trimming operation.

In addition to trimming, you can move effects up or down. Every track can have a stack of effects under it. By moving an effect up or down you change the order in which effects are processed in the stack. Go to an effect and right click to bring up the effect menu. The Move up and Move down options move the effect up or down.

When you're moving effects up or down, be aware that if they're shared as shared effects, any references will be pointing to a different effect after the move operation.

Finally, there's dragging of effects. Dragging effects works just like dragging edits. You must select the arrow.png arrow to enter drag and drop mode before dragging effects. The effects snap to media boundaries, effect boundaries, and tracks. Be aware if you drag a reference to a shared effect, the reference will usually point to the wrong effect afterwards.

Right click on an effect to bring up a menu for the effect. Select attach... to change the effect or change the reference if it is a shared effect.



Another type of effect is performed on a section of the track and the result stored somewhere before it is played back. The result is usually pasted into the track to replace the original data.

The rendered effects are not listed in the resource window but instead are accessed through the Audio->Render effect and Video->Render effect menu options. Each of these menu options brings up a dialog for the rendered effect. Rendered effects apply to only one type of track, either audio or video. If no tracks of the type exist, an error pops up.

A region of the timeline to apply the effect to must be defined before selecting Render effect.... If no in/out points and no highlighted region exists, the entire region after the insertion point is treated as the affected region. Otherwise, the region between the in/out points or the highlighted region is the affected region.

In the render effect dialog is a list of all the realtime and all the rendered effects. The difference here is that the realtime effects are rendered to disk and not applied under the track. Highlight an effect in the list to designate it as the one being performed.

Define a file to render the effect to in the Select a file to render to box. The magnify.png magnifying glass allows file selection from a list.

Select a file format which can handle the track type. The wrench.png wrench allows configuration specific to the file format.

There is also an option for creating a new file at each label. If you have a CD rip on the timeline which you want to divide into different files, the labels would become dividing points between the files if this option were selected. When the timeline is divided by labels, the effect is re-initialized at every label. Normalize operations take the peak in the current file and not in the entire timeline.

Finally there is an insertion strategy just like in the render dialog. It should be noted that even though the effect applies only to audio or video, the insertion strategy applies to all tracks just like a clipboard operation.

When you click OK in the effect dialog, it calls the GUI of the effect. If the effect is also a realtime effect, a second GUI appears to prompt for acceptance or rejection of the current settings. After accepting the settings, the effect is processed.



When one edit ends and another edit begins, the default behaviour is to have the first edit's output immediately become the output of the second edit when played back. Transitions are a way for the first edit's output to become the second edit's output with different variations.

Cinelerra supports audio and video transitions, all of which are listed in the resource window. Transitions may only apply to the matching track type. Transitions under audio transitions can only apply to audio tracks. Transitions under video transitions can only apply to video tracks.

Load a video file and cut a section from the center so the edit point is visible on the timeline. Go to the resource window and click on the Video transitions folder. Drag a transition from the transition list onto the second video edit on the timeline. A box highlights where the transition will appear. Releasing it over the second edit applies the transition between the first and second edits.

You can now scrub over the transition with the transport controls and watch the output in the Compositor window. Scrubbing with the insertion point doesn't normally show transitions because the transition durations are usually too short. The exact point in time when the transition takes effect isn't straightforward. It starts when the second edit begins and lasts a certain amount of time into the second edit. Therefore, the first asset needs to have enough data after the edit point to fill the transition into the second edit.
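The data requirement described above can be expressed as simple arithmetic. This is a hypothetical sketch of the constraint, not Cinelerra's code; the parameter names are invented for illustration:

```python
# Hypothetical sketch of transition timing: the transition starts at the
# second edit's start and lasts `length` seconds into it, so the first
# edit's source must extend at least `length` seconds past its out point.

def transition_ok(first_edit_source_end, first_edit_out_point, length):
    """True if the first asset has enough data past the cut
    to fill the transition overlap."""
    return first_edit_source_end - first_edit_out_point >= length

print(transition_ok(first_edit_source_end=12.0,
                    first_edit_out_point=10.0, length=1.0))  # True: 2s of spare data
print(transition_ok(first_edit_source_end=10.2,
                    first_edit_out_point=10.0, length=1.0))  # False: only 0.2s spare
```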

Once the transition is in place, it can be edited similarly to an effect. Move the pointer over the transition and right click to bring up the transition menu. The show option brings up specific parameters for the transition in question if there are any. The length option adjusts the length of the transition in seconds. Once these two parameters are set, they are applied to future transitions until they are changed again. Finally, the detach option removes the transition from the timeline.

Dragging and dropping transitions from the Resource window to the Program window can be really slow and tiring. Fortunately, once you drag a transition from the Resource window, the U and u keys will paste the same transition. The U key pastes the last video transition and the u key pastes the last audio transition on all the recordable tracks. If the insertion point or in point is over an edit, the beginning of the edit is covered by the transition.

It should be noted that when playing transitions from the timeline to a hardware accelerated video device, the hardware acceleration will usually be turned off momentarily during the transition and on after the transition in order to render the transition. Using an unaccelerated video device for the entire timeline normally removes the disturbance.



LADSPA effects are supported in realtime and rendered mode for audio. The LADSPA plugins you get from the internet vary in quality. Most can't be tweaked in realtime very easily and work better when rendered. Some crash and some can only be applied to one track due to a lack of reentrancy. Although Cinelerra implements the LADSPA interface as accurately as possible, multiple tracks of simultaneous, realtime processing go beyond what the majority of LADSPA plugins were designed for. LADSPA effects appear in the audio folder with a hammer and screwdriver icon, to signify that they are Plugins for Linux Audio Developers.

LADSPA Effects are enabled merely by setting the LADSPA_PATH environment variable to the location of your LADSPA plugins or putting them in the /usr/lib/cinelerra directory.



When you play media files in Cinelerra, the media files have a certain number of tracks, a certain frame size, a certain sample size, and so on. No matter what attributes the media file has, however, it is still played back according to the project attributes. If an audio file's samplerate is different from the project attributes, it is resampled. If a video file's frame size is different from the project attributes, it is composited on a black frame, either cropped or bordered with black.

The project attributes are adjusted in settings->format and, to a more limited extent, in file->new. When you adjust project settings in file->new, a new timeline is created with no data. Every timeline created from that point on uses the same settings. When you adjust settings in settings->format, the timeline is not recreated, but every timeline created from that point on uses the same settings.

In addition to the traditional settings for sample rate, frame rate, and frame size, Cinelerra uses some unusual settings like channel positions, color model, and aspect ratio.



A large amount of Cinelerra's binary size is directed towards compositing. When you remove the letterboxing from a widescreen show, you're compositing. Changing the resolution of a show, making a split screen, and fading in and out among other things are all compositing operations in Cinelerra. Cinelerra detects when it's in a compositing operation and plays back through the compositing engine only then. Otherwise, it uses the fastest decoder available in the hardware.

Compositing operations are done on the timeline and in the Compositor window. Shortcuts exist in the Resource window for changing project attributes. Once some video files are on the timeline, the compositor window is a good place to try compositing.



In the compositor window, the most important functions are the camera.png camera button and the projector.png projector button. These control operation of the camera and projector. Inside Cinelerra's compositing pipeline, the camera determines where in the source video the temporary is copied from. The projector determines where in the output the temporary is copied to. The temporary is a frame of video in Cinelerra's memory where all graphics processing is done. Each track has a different temporary which is defined by the track size. By resizing the tracks you can create splitscreens, pans, and zooms.


Visual representation of the compositing pipeline.

When editing the camera and projector in the compositing window, the first track with record enabled is the track affected. Even if the track is completely transparent, it's still the affected track. If multiple video tracks exist, the easiest way to select one track for editing is to shift-click on the record icon of the track. This solos the track.

When the projector button is enabled in the compositor window, you're in projector editing mode. A guide box appears in the video window. Dragging anywhere in the video window causes the guide box to move, hopefully along with the video. shift-dragging anywhere in the video window causes the guide box to shrink and grow along with the video. Once you've positioned the video with the projector, you're ready to master the camera.

Select the camera.png camera button to enable camera editing mode. In this mode, the guide box shows where the camera position is in relation to past and future camera positions but not where it is in relation to the source video. Dragging the camera box in the compositor window doesn't move the box but instead moves the location of the video inside the box.

For example, when you drag the camera left, the video moves right. When you drag the camera up, the video moves down. When you shift-drag the camera, the effect is the same as if you zoomed in or out of the source. The intention of the camera is to produce still photo panning, while the intention of the projector is to composite several sources in the same scene.
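The opposite drag directions can be summarized in a small sketch. This is an illustrative model of the behavior described above, not Cinelerra's actual code; the function names are invented.

```c
/* Illustrative model of camera vs. projector drags; not Cinelerra's
 * actual code.  The projector positions the temporary in the output,
 * so the video follows the drag.  The camera positions the source
 * window, so the video moves opposite to the drag. */
typedef struct { float x, y; } vec2;

vec2 video_shift_from_projector_drag(vec2 drag)
{
    return drag;                      /* video follows the projector */
}

vec2 video_shift_from_camera_drag(vec2 drag)
{
    vec2 v = { -drag.x, -drag.y };    /* drag camera left, video moves right */
    return v;
}
```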

In the compositing window, there is a popup menu of options for the camera and projector. Right click over the video portion of the compositing window to bring up the menu.

The camera and projector have shortcut operations which appear neither in the popup menu nor in the video overlays. These are accessed in the Tool window. Most operations in the Compositor window have a tool window, which is enabled by activating the toolwindow.png question mark.

In the case of the camera and projector, the tool window shows x, y, and z coordinates. By either tumbling or entering text directly, the camera and projector can be precisely positioned. 9 justification types are also defined for easy access. A popular justification operation is upper left projection after image reduction. This is used when reducing the size of video with aspect ratio adjustment.

The translation effect allows simultaneous aspect ratio conversion and reduction but is easier to use if the reduced video is put in the upper left of the temporary instead of in the center. The track size is set to the original size of the video and the camera is centered. The output size is set to the reduced size of the video. Without any effects, this produces just the cropped center portion of the video in the output.

The translation effect is dropped onto the video track. The input dimensions of the translation effect are set to the original size and the output dimensions are set to the reduced size. To put the reduced video in the center section that the projector shows would require offsetting out x and out y by a complicated calculation. Instead, we leave out x and out y at 0 and use the projector's tool window.

Merely by selecting left_justify.png left justify and top_justify.png top justify, the projector displays the reduced image from the top left corner of the temporary in the center of the output.



Masks select a region of the video for either displaying or hiding. Masks are also used in conjunction with another effect to isolate the effect to a certain region of the frame. One copy of a video track may be delayed slightly and unmasked only in locations where the other copy has interference. Color correction may be needed in one section of a frame but not another. A mask can be applied to just a section of the color corrected track while the vanilla track shows through. Removal of boom microphones, airplanes, and housewives are other mask uses.

The order of the compositing pipeline affects what can be done with masks. Mainly, masks are performed on the temporary after effects and before the projector. This means multiple tracks can be bounced to a masked track and projected with the same mask.

Our compositing pipeline graph now has a masking stage. There are 8 possible masks per track. Each mask is defined separately, although they each perform the same operation, whether it's addition or subtraction.


Compositing pipeline with masks

To define a mask, go into the Compositor window and enable the mask.png mask toggle. Now go over the video and click-drag. Click-drag again in another part of the image to create each new point of the mask. While it isn't the conventional bezier curve behavior, this masking interface shows in realtime what the effect of the mask is going to be. Creating each point of the mask expands a rubber band curve.

Once points are defined, they can be moved by ctrl-dragging in the vicinity of the corner. This, however, doesn't smooth out the curve. The in-out points of the bezier curve are accessed by shift-dragging in the vicinity of the corner. Then shift-dragging near the in or out point causes the point to move.

Finally, once you have a mask, the mask can be translated in one piece by alt-dragging the mask. Mask editing in Cinelerra is identical to how The Gimp edits masks except in this case the effect of the mask is always on.

The masks have many more parameters which couldn't be represented with video overlays. These are represented in the tool window for masks. Selecting the toolwindow.png question mark when the mask.png mask toggle is highlighted brings up the mask options.

The mode of the mask determines if the mask removes data or makes data visible. If the mode is subtractive, the mask causes video to disappear. If the mode is additive, the mask causes video to appear and everything outside the mask to disappear.

The value of the mask determines how extreme the addition or subtraction is. In the subtractive mode, higher values subtract more alpha. In the additive mode, higher values make the region in the mask brighter while the region outside the mask is always hidden.
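As a rough model of these two modes, the per-pixel alpha might be computed as follows. This is a sketch for intuition only; the function and its exact arithmetic are assumptions, not Cinelerra's implementation.

```c
/* Sketch of the mask mode and value semantics; an assumption for
 * intuition, not Cinelerra's actual math.  `value` runs 0..1 and
 * `alpha` is the pixel's opacity. */
float masked_alpha(float alpha, int inside_mask, int additive, float value)
{
    if (additive)
        /* inside the mask gets through, scaled by value;
         * outside the mask is always hidden */
        return inside_mask ? alpha * value : 0.0f;
    /* subtractive: higher values subtract more alpha inside the mask */
    return inside_mask ? alpha * (1.0f - value) : alpha;
}
```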

The mask number determines which one of the 8 possible masks we're editing. Each track has 8 possible masks. When you click-drag in the compositor window, you're only editing one of the masks. Change the value of mask number to cause another mask to be edited. The previous mask is still active but only the curve overlay for the currently selected mask is visible.

When multiple masks are used, their effects are ORed together. Every mask in a single track uses the same value and mode.

The edges of a mask are hard by default but this rarely is desired. The feather parameter determines how many pixels to feather the mask. This creates softer edges but takes longer to render.
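One way to picture feathering is as a linear alpha ramp across the mask edge. The sketch below is a guess at the idea, not the renderer's actual filter (which behaves more like a blur); the function name is invented.

```c
/* Feathering sketch: instead of a hard edge, alpha ramps over
 * `feather` pixels around the mask boundary.  distance_to_edge is
 * positive inside the mask, negative outside.  Illustrative only. */
float feathered_alpha(float distance_to_edge, float feather)
{
    if (feather <= 0.0f)                      /* hard edge */
        return distance_to_edge >= 0.0f ? 1.0f : 0.0f;
    float a = 0.5f + distance_to_edge / (2.0f * feather);
    if (a < 0.0f) a = 0.0f;                   /* well outside the mask */
    if (a > 1.0f) a = 1.0f;                   /* well inside the mask */
    return a;
}
```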

Finally, there are parameters which affect one point on the current mask instead of the whole mask. These are Delete, x, y. The active point is defined as the last point dragged in the compositor window. Any point can be activated merely by ctrl-clicking near it without moving the pointer. Once a point is activated, Delete deletes it and x, y allow repositioning by numeric entry.



Cropping changes the value of the output dimensions and the projector to reduce the visible picture area. Enable the crop.png crop toggle and the toolwindow.png tool window to perform cropping in the compositing window. This draws a rectangle over the video. Click-drag anywhere in the video to create a new rectangle. Click-drag over any corner of the rectangle to reposition the corner. The tool window allows text entry of the coordinates. When the rectangle is positioned, hit the do it button in the tool window.



On consumer displays the borders of the image are cut off, and the region inside the cutoff point isn't always square like it is in the compositor window. The borders are intended for scratch room and vertical blanking data. You can show where these borders are by enabling the titlesafe.png safe regions toggle. Keep titles inside the inner rectangle and keep action inside the outer rectangle.



Every track has an overlay mode, accessible by expanding the track. Select the expandpatch_checked.png expand track toggle to view all the options for a video track. The overlay mode of the track is normal by default. Select other modes by selecting the normal button. Overlay modes are processed inside the projector stage of compositing. The different modes are summarized below.



The size of the temporary and the size of the output in our compositing pipeline are independent and variable. This fits into everything covered so far. The camera's viewport is the temporary size. Effects are processed in the temporary and are affected by the temporary size. Projectors are rendered to the output and are affected by the output size. If the temporary is smaller than the output, the temporary is bordered by blank regions in the output. If the temporary is bigger than the output, the temporary is cropped.
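The border/crop rule reduces to taking the smaller of the two sizes in each dimension. A trivial sketch, with an invented helper name:

```c
/* Sketch of the border/crop rule; the helper name is invented.
 * The projector copies the temporary into the output: any output not
 * covered by the temporary stays blank, and any overhang is cropped. */
int visible_extent(int temporary_size, int output_size)
{
    return temporary_size < output_size ? temporary_size : output_size;
}
```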

The temporary size is defined as the track size. Each track has a different size. Right click on a track to bring up the track's menu. Select Resize Track to resize the track to any arbitrary size. Alternatively you can select Match output size to make the track the same size as the output.

The output size is set in either New when creating a new project or Settings->Format. In the Resource window there is another way to change the output size. Right click on a video asset and select Match project size to conform the output to the asset. When new tracks are created, the track size always conforms to the output size specified by these methods.

Node:KEYFRAMES, Next:, Previous:COMPOSITING, Up:Top


Setting static compositing parameters isn't very useful most of the time. Normally you need to move the camera around over time or change mask positions. Masks need to follow objects. We create dynamic changes by defining keyframes. A keyframe is a certain point in time when the settings for one operation change. In Cinelerra, there are keyframes for almost every compositing parameter and effect parameter.

Whenever you adjust any parameter, the value is stored in a keyframe. If the value is stored in a keyframe, why doesn't it always change? The keyframe it is stored in is known as the default keyframe. The default keyframe applies to the entire duration if no other keyframes are present. The default keyframe is not drawn anywhere because it always exists. The only way change occurs over time is if non-default keyframes are created.

Display keyframes for any parameter by using the view menu. When keyframes are selected, they are drawn on the timeline over the tracks they apply to.



Fade and zoom settings are stored in bezier curves. Go to view->fade keyframes or view->...zoom to show curves on the timeline. It's sometimes easier to pull down the view menu and then use the keyboard shortcuts listed in the menu to enable or disable keyframes while the menu is visible. In either arrow editing mode or i-beam editing mode, move the cursor over the curves in the timeline until it changes shape. Then merely by clicking and dragging on the curve you can create a keyframe at the position.

After the keyframe is created, click drag on it again to reposition it. When you click-drag a second keyframe on the curve, it creates a smooth ramp. ctrl-dragging on a keyframe changes the value of either the input control or the output control. This affects the sharpness of the curve. While the input control and the output control can be moved horizontally as well as vertically, the horizontal movement is purely for legibility and isn't used in the curve value.
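Since the horizontal position of a control point isn't used in the curve value, a segment between two keyframes can be modeled as a cubic bezier in the value dimension alone. The sketch below is an assumed model of the interpolation, not Cinelerra's exact code.

```c
/* Cubic bezier evaluation for one curve segment; an assumed model of
 * the interpolation, not Cinelerra's exact code.  v0 and v1 are the
 * keyframe values, c0 and c1 the out/in control values, t runs 0..1. */
float bezier_value(float v0, float c0, float c1, float v1, float t)
{
    float u = 1.0f - t;
    return u * u * u * v0
         + 3.0f * u * u * t * c0
         + 3.0f * u * t * t * c1
         + t * t * t * v1;
}
```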

You may remember that The Gimp and the Compositing masks all use shift to select control points so why does the timeline use ctrl? When you shift-drag on a timeline curve, the keyframe jumps to the value of either the next or previous keyframe, depending on which exists. This lets you set a constant curve value without having to copy the next or previous keyframe.



Mute is the only toggle keyframe. Mute keyframes determine where the track is processed but not rendered to the output. Click-drag on these curves to create a keyframe. Unlike curves, the toggle keyframe has only two values: on or off. Ctrl and shift do nothing on toggle keyframes.



You may have noticed when a few fade curves are set up, moving the insertion point around the curves causes the faders to reflect the curve value under the insertion point. This isn't just to look cool. The faders themselves can set keyframes in automatic keyframe mode. Automatic keyframe mode is usually more useful than dragging curves.

Enable automatic keyframe mode by enabling the automatic keyframe toggle autokeyframe.png. In automatic keyframe mode, every time you tweak a keyframeable parameter it creates a keyframe on the timeline. Since automatic keyframes affect many parameters, it's best enabled just before you need a keyframe and disabled immediately thereafter.

It's useful to go into the View menu and make the desired parameter visible before performing a change. The location where the automatic keyframe is generated is under the insertion point. If the timeline is playing back during a tweak, several automatic keyframes will be generated as you change the parameter.

When automatic keyframe mode is disabled, a similarly strange thing happens. Adjusting a parameter adjusts the keyframe immediately preceding the insertion point. If two fade keyframes exist and the insertion point is between them, changing the fader changes the first keyframe.

There are many parameters which can only be keyframed in automatic keyframe mode. These are parameters for which curves would take up too much space on the track or which can't be represented easily by a curve.

Effects are only keyframable in automatic mode because of the number of parameters in each individual effect.

Camera and projector translation can only be keyframed in automatic keyframe mode while camera and projector zoom can be keyframed with curves. It is here that we conclude the discussion of compositing, since compositing is highly dependent on the ability to change over time.



Camera and projector translation is represented by two parameters: x and y. Therefore it is cumbersome to adjust with curves. Cinelerra solves this problem by relying on automatic keyframes. With a video track loaded, move the insertion point to the beginning of the track and enable automatic keyframe mode.

Move the projector slightly in the compositor window to create a keyframe. Then go forward several seconds. Move the projector a long distance to create another keyframe and emphasize motion. This creates a second projector box in the compositor, with a line joining the two boxes. The joining line is the motion path. If you create more keyframes, more boxes are created. Once all the desired keyframes are created, disable automatic keyframe mode.

Now when scrubbing around with the compositor window's slider, the video projection moves over time. At any point between two keyframes, the motion path is red for all time before the insertion point and green for all time after the insertion point. It's debatable if this is a very useful feature but it makes you feel good to know what keyframe is going to be affected by the next projector tweak.

Click-drag when automatic keyframes are off to adjust the preceding keyframe. If you're halfway between two keyframes, the first projector box is adjusted while the second one stays the same. Furthermore, the video doesn't appear to move in step with the first keyframe. This is because, halfway between two keyframes, the projector translation is interpolated. In order to set the second keyframe you'll need to scrub after the second keyframe.
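Between two translation keyframes the position is interpolated; with a straight motion path that amounts to a clamped linear interpolation. The sketch below is illustrative, with invented names.

```c
/* Clamped linear interpolation of one translation coordinate between
 * two keyframes at times t1 and t2; illustrative, invented names.
 * Before the first keyframe and after the second, the value holds. */
float interpolate_translation(float v1, float t1, float v2, float t2, float t)
{
    if (t <= t1) return v1;
    if (t >= t2) return v2;
    return v1 + (v2 - v1) * (t - t1) / (t2 - t1);
}
```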

By default the motion path is a straight line, but it can be curved with control points. Ctrl-drag to set either the in or out control point of the preceding keyframe. Once again, we depart from The Gimp because shift is already used for zoom. After the in or out control points are extrapolated from the keyframe, Ctrl-dragging anywhere in the video adjusts the nearest control point. A control point can be out of view entirely yet still controllable.

When editing the camera translation, the behavior of the camera boxes is slightly different. Camera automation is normally used for still photo panning. The current camera box doesn't move during a drag, but if multiple keyframes are set, every camera box except the current keyframe appears to move. This is because the camera display shows every other camera position relative to the current one.

The situation becomes more intuitive if you bend the motion path between two keyframes and scrub between the two keyframes. The division between red and green, the current position between the keyframes, is always centered while the camera boxes move.



Keyframes can be shifted around and moved between tracks on the timeline using similar cut and paste operations to editing media. Only the keyframes selected in the view menu are affected by keyframe editing operations, however.

The most popular keyframe editing operation is replication of some curve from one track to the other, to make a stereo pair. The first step is to solo the source track's record recordpatch_up.png patch by shift-clicking on it. Then either set in/out points or highlight the desired region of keyframes. Go to keyframes->copy keyframes to copy them to the clipboard. Solo the destination track's record recordpatch_up.png patch by shift-clicking on it and go to keyframes->paste keyframes to paste the clipboard.

The media editing commands are mapped to the keyframe editing commands by using the shift key instead of just the keyboard shortcut.

This leads to the most complicated part of keyframe editing, the default keyframe. Remember that when no keyframes are set at all, there is still a default keyframe which stores a global parameter for the entire duration. The default keyframe isn't drawn because it always exists. What if the default keyframe is a good value which you want to transpose between other non-default keyframes? The keyframes->copy default keyframe and keyframes->paste default keyframe allow conversion of the default keyframe to a non-default keyframe.

Keyframes->copy default keyframe copies the default keyframe to the clipboard, no matter what region of the timeline is selected. The keyframes->paste keyframes function may then be used to paste the clipboard as a non-default keyframe.

If you've copied a non-default keyframe, it can be stored as the default keyframe by calling keyframes->paste default keyframe. After using paste default keyframe to convert a non-default keyframe into a default keyframe, you won't see the value of the default keyframe reflected until all the non-default keyframes are removed.

Finally, there is a convenient way to delete keyframes besides selecting a region and calling keyframes->clear keyframes. Merely click-drag a keyframe before its preceeding keyframe or after its following keyframe on the track.



Ideally, all media would be stored on hard drives, CD-ROM, flash, or DVD and loading it into Cinelerra would be a matter of loading a file. In reality, very few sources of media can be accessed like a filesystem but instead rely on tape transport mechanisms and dumb I/O mechanisms to transfer the data to computers. These media types are imported into Cinelerra through the Record dialog.

The first step in recording is to configure the input device. In Settings->preferences are a number of recording parameters, described in RECORDING. These parameters apply to recording no matter what the project settings are, because the recording parameters are usually the maximum capability of the recording hardware while project settings come and go.

Go to File->record to record a dumb I/O source. This prompts for an output format much like rendering does. Once that's done, the record window and the record monitor pop up.

The record window has discrete sections. While many parameters change depending on if the file has audio or video, the discrete sections are always the same.


Recording window areas

Recording in Cinelerra is organized around batches. A batch essentially defines a distinct output file for the recording. For now you can ignore the batch concept entirely and record merely by hitting the record button record.png.

The record button opens the current output file if it isn't opened and writes captured data to it. Use the stop button to stop the recording. Recording can be resumed with the record button without erasing the file at this point. In the case of a video file, there is a single frame record button singleframe.png which records a single frame.

When enough media is recorded, choose an insertion method from the Insertion Strategy menu and hit close.



Now we come to the concept of batches. Batches try to make the dumb I/O look more like a filesystem. Batches are traditionally used to divide tape into different programs and save the different programs as different files instead of recording straight through an entire tape. Because of the high cost of developing frame-accurate deck control mechanisms, the only use of batches now is recording different programs during different times of day. This is still useful for recording TV shows or time lapse movies as anyone who can't afford proper appliances knows.

The record window supports a list of batches and two recording modes: interactive and batch recording. Interactive recording happens when the record button is pressed. Interactive recording starts immediately and uses the current batch to determine everything except start time. By default, the current batch is configured to behave like tape.

Batch recording happens when the start button is pressed. In batch recording, the start time is the time the batch starts recording.

First, you'll want to create some batches. Each batch has certain parameters and methods of adjustment.

The record window has a notion of the current batch. The current batch is not the same as the batch which is highlighted in the batch list. The current batch text is colored red in the batch list. The highlighted batch is merely displayed in the edit batch section for editing.

By coloring the current batch red, any batch can be edited by highlighting it, without changing the batch to be recorded.

All recording operations take place in the current batch. If there are multiple batches, highlight the desired batch and hit activate to make it the current batch. If the start button is pressed, the current batch flashes to indicate it's waiting for the start time in batch mode. If the record button is pressed, the current batch is recorded immediately in interactive mode.

In batch and interactive recording modes, when the current batch finishes recording the next batch is activated and performed. All future recording is done in batch mode. When the first batch finishes, the next batch flashes until its start time is reached.

Interrupt either the batch or the interactive operation by hitting the stop button.

Finally there is the rewind.png rewind button. In either interactive or batch recording, the rewind button causes the current batch to close its file. The next recording operation in the current batch deletes the file.



Sometimes in the recording process and the configuration process, you'll need to define and select tuner channels to either record or play back to. In the case of the Video4Linux and Buz recording drivers, tuner channels define the source. When the Buz driver is also used for playback, tuner channels define the destination.

Defining tuner channels is accomplished by pushing the channel.png channel button. This brings up the channel editing window. In this window you add, edit, and sort channels. Also, for certain video drivers, you can adjust the picture quality.

The add operation brings up a channel editing box. The title of the channel appears in the channel list. The source of the channel is the entry in the physical tuner's frequency table corresponding to the title.

Fine tuning in the channel edit dialog adjusts the physical frequency slightly if the driver supports it. The norm and frequency table together define which frequency table is selected for defining sources. If the device supports multiple inputs, the input menu selects these.

To sort channels, highlight the channel in the list and push move up or move down to move it.

Once channels are defined, the source item in the record window can be used to select channels for recording. The same channel selecting ability also exists in the record monitor window. Be aware channel selections in the record monitor window and the record window are stored in the current batch.

For some drivers an option to swap fields may be visible. These drivers don't get the field order right every time without human intervention. Toggle this to get the odd and even lines to record in the right order.





On systems with lots of memory, Cinelerra sometimes runs better without a swap space. If you have 4 GB of RAM, you're probably better off without a swap space. If you have 512MB of RAM, you should keep the swap. If you want to do recording, you should probably disable swap space in any case. There's a reason for this. Linux only allows half the available memory to be used. Beyond that, it starts searching for free pages to swap, in order to cache more disk access. In a 4 GB system, you start waiting for page swaps after using only 2 GB.

The question then is how to make Linux run without a swap space. Theoretically it should be a matter of running

swapoff -a

Unfortunately, without a swap space the kswapd tasklet normally spins at 100%. To eliminate this problem, edit linux/mm/vmscan.c. In this file, put a line saying return 0; just before the comment that says

	 * Kswapd main loop.

Then recompile the kernel.



In order to improve realtime performance, the audio buffers for all the Linux sound drivers were limited from 127k to 64k. For recording audio and video simultaneously, and for most audio recording, this causes dropouts. Applying the low latency and preemptible kernel patches makes more audio-only recording possible, but it doesn't improve recording video with audio. This is where you need to hack the kernel.

This only applies to the OSS version of the Soundblaster Live driver. Since every sound card and every sound driver derivative has a different implementation you'll need to do some searching for other sound cards. Edit linux/drivers/sound/emu10k1/audio.c

Where it says

if (bufsize >= 0x10000)

change it to say

if (bufsize > 0x40000)

Where it says

		for (i = 0; i < 8; i++)
			for (j = 0; j < 4; j++)

change it to say

		for (i = 0; i < 16; i++)
			for (j = 0; j < 4; j++)

In linux/drivers/sound/emu10k1/hwaccess.h, where it says

#define MAXBUFSIZE 65536

change it to say

#define MAXBUFSIZE 262144

Finally, in linux/drivers/sound/emu10k1/cardwi.h, change the definition of WAVEIN_MAXBUFSIZE to

#define WAVEIN_MAXBUFSIZE 262144

Then recompile the kernel modules.



XFree86 by default can't display Cinelerra's advanced pixmap rendering very fast. The X server stalls during list box drawing. Fix this by adding a line to your XF86Config* files.

In the Section "Device" area, add a line saying:

Option "XaaNoOffscreenPixmaps"

and restart the X server.



The Linux kernel only allows 32MB of shared memory to be allocated by default. This needs to be increased to do anything useful. Run the following command:

echo "0x7fffffff" > /proc/sys/kernel/shmmax



This is a very popular command sequence among Linux gurus, which is not done by default on Linux distributions.

hdparm -c3 -d1 -u1 -k1 /dev/hda

-c3 puts the hard drive into 32 bit I/O with sync. This normally doesn't work due to inept kernel support for most IDE controllers. If you get lost interrupt or SeekComplete errors, use -c0 instead of -c3 in your command.

-d1 enables DMA of course. This frees up the CPU partially during data transfers.

-u1 allows multiple interrupts to be handled during hard drive transactions. This frees up even more CPU time.

-k1 prevents Linux from resetting your settings in case of a glitch.



Linux runs some daily operations like compressing man pages. These may be acceptable background tasks while compiling or word processing but not while playing video. Disable these operations by editing /etc/rc.d/init.d/anacron.

Put exit before the first line not beginning in #.

In /etc/rc.d/init.d/crond put exit before the first line not beginning in #. Then make like Win 2000 and reboot.

You can't use the at command anymore, but who uses that command anyways?



Gamers like high resolution mice, but this can be painful for precisely positioning the mouse on a timeline or video screen. XFree86 once allowed you to reduce PS/2 mouse sensitivity using commands like xset m 1 1 but you're out of luck with USB mice or KVMs.

We have a way to reduce USB mouse sensitivity. Edit /usr/src/linux/drivers/input/mousedev.c.

After the line saying

struct mousedev_list {

insert the following:

#define DOWNSAMPLE_N 100
#define DOWNSAMPLE_D 350
int x_accum, y_accum;

Next, the section which says something like:

case EV_REL:
	switch (code) {
		case REL_X:	list->dx += value; break;
		case REL_Y:	list->dy -= value; break;
		case REL_WHEEL:	if (list->mode) list->dz -= value; break;

must be replaced by

case EV_REL:
	switch (code) {
		case REL_X:
			list->x_accum += value * DOWNSAMPLE_N;
			list->dx += (int)list->x_accum / (int)DOWNSAMPLE_D;
			list->x_accum -= ((int)list->x_accum / (int)DOWNSAMPLE_D) * (int)DOWNSAMPLE_D;
			break;
		case REL_Y:
			list->y_accum += value * DOWNSAMPLE_N;
			list->dy -= (int)list->y_accum / (int)DOWNSAMPLE_D;
			list->y_accum -= ((int)list->y_accum / (int)DOWNSAMPLE_D) * (int)DOWNSAMPLE_D;
			break;
		case REL_WHEEL:	if (list->mode) list->dz -= value; break;

Change the value of DOWNSAMPLE_N to change the mouse sensitivity.



Screen blanking is really annoying, unless you're fabulously rich and can afford to leave your monitor on 24 hours a day without power saving mode. In /etc/X11/xinit/xinitrc put

xset s off
xset s noblank

before the first if statement.

How about those windows keys which no Linux distribution even thinks to use. You can make the window keys provide ALT functionality by editing /etc/X11/Xmodmap. Append the following to it.

keycode 115 = Hyper_L
keycode 116 = Hyper_R
add mod4 = Hyper_L
add mod5 = Hyper_R

The actual changes to a window manager to make it recognize window keys for ALT are complex. In FVWM at least, you can edit /etc/X11/fvwm/system.fvwm2rc and put

Mouse 0 T A move-and-raise-or-raiselower
#Mouse 0 W M move
Mouse 0 W 4 move
Mouse 0 W 5 move
Mouse 0 F A resize-or-raiselower
Mouse 0 S A resize-or-raiselower

in place of the default section for moving and resizing. Your best performance is going to be on FVWM. Other window managers seem to slow down video with extra event trapping and aren't as efficient in layout.



You'll often store video on an expensive, gigantic disk array separate from your boot disk. You'll thus have to manually install an EXT filesystem on this disk array, using the mke2fs command. By far the fastest file system is

mke2fs -i 65536 -b 4096 my_device
tune2fs -r0 -c10000 my_device

This has no journaling, reserves as few blocks as possible for inodes, and accesses the largest amount of data per block possible. A slightly slower file system, which is easier to recover after power failures, is

mke2fs -j -i 65536 -b 4096 my_device
tune2fs -r0 -c10000 my_device

This adds a journal which slows down the writes but makes us immune to power failures.



Video recorded from the ZORAN inputs is normally unaligned or not completely encoded on the right. This can be slightly compensated by adjusting parameters in the driver sourcecode.

In /usr/src/linux/drivers/media/video/zr36067.h the structures defined near line 623 affect alignment. At least for NTSC, the 2.4.20 version of the driver could be improved by changing

static struct tvnorm f60ccir601 = { 858, 720, 57, 788, 525, 480, 16 };

to

static struct tvnorm f60ccir601 = { 858, 720, 57, 788, 525, 480, 17 };

In /usr/src/linux/drivers/media/video/bt819.c more structures near line 76 affect alignment and encoding.


{858 - 24, 2, 523, 1, 0x00f8, 0x0000},

could be changed to

{868 - 24, 2, 523, 1, 0x00f8, 0x0000},

Adjusting these parameters may or may not improve your picture. More of the time, they'll cause the driver to lock up before capturing the first frame.





First, Zoran capture boards must be accessed using the Buz video driver in Preferences->Recording and Preferences->Playback. Some performance tweaks are available in another section. See IMPROVING PERFORMANCE.

Once tweaked, the Buz driver seems to crash if the number of recording buffers is too high. Make sure Preferences->Recording->Frames to buffer in device is below 10.



Sometimes there will be two edits really close together. The point selected for dragging may be next to the intended edit, on an edit too small to see at the current zoom level. Zoom in horizontally.





Dolby Pro Logic is an easy way to output 6 channel audio from a 2 channel soundcard, with degraded but useful results. Rudimentary Dolby Pro Logic encoding can be achieved with clever use of the effects.

Create 2 audio tracks with the same audio. Apply invert audio to one track. The signal comes out of the back speakers.

Create a single audio track with monaural audio of a different source. Center it in the pan control. The signal comes out of the center speaker.

Create other tracks with different signals and pan them left or right to put signals in the front left or right speaker.

Finally, if a copy of the signal in the back speakers is desired in any single front speaker, the signal in the back speakers must be delayed by at least 0.05 seconds and a single new track should be created. Pan the new track to orient the signal in the front speakers.

If the same signal is desired in all the speakers except the center speaker, delay the back speakers by 0.5 seconds and delay either the front left or front right by 0.2 seconds.

If you want to hear something from the subwoofer, create a new track, select a range, drop a synthesizer effect, and set the frequency below 60 Hz. The subwoofer merely plays anything below around 60 Hz.

Other tricks you can perform to separate the speakers are parametric equalization to play only selected ranges of frequencies through different speakers and lowpass filtering to play signals through the subwoofer.
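
The reason an inverted duplicate track ends up in the back speakers is the Pro Logic matrix itself: a decoder recovers the center channel from the sum of the two transmitted channels and the surround channel from their difference. Here is a rough sketch of a passive matrix, with assumed 0.707 coefficients and ignoring the real encoder's 90 degree phase shift on the surround channel:

```cpp
#include <cmath>
#include <cassert>

// Simplified passive Dolby Surround matrix. The coefficients and the
// omission of the surround phase shift are assumptions for illustration.
struct Stereo { double lt, rt; };

Stereo encode(double l, double r, double c, double s)
{
    Stereo out;
    out.lt = l + 0.707 * c - 0.707 * s; // surround inverted on the left
    out.rt = r + 0.707 * c + 0.707 * s; // surround in phase on the right
    return out;
}

// A passive decoder takes the sum for the center speaker and the
// difference for the surround speakers.
double decode_center(Stereo in)   { return (in.lt + in.rt) * 0.5; }
double decode_surround(Stereo in) { return (in.rt - in.lt) * 0.5; }
```

Encoding only a surround signal yields Lt = -0.707s and Rt = +0.707s: the two copies cancel in the center sum but survive in the difference, which is why a track plus its inverted copy plays from the rear.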



Unless you live in a rich nation like China or are a terrorist, you probably record analog TV more than you record digital TV. The picture quality on analog TV is horrible but you can do things in Cinelerra to make it look more like it did in the studio.

First, when capturing the video, capture it in the highest resolution possible. For Europeans it's 720x576 and for Americans it's 720x480. Don't bother adjusting the brightness or contrast in the recording monitor, although maxing out the color is useful. Capture it using MJPEG or uncompressed Component Video if possible. If those are too demanding, then capture it using JPEG. RGB should be a last resort.

Now on the timeline use Settings->Format to set a YUV colorspace. Drop a Downsample effect on the footage. Set it for

Horizontal:        2
Horizontal offset: 0
Vertical:          2
Vertical offset:   0

      red
  x   green
  x   blue

Use the camera tool to shift the picture up or down a line to remove the most color interference from the image. This is the difference we're looking for:


If you have vertical blanking information or crawls which constantly change in each frame, block them out with the Mask tool. This improves compression ratios.

This is about all you can do without destroying more data than you would naturally lose in compression. The more invasive cleaning techniques involve deinterlacing.



Interlacing is done on most video sources because it costs too much to build progressive scanning cameras and progressive scanning CRTs. Many a consumer has been disappointed to spend 5 paychecks on a camcorder and discover what horrible jagged images it produces on a computer monitor.

As for progressive scanning camcorders, forget it. Cost factors are probably going to keep progressive scanning cameras from ever equalling the spatial resolution of interlaced cameras. Interlacing is here to stay. That's why they made deinterlacing effects in Cinelerra.

We don't believe there has ever been a perfect deinterlacing effect. They're either irreversible or don't work. Cinelerra cuts down the middle by providing deinterlacing tools that are sometimes irreversible and sometimes don't work, but are never entirely one or the other.

Line Doubling

This one is done by the Deinterlace effect when set to Odd lines or Even lines. When applied to a track it reduces the vertical resolution by 1/2 and gives you progressive frames with stairstepping. This is only useful when followed by a scale effect which reduces the image to half its size.

Line averaging

The Deinterlace effect when set to Average even lines or Average odd lines does exactly what line doubling does except instead of making straight copies of the lines it makes averages of the lines. This is actually useful for all scaling.

There's an option for adaptive line averaging which selects which lines to line average and which lines to leave interlaced based on the difference between the lines. It doesn't work.

Inverse Telecine

This is the most effective deinterlacing tool when the footage is an NTSC TV broadcast of a film. Here the image was converted from 24fps to 30fps by interlacing in a predictable pattern, which the Inverse Telecine effect can detect. It shifts fields forwards and backwards to get progressive frames of the same resolution as the original, most of the time.

The timing is going to be jittery because of this but it's progressive.

There is only one useful setting for Inverse Telecine:

Pattern offset:   0
    Odd field first
  x Automatic IVTC
  x A B BC CD D

The other options are only there because one day there may be some progressive scan camera which produces pulldown in the same frame of reference throughout entire tapes.
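
The A B BC CD D cadence named above is standard 3:2 pulldown: 4 film frames become 5 video frames, two of which mix fields from adjacent film frames. This sketch shows the cadence and its inversion, with letters standing in for fields (illustrative only, not the effect's actual field-matching code):

```cpp
#include <string>
#include <vector>
#include <cassert>

// 3:2 pulldown: "ABCD" film frames -> 5 video frames of two fields each.
std::vector<std::string> telecine(const std::string films)
{
    std::vector<std::string> video;
    video.push_back(std::string(2, films[0]));            // AA
    video.push_back(std::string(2, films[1]));            // BB
    video.push_back(std::string() + films[1] + films[2]); // BC (mixed fields)
    video.push_back(std::string() + films[2] + films[3]); // CD (mixed fields)
    video.push_back(std::string(2, films[3]));            // DD
    return video;
}

// Inversion: pick whole frames from the clean frames and reassemble the
// split ones from the right fields of the mixed frames.
std::vector<char> inverse_telecine(const std::vector<std::string> &v)
{
    std::vector<char> film;
    film.push_back(v[0][0]); // A
    film.push_back(v[1][0]); // B
    film.push_back(v[2][1]); // C from the BC frame
    film.push_back(v[3][1]); // D from the CD frame
    return film;
}
```

The shifting of fields forwards and backwards described above is this reassembly step; the jitter comes from 4 recovered frames having to occupy 5 frame slots.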

Time base correction

The first three tools either destroy footage irreversibly or don't work sometimes. Time base correction is last because it's the perfect deinterlacing tool. It leaves the footage intact. It doesn't reduce resolution, perceptually at least. It doesn't cause jittery timing.

The Frames to Fields effect converts each frame to two frames, so it must be used on a timeline whose project frame rate is twice the footage's frame rate. In the first frame it puts a line averaged copy of the even lines. In the second frame it puts a line averaged copy of the odd lines. When played back at full framerates it gives the illusion of progressive video with no loss of detail.

Best of all, this effect can be reversed with the Fields to frames nonrealtime effect. That one combines two frames of footage back into the one original interlaced frame of half the framerate.

Unfortunately, the output of Frames to Fields can't be compressed as efficiently as the original because it introduces vertical twitter and a super high framerate.

Interlaced 29.97fps footage can be made to look like film by applying Frames to Fields and then reducing the project frame rate of the resulting 59.94fps footage to 23.97fps. This produces no timing jitter and the occasional odd field gives the illusion of more detail than there would be if you just line averaged the original.



Video sweetening is constantly getting better. Lately the best thing you can do for dirt cheap consumer camcorder video is to turn it into progressive 24fps output. While you can't really do that, you can get pretty close for the money. Mind you, this procedure can degrade high quality video just as easily as it improves low quality video. It should only be used for low quality video.

This entire procedure could be implemented in one nonrealtime effect, but the biggest problem with that is you'll most often want to keep the field based output and the 24fps output for posterity. A nonrealtime effect would require all that processing just for the 24fps copy. Still debating that one.



Let's face it, if you're employed you live in Silicon Valley. As such you probably photograph a lot of haze and rarely see blue sky. Even if you can afford to briefly go somewhere with blue sky, horizon shots usually can stand more depth. This is what the gradient effect is for.

Drop the gradient effect on hazy tracks. Set the following parameters:

Angle: 0
Inner radius: 0
Outer radius: 40
Inner color: blue 100% alpha
Outer color: blue 0% alpha

It's important to set the 0% alpha color to blue even though it's 0% alpha. This is a generally applicable setting for the gradient. Some scenes may work better with orange or brown for an evening feel.



Most effects in Cinelerra can be figured out just by using them and tweaking. Here are brief descriptions of effects which you might not utilize fully through mere experimentation.



This effect replaces the selected color or intensity with black if there is no alpha channel and replaces it with transparency if there is an alpha channel. The selection of color model is important.

Chroma key uses either the value or the hue to determine what is erased. If this parameter is within a certain threshold it's erased. It's not a simple on/off switch, however. As the selected parameter approaches the edge of the threshold, it is erased gradually if the slope is high or abruptly if the slope is low.

The slope tries to soften the edges of the chroma key but it doesn't work well for compressed sources. A popular softening technique is to use a maximum slope and chain a blur effect below the chroma key effect to blur just the alpha.
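
The threshold/slope behavior can be modeled as a simple ramp (a hypothetical sketch, not Cinelerra's exact math): values closer to the key color than the threshold are fully erased, values beyond threshold plus slope are fully kept, and the slope sets the width of the linear fade between them.

```cpp
#include <cmath>
#include <cassert>

// Alpha as a function of how far a pixel's hue or value is from the
// key color. Parameter names are illustrative.
double key_alpha(double distance, double threshold, double slope)
{
    if (distance < threshold) return 0.0;          // erased
    if (distance > threshold + slope) return 1.0;  // kept
    return (distance - threshold) / slope;         // soft edge
}
```

A wider slope region means a softer, more gradual edge, which is why maxing out the slope and blurring the alpha below the effect is a popular softening trick.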



This shows the number of occurrences of each value of a certain color channel. It is always performed in 16 bit RGB regardless of the project colorspace. Use the upper gradient to determine the range of input intensities to be expanded to the output. Use the lower gradient to determine the range of output intensities to target the expansion to. Enable automatic mode to have the histogram calculate automatic input values for every frame. The threshold is only used in automatic mode and determines how sensitive the automatic gain should be to the upper and lower boundaries of the histogram.



Time average is one effect which has many uses besides creating nifty trail patterns of moving objects. Its main use is reducing noise in still images. Merely point a video camera at a stationary subject for 30 frames, capture the frames, and average them using TIME AVERAGE and you'll have a super high quality print. In 16 bit colormodels, time average can increase the dynamic range of lousy cameras.
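
The noise reduction works because random noise averages toward its mean while the static subject doesn't change. The averaging itself is just a per-pixel mean over the captured frames, as in this sketch:

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Average N frames (each a flat pixel buffer of equal size) into one.
std::vector<double> time_average(const std::vector<std::vector<double> > &frames)
{
    std::vector<double> out(frames[0].size(), 0.0);
    for (size_t f = 0; f < frames.size(); f++)
        for (size_t i = 0; i < out.size(); i++)
            out[i] += frames[f][i];
    for (size_t i = 0; i < out.size(); i++)
        out[i] /= frames.size();
    return out;
}
```

Averaging N frames reduces uncorrelated noise by roughly the square root of N, which is why 30 frames of a tripod shot look so much cleaner than one.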



The video scope plots two views of the image. One view plots the intensity of each pixel against horizontal position. They call this the WAVEFORM. Another view translates hue to angle and saturation to radius for each pixel. They call this the VECTORSCOPE.

The vectorscope is actually very useful for determining if an image is saturated. When adjusting saturation, it's important to watch the vectorscope to make sure pixels don't extend past the 100 radius.

The waveform allows you to make sure image data extends from complete black to complete white while adjusting the brightness/contrast.
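
The vectorscope mapping is just polar coordinates: hue becomes the angle and saturation the radius, so checking for oversaturation means checking the radius against the 100 mark. A sketch:

```cpp
#include <cmath>
#include <cassert>

const double PI = 3.14159265358979323846;

struct Point { double x, y; };

// Plot a pixel on the vectorscope: hue in degrees -> angle,
// saturation in percent -> radius.
Point vectorscope_point(double hue_degrees, double saturation_percent)
{
    double a = hue_degrees * PI / 180.0;
    Point p;
    p.x = cos(a) * saturation_percent;
    p.y = sin(a) * saturation_percent;
    return p;
}

// A pixel is oversaturated when its radius extends past 100.
bool oversaturated(double saturation_percent)
{
    return saturation_percent > 100.0;
}
```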

Some thought is being given to having a video scope for recording. Unfortunately, this would require a lot of variations of the video scope for all the different video drivers.



The deinterlace effect has evolved over the years to cover deinterlacing and a whole lot more. In fact two of the deinterlacing methods, Inverse Telecine and Frames to Fields, are now separate effects. The deinterlace effect offers several variations of line replication to eliminate comb artifacts in interlaced video. It also has some line swapping tools to fix improperly captured video or to make the result of a reverse effect display fields in the right order.



The plugin API in Cinelerra dates back to 1997, before LADSPA and before VST became popular. It's fundamentally the same as it was in 1997, with minor modifications to handle keyframes and GUI feedback. Unfortunately, the GUI is not abstracted from the programmer. This lets the programmer use whatever toolkit they want and allows more flexibility in appearance, but it costs more work per plugin.

There are several types of plugins, each with a common procedure of implementation and specific changes for that particular type. The easiest way to implement a plugin is to take the simplest existing one out of the group and rename the symbols.



All plugins inherit from a derivative of PluginClient. This derivative implements most of the required methods, but users must still define a few of PluginClient's methods themselves. The most commonly used methods are already implemented in macros.

The files they include depend on the plugin type. Audio plugins include pluginaclient.h and video plugins include pluginvclient.h. They inherit PluginAClient and PluginVClient respectively.

Another thing all plugins do is define at least three objects: a processing object, a configuration object, and a user interface object.



The processing object should inherit from the intended PluginClient derivative. Its constructor should take a PluginServer argument.

MyPlugin(PluginServer *server);

In the implementation, the plugin must contain a registration line with the name of the processing object, like

REGISTER_PLUGIN(MyPlugin)

The constructor should contain

PLUGIN_CONSTRUCTOR_MACRO

to initialize the most common variables.

The processing object should have a destructor containing

PLUGIN_DESTRUCTOR_MACRO

to delete the most common variables.

Another function which is useful but not mandatory is

int is_multichannel();

It should return 1 if one instance of the plugin handles multiple channels simultaneously or 0 if one instance of the plugin only handles one channel. The default is 0 if it is omitted. Multichannel plugins should refer to the value of PluginClient::total_in_buffers to determine the number of channels.

To simplify the implementation of realtime plugins, a macro for commonly used members should be put in the class header, taking the configuration object and user interface thread object as arguments. This is only useful for realtime plugins. Fortunately, nonrealtime plugins are simpler.

PLUGIN_CLASS_MEMBERS(config_name, thread_name)

Many other members may be defined in the processing object, depending on the plugin type. The commonly used members in PLUGIN_CLASS_MEMBERS are described below. Not all these members are used in nonrealtime plugins.



The configuration object is critical for GUI updates, signal processing, and default settings in realtime plugins. Be aware it is not used in nonrealtime plugins. The configuration object inherits from nothing and has no dependencies. It's merely a class containing three functions and variables specific to the plugin's parameters.

Usually the configuration object starts with the name of the plugin followed by Config. After the class name come the three required functions and the configuration variables.

class MyPluginConfig
{
public:
	int equivalent(MyPluginConfig &that);
	void copy_from(MyPluginConfig &that);
	void interpolate(MyPluginConfig &prev,
		MyPluginConfig &next,
		int64_t prev_position,
		int64_t next_position,
		int64_t current_position);

	float parameter1;
	float parameter2;
	int parameter3;
};

Now you must define the three functions. Equivalent is called by LOAD_CONFIGURATION_MACRO to get the return value. That is all it's used for and if you don't care about load_configuration's return value, you can leave this function empty. It normally returns 1 if the argument's variables have the same values as the local variables.

Then there's copy_from which transfers the configuration values from the argument to the local variables. This is once again used in LOAD_CONFIGURATION_MACRO to store configurations in temporaries. Once LOAD_CONFIGURATION_MACRO has replicated the configuration, it loads a second configuration. Then it interpolates the two configurations to get the current configuration. The interpolation function performs the interpolation and stores the result in the local variables.

Normally the interpolate function calculates a previous and next fraction, using the arguments.

void MyPluginConfig::interpolate(MyPluginConfig &prev,
		MyPluginConfig &next,
		int64_t prev_position,
		int64_t next_position,
		int64_t current_position)
{
	double next_scale = (double)(current_position - prev_position) /
		(next_position - prev_position);
	double prev_scale = (double)(next_position - current_position) /
		(next_position - prev_position);

Then the scales are applied to the previous and next configuration object to yield the current values.

	this->parameter1 = (float)(prev.parameter1 * prev_scale + next.parameter1 * next_scale);
	this->parameter2 = (float)(prev.parameter2 * prev_scale + next.parameter2 * next_scale);
	this->parameter3 = (int)(prev.parameter3 * prev_scale + next.parameter3 * next_scale);
}

Alternatively you can copy the values from the previous configuration argument for no interpolation.
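
The arithmetic can be checked in isolation. This standalone version of the interpolation uses plain values in place of the config class:

```cpp
#include <cassert>

// Same weighting as MyPluginConfig::interpolate above, for one value.
double interpolate_value(double prev, double next,
                         long prev_position, long next_position,
                         long current_position)
{
    double next_scale = (double)(current_position - prev_position) /
        (next_position - prev_position);
    double prev_scale = (double)(next_position - current_position) /
        (next_position - prev_position);
    return prev * prev_scale + next * next_scale;
}
```

At the previous keyframe the result equals the previous value, at the next keyframe the next value, and halfway between it is the midpoint.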

This usage is the same in audio and video plugins. In video playback, the interpolation function is called for every frame, yielding smooth interpolation. In audio playback, the interpolation function is called only once for every console fragment and once every time the insertion point moves. This is good enough for updating the GUI while selecting regions on the timeline but it may not be accurate enough for really smooth rendering of the effect.

For really smooth rendering of audio, you can still use load_configuration when updating the GUI. For process_realtime, however, ignore load_configuration and write your own interpolation routine which loads all the keyframes in a console fragment and interpolates every sample. This would be really slow and hard to debug, yielding an improvement which may not even be audible. Then of course, every century has its set of weirdos.

An easier way to get smoother interpolation is to reduce the console fragment to 1 sample. This would have to be rendered and played back in a separate program of course. The Linux sound driver can't play fragments of 1 sample.



The user interface object at the very least consists of a pointer to a window and pointers to a set of widgets. Using Cinelerra's toolkit, it consists of a BC_Window derivative and a Thread derivative. The Thread derivative is declared in the plugin header using

PLUGIN_THREAD_HEADER(plugin_class, thread_class, window_class)

Then it is defined using

PLUGIN_THREAD_OBJECT(plugin_class, thread_class, window_class)

This in combination with the SHOW_GUI macro does all the work in instantiating the Window class. This is used in realtime plugins but not in nonrealtime plugins. Nonrealtime plugins create and destroy their GUI in get_parameters and there's no thread.

Now the window class must be declared in the plugin header. It's easiest to implement the window by copying an existing plugin and renaming the symbols. The following is an outline of what happens. The plugin header must declare the window's constructor using the appropriate arguments.

#include "guicast.h"

class MyPluginWindow : public BC_Window
	MyPluginWindow(MyPlugin *plugin, int x, int y);

This becomes a window on the screen, positioned at x and y.

It needs two methods

	int create_objects();
	int close_event();

and a back pointer to the plugin

	MyPlugin *plugin;

The constructor's definition should contain extents and flags causing the window to be hidden when first created. The create_objects member puts widgets in the window according to GuiCast's syntax. A pointer to each widget which is to be synchronized to a keyframe is stored in the window class. These are updated in the update_gui function you earlier defined for the plugin. The widgets are usually derivatives of a GuiCast widget and they override functions in GuiCast to handle events. Finally create_objects calls

	show_window();
	flush();

to make the window appear all at once.

The close_event member should be implemented using

	set_done(1);
	return 1;

Every widget in the GUI needs to detect when its value changes. In GuiCast the handle_event method is called whenever the value changes. In handle_event, the widget then needs to call plugin->send_configure_change() to propagate the change to rendering.



Realtime plugins should use PLUGIN_CLASS_MEMBERS to define the basic set of members in their headers. All realtime plugins must define an

int is_realtime()

member returning 1. This causes a number of realtime methods to be called during playback and the plugin to be droppable on the timeline.

Realtime plugins must define a member called

process_realtime

to be called during every audio fragment and video frame. It has an input and an output argument and, for audio, a size argument. The process_realtime function should start by calling load_configuration. The LOAD_CONFIGURATION_MACRO returns 1 if the configuration changed. Then process_realtime should move the data from the input to the output with processing.

Additional members are implemented to maintain configuration in realtime plugins. Some of these are also needed in nonrealtime plugins.



Like realtime plugins, load_defaults and save_defaults must be implemented. In nonrealtime plugins, these are not just used for default parameters but to transfer values from the user interface to the signal processor. There doesn't need to be a configuration class in nonrealtime plugins.

Unlike realtime plugins, the LOAD_CONFIGURATION_MACRO can't be used in the plugin header. Instead, the following methods must be defined.

The nonrealtime plugin should contain a pointer to a defaults object.

Defaults *defaults;

It should also have a pointer to a MainProgressBar.

MainProgressBar *progress;

The progress pointer allows nonrealtime plugins to display their progress in Cinelerra's main window.

The constructor for a nonrealtime plugin can't use PLUGIN_CONSTRUCTOR_MACRO but must call load_defaults directly.

The destructor, likewise, must call save_defaults and delete defaults directly instead of PLUGIN_DESTRUCTOR_MACRO.



The simplest audio plugin is Gain. The processing object should include pluginaclient.h and inherit from PluginAClient. Realtime audio plugins need to define

int process_realtime(int64_t size,
		double **input_ptr,
		double **output_ptr);

if it's multichannel or

int process_realtime(int64_t size,
		double *input_ptr,
		double *output_ptr);

if it's single channel. These should return the number of samples generated. In the future, the number of samples return value will synchronize plugins which delay audio.
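
A minimal single-channel process_realtime body might look like this sketch, with a hard-coded gain standing in for the keyframed configuration that load_configuration would normally supply:

```cpp
#include <cassert>
#include <stdint.h>

// Gain-style processing: scale every sample from input to output and
// return the number of samples generated. The free function here is a
// stand-in for the PluginAClient member a real plugin would define.
int process_gain(int64_t size, double *input_ptr, double *output_ptr,
                 double gain)
{
    for (int64_t i = 0; i < size; i++)
        output_ptr[i] = input_ptr[i] * gain;
    return (int)size;
}
```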

Nonrealtime audio plugins need to define

int process_loop(double *buffer, int64_t &write_length);

for single channel or

int process_loop(double **buffers, int64_t &write_length);

for multi channel.



The simplest video plugin is Flip. The processing object should include pluginvclient.h and inherit from PluginVClient. Realtime video plugins need to define

int process_realtime(VFrame **input,
		VFrame **output);

if it's multichannel or

int process_realtime(VFrame *input,
		VFrame *output);

if it's single channel. They only get one frame per call but should return the number of frames generated anyway. In the future, the number of frames return value will synchronize plugins which delay video.

The nonrealtime video plugins need to define

int process_loop(VFrame *buffer);

for single channel or

int process_loop(VFrame **buffers);

for multi channel. They're always assumed to have a write length of 1 when they return 0.
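
As a sketch of the per-frame work a Flip-style plugin does, here is a vertical flip on a toy row-major buffer (a real plugin would operate on VFrame rows instead):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

typedef std::vector<std::vector<int> > Frame;

// Swap rows top-to-bottom, in place.
void flip_vertical(Frame &frame)
{
    size_t top = 0, bottom = frame.size() - 1;
    while (top < bottom)
        std::swap(frame[top++], frame[bottom--]);
}
```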



The simplest video transition is dissolve and the simplest audio transition is crossfade. These work identically to the single channel, realtime audio and video plugins. The only difference is the addition of an is_transition method to the processing object. is_transition should return 1.

Routines exist for determining where you are relative to the transition's start and end.

Users should divide source position by total length to get the fraction of the transition the current process_realtime function starts at.

Secondly, the meaning of the input and output arguments to process_realtime is different for transitions than for realtime plugins.

The first argument to process_realtime is the data for the next edit. The second argument to process_realtime is the data for the previous edit. Eventually the second argument becomes the output.
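
For audio, a dissolve-style transition reduces to a per-sample weighted mix. A sketch using the argument order described above (next edit first, previous edit second) and the elapsed fraction of the transition:

```cpp
#include <cmath>
#include <cassert>

// Weighted mix: at fraction 0 the output is entirely the previous
// edit, at fraction 1 entirely the next edit.
double dissolve(double next_sample, double prev_sample, double fraction)
{
    return prev_sample * (1.0 - fraction) + next_sample * fraction;
}
```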



Effects like Histogram and VideoScope need to update the GUI during playback to display information about the signal. This is achieved with the send_render_gui and render_gui methods. Normally in process_realtime, when the processing object wants to update the GUI it should call send_render_gui. This should only be called in process_realtime. Send_render_gui goes through a search and eventually calls render_gui in the GUI instance of the plugin.

Render_gui should have a sequence like

void MyPlugin::render_gui(void *data)
{
// update GUI here
}

The sequence uses one argument, a void pointer to transfer information from the renderer to the GUI. The user should typecast this pointer into something useful.



There are several useful queries in PluginClient which can be accessed from the processing object. Some of them have different meaning in realtime and non-realtime mode. They all give information about the operating system or the project.