Video Production Software Server Project - born around Sodankylä on June 27 2008.
It is in "don't try this at home, kids" alpha development. As Wikipedia would put it: this page is only a stub.
History: ardour's ICS/CMT, xjadeo, gjvideotimeline, … Motivation: Wicked, Frontera, video-db, freeJ.
sodankyla implements the decoder framework (the yellow box in the planning diagram). It also includes a generic frame-cache, simple video-object management and, besides some test front-ends (CLI, SDL/GL, OSC), a fully featured TCP socket daemon that speaks HTTP or ICSP.
For session management the sodankyla prototype reads information from an SQLite database. The scripts/ folder contains an experimental perl utility to read EDL-3.0.0 or EDL-CMX data files into the DB.
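The import step can be sketched roughly as follows. This is a hedged Python illustration, not the actual perl utility: the table layout and the CMX-3600 style field order (event number, reel, track, transition, four timecodes) are assumptions.

```python
import re
import sqlite3

# Hypothetical table layout -- the schema used by the sodankyla prototype
# is defined by its perl import script and is not reproduced here.
SCHEMA = """CREATE TABLE IF NOT EXISTS edl_events (
    event INTEGER, reel TEXT, track TEXT, transition TEXT,
    src_in TEXT, src_out TEXT, rec_in TEXT, rec_out TEXT)"""

# One CMX-3600 style event line: number, reel, track, transition,
# then source-in/out and record-in/out timecodes (HH:MM:SS:FF).
EVENT_RE = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})")

def import_edl(lines, db=":memory:"):
    """Read EDL event lines into an SQLite database; skip non-event lines."""
    con = sqlite3.connect(db)
    con.execute(SCHEMA)
    for line in lines:
        m = EVENT_RE.match(line)
        if m:
            con.execute("INSERT INTO edl_events VALUES (?,?,?,?,?,?,?,?)",
                        m.groups())
    con.commit()
    return con

# usage: title lines and comments are simply ignored
con = import_edl([
    "TITLE: demo",
    "001  AX  V  C  00:00:00:00 00:00:05:00 01:00:00:00 01:00:05:00",
])
```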
sodankyla is under heavy development.
You may download an alpha-devel-snapshot or follow development with trac.
git://gareus.org/sodankyla
Tested with ardour 0.99.3; ardour 2.5 requires a patch - see also Ardour & ICS.
For ardour3 complete integration of a video-timeline and video-monitor is prepared. It uses icsd's HTTP interface and xjadeo. More information on the ardour-dev mailing-list and in the README.
Instead of compiling icsd, there is a statically linked version available for testing: icsd-static-alpha10.
git clone git://rg42.org/ardour3
cd ardour3/
git checkout -b videotl origin/videotl
./waf configure --videotimeline
./waf
or alternatively:
svn co http://subversion.ardour.org/svn/ardour2/branches/3.0 ardour3
cd ardour3
curl "http://rg42.org/gitweb/?p=ardour3.git;a=commitdiff_plain;hp=master;h=videotl" | patch -p1
./waf configure --videotimeline
./waf
While the backend is still in the making, there are already a few usable front-ends for testing and development:
vextract <video-file> <output-file> <frame-number>
– writes png files; works just fine - used in frontera
vplay
– partly working; will be merged with xjadeo and xj5
icsd
– image composition TCP socket server - start it and point a web-browser to http://localhost:1554/ ; see also ./ics --help
videoscd
– listens on OSC commands and replies image data as OSC-blobs
aicsd
– ardour version of icsd - run make && src/aics -p 30000
vannotate
– crop parts out of each video-frame and OCR it for timecode.
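Since videoscd exchanges commands and image data over OSC, it may help to see what an OSC 1.0 message looks like on the wire. Below is a minimal encoder sketch using nothing but the standard OSC 1.0 layout (padded address, type-tag string, big-endian arguments, size-prefixed blobs); the /seek and /frame addresses are made up for illustration and are not videoscd's actual namespace.

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings and blobs are NUL-padded to a multiple of 4 bytes.
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC 1.0 message (int32, float32, string, blob)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans not supported in this sketch")
        elif isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
        elif isinstance(a, str):
            tags += "s"; payload += _pad(a.encode() + b"\x00")
        elif isinstance(a, bytes):  # blob: int32 size, then padded data
            tags += "b"; payload += struct.pack(">i", len(a)) + _pad(a)
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return (_pad(address.encode() + b"\x00")
            + _pad(tags.encode() + b"\x00") + payload)

# usage: a seek command and a blob reply (both addresses hypothetical)
seek = osc_message("/seek", 100)
frame = osc_message("/frame", b"\x89PNG")
```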
The scripts/ folder contains various parsers for EDL and an ardour-session generator. Under web/ you can find JavaScript code and a PHP mock-up of a simple user-interface that is being built into icsd.
There are too many things happening at the same time to provide documentation here while the interface is still in flux. Use the source, and feel free to drop by for further information.
See icsd for tentative interface documentation.
For rapid prototyping and testing there are a couple of wrapper scripts that generate XHTML/JavaScript for the icsd AJAX interface to visualize information and present images or video data, aiming towards offering services for each of the different steps in the postproduction workflow.
Currently there are three: an EDL editing interface, an image preview with timecode & slider, and a live-stream/render/export viewer. Development on the ardour interface is currently stalled pending ics session-management and the ardour-3.0 merge.
ics currently requires the AV data to be present locally (or on a GB LAN; NFS or pre-shared); the control information (EDL) is small.
Testing the streamer/encoder prototype, it seems feasible to re-import processed frames (~3 frames latency, which can be compensated). The advantage here would be interoperability with the many other tools that can produce A/V streams; the disadvantage is that it requires extra code to decode/buffer/cache/timestamp input streams, since they are not seekable by default.
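Compensating for the ~3-frame latency mentioned above amounts to shifting requests by a fixed frame offset. As a small illustration, here is standard non-drop-frame SMPTE timecode arithmetic with such an offset applied; the 25 fps rate and the LATENCY constant are assumptions, not values hard-coded anywhere in sodankyla.

```python
FPS = 25  # PAL rate; an assumption -- the actual rate depends on the session

def frames_to_tc(f: int, fps: int = FPS) -> str:
    """Convert an absolute frame number to non-drop-frame HH:MM:SS:FF."""
    s, ff = divmod(f, fps)
    m, ss = divmod(s, 60)
    hh, mm = divmod(m, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert non-drop-frame HH:MM:SS:FF timecode back to a frame number."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

LATENCY = 3  # frames of encode/decode latency, as measured above

def compensated(frame: int) -> int:
    # Request frames LATENCY ahead so the processed result lines up in time.
    return frame + LATENCY
```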
The alternative is a cache-coherence protocol that operates directly on the frame cache, allowing image-data to be shared directly. It is much faster, requires less coding, and something similar is needed for effect plugins anyway. Using mjpeg may become an option to meet real-time requirements for destructive editing.
With a frame-cache and look-ahead buffering, “render / export / save_as / stream_it” functionalities are not distinguished and are only limited by hardware.
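The frame-cache idea can be sketched as a small LRU cache with sequential look-ahead. This is an illustration only, assuming a decoder callback of the form (source, frame-number) → frame data; it is not the cache icsd actually implements.

```python
from collections import OrderedDict

class FrameCache:
    """Minimal LRU frame-cache sketch; keys are (source, frame-number)."""

    def __init__(self, decoder, capacity=128, lookahead=8):
        self.decoder = decoder      # callable: (src, frameno) -> frame data
        self.capacity = capacity
        self.lookahead = lookahead
        self._cache = OrderedDict()

    def get(self, src, frameno):
        key = (src, frameno)
        if key in self._cache:
            self._cache.move_to_end(key)   # mark as recently used
        else:
            self._cache[key] = self.decoder(src, frameno)
        # Prefetch sequential frames so "play" and "render" hit the cache.
        for n in range(frameno + 1, frameno + 1 + self.lookahead):
            k = (src, n)
            if k not in self._cache:
                self._cache[k] = self.decoder(src, n)
        # Evict least-recently-used entries beyond capacity.
        while len(self._cache) > self.capacity:
            self._cache.popitem(last=False)
        return self._cache[key]
```

With a shared cache like this, exporting, streaming and interactive preview are all just consumers pulling frames through the same `get()` path, which is the point made above.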
Currently it requires the input data to be available as a seekable file. The pull-API approach provides accurate sync and latency compensation.
Live-feed composition is underway, connecting via video-jack and freeJ.dyne.org. PureData/GEM should be easy to patch in using a shared-memory external.