My Departement Of Nerdcore Enthusiasm

Tutorials and the like. Things which I have committed myself to. Often linux/UNIX-related.

Sunday, 11 September 2016

DIY repair: broken cello neck

Okay, I've had to revisit the DIY repair described in this blog post because the crack broke open again. I have since made a new intervention on the cello, which seems to work well. The documentation of this new intervention takes the form of a Google album with comments. Visit the album here.

I repaired this cello for my girlfriend. It is not her "fine" cello, but a cheap one (still a lot of money in my opinion :-)) she bought to teach with. Just after she had bought it, it broke. 
The neck broke for no apparent reason. I could see that it had been broken there before and glued. Since the cello seemed to be very old, the glue had probably dried out and lost its grip. Anyway, I wanted to try to repair it, and my girlfriend decided to let me experiment because it would have been expensive to take it to a violin shop to be repaired.

This is not a real tutorial because I am certainly no expert in this field. I just wanted to post the images of the process for inspiration. Some important things do not appear in the pictures and are therefore explained below.

I found a good YouTube tutorial in which the violin builder took the fingerboard off, glued the cello's neck, then drilled two holes from above and drove two thick maple dowels through the already glued neck to strengthen it. He then put the fingerboard back on.

That is probably the correct way to do it, but I could not do it this way, as I did not want to take the fingerboard off, for several reasons. I had to find another solution. I have experimented a bit with building loudspeakers, where you often use polyurethane glue, a foaming glue, to join the speaker boxes together. I thought about using the same adhesive for the cello, as it is very strong and can expand to fill any gaps between the two pieces of wood. My plan instead was to drill a hole larger than the dowel into both sides of the break and let the expanding glue compensate for the gap. The larger gap would make piecing the parts together easier and more precise.

When I got round to the gluing, I put a lot of glue on the dowel, lowered it into the hole and hoped that the glue would expand enough to bind everything together properly. I have since found out that it is a good idea to put a bit of water on the surface that the glue should adhere to (only one side needs the glue on it), but I did not know this during the repair, and I guess it makes no big difference. I opted to make only one big dowel instead of two, also in order to make the adjustment easier. Since I could not get hold of any maple, I used a piece of beech which I had from an old kitchen surface. Beech is similar to maple in that it is very hard and strong. I sawed a square rod of beech that was around 25 cm long and 15 mm wide on each side. Then I planed the edges of the rod so it was fairly round.

I took a small nail and hammered it into the middle of the broken neck where I thought the dowel would sit best. I clipped the head of the nail off with a pair of pincers just a few millimetres above the surface. (A boat builder once taught me this method.) I then pressed the pieces together so that the nail made a mark at the same place on both sides. I could then use these marks as a guide for drilling the holes.
I used a hand-powered drill (brace and bit) with a 16 mm flat drill bit. I drilled by hand as it was easier to keep track of the process, and it does not make any noise. It is also kinda dogmatic and sensual and old skool :-).

Ideally you would let the polyurethane glue bubble out of the cracks and holes as it dries and then remove it all once it has dried completely. However, I definitely didn't want to do that on this fragile instrument, so I carefully wiped the glue away each time it bubbled out, so as not to damage the surface of the wood.

The operation went well it seems. Here is a small video where my girlfriend is playing the cello. 

Tuesday, 17 December 2013

The counter rotating script for video editing

A demo of a small set of scripts I've made for counter rotating a video clip. I call it

The counter rotating script for video editing

The script is helpful if you have a video recording where the camera has been rotated back and forth while recording and you want to stabilize it, i.e. counter-rotate it, clockwise or counterclockwise, back to an almost stable, realistic view orientation.

Background for the scripts
On the fourth of October 2013, in the city of Aarhus (in Denmark), a good friend of mine made a very hand-held recording of my 20-minute solo performance "Stoned" while I performed it live. The pictures were very beautiful, but the camera rotates during filming. If the beautiful recording was to be used for anything, I had to find a way to counter-rotate it so you could watch the video in a fairly horizontal orientation all the way through. It was not just about rotating the video recording from e.g. landscape to portrait format. That would have been an easy task, but here the rotation had to happen along the way, so it had to be continuously counter-rotated.

When I was doing "Dance of a Newsboy", an earlier piece (2012), I spent an incredibly long time making a homemade method to zoom in on a video sequence which had already been recorded: a reasonably advanced interactive shell script that I think is the coolest piece of programming I have made so far. As soon as I got my recording of "Stoned" home in October, I started "coding". I would expand the old script so it could both rotate and crop the video sequence.

Before the process starts, I write all the "frames" of the video clip out as images. It is a 20-minute video recording and there are 24 frames per second, so overall it comes to over 30,000 images. All of these images then get extra padding added at the top and bottom so they are 100 percent square (1280 × 1280 pixels). That provides a good starting point for rotating them. I started making these scripts back in October, and now I'm done with the coding and have started to rotate and crop the thousands of images.
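A minimal sketch of what this preprocessing could look like with ffmpeg and netpbm. This is only an illustration of the idea, not the actual script, and the file and directory names (stoned.mp4, frames/, square/) are made up:

```shell
#!/bin/sh
# Sketch of the preprocessing: burst the clip into one PPM image
# per frame, then pad each 1280x720 frame to a 1280x1280 square.
# (File and directory names are only illustrative.)
#
#   ffmpeg -i stoned.mp4 frames/img%05d.ppm

# rows of padding per side needed to reach the target height
pad_per_side() {
    # $1 = source height, $2 = target height
    echo $(( ($2 - $1) / 2 ))
}

pad_all_frames() {
    for f in frames/img*.ppm; do
        # pnmpad is part of netpbm; -black fills the new area
        pnmpad -black \
               -top "$(pad_per_side 720 1280)" \
               -bottom "$(pad_per_side 720 1280)" \
               "$f" > "square/$(basename "$f")"
    done
}
```

For 720 going to 1280 the helper yields 280 rows of black above and below each frame (280 + 720 + 280 = 1280).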

How it all works
The script treats 96 images at a time, which corresponds to 4 seconds. The five pictures illustrate the process. What is not visible from these still images is the dynamic flow from image to image embedded in the script, but the pictures illustrate the steps it goes through fairly well. This process can perhaps be compared to steering through a video recording with a flight simulator, if that makes any sense :-). I hope that I can "manage" my way through all 30,500 pictures/frames before December 1, when I will start on a new education.

The original image is "lying down". It is in 720p format (1280 × 720 pixels).

Here the picture is made square. A "compass" is put on top of the image. An angle value can then be entered for rotation of the image. Don't worry, you do not rotate each and every image separately: the script processes 96 images in one operation, corresponding to 4 seconds of video.
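A rough sketch of what such a rotation pass could look like with netpbm's pnmrotate. Again this is an illustration of the idea, not the script itself, and the square/ and rotated/ directory names are invented:

```shell
#!/bin/sh
# Rotate a batch of 96 frames, gliding the angle linearly from a
# start value to an end value (one batch = 4 seconds at 24 fps).

# angle for frame i of n, interpolated between start and end
angle_at() {
    # $1=i  $2=n  $3=start angle  $4=end angle  (degrees)
    awk -v i="$1" -v n="$2" -v a="$3" -v b="$4" \
        'BEGIN { printf "%.2f", a + (b - a) * i / (n - 1) }'
}

rotate_batch() {
    # $1 = number of the first frame, $2 = start angle, $3 = end angle
    i=0
    while [ "$i" -lt 96 ]; do
        num=$(( $1 + i ))
        # pnmrotate is part of netpbm; the angle must stay
        # between -90 and 90 degrees
        pnmrotate "$(angle_at "$i" 96 "$2" "$3")" \
            "square/img$num.ppm" > "rotated/img$num.ppm"
        i=$(( i + 1 ))
    done
}
```

So the first frame of the batch gets the start angle, the last frame gets the end angle, and everything in between glides smoothly.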

Now the image is upright.

The next phase is cropping. This is governed by a coordinate system consisting of a lot of small numbers in the image. There is a frame that can be placed in the image by entering the coordinates of the frame's upper left corner. Again, this is not done for each image separately, but for 96 images at a time. You give the frame a starting position and a position where it should end after the 96 images, and the crop then wanders gradually from one position to the other over the 96 images in one operation.
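The wandering crop could be sketched similarly with netpbm's pamcut (an illustration of the idea only, with invented directory names; the 1024 × 600 window is the WSVGA output size mentioned below):

```shell
#!/bin/sh
# Crop a batch of 96 frames with a 1024x600 window whose top-left
# corner wanders linearly from a start position to an end position.

# integer interpolation: position at step i of n, from a to b
lerp() {
    awk -v i="$1" -v n="$2" -v a="$3" -v b="$4" \
        'BEGIN { printf "%d", a + (b - a) * i / (n - 1) }'
}

crop_batch() {
    # $1=first frame number  $2,$3=start x,y  $4,$5=end x,y
    i=0
    while [ "$i" -lt 96 ]; do
        num=$(( $1 + i ))
        # pamcut is part of netpbm
        pamcut -left "$(lerp "$i" 96 "$2" "$4")" \
               -top  "$(lerp "$i" 96 "$3" "$5")" \
               -width 1024 -height 600 \
               "rotated/img$num.ppm" > "cropped/img$num.ppm"
        i=$(( i + 1 ))
    done
}
```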

The finished, cropped image is slightly smaller than the original. The format is called WSVGA and is 1024 × 600 pixels.

A video that demonstrates how it is to work with this script.
If you want to skip the introduction, jump to 6:27 on the video timeline.

Friday, 5 April 2013

Small, simple, simplistic and aesthetic PHP and HTML guest book

This blog post presents a small guest book solution that you can implement on your own website. By guest book, I mean: an opportunity for people who visit the website to post comments and read others' comments on the page. The solution consists of a single PHP/HTML page and a plain text file that stores the written comments.


I have long been fond of making bash shell scripts (Linux) on my computer, so I have a basic understanding of programming. I have also tried to code in other languages, but there I have not yet achieved the same level of understanding and skill. I am therefore proud and happy that I managed to get enough control over PHP code and syntax to code this guest book for my website: a guest book that looks and acts the way I wanted. I would like to share the solution with those of you who might be interested, as inspiration for making your own solution or to use directly, as it is, on your own website.

As I said, the guest book only uses one page. It has the input facility and the view of previous posts on the same page. It is a completely open guest book; anyone can write in it without login or registration. Managing the guest book is done by editing the text file in which all messages are stored. In practice that means: download the file, edit it and upload it again. (If you use Linux/UNIX, you can automate this process by creating a script that uses e.g. wget and wput to download and upload the file from the server.)
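Such a moderation helper could look something like this minimal sketch. The URLs, credentials and file names are placeholders, not my actual setup:

```shell
#!/bin/sh
# Hypothetical moderation helper for the guest book:
# fetch the posts file, edit it by hand, push it back up.
moderate_guestbook() {
    # placeholder addresses; replace with your own server paths
    url="http://www.example.com/guestbook/demoposts.txt"
    ftpurl="ftp://user:password@ftp.example.com/guestbook/"

    wget -O demoposts.txt "$url"      # download the current posts
    "${EDITOR:-nano}" demoposts.txt   # remove or edit entries by hand
    wput demoposts.txt "$ftpurl"      # upload the moderated file
}
```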

I would like to explain the basic principle of the PHP code for this guest book. The trick is that there is an "if-statement" in the very beginning of the PHP-code that says

"If there is something in the variable "post", then append its content to a specified text file. If there is not, then just load the specified text file and show it on the page."

When the page loads the first time, there is nothing in the "post" variable; only when someone has completed the form and pressed the submit button underneath will there be anything in it. Therefore the script simply loads the specified text file and displays it on the page. When someone completes the form and presses the submit button, the page is loaded one more time, but now there is something in the "post" variable, and that content is therefore written to the specified text file, as stated in the if-statement at the beginning. Then the page is loaded once more, but now the post variable is empty again, and we are back to the first scenario, where the guest book is simply loaded into the page.

I do not think I can explain the approach as simply and clearly as I would like, but I do want to state that this approach, in my opinion, is the guiding principle in the coding of this guest book. This is the principle which makes it possible to receive input from the user and display the guest book content on the same page. The rest of the PHP code is just about retrieving input from the user via an HTML form and adding this input to the specified text file.

If you want to use this guest book solution on your own website, you can download its source code from here. Your server must support PHP for it to work, but the vast majority of servers already do. The source code is packed as a zip archive, so you will have to find a way to unzip it before you can start to work with it.

PHP and HTML in interaction

If you want to use this guest book on your own website it will indeed be nice if it fully integrates into the design of your website. Therefore, I would like to review how you can put the PHP code of the guest book into the HTML code of your own HTML page.

So HTML and PHP are two different kinds of code. A little simplistically, one can say that HTML is used for presentation while PHP is used for manipulation (actions, changing things, etc.). PHP makes it possible for what the user enters to be added to a specified text file, where it is stored, and for that file to be loaded into the HTML document. HTML makes the design and look of the page. PHP and HTML work very well together. You can put PHP code into the HTML document at (almost) any place in the HTML code. You use a PHP tag when you want to insert PHP code in the HTML document. The tag looks like this:

<?php ... PHP code ... ?>

Pages which are primarily driven by PHP are often given the file extension ".php", although they also contain HTML code. I like to start my PHP pages exactly as I start my HTML pages, namely with a DOCTYPE definition and some initial HTML code. The PHP code I insert further down in the document. I have made a small illustration of how I've arranged my PHP code in relation to my HTML in this solution.

There is a minimal version of the guest book with a minimal amount of HTML code. You can open that file (gbdemostrip.php) in a text editor and select and copy the two blocks of PHP code plus the HTML form, which is the code between

<form method="post" action="">

and

</form>

(both tags included), and paste them into your page's HTML code in the appropriate places. After that, all you need to do is upload the specified text file demoposts.txt and your own PHP/HTML page with the guest book's PHP code to your server, and it should work.

There is also a stripped-down version of the guest book, packed as a zip file (you will need to unzip it first).
You can download the archive here

I have also made a small zip archive with a working demo of the guest book, to upload and try out right away on your own server. The archive consists of four files: gbdemo.php, demoposts.txt, demobilled.png and englematch.css. If you upload the four files to your server and go to the file


with your browser, it would look something like this:

you should have a working guest book solution on your own website.

You can download the archive here

A small note about character encoding.

Character encoding is an interesting topic in itself that I do not want to go deeper into here. I will just say that the text in your documents, no matter whether it is plain text, an HTML page or a JavaScript file, can have different encodings. It is recommended that you use the encoding called UTF-8 because it is the most modern, international and flexible encoding. For some reason I could not get UTF-8 character encoding to work on my server. I have therefore chosen to go with the older standard called iso-8859-1, so all the elements used for my version of the guest book are iso-8859-1 encoded. I just wanted to point this out. It is quite easy to change once you have found out how it works.
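If you do want to switch the files over later, the conversion itself is a one-liner with iconv. A small sketch with made-up file names (remember that the charset declared in the page's HTML head must be changed to match):

```shell
#!/bin/sh
# Converting a guest book file from iso-8859-1 to UTF-8 would
# look like this (gbdemo.php is just an example file name):
#   iconv -f ISO-8859-1 -t UTF-8 gbdemo.php > gbdemo-utf8.php

# A tiny round-trip check of the conversion itself:
printf 'caf\351\n' > latin1.txt                     # \351 is "é" in iso-8859-1
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt  # now valid UTF-8
```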

I wish you much joy and inspiration

Wednesday, 13 March 2013

Balancing sound recording

I would like to share this with you, although I guess that, as usual, very few will have any interest in it. The reason I want to tell you about it is firstly that I think my solution to this sound balance problem is very unorthodox and fun, and secondly that I think the result was surprisingly good. It still freaks me out how much I was able to improve the quality, or maybe more correctly the "perceivability", of the interview.

I have to say that I am a happy Linux user and that this little project was made on a Linux platform with basic open source software for audio and image processing.

Let's look at the problem.

It is an interview conducted over Skype, and the problem is very simple. One voice is the voice of the woman who asks the questions in the interview. Her voice comes through loud and clear. The other voice is the voice of the woman being interviewed. Her voice comes through so quietly that you can hardly hear it.

Here is a screenshot of the audio file opened in Audacity. It is obvious how one voice gives large fluctuations in the graph while the other one gives tiny fluctuations.

I wanted to see if I could balance the sound level of the two voices in proportion to each other. The first thing I found out was that I could increase the volume of the quiet voice if, in Audacity (a sound editing program), I selected the piece that has low volume, and only that piece: the piece which I have identified as "Voice B" in the illustration above. After selecting it, I ran a so-called normalize effect on it. After doing only that, the sound of voice B, the one that was so quiet, would suddenly come through loud and clear. I was so surprised and happy about this discovery that I made a short video about it.

So if I could do this all over the interview, which lasted about an hour, I would probably get a finely balanced interview where you could hear both voices. I started to do that manually, but it was a laborious process and at the same time it was difficult to make it uniform, so I started thinking about whether this process could be automated. I looked around for an existing program that did this same thing and found one called The Levelator®, and I tried it, but for some reason it did absolutely nothing to my audio file. As I thought more about it, I got a picture in my head that it would be cool if I could make two complementary soundtracks of the interview: one where only the strong voice is heard, with silence in all the parts where the quiet voice is speaking, and one where only the quiet voice is heard, with silence in all the places where the strong voice is speaking. Then I would be able to open the two soundtracks in Audacity together and treat them individually in relation to each other.

I thought it would be an easy task for a computer sound engineer to make a program that could do that: a program that could register when there was a period of tiny fluctuations in the graph and when there was a period of big fluctuations. But I am no computer sound engineer. I am not so strong in audio on the computer, but I have worked a lot with images and video, and as I thought about it, the idea of turning the sound stream into a video stream started to take form in my mind: the idea that I might be able to turn the sound into video, the video into a series of images, and then make an algorithm based on measurements of the images.

For Linux/UNIX there is a small command-line program for playing audio files. This program is called "play", and it is part of a program suite called SoX, which is for command-line manipulation of audio files.

When "play" plays a sound file, it also displays some meta information while running. It shows a time indicator, but it also shows a small sound level indicator.

Now, if I could film this little indicator at 24 frames per second while it was playing the interview, I would be able to print this video out as a series of images showing exactly how loud the sound is at any given time.

So I hope you are with me so far. The idea is that I make a little video recording (screen recording) of the small volume indicator's movement while the interview is playing. This video I print out as a huge number of images, one for each frame in the video recording. Each of these images contains accurate information about how loud the sound is at a given time in the recording. This information can be read from every single image in the series by a human being, but now I want to find a way for the computer to read or calculate this information on its own, so that the process can be automated. There are two parameters that I would like the computer to figure out. One is a precise time. The other is the state of the sound level indicator at that particular time.

OK, first the time parameter, because that is the easiest. If I create a script that counts its way through the series of images, then the number it has reached in the counting can very easily be converted to a time measure. There are 24 images per second, so you just divide the number by 24 and you have the time in seconds.
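As a tiny sketch of that conversion (just an illustration of the arithmetic):

```shell
#!/bin/sh
# frame/image number -> time, at 24 frames per second
frame_to_seconds() { echo $(( $1 / 24 )); }                              # whole seconds
frame_to_time()    { awk -v n="$1" 'BEGIN { printf "%.2f", n / 24 }'; }  # with decimals
```

So image number 36, for example, corresponds to 1.50 seconds into the recording.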

But how do we get the computer to read the sound level indicator?

Netpbm is the name of a package of command-line programs for image manipulation. It is software I use again and again, and therefore I also know its possibilities really well.

I knew that in the netpbm program suite there is a program called "pnmcrop" which can remove all same-colored edges of an image. So if you have a black image with a white star in the middle, then "pnmcrop" removes all the black from the top, bottom, right and left sides of the image until it meets some of the white pixels from the star in the middle. Like this:

Before pnmcrop

After pnmcrop

I also knew that the netpbm program suite had a program that could be used to measure how large an image was in pixels (pamfile).

The neat thing about the netpbm program suite is that all the programs in it are command line programs which means that they can be put into a script and used to automate workflows.

So I record a little video of "play" playing the interview sound file. This can be done by making a screen recording, or screencast, and there is again an excellent command-line program that can be used for this purpose. This program is called ffmpeg.

In the picture below I define the area I want to record of the screen. As you can see, it is a very small area.

When this is written out as images instead of video, it produces a huge number of images (24 per second). The images look like this.
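The recording step itself could be sketched like this. The geometry and screen offsets below are made-up values; you have to adjust them to wherever the meter actually sits on your screen:

```shell
#!/bin/sh
# Sketch: grab a small region of the screen with ffmpeg's x11grab
# while "play" runs, then burst the clip into one image per frame.
record_meter() {
    # 260x20 pixels at offset +650,910 are hypothetical values
    ffmpeg -f x11grab -video_size 260x20 -framerate 24 \
           -i :0.0+650,910 meter.mp4
}
extract_frames() {
    # one PPM per frame: meter000001.ppm, meter000002.ppm, ...
    ffmpeg -i meter.mp4 meter%06d.ppm
}
```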

Now I use pnmcrop on the five images above to cut all the black away from the right, and only from the right side of the images:

You can see that the images differ in width according to the scale of the little volume indicator.
As I said, there is also a program in the netpbm program suite that can measure a picture's width in pixels. So all of a sudden I have a setup that can give me an exact figure for the sound level, measured in pixels, at an exact point in time. I would then be able to create a simple conditional statement:

If the image is greater than 314 pixels wide, then it is voice A speaking. If it is less than 314 pixels wide, it is voice B speaking.
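In shell terms, that measurement could look something like this sketch. It assumes netpbm's pnmcrop and pamfile; the parsing helper just pulls the number before "by" out of pamfile's "... W by H ..." output:

```shell
#!/bin/sh
# Pull the width "W" out of a pamfile line like
#   meter.ppm: PPM raw, 231 by 20  maxval 255
parse_width() {
    awk '{ for (i = 1; i < NF; i++) if ($(i + 1) == "by") print $i }'
}

# Width of the level meter in pixels: crop the black away from the
# right edge only, then read the remaining width.
meter_width() {
    pnmcrop -black -right "$1" | pamfile | parse_width
}

# The conditional from the text, with the 314-pixel threshold
classify_voice() {
    if [ "$(meter_width "$1")" -gt 314 ]; then echo A; else echo B; fi
}
```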

Now, 24 time measurements per second is too high a level of detail for what I want to do, so I made a small script that collected the images twelve and twelve. Here I also use a program from the netpbm program suite, pnmcat, to put the images together twelve at a time.

Now each image represents half a second. That is a little easier to work with, and I can still apply the same algorithm: crop from the right and measure precisely what the sound level is for that given half second.
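The "twelve and twelve" pass could be sketched like this (again just an illustration assuming netpbm; the file naming is my own invention):

```shell
#!/bin/sh
# Concatenate each run of 12 frame images left-to-right with
# pnmcat, so one combined image covers half a second of audio.
combine_batch() {
    # $1 = number of the first frame in the batch
    files=""
    i=0
    while [ "$i" -lt 12 ]; do
        files="$files meter$(( $1 + i )).ppm"
        i=$(( i + 1 ))
    done
    # word splitting of $files is intended here
    pnmcat -lr $files > "half$(( $1 / 12 )).ppm"
}

# combined-image number -> its start time in seconds
batch_to_seconds() { awk -v b="$1" 'BEGIN { printf "%.1f", b / 2 }'; }
```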

So I made a few shell scripts to automate this.

The first script produces a list with an exact time measurement for every time the interview switches from voice A to voice B and vice versa.
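The core of such a script could look roughly like this sketch (not my actual script, which is linked below; it assumes the half-second images and the 314-pixel threshold described above):

```shell
#!/bin/sh
# Walk the half-second images in order, classify each one as
# voice A or B by its right-cropped width, and print a timestamp
# whenever the speaker changes.
switch_list() {
    prev=""
    n=0
    for f in half*.ppm; do
        w=$(pnmcrop -black -right "$f" | pamfile |
            awk '{ for (i = 1; i < NF; i++) if ($(i + 1) == "by") print $i }')
        if [ "$w" -gt 314 ]; then cur=A; else cur=B; fi
        if [ -n "$prev" ] && [ "$cur" != "$prev" ]; then
            # each image covers half a second, so time = n / 2
            awk -v n="$n" -v v="$cur" 'BEGIN { printf "%.1f %s\n", n / 2, v }'
        fi
        prev=$cur
        n=$(( n + 1 ))
    done
}
```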


The script is here if anyone wants to see how I did it in practice. (Sorry, the comments are in Danish. If you don't understand Danish you will have to use Google Translate or work the meaning out of the code itself.)

The next script takes this list and produces from it the two previously mentioned complementary soundtracks. For this I use SoX and ffmpeg.
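The idea behind that second script could be sketched with SoX like this. The file names and times are made up; this is an illustration of the principle, not the actual script:

```shell
#!/bin/sh
# Cut the interview at the switch times, and rebuild a track where
# the other voice's segments are replaced by equal-length silence.
cut_segment() {
    # $1=in  $2=out  $3=start  $4=length  (seconds)
    sox "$1" "$2" trim "$3" "$4"
}
silence_like() {
    # silence with the same length, rate and channels as $1
    sox "$1" "$2" vol 0
}
# E.g., if voice A spoke for the first 12.5 s and voice B for the
# next 18.5 s, the "A only" track would start like:
#   cut_segment interview.wav a1.wav 0 12.5
#   cut_segment interview.wav b1.wav 12.5 18.5
#   silence_like b1.wav b1sil.wav
#   sox a1.wav b1sil.wav trackA.wav    # concatenation, in order
```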

Finally, I open the two audio tracks in Audacity, adjust them relative to each other and then mix them down to one track again.
The result has been really good and the process has been really inspiring.


A note for those who want to experiment with this in practice.

Although I have split the voices onto two tracks, there are still some transitions where the strong voice bleeds onto the weak voice's track. In particular, I found that the shift from the strong voice to the weak one almost always put a little less than a second of the strong voice on the weak voice's track. To counteract this, I tried to move the timing of all the shifts one second forward in relation to the soundtrack. I did this by removing 24 images from the start of the process where the many small images are collected twelve and twelve. It was a very easy place to do it, because I only needed to change one number: the starting point in the script. I simply counted 24 up. So if image 145 is the one that comes closest to the audio file's starting point, I instead start 24 images further ahead in the series, so that image 169 becomes the starting point.

When moving the timing of the shifts a second forward in relation to the audio file, there was much less bleed-through from the powerful voice on the weak voice's soundtrack, but there was still some. I've found that the best results are achieved by leveling these "spots" into alignment with the fluctuations of the rest of the quiet voice's track. I have done this manually in Audacity. It takes me about 20 minutes for an interview lasting about one hour. I zoom into the graph so much that 25 seconds fills my whole Audacity window. Then I use the arrow at the end of the scrollbar at the bottom of the Audacity window to move forward through the soundtrack. As soon as I see a fluctuation which extends above the average, very low level of the soundtrack, I mark it and use

Effect -> Amplify

to turn down the volume in this place to the level of the rest of the track. When I'm done with this process, I have a track where all the fluctuations have the same low level and the waveform looks quite uniform, without anything sticking up anywhere. Then I run

Effect -> Normalize

on the track. It makes a really strong impact: the entire track is enhanced significantly, and I do not need to do anything else. The second track, with the powerful voice, I don't change at all. When I have manually eliminated all the "ridges" in the quiet track and run

Effect -> Normalize

on the entire track, I can just export the two tracks together, and then I have the finished audio file, where the two voices can both be heard loud and clear.

Wednesday, 11 July 2012

To make a 3G USB modem work on Android

The Liberty Tab journey: reports from my process towards making a 10.1-inch tablet computer my primary computer.
The product I use for this is a Packard Bell Liberty Tab G100. Please note that:
Packard Bell Liberty Tab G100
Acer Iconia A500 TAB
are identical products and that I will use the two product names synonymously.

To make a 3G USB modem (mobile broadband) work

The Packard Bell Liberty Tab G100 comes with built-in Wi-Fi but not with a built-in 3G modem (SIM card) function. I wanted to use my 3G USB dongle modem (mobile broadband) to go on the internet. My 3G modem is a Huawei E1750.
I have used this mobile broadband solution for some years now in Linux as my everyday internet connection, and I have several times spent some time getting it to work under various Linuxes. The Huawei E1750, and indeed all the various Huawei 3G modems, are made in such a manner that they carry the drivers for e.g. Windows on board themselves. The way it works is that when you first put the modem into a Windows computer, the modem reports to the system as a CD-ROM containing the installation program. After installation is complete, the installed software sends a signal to the modem that it must now change from being a CD-ROM to being a modem. It is this switch operation which causes some problems under Linux, and therefore also under Android (Android is Linux based). A German developer known as Draisberghof has developed a Linux tool for this switching operation. This tool is called usb_modeswitch. All the Android tablet computers that I know of are driven by a processor type called ARM, and Draisberghof has compiled a version of usb_modeswitch for the ARM processor. He has compiled it STATIC, that is, a version of usb_modeswitch that can run without any further dependencies on most Android devices, smartphones, tablets, etc. We will need this version of usb_modeswitch to make the 3G modem work on our Android tablet. It can be downloaded from here.

Look for the text
"Upon a recent request I Provide a static binary for ARM: usb_modeswitch-1.1.9-arm static.bz2."
under the Download heading on the page, and download it.

I'll have to say now that before you can even do the following, you must first have installed a custom ROM on your device. The stock ROM (also called firmware) that ships with the Packard Bell Liberty Tab G100 does not support 3G USB modems. In fact, it does not support USB devices at all. I will talk about the process of installing a custom ROM in another post. For this post we'll assume you have already installed a custom ROM that supports 3G modems and that your tablet is rooted, which it probably will be if you have installed a custom ROM.

You need an Android terminal for the next step. I have been very happy with a terminal called ConnectBot. It's free (Android Market). But otherwise you can just use Terminal Emulator, also from the Android Market.

OK, let's say you have downloaded usb_modeswitch-1.1.9-arm-static.bz2 to your PC. An easy way to get it onto your tablet could be to pull the mini SD card out of the tablet, put it in a card reader on your PC, add usb_modeswitch-1.1.9-arm-static.bz2 to the card and put the card back into the tablet. You also need a good file manager. Astro (app) is good (the Total Commander app is even better). With your file manager, find usb_modeswitch-1.1.9-arm-static.bz2 on external_sd. If you click and hold usb_modeswitch-1.1.9-arm-static.bz2 in Astro, you can unpack the bz2 archive, and you'll get the pure binary file usb_modeswitch-1.1.9-arm-static. usb_modeswitch-1.1.9-arm-static should now be maneuvered into a specific directory on your tablet:


You can do this by opening your terminal

and then typing:

su

to become root, followed by:

cd /data/local

cp /mnt/external_sd/usb_modeswitch-1.1.9-arm-static .

Now you just make sure that usb_modeswitch-1.1.9-arm-static is executable:

chmod 0755 usb_modeswitch-1.1.9-arm-static

Now you're actually ready to initiate your modem by giving a very long command to usb_modeswitch-1.1.9-arm-static. This command is so long that it should live in a script if you want to use it many times, and you will, if you use this method.

You must therefore make a small shell script, which should also be placed in /data/local.
The script itself is nothing but a plain text file with a few lines of text in it. The text lines are here:

/data/local/usb_modeswitch-1.1.9-arm-static -v 0x12d1 -p 0x1446 -V 0x12d1 -P 0x1001 -s 20 -M "5553

How you make this small text file and get it into place, I will leave up to you for now. But when you have done it, you must also make sure that it is executable:

chmod 0755 mymodeswitch # or whatever name you gave your little script

You can now put your modem into the USB port on your tablet computer and then run your script by typing its name (e.g. ./mymodeswitch) in your terminal.
You can see what usb_modeswitch-1.1.9-arm-static "answers". If it finds a modem and succeeds with the switch operation, it will answer that it succeeded.

If you have another Huawei modem than the E1750, you may need another mode line. The mode line is the long number line that you can see in the script text above.

Try searching Google for: Huawei your-model usb_modeswitch

I even had to adjust some of the values myself to arrive at the values that I use here. Here the tool lsusb is a good help. Just type "lsusb" in the terminal and see what it says.
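A small sketch of how lsusb helps. The example line shows the typical output format; 12d1:1446 is the vendor:product pair of the Huawei stick in CD-ROM mode, matching the -v/-p values in the script above:

```shell
#!/bin/sh
# Pull the "vendor:product" ID pair out of lsusb output lines like
#   Bus 001 Device 004: ID 12d1:1446 Huawei Technologies Co., Ltd.
usb_ids() {
    sed -n 's/.*ID \([0-9a-f]*:[0-9a-f]*\).*/\1/p'
}
# Typical use on the device or a Linux PC:
#   lsusb | usb_ids
```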

If you manage to switch your modem, you must set it up in your Android system settings as well.
Go to settings and turn off Wi-Fi. Find the settings for mobile broadband. Enter or choose two pieces of information: your access point and your network. In my case that is:

network: Telia
access point: websp

After that it worked for me.

Every time I turn on the tablet computer and want to use the internet, I open a terminal and execute my modem initiation script.

It takes a few moments while the modem initializes and connects, and after this I'm ready to go on Facebook, check email and surf the internet.

Tuesday, 6 December 2011

Zooming in on a video clip

This script is a way to zoom into a video clip that has already been recorded. In order for that to work, one must be able to move the "focus" around through the progression of the video clip, so that one can keep the focus on a certain object. The script in reality consists of two scripts: "prunpre", a preparation script, and "prunrun", a "run script" which treats 60 frames at a time.
The scripts will run in a Unix environment: Unix, Linux, Mac OS X, Solaris, FreeBSD, etc. You can even run them in Windows if you install Cygwin. Besides having a UNIX/Linux environment, there are also some basic software packages you must have installed on the system before the scripts can be used. They are:

FFmpeg (command-line video conversion and editing)
Feh (image viewer)
Netpbm (toolkit for manipulation of graphic images)

Here is a video demo of the scripts in action.

Here are the scripts. You can right-click and download, or open and copy-paste.


Once you have the scripts downloaded, you will probably also have to make them executable:
chmod +x prunpre
chmod +x prunrun

When you have the two scripts, create a directory where you put the two scripts and the video clip you want to zoom in on.
Then the video clip needs to be converted into a series of numbered ppm images (ppm is an image format), one image per frame in the video clip.
You do that with ffmpeg. Open an X terminal of one kind or another and switch to the directory where you have the video clip and the scripts. Then give the command:

ffmpeg -sameq -i myvideo.avi myvideo%d.ppm

where naturally myvideo.avi must be replaced with the name of the video clip you want to work with.

Note that this process can be pretty disk-space intensive.
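How intensive? A binary ppm frame stores three bytes per pixel, so you can estimate the space up front. A rough back-of-the-envelope calculation, assuming VGA resolution and 25 frames per second (both example values, not requirements of the scripts):

```shell
# Rough disk usage for extracted ppm frames. Binary ppm stores
# width * height * 3 bytes per frame, plus a tiny header we ignore.
# Resolution and frame rate are example assumptions.
width=640 height=480 fps=25 seconds=60
bytes_per_frame=$((width * height * 3))
total=$((bytes_per_frame * fps * seconds))
echo "$((total / 1024 / 1024)) MB per minute of video"
# prints "1318 MB per minute of video"
```

So even a short clip can eat more than a gigabyte of scratch space.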

You're then ready to run the first script. So in your X terminal you type:
./prunpre basename (followed by "enter")
where basename is the base name of the pictures you just printed out with ffmpeg. If the images are called

img1.ppm, img2.ppm, img3.ppm, img4.ppm

then your base name is "img".

The script will guide you through the following steps. After prunpre, the second script prunrun has to be executed; prunpre will print a message about that when it has completed its process. The demo video above should also give a sense of how to use the scripts.

A more general note that deals not just with these two scripts, but with the other scripts on this blog and my whole approach to this work:
I make the scripts here because I need them myself for a specific task, but sometimes I think they are really good and very configurable, and so I try to adjust them so that others could use them too. It is also a kind of aesthetic pleasure to make things nice and comprehensible to others. But I think I appeal to a very limited audience with these scripts, and therefore there is no reason to cover every aspect of adaptation and compatibility. I think of it this way: for instance, these scripts could easily be adapted to run in the terminal on a Mac computer. Or the scripts could easily be changed to work together with another image viewer than "feh". But I do not know if anyone will ever need this, so there is no reason to make all these adjustments. I'd rather you contact me if you have any special needs. I know the scripts and could probably easily adjust them to your needs. My contact info is here, or you can leave a comment on the blog post.


Monday, 17 October 2011

Create scrolling movie text from a pdf file

If you look around on this blog, you can see that much of it is about producing video effects without a fancy graphical video editing program. I find it fussy to work with these heavy video editing programs when I often have just a small computer at my disposal. My scripts can produce a good result without me having to drag elements into place with the mouse in a graphics program. I can instead give a precise command from a command line and then just wait for the job to get done. This script can be slow to get through the process. The example of "rolling credits" shown here took about half an hour to process on my little ASUS Eee PC. But in that time I am free to work on something else. That works fine for me.

This script creates a scrolling-text video clip like the rolling credits of a movie. It makes it out of a PDF or PS (PostScript) document. That is, when you want to produce your rolling credits, you simply type them into a word processing program like OpenOffice Writer or AbiWord, etc. and then save (save a copy or export) as a pdf (almost any modern word processor will do this). You should set the page size to A4 paper size. Top and bottom margins should be an inch, which is equal to 2.54 cm. The width of your document corresponds to the width of the screen, so the distance you have between the text and the edges of the document corresponds to what you're going to see on the screen. I have used a font size of 20 points, which I think is a good starting point, but it is up to you. You can format text just like you would in your word processing system. Use black for the text and white for the background. Although the text is spread out over several pages in the word processing program, try to imagine that it's just one long continuous text when you set it up. The resulting video clip is white text on a black background. This concept can obviously be expanded so that the text runs over the "living pictures" in your video, but that leads too far for now. The script is also currently limited to producing video output in VGA format, 640 x 480 pixels.
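The basic idea can be sketched like this: the document is rendered to one tall image, and each video frame is a 640 x 480 window cropped at a steadily increasing vertical offset. This is only my illustration of the principle, not the script itself; the image height and scroll step below are made-up numbers.

```shell
# Illustration of the scrolling principle (not the actual script).
# tall_height and step are made-up example values.
tall_height=3000   # height of the rendered document image, in pixels
frame_height=480   # VGA frame height
step=4             # pixels scrolled per frame
frames=$(( (tall_height - frame_height) / step ))
echo "this would yield $frames frames"
# Frame N could then be cut out with Netpbm's pamcut, e.g.:
#   pamcut -top $((N * step)) -height $frame_height tall.ppm > frameN.ppm
```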

For this script to work you must have the following programs on your computer:

1. The script is a bash shell script, so you must have a bash shell environment, which you have on most Unix platforms: OS X, Linux (Ubuntu, Red Hat, Puppy, etc.), Solaris, etc.

2. The basic program package Netpbm for command-line and script-based image processing.

3. Ghostscript (gs), the classic GNU/open-source tool for PDF and PostScript handling.

4. FFmpeg, a really nice, strong, basic command-line based film/video editing and conversion tool.
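Before running the script you can quickly verify that these tools are on your PATH. This little check is my own suggestion, not part of the script; pnmscale stands in as a representative Netpbm program.

```shell
# Report which of the required tools are installed (a suggestion,
# not part of the script itself).
check() { command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "missing: $1"; }
check bash
check gs
check ffmpeg
check pnmscale   # one of the Netpbm tools
```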

To run the script, you must have a terminal window open. Xterm, Aterm, whatever. Then, in your working directory, you should have your PDF or PostScript file and the script. You give the PDF or PostScript file as an argument to the script like this:

./rolling yourfile.pdf

and now it will take a while to produce a video clip with "rolling credits" out of your document.

The script is here. If you want to download it, you can right-click and then choose "save target as" or similar. Now make sure the script is executable. You can probably do that via the right-click menu as well, or from a command line with

chmod +x rolling

If for some reason you would want to see the pdf file I've used as the basis for this video example, it is here.