Video streaming in Kubuntu 8.10

I have tried several players, such as Kaffeine and MPlayer, but I finally ended up with Totem on my Kubuntu 8.10 Intrepid. I needed to install the additional and the extra plugin packages.
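If I remember correctly, these can be pulled in with apt, roughly as below (the exact package names on Intrepid are my assumption, so check what apt actually offers):

# Totem plus its plugin packages - names assumed, adjust as needed
sudo apt-get install totem totem-plugins totem-plugins-extra
# GStreamer codec packages for the streams
sudo apt-get install gstreamer0.10-plugins-good gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly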

With those installed, I can play the video streams from one of the websites that I usually watch.

Segmentation fault error when running in Hilbert

When I tried to run my code on the Hilbert cluster with 4 nodes and 4 CPUs per node, I kept getting this error:

p15_26112: p4_error: interrupt SIGSEGV: 11
rm_l_15_26139: (699.613281) net_send: could not write to fd=5, errno = 32
Initial Guess …
Start NL Poisson…
dUin : 0.00106498
dUin : 0.00104582
dUin : 0.00100422
dUin : 0.000719826
dUin : 0.000220362
dUin : 1.35221e-05
dUin : 4.5627e-08
p0_17202: p4_error: interrupt SIGx: 13
p15_26112: (699.625000) net_send: could not write to fd=5, errno = 32
p7_13549: (701.113281) net_send: could not write to fd=5, errno = 32
rm_l_13_26081: (699.851562) net_send: could not write to fd=5, errno = 32
p12_26025: (699.964844) net_send: could not write to fd=5, errno = 32
p9_2640: (700.449219) net_send: could not write to fd=5, errno = 32
p0_17202: (732.257812) net_send: could not write to fd=4, errno = 32

I finally found out the reason. It turns out I omitted the line in the PBS script that specifies the memory requirement. It seems the job can run on the head node without it, but the other nodes require -l mem= to be set.

So this is an example of the script that I use:

#!/bin/sh
#PBS -N MPI_Job
#PBS -l nodes=4:ppn=4
#PBS -l ncpus=16
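# per-job memory request - this was the line I originally left out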
#PBS -l mem=2gb
#PBS -V
#PBS -o Output_File
#PBS -e Error_File
#PBS -l walltime=40:00:00

cd $PBS_O_WORKDIR
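# start 16 MPI processes on the nodes PBS allocated (listed in $PBS_NODEFILE)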
mpirun -np 16 -machinefile $PBS_NODEFILE ./a.out

Guide to available mathematical libraries

I just found an interesting site that helps you find the mathematical libraries you need for your computations:

http://gams.nist.gov/

Checking the number of free processors and jobs

To check the number of free processors:
mdiag -n

To check the status of a job, you can use either
qstat -a
or
checkjob -v PID
where PID is your job ID (the qstat -a output will show it). checkjob also tells you the architecture and which processors the job is running on.
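For example, if qstat -a shows that my job ID is 12345 (a made-up number for illustration), the whole check would look like:

mdiag -n
qstat -a
checkjob -v 12345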

Installing IT++ in Hilbert and Darwin

Compared to installing on Turing (SGI Altix), on Hilbert I need to install the FFTW library myself (see the sketch after the configure lines below) and I do not need to specify the F77 flag when configuring IT++:

./configure --prefix=$HOME/local CPPFLAGS="-I$HOME/Download/boost_1_36_0 -I$HOME/local/include" LDFLAGS="-L$HOME/local/lib"

While on Darwin, I use:

./configure --prefix=$HOME/local CPPFLAGS="-I$HOME/Download/boost_1_36_0 -I$HOME/local/include" LDFLAGS="-L$HOME/local/lib" F77=ifort
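For the FFTW dependency on Hilbert, I build it into the same $HOME/local prefix before configuring IT++. Roughly (the FFTW version below is only an illustration, not necessarily the one I used):

wget http://www.fftw.org/fftw-3.1.2.tar.gz
tar xzf fftw-3.1.2.tar.gz
cd fftw-3.1.2
./configure --prefix=$HOME/local   # same prefix that the IT++ configure above points to
make
make install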

Installing IT++ in SGI Altix

I am installing IT++ on my compute server, an SGI Altix (Turing). I want to use the Intel compiler and MKL to build the IT++ libraries. To do that:

export LDFLAGS="-L/opt/intel/lib/64"
export CPPFLAGS="-I/opt/intel/include -I/scratch/ihpcoka/Download/boost_1_36_0"

I used the Boost libraries in my IT++ modification. Then run configure:
./configure --prefix=/scratch/ihpcoka/local CXX=icpc

Then type make, make check, and make install. During make check, I got one FAILED test, "rec_syst_conv_code_test".

Send/receive C++ vector using MPI

I am using IT++ for some of my code, and I tried to parallelize it. It came to the point where I need to sum up the vectors from all the processes, so in MPI I will need MPI_Reduce. My question is: since an IT++ vector is not a plain array, can we use MPI_Reduce with the "count" parameter, or do we need to define a new MPI data type to handle it?

It turns out that a C++ std::vector stores its data contiguously, just like a plain array [1], and the IT++ vec likewise keeps its elements in contiguous memory. Hence, we can use the same technique as for an array: pass a pointer to the first element together with a count.

This is a simple test code:

#include <itpp/itbase.h>
#include "mpi.h"
#include <iostream>

using namespace itpp;

int main(int argc, char** argv)
{
  int my_rank;
  int p;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &p);

  // fill the vector differently on the root and on the other process
  vec test(3), result(3);
  if (my_rank == 0)
  {
    test(0) = 1; test(1) = 2; test(2) = 3;
  }
  else
  {
    test(0) = 4; test(1) = 5; test(2) = 6;
  }

  // pass a pointer to the first element plus a count, as for a plain array
  MPI_Reduce(&test(0), &result(0), test.length(), MPI_DOUBLE,
             MPI_SUM, 0, MPI_COMM_WORLD);

  std::cout << "test from " << my_rank << " :" << test << "\n";
  std::cout << "result from " << my_rank << " :" << result << "\n";

  MPI_Finalize();
}

The code basically creates the vector "test" and stores {1,2,3} in the root process and {4,5,6} in the other process. After that, the vectors are summed up in the root process and displayed. To compile this, I need to point the compiler to the MPI header files and to the IT++ headers and libs (I also used the Boost library to modify my IT++ source code):

mpicxx -I/usr/lib/openmpi/include -I/home/kurniawano/local/include -I/home/kurniawano/Download/NumericalComputation/boost_1_36_0 -L/home/kurniawano/local/lib testmpivec.cpp -litpp

Then I ran the code with two processes using:

mpirun -np 2 ./a.out

The output is:
test from 1 :[4 5 6]
result from 1 :[0 8.44682e+252 5.31157e+222]
test from 0 :[1 2 3]
result from 0 :[5 7 9]

So you can see that the result in the root process (0) is the sum of the vectors from the two processes. The result vector printed by rank 1 is just uninitialised memory, since MPI_Reduce only delivers the final sum to the root.

References:
[1] http://www.gotw.ca/publications/mill10.htm