Monthly Archives: July 2013

sum_states.py: Nice DOS plots from QE outputs

Once, I was making some DOS figures with Quantum ESPRESSO using the sumpdos.x program by Andrea Ferretti (included in the QE package), but I felt that some features were missing. So I wrote my own script, which does something similar but is much more automated, faster to use, and even produces ready-to-publish graphs automatically.

It is part of the Quantum ESPRESSO distribution, so if you have the Quantum ESPRESSO source code, you already have it in …/*espresso*/PP/tools/sum_states.py. If you don’t have it, or you just want to read the source code online, you can look for sum_states.py in the Quantum ESPRESSO repository. Continue reading

dos-ipr.f: Calculate DOS and IPR with CPMD

The Density Of States (DOS) and the Inverse Participation Ratio (IPR) are two interesting properties for understanding the electronic structure of a system.

The DOS is just a histogram counting the number of states (molecular orbitals/wavefunctions) per energy unit; by analyzing these distributions, we can better understand the electronic behavior of our system.
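Just to illustrate the idea (this is not the dos-ipr.f code, which is written in Fortran), the DOS-as-histogram part can be sketched in a few lines of Python, assuming you already have the eigenvalues in a plain text file (the file name and bin width here are made up):

import numpy as np

# hypothetical file with one eigenvalue (in eV) per line
eigenvalues = np.loadtxt("eigenvalues.dat")

# the DOS as a plain histogram: number of states per energy bin
bin_width = 0.1  # eV
bins = np.arange(eigenvalues.min(), eigenvalues.max() + bin_width, bin_width)
dos, edges = np.histogram(eigenvalues, bins=bins)

# print bin center vs. number of states
for e, n in zip(0.5 * (edges[:-1] + edges[1:]), dos):
    print("%10.4f %6d" % (e, n))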

The IPR is a way to quantify the “amount of localization” of these states: the larger the value of the IPR, the more localized the state is, e.g. around a specific covalent bond.
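For reference, a common textbook definition of the IPR of a state \psi_n expanded in a localized basis with coefficients c_{n,i} is the following (the exact convention used in dos-ipr.f may differ slightly):

\mathrm{IPR}(\psi_n) = \frac{\sum_i |c_{n,i}|^4}{\left( \sum_i |c_{n,i}|^2 \right)^2}

With this normalization, a state spread evenly over N basis functions gives an IPR of about 1/N, while a state localized on a single site gives an IPR close to 1.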

We used these two properties in an article published in Phys. Rev. B, titled “Polymorphism in phase-change materials: melt-quenched and as-deposited amorphous structures in Ge2Sb2Te5 from…”, which you can check for more information. Continue reading

now: easy job monitoring

Job monitoring while running calculations on a cluster/supercomputer can be done with several tools (such as qstat, showstart, …). Unfortunately, most of these standard tools are made by computer scientists for computer scientists. But now there is an alternative: now.
At some point in my life, I started to get sick of qstats, greps, awks, … because it takes a couple of seconds every time one has to type them. If we multiply these seconds by the number of times I need them and by the number of computational members in my group, we get enough time to prepare a couple of new inputs, or to write some posts/comments on my blog.
So I wrote my own job monitoring/visualization script, based on some ideas from my friend Iñaki during my time in the theoretical chemistry group in Donostia. Initially, that script did nothing but execute these programs and display ONLY the information I need, and ALL the information I need. This information includes: Continue reading
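The core of that first version is trivial to reproduce; a rough, hypothetical Python sketch (assuming a PBS/Torque-style qstat, whose exact column layout depends on your scheduler) could look like this:

import getpass
import subprocess

user = getpass.getuser()

# run "qstat -u <user>" and keep only the columns I actually want to see
out = subprocess.check_output(["qstat", "-u", user]).decode()

for line in out.splitlines():
    fields = line.split()
    # skip headers and separators; real qstat output needs more careful parsing
    if len(fields) < 10 or not fields[0][0].isdigit():
        continue
    job_id, name, state = fields[0], fields[3], fields[9]
    print("%-15s %-20s %s" % (job_id, name, state))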

checkpc: checking the PCs on your local network

Sometimes, especially in small computational groups, it is common that people use their workstations for running calculations, and in the case of a “trusted” network, where all the colleagues can log in to each other’s computers, it is often also the case that everybody calculates everywhere. But at the moment of submitting a calculation, if your own PC is already busy, how do you find a computer in your network which is not running anything at the moment?

You could ssh to each host and run top or something like that, but that is really slow when you have a lot of computers and you have to repeat the task very often.

The solution: “checkpc”
checkpc is a Python script which can scan a whole network of computers and return the load of all PCs within it in less than 3 seconds. Continue reading
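Conceptually, it does something like the following simplified sketch (not the real script): ssh to every host in parallel, read the load average, and print it. The host names here are made up, and passwordless ssh is assumed:

import subprocess
from threading import Thread

hosts = ["pc01", "pc02", "pc03"]  # hypothetical hosts on the local network

def get_load(host, results):
    try:
        # a short connection timeout keeps the whole scan fast
        out = subprocess.check_output(
            ["ssh", "-o", "ConnectTimeout=2", host, "cat /proc/loadavg"])
        results[host] = out.split()[0].decode()  # 1-minute load average
    except Exception:
        results[host] = "unreachable"

results = {}
threads = [Thread(target=get_load, args=(h, results)) for h in hosts]
for t in threads:
    t.start()
for t in threads:
    t.join()

for host in hosts:
    print("%-10s load: %s" % (host, results[host]))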

shrink_traj: Make trajectory files smaller

When we perform molecular dynamics (MD) simulations, we sometimes want to store a frame every time step in order to improve the statistics (e.g. when calculating radial distribution functions). On the other hand, due to their large size, these trajectories can be very difficult to handle with visualization programs (VMD, Jmol, …), because they usually load the whole file into memory, and storing every frame (or every few frames) of a long MD run with many atoms can produce a trajectory file of several MB or even GB.

The solution: shrinking the big trajectory into a smaller one with my “shrink_traj” script.
This script takes a trajectory file (in xyz format) and copies every nth frame to another file, resulting in a smaller and easier to handle file, especially suitable for visualization.
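The whole idea fits in a few lines; here is a simplified sketch of the logic (the file names and the stride are made up, and the real script takes them as options and does proper error checking):

n = 10                          # keep one frame out of every n
inp = open("traj.xyz")          # hypothetical input trajectory
out = open("traj_small.xyz", "w")

frame = 0
while True:
    header = inp.readline()
    if not header:
        break                   # end of file
    natoms = int(header.split()[0])
    comment = inp.readline()
    atoms = [inp.readline() for _ in range(natoms)]
    if frame % n == 0:          # copy only every nth frame
        out.write(header)
        out.write(comment)
        out.writelines(atoms)
    frame += 1

inp.close()
out.close()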

resend: Long calculation vs. short walltime

A typical problem in the life of a computational scientist: your calculations take longer than the walltime allowed on the supercomputer. If the job can be restarted, we can work around this by resubmitting it manually, but that can become quite tedious.

My solution: use the “resend” BASH script 😉 (which you can download here)
As with most of my scripts, there is a “-h” option; if you don’t remember the syntax, this option will remind you of the few possibilities.
You need the job script you submit to the queue, and the input(s). The script keeps checking qstat for the current user, looking for jobs submitted from the path where resend was executed. Whenever there is no job running or queued with the same path, it submits another one. If there is a job with the same path, it waits for one minute and tries again.
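To make the logic clearer, here is a rough sketch of that loop in Python (the actual resend is a BASH script; the use of “qstat -f” to see the submission directory is an assumption that depends on your scheduler):

import os
import subprocess
import time

n_resend = 3               # like the -n option
job_script = "./job.cmd"   # like the -f option
workdir = os.getcwd()

sent = 0
while sent < n_resend:
    if os.path.exists("STOP"):            # "touch STOP" breaks the loop
        break
    # full qstat output; on PBS-like schedulers it contains the submission path
    out = subprocess.check_output(["qstat", "-f"]).decode()
    if workdir not in out:                # no job from this path: resubmit
        subprocess.call(["qsub", job_script])
        sent += 1
    time.sleep(60)                        # wait a minute and check again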

The resend script is as easy to use as:

user> nohup resend -n 5 -f /path/to/script.cmd &

where the “-n” option specifies the number of times the job will be resent (default: 3) and “-f” the path to the batch job script (default: ./job.cmd). It is useful to include the latter option in order to be able to identify the process (e.g. with “ps”) if necessary.

To stop the script, you can run
>touch STOP
from the same directory; this will break the internal loop and exit the script. Alternatively, just look up its PID and kill it.

Another trick: if you have already executed the program but notice that you would like the job to run a few more times, you can launch resend again, and the job will be submitted as many times as the two resend instances request in total.

Here we go!

I have finally written the first post on this blog.
It has been a hard decision, but finally, after transforming my super-complex multilanguage dynamic website into a funny tablet-PC-like interface with fancy pure-CSS animated buttons, I realized that I should perhaps use a more sober layout, so I redesigned everything again to make it look a little more standard. Then I noticed that I would like to embed a blog, which according to my philosophy should be programmed from scratch (with vi!), but the need for more advanced features led me to the difficult decision of migrating everything to WordPress.

The blog format will allow me to post the contents of my old websites in a simpler and more effective way.

A comment on the layout: the background image is a natural (unedited) picture I took in snowy Oulu, and the layout is based on the Twenty Twelve theme with heavy CSS customization.
The favicon is a representation of some orbitals in an Al+3 pentagon, which, with the right cut-off and the right viewing angle, looks like a funny smiley alien, or something.