Showing posts with label tips. Show all posts


A hacky fix for SuperDuper running out of space: "delete first" with rsync

Rsync can delete the files that should disappear from the destination before it starts copying the files that actually need syncing.

That is sorely missing in Shirt Pocket's SuperDuper!, which is otherwise a nice backup program, and for some time was one of the few sane options for making fully reliable backups. SuperDuper! just starts copying blindly, and can then find itself in the situation where the backup destination can't hold the new files plus the old files that should have been deleted but haven't been yet.

So that problem can be solved with rsync's "delete first" behaviour. People have been complaining about this in Shirt Pocket's forums for at least 5 years, and the developers only seem to say "yes, we will do something sometime".

But they still haven't. So, this is the command line to use:

sudo rsync --delete --existing --ignore-existing --recursive  /Volumes/ORIGINAL_VOLUME/ /Volumes/BACKUP_VOLUME

  • ORIGINAL_VOLUME has a trailing slash; BACKUP_VOLUME doesn't
  • sudo is there so rsync can delete files not owned by the current user. Of course, that makes the command more dangerous. Adding the option --dry-run shows which files would actually be deleted
  • Why not use rsync for the full backup? That might be an option, but some years ago rsync was unable to copy all the metadata used by OS X, so the backup might not be "good enough". Not even the Apple-modified, Apple-provided rsync in OS X got it right. Again, that was at least 5 years ago, so things might have changed. And anyway, rsync is rather designed to work through a "slow" link between the two volumes – say, disk-computer-network-computer-disk. It will work locally, of course, but you might end up faster just making a full copy – rsync might not save any time, and might actually take more.
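For extra safety, the deletion pass can be wrapped so that it always shows a --dry-run first. A minimal sketch in Python (the volume names are the placeholders from above):

```python
def rsync_delete_first_cmd(src, dst, dry_run=True):
    """Build the rsync argv used above: delete the extraneous files in
    dst without copying anything (--existing + --ignore-existing)."""
    cmd = ["rsync", "--delete", "--existing", "--ignore-existing",
           "--recursive"]
    if dry_run:
        cmd.append("--dry-run")  # only list what would be deleted
    # rsync semantics: trailing slash on the source, none on the destination
    return cmd + [src.rstrip("/") + "/", dst.rstrip("/")]

print(" ".join(rsync_delete_first_cmd("/Volumes/ORIGINAL_VOLUME",
                                      "/Volumes/BACKUP_VOLUME")))
```

Once the --dry-run output looks right, build the command again with dry_run=False and run it with sudo prepended.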


Microwave oven sponge cake

I needed my recipe for a sponge cake made in the microwave oven, and could not find it. Never again. So here it is.

I have seen different variations online. Some are interesting, some aren't. This is the version I have more or less settled on after some tries, changes and mixing with other recipes. It is still evolving, but it is a good starting point, or of course end point.

(Yeah, microwave oven. It works. This is kind of a survival recipe; it won't be as good as with a normal oven, but it is incredibly fast (15 minutes from start to cake), and it is better than a lot of the sponge cakes and muffins you see in supermarkets and even coffee shops. Yeah, it also works for muffins. And if you don't have a normal oven in your rented flat, it's actually more than good enough.)

  • 4 eggs
  • 450 g flour (substitute about a third of it with Graham for more interesting texture; or change to 300 g flour+200 g cocoa for a chocolate sponge cake)
  • 450 g sugar (no big difference with brown sugar)
  • 1.5 glasses of milk (can be substituted with juice. Orange and lemon are good. Lime is interesting! Also add grated orange or lemon zest)
  • baking powder (can be substituted with a heaped coffee spoon of sodium bicarbonate for this quantity of flour. Beware: acidic juices or yoghurt might need more powder)
  • A bit less than half a glass of oil (125 ml = 95 g; olive gives a slightly fruity flavor)
  • any extras: nuts, hazelnuts, almonds, raisins, chocolate chips, apple, pineapple (if canned, use the syrup in the dough and over the finished cake!), berries!

Beat the eggs; mix in milk, oil, sugar, flour, baking powder, zest. This can be done unceremoniously and in a hurry, but the cake will rise better if the egg whites are beaten first, then the yolks, then the liquids and finally the solids.

Yields about 1.5 kg of dough, which is enough for a rather big baking bowl while still leaving some for about 20 muffins of 20 g each; the dough keeps perfectly in the fridge for some days, or even frozen.

Cook for 8 minutes in a 750 W microwave oven; check doneness with a toothpick. Using less than 100% power doesn't look like a good idea, since the dough collapses during the pauses.
The dough will grow to twice its size or a bit more during cooking. Beware: the borders will look cooked sooner than the center, and the top sooner than the bottom. Don't bother trying the grill; it brings more problems than it's worth.

Freezes perfectly and goes nicely with coffee and such.

There are other interesting variations (yoghurt, wine, beer, ...). To be continued.


Links to email messages

Gruber, of Daring Fireball, wrote some time ago about the possibility of creating URLs that link to messages in your Mail mailbox. It is tremendously useful, and I haven't seen it written up in Spanish, so here is a little summary.

First: what is this, exactly? It is a link that, when you click on it, opens Mail and shows you one concrete message, right in the middle of your mailbox. Very useful if, for example, in 2 weeks you will need a message you have just received: you can put the link to the message in iCal, so that the alarm that fires in 2 weeks carries a link that, when clicked, opens the corresponding message.

The thing is that Mail understands those URLs, and they get created automatically if, for example, you drag a message from Mail to iCal. They look like this: message://<MESSAGE-ID>, where MESSAGE-ID is contained in the headers of any message.

And yes, it is very useful, but Mail has no direct way of creating such links. One way is drag'n'drop, but it would be much more useful to be able to create them explicitly. So Gruber published on his blog an AppleScript which you can use with OS X's standard script menu (which you can activate in the preferences of Script Editor). You copy the script, put it in the right place, and you can create URLs at will in Mail. Great.

He wrote about this in 2007, for Leopard. But it keeps working in Snow Leopard and Lion. A pity that there is still no native way of doing this.
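The scheme is simple enough that the links can also be built outside of Mail. A sketch in Python, assuming only the message:// plus percent-encoded Message-Id form described above:

```python
import urllib.parse
from email.parser import HeaderParser

def message_url(raw_headers):
    """Build a message:// URL from a message's raw RFC 822 headers,
    percent-encoding the Message-Id."""
    msgid = HeaderParser().parsestr(raw_headers)["Message-Id"].strip()
    return "message://" + urllib.parse.quote(msgid, safe="")

headers = "From: someone@example.com\nMessage-Id: <1234@example.com>\n\n"
print(message_url(headers))  # message://%3C1234%40example.com%3E
```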


Safari's Reading List protocol: CalDAV + XBELs

Safari on the desktop and on iOS has a feature called "Reading List". It is a way to store URLs in iCloud, synchronize them between Safaris, and mark them as read or unread. Somewhat like Instapaper, maybe.

I was a bit surprised that there are no Chrome or Firefox extensions tapping into Safari's Reading List. So I wanted to poke a bit into the protocol; maybe something interesting would appear, or someone could get a head start from this.

Safari always seems to start by contacting a hostname that resolves to the Akamai CDN. Luckily the IP was always the same, and there is no client certificate check, so I could mount a little Man-In-The-Middle via /etc/hosts.

Then, a couple of socats: one to pose as the original SSL server and resend the plaintext; and a second one to receive the plaintext and send it on to the original IP. The first socat shows everything that passes through it (in plaintext) thanks to the -vx option.

socat -vx OPENSSL-LISTEN:443,cert=certs/server.pem,verify=0,reuseaddr,fork TCP:localhost:50000 

socat -v TCP-LISTEN:50000,reuseaddr,fork OPENSSL:

I tried tcpflow instead of the -vx, but tcpflow won't show anything when capturing on the loopback interface on OS X. Seems to be a bug.

So now we can see the exchanges between Safari and iCloud. I was half-expecting something encrypted, but it isn't (apart from the SSL connection). It seems to be CalDAV, used to exchange small gzipped XBEL files, which can be quickly unzipped by copying the hex from the socat dump and pasting it into an "xxd -r -p | zcat". Each XBEL file is a small XML file with a URL and its status.
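The same unzipping step can be done without the shell pipeline; a small sketch in Python (the XBEL payload here is made up, just to show the round trip):

```python
import gzip
import xml.etree.ElementTree as ET

def decode_dump(hex_dump):
    """Equivalent of `xxd -r -p | zcat`: turn the hex copied from the
    socat dump back into the XBEL text inside."""
    return gzip.decompress(bytes.fromhex(hex_dump)).decode("utf-8")

# Hypothetical payload, roughly the shape described above:
xbel = '<xbel version="1.0"><bookmark href="http://example.com/"/></xbel>'
hex_dump = gzip.compress(xbel.encode("utf-8")).hex()
tree = ET.fromstring(decode_dump(hex_dump))
print(tree.find("bookmark").get("href"))  # http://example.com/
```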

I expected this to be much quicker, but socat and tcpflow kind of conspired against it. I started out expecting to use named pipes and tees to connect the socats while showing the exchanges, but each update to the Reading List causes between 3 and 5 connections, which requires the socat option fork, which spawns a new socat for each connection, which doesn't play nice with the pipes.

I guess I should have switched to some Python earlier... next time I will. And at this point anyway, if I wanted to keep digging into this protocol, throwing together a small client in Python looks like the best way forward.

But I am not currently interested in JavaScript, so I won't be making any extension myself; time to move on to other things.

So, dear Lazyweb... :)

Huawei E220 3G modem drivers on OS X Lion: only 32 bits

I recently installed Mac OS X 10.7 Lion and found that it already ships drivers for the Huawei E220 3G modem. But the drivers are 32 bit, so they won't work on machines that boot a 64 bit kernel.

And there seem to be no 64 bit drivers, so the only solution for now is booting into the 32 bit kernel (pressing 3 and 2 while booting).

I seriously doubt that Huawei is going to publish updated drivers; they were already rather unsupportive even when the modems were new. Eventually I plan to try to write my own driver, but that won't be short-term...


FileMon-like functionality on OS X as a one-liner dtrace script

I first thought of this as an lsof substitute, but no, it's more like a primitive, simple FileMon or fs_usage. It shows the executable and the file it opened. It could be improved, of course.

Dtrace is amazing.
sudo dtrace -qn 'syscall::open*:entry{ printf("%s %s\n",execname,copyinstr(arg0)); }'
fs_usage shows much more information... but is not a one-liner ;P

mplayer vs. Polish subtitles

Typically, when I start watching a film, I only have enough time to quickly grab some subtitles, try to make them more-or-less work with the film at hand and... that's it.
So: lots of repetitive, rushed fixes, but no long-term solutions.

But this time I got sick of it and tried to understand the problem. Which is: Polish subtitles don't work with mplayer, or at least not with the mplayer built by MacPorts' mplayer-devel port, which uses mplayer's SVN HEAD.

The option -subcp cp1250 selects the codepage (cp1250 is variously called Windows Latin 2 or Windows Central European, and seems to be the typical encoding of Polish subtitles on the net).

The option -subcp enca should auto-detect the encoding, but the port disables enca at configure time and provides no way to enable it. I'll try to send a patch for that.
In the meantime, enca -L pl -i file works nicely by itself (enca is provided by a port). For the difficult cases where enca fails, the chardet module for Python should work; I haven't tried it yet.
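Until enca is enabled, a crude guess can be made in plain Python by trying the usual suspects in order (chardet would do this properly; this is just a stopgap sketch):

```python
def guess_encoding(data, candidates=("utf-8", "cp1250", "iso-8859-2")):
    """Return the first candidate encoding that decodes the subtitle
    bytes cleanly. Crude: 8-bit codepages almost never *fail* to
    decode, so the order of the candidates matters."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

sample = "żółć".encode("cp1250")  # Polish text as typically found online
print(guess_encoding(sample))  # cp1250
```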

So now we have subtitles in the right encoding. Next problem: the displayed subtitles have some Polish letters missing! (Not all of them, which is strange: ć and ł do work, but ż and ą don't.)

-fontconfig and -font don't seem to do anything. -font in particular doesn't seem to mind what kind of file I feed it, or even whether it exists, and -msglevel all=9 doesn't show anything. It turns out -nofontconfig is needed for -font to start having any effect. Which sounds logical, given that the fontconfig project seems to be all about autoconfiguring font management, but the mplayer docs say pretty little about all of this.
So, with -nofontconfig present, -font accepts .ttf files, but also .desc files.

(I have a pack of fonts for mplayer called font-arial-cp1250, which contains some variations of an Arial font, consisting of sets of .raw files with a main font.desc file. I seem to remember downloading it from some forum. The files are dated 2003. The font.desc files can be fed to mplayer with the -font option. And they don't work, or rather they only seem to work when the encoding is NOT correctly selected with the -subcp option. So this is a dead end, probably outgrown by mplayer in these last 8 years. Better forget it.)

My ~/.mplayer directory also contains a subfont.ttf file, which seems to work fine with the -font option. I have tried a couple of other fonts and they seem to lack the Polish characters, so not every .ttf will do.

(I don't know if that subfont.ttf is standard. This .mplayer subdirectory has probably followed me through maybe 4 or 5 OS X versions, 4 computers and 2 architectures. Which means, no idea where it came from. The current mplayer port doesn't seem to include such a thing, which is to be expected since MacPorts makes some effort to install things only in well defined places. But I remember having played with other versions of mplayer, from fink to some binaries. Who knows.)

So, to summarize: with -subcp goodencoding -nofontconfig -font goodfont.ttf we should be OK.

And yet, it can be better. At some point I discovered that the -ass option works. -ass uses libass, which is not covered by any port, so I didn't expect it to work; but mplayer seems to have its own internal version of the library (seen at the configure stage with something like port install -d mplayer-devel, or in the configure log if MacPorts has not been configured to --clean after installation). And -ass is amazing. Again, neither -nofontconfig nor -font seems to have any effect, and I don't know where mplayer is getting its fonts now, but it is a good font with all the Polish characters. And not only that: configuration commands contained in some subtitles (.txt files with not only the subtitles but commands that look like {y:b}{c:$0000ff}; RTF? CSS?) do work, so instead of the occasional rubbish the subtitles now render beautifully, with colors and bolds and italics, oh my.

So, to re-summarize: the best option is something like mplayer vidfile.avi -sub subfile.txt -subcp cp1250 -ass  
(and if I manage to send the patch for enabling enca, it should be something like -subcp enca:pl:cp1250)

Keep in mind that all of this is for MacPorts' mplayer-devel, which builds the SVN head. So all of this might be quite temporary. Which sucks. Hard.

(Why not use VLC and forget about all of this? Because VLC allows very little control, and almost any change means stopping and starting again. That's OK if the film and subtitles are perfectly matched, but that's not usually my case. Meanwhile, in mplayer one can tune lots of things, even while watching the film: go forward and backward in the subtitles and synchronise them to the video / sound; set, say, 90% speed (VLC only allows 66%, 50%, ...); move the subtitles on the screen, or even render the film with more black space at the bottom should one want the subtitles not to overlap the image. And then, of course, there is mencoder...)


tcpflow 1.0.2 doesn't work with net expressions

Tcpflow 1.0.2, as built with MacPorts, doesn't work when net expressions are used.
But 1.0.6 does work.

I already submitted a new portfile so it should be available soon.


Toshiba G450 on Mac OS X Lion

The drivers Toshiba published for the G450 in 2008 are 32 bit only. If your Mac OS X Lion runs the kernel in 32 bit mode, they will probably work.
But if your computer runs the kernel in 64 bit mode, which I think is the majority of Macs by now, those drivers won't work. And Toshiba doesn't look like it is going to publish new drivers: they were already slow with support matters when the phone was new, and on top of that Toshiba's mobile division seems to have... merged with Fujitsu's in 2010.

So the short-term solution is booting in 32 bit mode, which is done by pressing the 3 and 2 keys while booting. Only the kernel switches to 32 bit mode, and supposedly the performance difference won't be too big - but I don't know any concrete figures. Programs will keep running in 64 bit.

Curiously, OS X Lion ships with Huawei drivers for the E220 modem and some others. But they are also 32 bit drivers, so we are in the same situation. And I don't see that Huawei has published new 64 bit drivers either...

Some time ago I toyed with the idea of writing my own driver for the Toshiba G450. It looks like a good moment to try again. We'll see what happens. (And now the E220 is a candidate too!)

Toshiba G450 drivers for OS X Lion - only 32 bits

I recently changed computers and got a MacBook Pro which boots the kernel in 64 bit mode. The problem is, the only drivers Toshiba ever published for the G450 modem (in 2008) are 32 bit only.
So the only solution for now is booting the kernel in 32 bit mode (pressing 3 and 2 while booting).

I seriously doubt that Toshiba is going to publish updated drivers; they were already rather unsupportive even when the modems were new. And Toshiba seems to have merged its mobile division with Fujitsu's... and even Windows 7 users seem to have problems. So... maybe this means I should go back to the program-your-own-driver idea.


Building socat in OS X 10.7 Lion

socat (as of this writing) doesn't compile with clang, the standard compiler in Mac OS X 10.7 Lion (10.7.2 as of this writing). It does compile if one uses, for example, llvm-gcc-4.2.2 instead of clang.

The developer reports that this is a bug; only gcc is supported in socat, but there are compatibility fallbacks for other compilers. Only, the fallback was missing in the file that fails to compile, xioexit.c. The fix is easy:
@@ -5,6 +5,7 @@
/* this file contains the source for the extended exit function */

#include "xiosysincludes.h"
+#include "compat.h"
#include "xio.h"
(if someone is trying to build something like socat, I guess they don't need help applying patch files)

This problem was also present in MacPorts' port for socat. I have already reported it and provided a new working portfile, so I guess it won't be long until it is published.


Resizing the system NTFS partition on Windows XP

I didn't find a way to grow the boot partition on a live system (I think OS X has been able to do that for some time now?). I used BootIt, which is free for this kind of partition work. Quick download, quick self-install onto a USB stick, reboot, unnervingly unresponsive while working, but finally problem-free.

In my case I was growing the boot partition into free space. And this XP uses some Intel SATA AHCI driver, installed in a non-standard way at that, which made me a bit wary. But everything turned out right.

The next best option seemed to be GParted Live, which also has an easy way to be installed on a USB stick via TuxBoot.

Directory permissions vs local Portfiles in MacPorts

~/MacPorts/ports$ sudo port install socat
--->  Computing dependencies for socat
could not read "/Users/mija/MacPorts/ports/sysutils/socat/Portfile": permission denied

I was having strange problems using a locally edited Portfile. It turns out the permissions were wrong on a directory in the path; each of the directories should have at least o+rx permissions, and strangely my $HOME had none of those (strange because the other users on my computer had o+rx, admins or not).

Note that MacPorts has lately (since 2.0?) started to use the user nobody at some points of its workings, and root at others; in this case the user nobody was the one unable to reach the Portfile. A way to check what this user sees is "sudo -u nobody ls -leda@ /Users/mija/blabla".
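Checking every directory in the path by hand gets old; here is a little sketch in Python that lists the ancestors missing o+rx (the path in the comment is just an example):

```python
import os
import stat

def dirs_blocking_others(path):
    """List ancestor directories of `path` lacking o+rx, i.e. the ones
    a user like `nobody` cannot traverse to reach the file."""
    bad = []
    p = os.path.dirname(os.path.abspath(path))
    while True:
        need = stat.S_IROTH | stat.S_IXOTH
        if os.stat(p).st_mode & need != need:
            bad.append(p)
        parent = os.path.dirname(p)
        if parent == p:  # reached the filesystem root
            return bad
        p = parent

# e.g. dirs_blocking_others("/Users/mija/MacPorts/ports/sysutils/socat/Portfile")
```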

A workaround is setting macportsuser in /opt/local/etc/macports/macports.conf to root, but that's not a good idea. MacPorts is doing the sensible thing, de-escalating privileges when they are not needed; that way, if something goes awry, the damage should be much smaller. Lion is doing a lot to be secure; let's keep it that way.

So fix those permissions, goddammit.


Absurd path length limits in Windows APIs

Sorting through old stuff, I ran into an interesting problem we had back in the gvSIG days (about 4 years ago)...

Problem: when trying to unpack the ZIP on Windows, we run into Windows' inability to create files whose full path exceeds 260 chars. This happens both on NTFS and FAT, and it is a problem (for example) when unpacking the ZIP on Windows to put it onto a USB drive or a CD / DVD.
It is not a FAT limitation: the copy or decompression can be done on a non-Windows OS and works fine.
It is a limitation of the Windows API (at least up to and including Windows 2003 Server): although individual filenames can be longer than 255 chars, full paths cannot exceed 260 chars.
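The breakage can at least be predicted before unpacking. A sketch in Python that flags the ZIP entries whose extracted path would blow past the limit (the destination path is hypothetical):

```python
import zipfile

MAX_PATH = 260  # the Windows API limit, drive letter and NUL included

def entries_too_long(zip_path, dest=r"C:\Users\me\Desktop"):
    """Return the ZIP entries whose full extracted path would exceed
    MAX_PATH if unpacked under `dest` on Windows."""
    with zipfile.ZipFile(zip_path) as z:
        return [name for name in z.namelist()
                if len(dest) + 1 + len(name) >= MAX_PATH]
```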


dtrace'ing paging to disk

How to know which processes are paging to/from disk (as opposed to other VMM management) in OS X, and how much exactly:

sudo dtrace -n maj_fault'{@[execname] = count()}'

reference (with examples and other options):

I had been meaning to look for ways to do this, and tried some of the tools included in OS X (Activity Monitor, top, Instruments, vm_stat, vmmap, ...). But nothing really helped, and/or missed the exact level of information I was after (only real I/O; the relationship between I/O and process; realtime...). Finally I had the inspiration to google for "dtrace pagefaults". Bingo.
(dtrace in this example isn't realtime, but it is the best approximation so far, and I'm guessing some tuning should fix that. Heck, it's a one-liner!)

Learning dtrace is still something I'd love to do, and once again it is tempting me to let it jump to the front of the queue...

(Mhm, will Oracle's Java for OS X support dtrace too?)

Oh, and of course Firefox was by far the greatest pagefaulter, even with the great improvements in the v9 betas. (I actually saw it shrink its VM usage by some hundreds of MB when freshly launched!... though after some days it has gone back to its habit of hovering over 20% CPU and 2 GB VM even while idle.)
But it's interesting that the second offender is Skype, even if more than one order of magnitude below Firefox, and also one order of magnitude above the 3rd and the rest of the offenders. Interesting because it looks very well mannered in Activity Monitor, and it was unused during all of the measuring time. Maybe its P2P routing thing...? Would be good to keep an eye on it.


Compiling Transcode 1.1.6 and 1.2 in OS X

(express post: too much time spent on this, but too much work done to let it slip without benefiting someone else)

1.1.6 is not released; it had to be hg-cloned. In fact, the repository looks like the 1.1.6 tag was at some moment added but then deleted?? That branch also has the latest video stabilization code; no idea yet whether it is also in 1.2.

It can be compiled. Get the repository via hg itself, not one of the zip/bz/etc. packages from the website, since the compilation scripts expect to find the .hg files. Read the README and INSTALL; you'll need to create the configure yourself (easy, just read).

I am using MacPorts, and even installed the included Transcode 1.1.5, so all the requirements are already installed. Most of the expected work was about configuring the non-MacPorts transcode to use the /opt dirtree.

(Incidentally: 1.1.5 has horribly mismatched documentation; I have had to resort to strings and the source code to understand what some of the utils were supposed to be doing and how. The wiki doesn't help. Also, some things just don't work: some combinations of export module/codec blow up without saying anything, and I have no idea whether that's because of transcode itself or because of the lame or ffmpeg versions, who knows. Also, it seems to be accepted that mp4 container files can't be generated directly by Transcode. I am interested in using the video stabilization plugin, but if I had known how much time this would take, ...)

1.2's generation of configure uses a helper script which doesn't work right and causes problems: the generated configure fails with
./configure: line 4: .: filename argument required
.: usage: . filename [arguments]
Inspection of the configure source shows that it was badly generated. The script uses echo -n blabla, trying to output "blabla" with no line feed. But echo in sh doesn't work like that, so the actual output is "-n blabla\n". (mhm, no: sh and bash each have their own echo, and they behave differently, interesting). Anyway: that's nasty; didn't anyone check it? And the errors it causes are insidious: I wasn't sure whether those were status messages, warnings or plain errors. The easy fix is to change the echo -n $1$HGVER to printf $1$HGVER.

Even after fixing that, 2 telltale failed output strings remained (something like warning: AC_INIT: not a literal: -n 1.2.0-0a9d7c95b961). But I could not find where they were coming from. The helper script was not it, and I could not find any file which had cached the string or parts of it. Before the fix there were about 6 of those; after it, there were 2. Anyway, after some fruitless searching, I decided to push on... at least configure now worked and nothing exploded in a seemingly related way.

The ./configure line I used:
./configure --prefix=/opt/local --enable-libavcodec --enable-ffmpeg --enable-experimental --enable-statbuffer --enable-lame   --enable-xvid   --enable-x264   --enable-libquicktime   --enable-faac    --enable-libavformat --with-lame-prefix=/opt/local --with-lame-includes=/opt/local/include --with-xvid-prefix=/opt/local --with-xvid-includes=/opt/local/include --with-xvid-libs=/opt/local/lib --with-faac-prefix=/opt/local/bin --with-faac-includes=/opt/local/include --with-faac-libs=/opt/local/lib --disable-libdvdread --disable-libjpeg 

The xvid support is half-baked. I had to
export CFLAGS=-I/opt/local/include
so that the xvid support finally worked, since sometimes the --with-xvid-includes was left unused by the Makefile.

Incidentally, I think that also helped x264.

Transcode 1.2 has seemingly been abandoned for about 18 months now. The last post on the website says it should be about alpha quality, and that they were desperately looking for new developers...

The make failed because in a number of sub-makefiles the tcmodule libs path was not being passed to the linker. I fixed it by adding $(LIBTCMODULE_LIBS) wherever the linker failed (with errors like:
Undefined symbols:
  "_tc_module_info_match", referenced from:
      _tc_module_match in tcmp3cut.o
  "_tc_module_info_log", referenced from:
      _tc_module_show_info in tcmp3cut.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[2]: *** [tcmp3cut-1.2] Error 1


(note the "_tc_module_info_match": "tc_module_info_match" is a function in the "tcmodule-info" file in the "tcmodule" directory (sometimes Spotlight is useful! ;P). Then note the "tcmp3cut": that was the part being compiled. Looking for it in the makefile showed that it was not including the $(LIBTCMODULE_LIBS), which was defined at the beginning of the file but not used as consistently as the rest of the definitions.)

The tools directory had worse problems. The tcstub.c file is used when compiling other files, and 4 of its definitions conflict with libtc.a. Since libtc seems to be generic for all of Transcode and the stub seems less important, I guessed the stub versions of the definitions had to go. I commented them out (tc_new_video_frame, tc_new_audio_frame, tc_del_video_frame, tc_del_audio_frame).
The errors were like this:
ld: duplicate symbol _tc_new_video_frame in ../libtc/.libs/libtc.a(tcframes.o) and tcmodchain-1.2-tcstub.o

With that, it finally compiled.
Beware: make install is supposed to install in parallel to existing 1.1-line transcodes (the new files should get "-1.2" appended to their names), but some files had the same name and got overwritten. Example: avifix.
Which is doubly insidious, because the tools directory had those stub file problems, which makes all of the directory's contents suspicious...

When I have a moment to check / compare the 1.1.5, 1.1.6 and 1.2, I'll try to leave a note.


Damaged hard disks, rescuing data, and running away from SpinRite

Lately I have run into people talking about using SpinRite to recover hard disks (even on Macs!), so maybe it will help if I tell my own experience.

Some years ago (maybe 5?), the hard disk of my then iMac G3 started to fail. The computer would suddenly freeze, responding to nothing for around 10 minutes, and then suddenly continue working as if nothing had happened. If I remember correctly, the pauses were always equally long, and it didn't even respond to the mouse. I don't remember whether the disk made any strange noise.

How did I know it was the hard disk? I tried some Linux CD or DVD and everything seemed fine, except when accessing the hard disk.

I wanted to get the data out, so I started looking at the possibilities. One thing I found was SpinRite which, according to its web page, sounded like magic... which in computing matters is not a good sign, I think. SpinRite supposedly "revives" the hard disk. But well, I had been curious about it for a while, and I had access to a PC and to the program, so I put the disk in the PC and tried it.
I let it work for more than 24 hours, and it had only covered a few KB of the roughly 100 GB of the disk.
I stopped it and started again, this time waiting only some 6 hours, but it kept getting stuck at the same point. At that pace it would take months to finish (if it finished at all!), so I moved on to trying other things. So, for a start, my recommendation is clear: don't waste your time with SpinRite. If you are lucky, it is only a waste of time; if you are not, everything it tries to do to the hard disk will just finish it off. (Since when a hard disk starts failing mechanically it goes from bad to worse, you have to hurry to rescue what you can.)

In the end I used a Linux LiveCD or LiveDVD (Knoppix first and Ubuntu later), which carried dd_rescue and ddrescue (and if they weren't included, they were not hard to add from a pendrive or download from the net, I don't remember now). They are named like that because dd is the basic, traditional Unix tool for copying disks at the lowest level, but it stops when it finds errors. ddrescue is like dd, but when it finds errors it tries to continue. And dd_rescue (similar name, rather different tool) not only survives errors but counts on them, and does its best to rescue the largest amount of data in the least time: if it finds an error, it jumps forward on the disk until it finds an error-free zone and keeps reading; it reads forward and backward from each point on the disk (because sometimes that helps); and it remembers all the jumps it made, to recompose a disk image that is as usable as possible.

Why are the jumps important? Because, as I said, every time the disk tried to read a damaged spot, there was a timeout of minutes. And there were many errors sprinkled over the disk, so any attempt at reading blindly became impossibly slow. Moreover, I ended up discovering that part of the problem was that, to improve speed (read cache / read-ahead), the OS and the disk itself normally try to read big blocks of the disk instead of sectors (which are only 512 bytes). In reality, each damaged sector had a timeout of maybe 2 seconds. But damaged sectors typically sit together, so a read of, say, 8 MB (which seems a typical value) could be stumbling over hundreds of damaged sectors, multiplying the waiting times; that's why it can be useful to approach a damaged spot "from behind", so as to stumble over only one damaged sector instead of many.
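The jump-ahead idea can be sketched in a few lines. This is a toy model, not dd_rescue's actual algorithm, with a hypothetical read_sector callback standing in for the disk:

```python
def rescue_read(read_sector, nsectors, skip=2048):
    """Read forward sector by sector; on a read error, record the hole
    and jump `skip` sectors ahead instead of grinding through a run of
    bad sectors (each of which would cost a long timeout).
    `read_sector(i)` returns 512 bytes or raises IOError."""
    image, holes, i = {}, [], 0
    while i < nsectors:
        try:
            image[i] = read_sector(i)
            i += 1
        except IOError:
            holes.append(i)
            i += skip  # come back to the skipped range in a later pass
    return image, holes
```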

And not only that: it is also possible to disable the caches and read-ahead with hdparm on Linux. So, adding to dd_rescue the disk configuration via hdparm to minimize the timeouts, I managed to rescue more than 80% of the disk in a few hours.
dd_rescue keeps retrying the damaged parts to extract everything it can, but there came a moment when the hard disk simply didn't respond any more. Maybe if I hadn't wasted time with SpinRite I would have gotten more out of it, who knows.

After all this I ended up with a disk image, which I could mount in OS X with hdiutil (I seem to remember; those were the times of OS X 10.3 or 10.4) and half-repair with DiskWarrior. I think I also used Data Rescue II to scavenge for loose files, although that is already a desperate measure, with no guarantee that it is worth it (because the recovered files can be corrupt). Advisable if you had loose files you care about, be they documents or photos; but things that live in directories (for example, programs) are better given up for lost, although sometimes there is a surprise.
Instead of Data Rescue II I think I also tried other tools, but Data Rescue II was the best in the end. Photorec is an option, and free/open source; I think I discovered it when I was already done with Data Rescue.

Things I learned from all this:
  • Hard disks fail. Period.
  • Backups are necessary.
  • Linux gives you a priceless level of control for repairing failing hard disks.
  • You can save a loooot of things with free tools.
  • When a hard disk starts failing mechanically, accept that its hours are numbered (literally!), and think carefully about how to use them to save what you can.
  • Using SMART can warn you, with some lead time, that the disk is going to fail.

Regarding SMART: smartmontools (open source, available in MacPorts) does it all, from one-off checks to periodic in-depth tests. But beware: there have been a couple of studies (done by Google and another company, analyzing failures in thousands of disks from several manufacturers) concluding that SMART only detects around 50% of the failures. On the other hand, when it does warn you, the disk has less than 24 hours left.
And in any case, for SMART to check everything it can check, you need to run its tests periodically, as smartmontools does (if you configure it properly).
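With smartmontools that boils down to a few commands (the device name is hypothetical, and you need root):

```shell
sudo smartctl -H /dev/sda           # quick health verdict: PASSED or FAILED
sudo smartctl -A /dev/sda           # attribute table (reallocated/pending sectors, etc.)
sudo smartctl -t long /dev/sda      # launch an in-depth self-test (runs inside the drive)
sudo smartctl -l selftest /dev/sda  # check the self-test results afterwards
```

The periodic part is what the smartd daemon is for, with a test schedule in smartd.conf; that's the "if you configure it properly" bit.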
Still, since all this happened, 2 more hard disks have failed on me and SMART didn't warn despite the tests and blah blah blah. Oh well, it's free... better than nothing, I suppose.
Oh, and I still haven't seen a single external enclosure that gives access to SMART. And I own 5 (USB and FireWire, from different brands).

More things: when a "modern" hard disk (PATA disks already counted as "modern"!) detects that a sector is failing, it will try to recover the information automatically (for example by retrying the read of a sector that returns read errors), and if it succeeds, it substitutes another healthy sector for it; hard disks keep a number of hidden spare sectors for exactly this. It happens inside the drive itself, without the OS doing anything. Only when a sector really can't be read after several attempts do we start getting read errors in the OS. In that situation, one way to help fix it is to write zeros to that sector, which can be done by overwriting the affected file with zeros... or simply by formatting the whole disk (with zeros! Beware, some formatting programs don't use zeros! The DOS one doesn't by default, I believe the Windows XP one doesn't either; OS X has the option). When the drive detects the write of zeros, it takes the chance to substitute the damaged sectors it had pending, so they won't cause problems anymore.
...but in any case, the existence of those failures should already be a bad sign about the disk's future!
With smartmontools you can query the drive's internal count of how many sectors are in that bad-read/pending-substitution situation. If I remember correctly, if that value is greater than 0, it can mean there are no spare sectors left for the substitution... so the number of errors will keep growing. Time to get a new disk ready.
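The zero-writing trick can be done with plain dd. Here's a safe demo on a scratch file; on a real disk you'd replace disk.img with the device and 57 with the failing LBA reported by the syslog or smartctl, which is exactly as dangerous as it sounds:

```shell
# Create a 100-"sector" scratch image filled with non-zero data
dd if=/dev/urandom of=disk.img bs=512 count=100 2>/dev/null
# Overwrite only "sector" 57 with zeros, leaving everything else intact;
# conv=notrunc keeps dd from truncating the file after the write
dd if=/dev/zero of=disk.img bs=512 seek=57 count=1 conv=notrunc 2>/dev/null
# Verify: that sector now contains only zero bytes (prints 0)
dd if=disk.img bs=512 skip=57 count=1 2>/dev/null | tr -d '\0' | wc -c
```

The seek/skip arithmetic is in units of bs, which is why bs=512 matters: it lets you aim at exactly one sector.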

And back to SpinRite: there are quite a few people online saying it works great. But there are also a lot of people who believe in horoscopes, and that doesn't make horoscopes true. And it's interesting that the people arguing against SpinRite are usually the ones who can explain themselves technically. That is: the official SpinRite pitch sounds like infomercial garbage, and the arguments of the people defending it also happen to sound like infomercial "happy customers" (things like "trust me, I'm an IT guy at a big company and SpinRite has saved my ass more times than I can remember, I recommend it, you won't regret it!": very technical explanations indeed). I imagine it can sometimes help, because what SpinRite does is basically the same thing I've explained about repeated sector reads and writing zeros to failed sectors. But it does it in an absurdly crude way, provoking enormous timeouts, repeating every operation several times even when that's counterproductive, treating healthy and damaged sectors alike... and on top of that it costs money, when you can get the same effect with free tools, or even tools included with the system by default.

And finally, SpinRite does an astonishing number of silly things to give the impression that it's doing something truly magical. That seems to be the general style of its author, who has already made a scene in other areas, like when he tried to convince the world that a bug in the WMF format in Windows was a Microsoft conspiracy to control every PC. Luckily, every now and then someone provides a reference to put him in his place... like when a Snort developer took apart the nonsense he was saying about network scanning.


Extracting audio from a "video file" without recompression with Quicktime

I had an .mp4 video file (with audio and video), and wanted to extract the audio to play it independently. (Sometimes the song version used in some music video is better than the original song!)

That can be done in a number of ways; with QuickTime it is more or less immediate: Window > Movie Properties, select the audio track, Extract; then File > Export.
But I wanted to do it without recompressing, since the audio was already in MPEG-4 format (AAC).
My first impulse was to try mencoder, but it doesn't seem to work, at least not in a few quick tries. But that can be because the version I have installed is somewhat b0rken, or SVN-ish, or macports-ish, or because the output options say they are beta, or because I should re-study the manpage. Again.
So, while MacPorts installed my second impulse (ffmpeg), I went back to QuickTime to make sure there wasn't some option for "direct", "non-reencoding", "codec: copy" or some such.
And there is! But it is somehow misleadingly buried. You have to:
  • select MPEG4 exporting in any of its incarnations (which isn't a good start because selecting already makes you feel like you are modifying things when you don't want to), 
  • ignore the presets (which is where I most expected to find such an option), 
  • pretend you are selecting options to re-encode (which in my case was exactly what I didn't want to do, so at this point I had already given up and was trying to re-encode with minimal loss), and ... 
  • there it was: the format listbox in the audio tab includes the option to just use the already compressed audio. And it seems to do what it promises, since the export is pretty much instantaneous.
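For the record, ffmpeg's stream-copy mode does the same thing from the command line (file names here are made up; -acodec copy is the crucial part, meaning no re-encoding happens):

```shell
# -vn drops the video stream; -acodec copy passes the AAC through untouched
ffmpeg -i music_video.mp4 -vn -acodec copy song.m4a
```

Since nothing is re-encoded, this should also be pretty much instantaneous.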


PS3 uses bluetooth headset only for voice!

I just bought a stereo Bluetooth headset (A2DP, the works), planning to use it for the sound from the PS3 while gaming, to avoid having to hunt for a long audio cable for my normal headphones.
Bad news: the headset works, but it is only used for voice communication, not for the actual music or sound effects from games.
That's something I had not seen mentioned anywhere, and it had me chasing around for a while; I finally found it as a side comment in some FAQ in some forum. So I hope this helps someone before they buy their own headset.
Soooo lame.


Mini-guide to creating RTF files

Generating a simple RTF document (no tables or images, say) but with the interesting bits (fonts, colors, paragraphs, even styles!) is very easy.

In English there is a short guide at , but it doesn't get into styles. There is also the complete specification of the RTF format on Microsoft's website. I needed something in between, and there don't seem to be any guides in Spanish, so here is what I've learned and what I'm using.
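As a taste of how little it takes, here is a minimal but complete RTF file, written from the shell (the content is obviously made up; the backslashes are RTF control words, not shell syntax, hence the quoted heredoc):

```shell
cat > minimal.rtf <<'EOF'
{\rtf1\ansi
{\fonttbl{\f0 Helvetica;}}
{\colortbl;\red255\green0\blue0;}
\f0\fs24 Plain text, {\b bold text}, and {\cf1 red text}.\par
}
EOF
```

Open it with TextEdit or Word: \fs24 is 12-point text (font sizes are given in half-points), and \cf1 points at entry 1 of the color table (entry 0 is the empty default before the first semicolon).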