Showing posts with label Mac OS X.


Flame Graphs vs Instruments in OS X: using Intel's Performance Counters

TL;DR: as of the last update, for OS X 10.11 / Instruments 7.1, you can't really create meaningful CPI Flame Graphs on OS X because of bugs in Apple's tools, both GUI and command-line. There are some alternatives and workarounds that might be good enough as a substitute for a CPI Flame Graph. I sent 6 bug reports to Apple, and 5 of them got marked as duplicates; so maybe some Instruments version post-7.1 will get good enough to allow the data extraction needed for real Flame Graphs.

I have been trying to generate Brendan Gregg's Cycles-Per-Instruction Flame Graphs on OS X (10.10), mostly to check cache-related CPU stalls. However, after working on it for a good while, it looks to me like the given method is somewhat shaky, and the same or better insights about the code can be gotten more easily, partly thanks to, and partly in spite of, Apple's tools.


Stopping OS X's Mail from storing IMAP messages locally

TL;DR: Mail wants you to download your messages even if you manually change the account .plist. Forget about it.

I wanted to stop Mail in OS X 10.10 Yosemite from caching my IMAP messages locally. Once upon a time there was an option right there in Mail's preferences to do just that, but for the last couple of OS versions the only option shown there is whether to download the attachments or not.


Using OS X’s syslogd to receive log messages from the network

TL;DR: avoid this buggy mess and go with macports & syslog-ng. You'll finish faster and saner.

[Updated 2 times]

This sounds like it should be easy, but OS X is a moving target because of all the infrastructure changes they have been making for the last few OS releases. Yes, there is a syslogd, but it is some half-hollowed-out thing and “others” do most of the work. Syslogd does NOT open a UDP socket; launchd does, and feeds it to syslogd. Syslogd does NOT (really) receive the UDP packets; a plugin does. Syslogd does NOT parse the UDP message; ASL (Apple System Log) does. Syslogd does NOT filter the messages and store them into “logs”; ASL does.

So why is there a syslogd at all, apart from giving a slight sense of false security? (As in, “c’mon, there’s syslog, it can’t be too difficult”.) No idea. If I had seen how complicated this was going to get, I would have bailed out and used syslog-ng from MacPorts.

Anyway. So the first step is to enable UDP reception. The manpage for syslogd explains what has to be changed and where, but doesn’t mention the intermediate step of converting the syslogd plist into XML. Which is easy with plutil or some GUI editor, but the fastest way is to just change things from the command line (note that PlistBuddy and launchctl need the plist file given explicitly):
cd /System/Library/LaunchDaemons
sudo /usr/libexec/PlistBuddy -c "add :Sockets:NetworkListener dict" com.apple.syslogd.plist
sudo /usr/libexec/PlistBuddy -c "add :Sockets:NetworkListener:SockServiceName string syslog" com.apple.syslogd.plist
sudo /usr/libexec/PlistBuddy -c "add :Sockets:NetworkListener:SockType string dgram" com.apple.syslogd.plist
sudo launchctl unload com.apple.syslogd.plist
sudo launchctl load com.apple.syslogd.plist
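For reference, after those commands the Sockets section of the plist (presumably com.apple.syslogd.plist, the syslogd launchd job) should end up looking roughly like this; a sketch from memory, not a verbatim dump:

```xml
<key>Sockets</key>
<dict>
	<key>NetworkListener</key>
	<dict>
		<key>SockServiceName</key>
		<string>syslog</string>
		<key>SockType</key>
		<string>dgram</string>
	</dict>
</dict>
```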

With that, log messages originating from the network will be dumped into the normal ASL message storage. Some of that goes into BSD-style, flat text files. But another part of it does not, and stays in some internal ASL database. So the best way to see everything kind-of-at-once is to use the OS X Console utility.

However, the UDP syslog protocol is supposed to include in the packets some information that is not appearing in my Console. Most importantly, there should be a Host field, which would be perfect for classifying the log messages coming from the network; alas, in my usage I am not seeing it. After checking the debug info from syslog, ASL and their dogs, it looks like either the Linux side (DD-WRT) is not sending the packets correctly or the Mac side is not parsing them right (no, I didn’t feel like trying Wireshark): some fields are either swapped or missing-and-not-accounted-for, so some fields end up containing values that belong in other fields. The result is that the log messages coming from the network get lost among the local log messages… and there is no direct way to distinguish them.
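To make the missing field concrete, here is a minimal sketch of the RFC 3164 packet layout; the hostname (a made-up ddwrt-router below) is simply the token after the timestamp, which is what seems to be getting dropped or shifted during parsing:

```shell
# A hand-made RFC 3164-style packet: <PRI>TIMESTAMP HOSTNAME TAG: MSG
pkt='<30>Mar  4 12:00:00 ddwrt-router kernel: link up'

# The host field is the 4th whitespace-separated token (awk collapses
# the double space before the day number)
echo "$pkt" | awk '{print $4}'   # prints: ddwrt-router
```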

So, next idea: dump the log messages coming from the network into a separate file.
To do this, it’s necessary to add a rule to ASL so it filters out the messages that came from the network and writes them into our file.
The problem is that once the messages are inside ASL, there is no longer a direct way to know where they came from! Luckily, they can still be recognized indirectly thanks to some anomalies: their GID, UID and PID are nonsensical and always the same: GID = UID = -2, PID = -1.
(Note that the GID, UID, PID and other keys are not usually present in the BSD flat format typical of logs; but ASL does store metadata for every message, and a lot of that metadata is simply ignored when outputting to the BSD format. So the idea of writing the UDP-received messages into a file is not the best solution, and I would have preferred to keep using some ASL storage and comb through it; alas, I couldn’t find a way.)

After some tests, this is what worked for me: put the following into the file /etc/asl/syslog_UDP :
> my_syslog.log mode=0740 format=std rotate=seq compress file_max=5M all_max=50M gid=20
? [N< PID 0] file my_syslog.log

And signaled syslogd to reconfigure:
sudo killall -SIGHUP syslogd

Aaaand it’s working! In /var/log a folder named syslog_UDP will appear, in which logfiles named my_syslog.log will contain the log messages with a PID less than 0, which should be impossible - but that is what the UDP-originating messages report. Note too that the logfiles will be automatically rotated, compressed and purged once they take up more than 50 MB. And they are readable by everyone - no sudo needed. (You are not running as an admin, are you?)

This will break when either DD-WRT starts sending correct UDP messages, or when OS X starts parsing them right, or when the syslog/ASL mechanism changes…

One thing that I tried and couldn’t get to work is using “directory my_syslog.log” instead of "file...": the result seems to be buggy, in that the generated files inside the directory do not honor the mode or UID/GID that I try to set.
And the debug support in ASL (activated by adding a line “= debug 1” in /etc/asl.conf) is less than useful. What finally helped the most was using the message inspector in Console and the format=raw output flag to see the key values as they are at filtering time - the -1 and -2 values are in fact shown as their unsigned decimal equivalents in the message inspector, though filtering on those decimal values did not work.

In hindsight I’ll bet that using syslog-ng, or even some hackish reverse netcat, would have been much faster. Buuut… well, at least I have learnt a bit about the logging madness in OS X, which in fact had been kind of a to-do for some years…


UPDATE 3/3/2015: Beware, this seems to stop working after a while. My first troubleshooting shows OS X's syslogd to be buggy when receiving from UDP (in Yosemite at the very least): syslogd stopped working (seemingly at all, not only the UDP reception) after some hours, and lsof shows that syslogd had 255 file descriptors open, most of them to the UDP socket. Sending a SIGHUP didn't do anything. "sudo killall syslogd" restarted it and now seems to be working, but who knows for how long. So I see myself going the syslog-ng/macports route in the near future.

UPDATE 4/3/2015: the UDP file descriptor growth keeps happening. Seems to be related to my WiFi connection going on and off. Nastily unreliable, and a funny way to remotely disable syslog on an OS X system, too. I'll disable the UDP thing and send a bug report. My recommendation: just go with macports & syslog-ng.


Filing the PIT from Mac OS X

If you are security-conscious, you have probably limited all kinds of things that Adobe Reader can do (JavaScript, opening links, …). Probably also in your browser configuration (plugin deactivation or even uninstallation) and your OS configuration (limiting which programs can run and when, uninstalling things, etc).

You might even have a Mac. ;P

But then some program comes around which just assumes things to be standard and doesn’t try too hard to adapt to anything. Like the e-Deklaracje program used for filling in the tax documents in Poland (PIT-37 in my case). Made with Adobe AIR, and needed (?) to minimize the red tape, so the fuckers can count on the user doing whatever is necessary to appease the monster.

The first couple of years I used a virtual machine with Windows XP for those things. But for the last couple of years I have managed to do everything in Mac OS X; I only have to remember all the little tweaks that have to be done or undone so that the e-Deklaracje program runs successfully. So here's the (partial?) list.

So, the thing is, every option has to be set to pretty much the most permissive possibility. Note that there are so many options along the way that checking how they interact would be too much work; so I am not totally sure that everything I list here is needed. I only know that when all of this is set like this, e-Deklaracje does work.

Make sure that the Adobe Reader plugin is installed in /Library/Internet Plug-Ins; it surely got there when you installed Adobe Reader, unless you specifically went in to delete or disable it, as I had done.
(One note here: it looks like once you install Adobe Reader without the Internet plugin, the normal installer will refuse to run, saying that you already have the application installed. So if you want to install the plugin, you have to either get rid of the installed Reader, or download the full Adobe Reader offline installer.)

In Adobe Reader’s preferences, enable JavaScript, and in the Internet section, enable the plugin - if that option appears. This is “funny”, because sometimes you are presented with a checkbox, but sometimes with just some descriptive text. I am not sure what causes the change. Right now for me it shows the text, saying that “the browser will use Acrobat Reader”.
In the Security (Enhanced) section, I have it enabled.
(Want to have a chuckle? There is a link right there saying “What is Enhanced Security?”, which takes you to the browser, where you are warned that you are opening a page with a self-signed certificate for “localhost”!)

In the Trust Manager section, under Internet Access, check your settings. e-Deklaracje is shown there as allowed to do what it pleases; I don’t know if I was asked about it some past year or if it just put itself there somehow. Also make sure that if a PDF tries to send information to the net, it will ask, not block unconditionally.
In the Forms section I have “Automatically calculate field values” enabled, and everything set to be shown.

And finally make sure that Safari has all its plugin and security settings allowing Acrobat Reader to do its thing: I had to enable the plugin in the Plugin Manager in the Security preferences (it had been disabled by Safari because it does not allow for the highest security settings). Note that even if you don’t use Safari this can be necessary, since Safari in fact supplies some flavor of OS-wide internet settings.

…and I think that is all.

Just remember to lock down again everything when you finish! Personally, I disable Javascript in Adobe Reader's preferences, and manually move the AdobePDF* files from /Library/Internet plug-ins into a folder I made named "Internet plug-ins DISABLED". That way the PDF viewer native to each browser works again.

One more thing: the e-Deklaracje application doesn't seem to be really necessary, though it helps you manage other forms and all the process and history. All the filling-in of the forms themselves can be done by plain Acrobat Reader, so you only need to get the PDF document for your PIT. I am not sure about the sending step, but I seem to remember from other years that it could also be done either from Reader itself, or from the e-Deklaracje website.

EDITED on 2015: an earlier version of this post mentioned some extra settings, but I deleted them when I could ascertain that they are not necessary for e-Deklaracje to work.


A hacky fix for SuperDuper running out of space: "delete first" with rsync

Rsync can delete the to-be-deleted files on the destination before it starts syncing the files that actually need syncing.

That is sorely missing in Shirt Pocket's SuperDuper!, which is otherwise a nice backup program, and for some time was one of the few sane options available for making fully reliable backups. SuperDuper! just starts copying blindly, and can then find itself in the situation where the backup destination can't hold the new files plus the old files that should be deleted but still weren't.

So that problem would be solved with rsync's "delete first" behaviour. I see there have been people complaining about this in Shirt Pocket's forums for at least 5 years, and the developers seem to only say "yes, we will do something sometime".

But they still haven't. So, this is the command line to use:

sudo rsync --delete --existing --ignore-existing --recursive  /Volumes/ORIGINAL_VOLUME/ /Volumes/BACKUP_VOLUME

  • ORIGINAL_VOLUME has a trailing slash, BACKUP_VOLUME hasn't
  • sudo is there so rsync can delete files not owned by the current user. Of course, that makes the command more dangerous. Adding the option --dry-run shows which files will be actually deleted
  • Why not use rsync to make the full backup? That might be an option, but some years ago rsync was unable to copy all the metadata used by OS X, so the backup might not be "good enough". Not even the Apple-modified, Apple-provided rsync in OS X did it right. Again, that was at least 5 years ago, so things might have changed. And anyway, rsync is rather designed to work through a "slow" link between the two volumes – say, disk-computer-network-computer-disk. It will work locally, of course, but it might turn out that a plain full copy is faster – rsync might not save anything, and might actually take longer.
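To see the delete-only pass in action without touching any real volume, here is a throwaway sketch with two hypothetical /tmp directories; the combination of --existing and --ignore-existing makes rsync skip all copying, so with --delete only the deletions remain:

```shell
# Build a fake source and a fake backup with one stale file
mkdir -p /tmp/rsync_src /tmp/rsync_dst
echo new   > /tmp/rsync_src/current.txt
echo stale > /tmp/rsync_dst/deleted-long-ago.txt

# --existing skips creating new files, --ignore-existing skips updating
# existing ones; --delete still removes what is extraneous on the target
rsync --delete --existing --ignore-existing --recursive /tmp/rsync_src/ /tmp/rsync_dst

ls /tmp/rsync_dst   # the stale file is gone; current.txt was NOT copied
```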


Links to email messages

Gruber, of Daring Fireball, wrote a while ago about the possibility of creating URLs that link to messages in your Mail mailbox. It is tremendously useful, and I haven't seen it published in Spanish, so here is a short summary.

First: what is this, exactly? It is a link that, when you click it, opens Mail and shows you the specific message, right there in your mailbox. Very useful if, for example, in two weeks you will need a message you just received: you can put the link to the message in iCal, so that the alarm that fires in two weeks carries a link which, when clicked, opens the corresponding message.

The thing is that Mail understands those URLs, and they are created automatically if, for example, you drag a message from Mail to iCal. They look like this: message://<MESSAGE-ID>, where MESSAGE-ID is contained in the headers of any message.

And yes, it is very useful, but Mail has no direct way to create such links. One way is drag'n'drop, but it would be much more useful to be able to create them explicitly. Gruber published on his blog an AppleScript that you can use with OS X's standard script menu (which you can enable in Script Editor's preferences). So you copy the script, put it in the right place, and you can create URLs at will in Mail. Great.
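For the curious, the URLs can also be built by hand from the Message-ID header: Mail seems to expect the angle brackets around the ID to be percent-encoded. A tiny sketch (the Message-ID below is made up):

```shell
# Take a Message-ID (without the <>) and emit the message: URL;
# %3C and %3E are the percent-encoded < and >
msgid='1234ABCD@example.com'
printf 'message://%%3C%s%%3E\n' "$msgid"   # message://%3C1234ABCD@example.com%3E
```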

He wrote about this in 2007, for Leopard. But it keeps working in Snow Leopard and Lion. A pity that Mail still has no native way to do this.


Huawei E220 3G modem drivers on OS X Lion: only 32 bits

I recently installed Mac OS X 10.7 Lion, and found that it already ships drivers for the Huawei E220 3G modem. But the drivers are 32-bit, so they won't work on machines which use a 64-bit kernel.

And there seem to be no 64-bit drivers, so the only solution for now is booting with the 32-bit kernel (holding 3 and 2 while booting).

I seriously doubt that Huawei are going to publish updated drivers; they were already rather unsupportive even when the modems were new. Eventually I plan to try to write my own driver, but that won't be short-term...


Tcpflow and connections between local interfaces

Looks like tcpflow doesn't see connections between local interfaces. After a bit of digging, it looks like such connections are "routed internally by the kernel", at least in Linux. There are patches for Linux to force those packets out of one interface and in through another, but even that is only useful if you have an external network connecting both interfaces (looks like a simple crossover cable should be enough).

There is another option: using iptables to first make the packets leave the machine towards some external IP, and then using arpd to make a router send back those packets.

And I see people reporting that tcpflow -i lo does work for them, capturing flows between local addresses even when those are not 127.0.0.1.

The interesting thing is that people seem to take it as well known that Linux routes the traffic between local interfaces through the "lo" interface; but I didn't find any authoritative source explaining the rationale, the configurability, or the implementation. Such a source would, I guess, also make it somewhat easier to find the equivalents in the BSDs and OS X.

(I surely should go straight to the source code, but that feels fragile. I am not interested in the current implementation, but in the design: how should it work vs. how does it work right now. Although surely that kind of networking is pretty deeply baked into the kernel...)

It would be interesting to know if this is something missing in the BSDs/OS X or in libpcap.


FileMon-like functionality on OS X as a one-liner dtrace script

I first thought of this as an lsof substitute, but no, it's more like a primitive/simple FileMon or fs_usage. It shows the executable and the file it opened. Could be improved, of course.

Dtrace is amazing.
sudo dtrace -qn 'syscall::open*:entry{ printf("%s %s\n",execname,copyinstr(arg0)); }'
fs_usage shows much more information... but is not a one-liner ;P

mplayer vs. Polish subtitles

Typically, when I start watching a film, I only have enough time to quickly grab some subtitles, try to make them more-or-less work with the film at hand and... that's it.
So, lots of repetitive, rushed fixes but no long-term solutions.

But this time I got sick of it and tried to understand the problem. Which is: Polish subtitles don't work with mplayer, or at least not with the mplayer built from MacPorts' mplayer-devel port, which uses mplayer's SVN HEAD.

The option -subcp cp1250 does select the codepage (cp1250, variously called Windows Latin 2 or Windows Central European, seems to be the typical encoding used by Polish subtitles on the net).

The option -subcp enca should auto-detect the encoding, but the port disables enca at configure time, and provides no way to enable it. I'll try to send a patch for that.
In the meantime, enca -L pl -i file by itself works nicely (enca is provided by a port). For difficult cases where enca fails, the chardet module for Python should work; I didn't try it yet.
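As a quick check that the encoding really is the problem, a suspect file can also be converted by hand with iconv; a throwaway sketch (the sample bytes below spell "zażółć" in cp1250):

```shell
# Write "zażółć" as raw cp1250 bytes (z a 0xBF 0xF3 0xB3 0xE6),
# then convert the file to UTF-8
printf 'za\277\363\263\346\n' > /tmp/sub_sample.txt
iconv -f cp1250 -t utf-8 /tmp/sub_sample.txt   # prints: zażółć
```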

So now we have subtitles with the right encoding. Next problem: the displayed subtitles have some Polish letters missing! (Not all of them, which is strange: ć and ł do work, but ż and ą don't.)

-fontconfig and -font don't seem to do anything. -font in particular doesn't seem to mind what kind of file I feed it, or even whether it exists, and -msglevel all=9 doesn't show anything. It looks like -nofontconfig is needed for -font to start having any effect. Which sounds logical, given that the fontconfig project seems to be all about autoconfiguring font management, but the mplayer docs say pretty little about all of this.
So, with -nofontconfig present, -font does accept .ttf files, but also .desc files.

(I have a pack of fonts for mplayer called font-arial-cp1250, which contains some variations of an Arial font, each consisting of sets of .raw files with a main font.desc file. I seem to remember that I downloaded it from some forum. The files are dated 2003. The font.desc files can be fed to mplayer with the -font option. And they don't work, or rather only seem to work when the encoding is NOT correctly selected with the -subcp option. So this is a dead end, probably outevolved by mplayer in these last 8 years. Better forget it.)

My ~/.mplayer directory also contains a subfont.ttf file, which seems to work fine with the -font option. I have tried a couple of other fonts and they seem to lack the Polish characters, so not every .ttf will be OK.

(I don't know if that subfont.ttf is standard. This .mplayer subdirectory has probably followed me through maybe 4 or 5 OS X versions, 4 computers and 2 architectures. Which means, no idea where it came from. The current mplayer port doesn't seem to include such a thing, which is to be expected since MacPorts makes some effort to install things only in well defined places. But I remember having played with other versions of mplayer, from fink to some binaries. Who knows.)

So, to summarize: with -subcp goodencoding -nofontconfig -font goodfont.ttf we should be OK.

And yet, it can be better. At some moment I discovered that the -ass option works. -ass uses libass, which is not covered by any port, so I didn't expect it to work; but mplayer seems to have its own internal version of the library (seen at the configure stage with something like port install -d mplayer-devel, or in the configure.log if MacPorts has not been configured to --clean after installation). And -ass is amazing. Again, neither -nofontconfig nor -font seem to have any effect, and I don't know where mplayer is getting its fonts now, but it is a good font with all the Polish characters. And not only that: the configuration commands contained in some subtitles (.txt files with not only the subtitles but commands that look like {y:b}{c:$0000ff}; RTF? CSS?) do work, so instead of the occasional rubbish the subtitles now render beautifully, with colors and bolds and italics, oh my.

So, to re-summarize: the best option is something like mplayer vidfile.avi -sub subfile.txt -subcp cp1250 -ass  
(and if I manage to send the patch for enabling enca, it should be something like -subcp enca:pl:cp1250)

Keep in mind that all of this is for MacPorts' mplayer-devel, which builds the SVN head. So all of this might be quite temporary. Which sucks. Hard.

(Why not use VLC and forget about all of this? Because VLC allows very little control, and almost any change means having to stop and start again. That's OK if the film and subtitles are perfectly matched, but that's not usually my case. Meanwhile, in mplayer one can tune lots of things, even while watching the film: go forward and backward in the subtitles and synchronise them to the video / sound; set, say, 90% speed (VLC only allows 66%, 50%, ...); move the subtitles on the screen; even render the film with more black space at the bottom should one want the subtitles not to overlap the image. And then of course there is mencoder...)


tcpflow 1.0.2 doesn't work with net expressions

Tcpflow 1.0.2, as built with MacPorts, doesn't work when net expressions are used.
But 1.0.6 does work.

I already submitted a new portfile so it should be available soon.


Toshiba G450 on Mac OS X Lion

The drivers Toshiba published for the G450 in 2008 are 32-bit only. If your Mac OS X Lion runs the kernel in 32-bit mode, they will probably work.
But if your computer runs the kernel in 64-bit mode, which I think is the majority of Macs by now, those drivers won't work. And Toshiba doesn't look like it is going to publish new drivers: they were already slow with support matters when the phone was new, and on top of that Toshiba's mobile division apparently... merged with Fujitsu's in 2010.

So the short-term solution is to boot in 32-bit mode, which is done by holding the 3 and 2 keys while booting. Only the kernel switches to 32-bit mode, and supposedly the performance difference shouldn't be too big - though I don't know any concrete numbers. Programs will keep running in 64-bit mode.

Curiously, OS X Lion ships with Huawei drivers for the E220 modem and some others. But those are 32-bit drivers too, so we are in the same situation. And I don't see that Huawei has published new 64-bit drivers either...

A while ago I toyed with the idea of writing my own driver for the Toshiba G450. It looks like a good moment to try again. We'll see what happens. (And now the E220 is a candidate too!)

Toshiba G450 drivers for OS X Lion - only 32 bits

I recently changed computers and got a MacBook Pro which boots the kernel in 64-bit mode. The problem is, the only drivers Toshiba published for the G450 modem are 32-bit (published in 2008).
So the only solution for now is booting in 32 bit kernel mode (pressing 3 and 2 when booting).

I seriously doubt that Toshiba are going to publish updated drivers; they were already rather unsupportive even when the modems were new. And Toshiba seems to have merged its mobile division with Fujitsu's... and even Windows 7 users seem to have problems. So... maybe this means I should try to go back to the program-your-own-driver thing.


Building socat in OS X 10.7 Lion

socat (as of this writing) doesn't compile with clang, the standard compiler in Mac OS X 10.7 Lion (10.7.2 as of this writing). It does compile if instead of clang one uses, for example, llvm-gcc-4.2.2.

The developer reports that this is a bug: only gcc is supported by socat, but there are compatibility fallbacks for other compilers. Only, the fallback was missing in the file that fails to compile, xioexit.c. The fix is easy:
@@ -5,6 +5,7 @@
/* this file contains the source for the extended exit function */

#include "xiosysincludes.h"
+#include "compat.h"
#include "xio.h"
(if someone is trying to build something like socat, I guess they don't need help with patch files)

This problem was also present in MacPorts' port for socat. I have already reported it and provided a new working portfile, so I guess it won't be long until it is published.


Directory permissions vs local Portfiles in MacPorts

~/MacPorts/ports$ sudo port install socat
--->  Computing dependencies for socat
could not read "/Users/mija/MacPorts/ports/sysutils/socat/Portfile": permission denied

I was having strange problems using a locally edited portfile. It turns out the permissions were wrong on a directory in the path; each of the directories should have at least o+rx permissions, and strangely my $HOME had none of those (strange because other users on my computer had o+rx, admins or not).

Note that MacPorts has lately (since 2.0?) started to use the user nobody at some points of its workings, and root at others; in this case the user nobody was the one unable to reach the Portfile. A way to check what this user sees is "sudo -u nobody ls -leda@ /Users/mija/blabla".

A workaround is setting macportsuser in /opt/local/etc/macports/macports.conf to root, but that's not a good idea. MacPorts is doing the sensible thing, de-escalating privileges when they are not needed; that way, if something goes awry, the damage should be much smaller. Lion is doing a lot to be secure; let's keep it that way.

So fix those permissions, goddamit.
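A quick way to check a whole path for the traversal bits, sketched here with throwaway /tmp directories (point target at your real Portfile directory):

```shell
# Walk up from the Portfile directory and flag any component missing
# o+rx; first reproduce the problem with a fake, closed-off $HOME
target=/tmp/fake_home/ports/sysutils/socat
mkdir -p "$target"
chmod 0700 /tmp/fake_home   # the problem: others can't descend

p=$target
while [ "$p" != "/" ]; do
    others=$(ls -ld "$p" | cut -c8-10)    # the "others" rwx triplet
    case "$others" in
        r?[xt]*) ;;                       # o+rx present (t = sticky /tmp)
        *) echo "missing o+rx on: $p" ;;
    esac
    p=$(dirname "$p")
done

chmod o+rx /tmp/fake_home   # the fix
```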


dtrace'ing paging to disk

How to know which processes are paging to/from disk (as opposed to other VMM management) in OS X, and how much exactly:

sudo dtrace -n maj_fault'{@[execname] = count()}'

reference (with examples and other options):

I had been meaning to look for ways to do this, and tried some of the tools included in OS X (Activity Monitor, top, Instruments, vm_stat, vmmap, ...). But nothing really helped, and/or missed the exact level of information I was targeting (only paging resulting in real I/O; the relationship between I/O and process; realtime...). Finally I had the inspiration to google for "dtrace pagefaults". Bingo.
(dtrace in this example isn't realtime, but is the best approximation so far, and I'm guessing some tuning should fix it. Heck, it's a one-liner!)

Learning dtrace is still something I'd love to do, and once again it is tempting me to let it jump to the front of the queue...

(Mhm, will Oracle's Java for OS X support dtrace too?)

Oh, and of course Firefox was by far the greatest pagefaulter, even with the great improvements in the v9 betas. (I actually saw it shed some hundreds of MB of VM usage when freshly launched!... though after some days it has gone back to its habit of hovering over 20% CPU and 2 GB VM even while idle.)
But it's interesting that the second offender is Skype, even if more than one order of magnitude below Firefox, and one order of magnitude above the 3rd and the rest of the offenders. Interesting because it looks very well mannered in Activity Monitor, and it was unused during the whole measuring time. Maybe its P2P routing thing...? Would be good to keep an eye on it.


Compiling Transcode 1.1.6 and 1.2 in OS X

(express post: too much time spent on this, but too much work invested to let it slip without benefitting someone else)

1.1.6 is not released; it had to be hg-cloned. In fact, the repository looks like the 1.1.6 tag was added at some moment but then deleted?? That branch also has the latest video stabilization; no idea yet whether it is also in 1.2.

It can be compiled. Get the hg repository via hg itself, not one of the zip/bz2/etc. packages from the website, since the compilation scripts expect to find the .hg files. Read the README and INSTALL; you'll need to create the configure script yourself (easy, just read).

I am using MacPorts, and even have the included Transcode 1.1.5 installed, so all the requirements were already in place. Most of the expected work was about configuring the non-MacPorts transcode to use the /opt dirtree.

(Incidentally: 1.1.5 has horribly mismatched documentation; I have had to resort to using strings and the source code to understand what some of the utils were supposed to be doing and how. The wiki doesn't help. Also, some combinations of export module/codec don't really work: they blow up without saying anything, and I have no idea whether that's because of transcode itself or because of the lame or ffmpeg versions, who knows. Also, it seems to be accepted that mp4 container files can't be generated directly by Transcode. I am interested in using the video stabilization plugin, but if I had known how much time this would take, ...)

1.2's generation of configure uses a helper script which doesn't work right and causes problems: the generated configure fails with
./configure: line 4: .: filename argument required
.: usage: . filename [arguments]
Inspection of the configure source shows that it was badly generated. The helper script uses echo -n blabla, trying to output "blabla" with no line feed. But echo in sh doesn't work like that, so the actual output is "-n blabla\n". (Mhm, no: echo is a builtin in both sh and bash, but it behaves differently between them; interesting.) Anyway: that's nasty; didn't anyone check it? And the errors it causes are insidious; I wasn't sure if those were status messages, warnings or plain errors. The easy fix is to change the echo -n $1$HGVER into printf $1$HGVER .
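The trap can be seen in isolation; a small sketch (under bash, echo -n happens to work, which is exactly why the bug slips through on a developer's machine):

```shell
# POSIX sh makes no promises about echo -n: some shells print "-n ..."
# as literal text, others swallow it as an option.
# printf is the portable spelling.
bad=$(sh -c 'echo -n 1.2.0')    # "-n 1.2.0" under dash, "1.2.0" under bash
good=$(printf '%s' '1.2.0')     # always exactly "1.2.0"
echo "$good"
```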

Even after having fixed that, 2 telltale failed output strings remained (something like warning: AC_INIT: not a literal: -n 1.2.0-0a9d7c95b961). But I could not find where they were coming from. No, the helper script was not it this time, and I could not find any file which had cached the string or parts of it. Before the fix there were about 6 of those; after it, there were 2. Anyway, after some fruitless searching, I decided to push on... at least configure now worked, and nothing exploded in a seemingly related way.

The ./configure line I used:
./configure --prefix=/opt/local --enable-libavcodec --enable-ffmpeg --enable-experimental --enable-statbuffer --enable-lame   --enable-xvid   --enable-x264   --enable-libquicktime   --enable-faac    --enable-libavformat --with-lame-prefix=/opt/local --with-lame-includes=/opt/local/include --with-xvid-prefix=/opt/local --with-xvid-includes=/opt/local/include --with-xvid-libs=/opt/local/lib --with-faac-prefix=/opt/local/bin --with-faac-includes=/opt/local/include --with-faac-libs=/opt/local/lib --disable-libdvdread --disable-libjpeg 

The xvid support is half-baked. I had to
export CFLAGS=-I/opt/local/include
before the xvid support finally worked, since the --with-xvid-includes flag was sometimes left unused by the Makefile.

Incidentally, I think that also helped x264.

Transcode 1.2 has seemingly been abandoned for about 18 months now. The last post on the website says it should be considered about alpha quality, and that they were desperately looking for new developers...

The make failed because in a number of sub-makefiles the tcmodule library path was not being passed to the linker. I fixed it by adding $(LIBTCMODULE_LIBS) wherever the linker failed, with errors like:
Undefined symbols:
  "_tc_module_info_match", referenced from:
      _tc_module_match in tcmp3cut.o
  "_tc_module_info_log", referenced from:
      _tc_module_show_info in tcmp3cut.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make[2]: *** [tcmp3cut-1.2] Error 1


(Note the "_tc_module_info_match": "tc_module_info_match" is a function in the "tcmodule-info" file in the "tcmodule" directory (sometimes Spotlight is useful! ;P). Then note the "tcmp3cut": that was the part being compiled. Looking for it in the makefile showed that it was not including $(LIBTCMODULE_LIBS), which was defined at the beginning of the file but not used as consistently as the rest of the definitions.)
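Rather than waiting for the linker to fail once per tool, grep -L (list files NOT matching) can point out which makefiles forgot the variable; a runnable sketch with made-up file names:

```shell
#!/bin/sh
# Two tiny stand-in makefiles (names invented for the demo):
tmp=$(mktemp -d)
printf 'LIBS = $(LIBTCMODULE_LIBS)\n' > "$tmp/good.mk"
printf 'LIBS =\n'                     > "$tmp/bad.mk"

# grep -L lists the files that do NOT mention LIBTCMODULE_LIBS,
# i.e. the candidates for producing the "_tc_module_*" link errors:
grep -L 'LIBTCMODULE_LIBS' "$tmp"/*.mk

rm -rf "$tmp"
```

Only the file missing the variable is printed; in the real tree the pattern would be run over the generated Makefiles under tools/.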

The tools directory had worse problems. The tcstub.c file is used when compiling other files, and 4 of its definitions conflict with libtc.a. Since libtc seems to be generic for all of Transcode and the stub seems less important, I guessed the stub versions of the definitions had to go, so I commented them out (tc_new_video_frame, tc_new_audio_frame, tc_del_video_frame, tc_del_audio_frame).
The errors were like this:
ld: duplicate symbol _tc_new_video_frame in ../libtc/.libs/libtc.a(tcframes.o) and tcmodchain-1.2-tcstub.o

With that, it finally compiled.
Beware: make install is supposed to install alongside existing 1.1-line transcodes (the new files should get "-1.2" appended to their names), but some files had the same name and got overwritten; avifix, for example.
Which is doubly insidious, because the tools directory had those stub file problems, which makes all of the directory's contents suspect...

When I have a moment to check / compare the 1.1.5, 1.1.6 and 1.2, I'll try to leave a note.


Damaged hard drives, rescuing your data, and staying away from SpinRite

Lately I've run into people talking about using SpinRite to recover hard drives (even on Macs!), so maybe it will help if I tell my own experience.

A few years ago (maybe 5?), the hard drive of my then-current iMac G3 started failing. The computer would suddenly freeze, unresponsive to anything for around 10 minutes, and then suddenly keep going as if nothing had happened. If I remember correctly, the pauses were always equally long, and it didn't even respond to the mouse. I don't remember whether the drive made any strange noise.

How did I know it was the hard drive? I tried some Linux CD or DVD and everything seemed fine, except when accessing the hard drive.

I wanted to get the data out, so I started looking at the options. One thing I found was SpinRite which, according to its web page, sounded like magic... which in computing is not a good sign, I think. SpinRite supposedly "revives" the hard drive. But well, I had been curious about it for a while, and I had access to a PC and to the program, so I put the drive in the PC and tried it.
I let it work for more than 24 hours, and it had only gone through a few KB of the roughly 100 GB the drive held.
I stopped it and started again, this time waiting only about 6 hours, but it kept getting stuck at the same point. At that rate it would take months to finish (if it finished at all!), so I moved on to trying other things. So, for starters, my recommendation is clear: don't waste your time with SpinRite. If you are lucky, it's only a waste of time; if you are not, everything it tries to do to the drive will just finish it off. (Since a hard drive that starts failing mechanically will go from bad to worse, you have to hurry to save what you can.)

In the end I used a Linux LiveCD or LiveDVD (Knoppix first, Ubuntu later), which shipped with dd_rescue and ddrescue (and if they weren't included, they were easy enough to add from a USB stick or download from the network, I don't remember now). They are named after dd, the basic, traditional Unix tool for copying disks at the lowest level, which however stops when it hits an error. ddrescue is like dd, but when it finds errors it tries to continue. And dd_rescue (similar name, rather different tool) not only survives errors but expects them, and does its best to save the most data in the least time: when it finds an error it jumps ahead on the disk until it finds a clean zone and keeps reading; it reads forwards and backwards from each point of the disk (because sometimes that helps); and it remembers all the jumps it took so it can reassemble the most usable disk image possible.
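As a sketch of what such a session looks like with GNU ddrescue (the device name and file paths are hypothetical, and the script only prints the commands, since running them needs a real failing disk):

```shell
#!/bin/sh
DISK=/dev/sdb     # hypothetical failing drive -- triple-check this on a real system!
IMG=rescue.img    # destination image, on a healthy disk
MAP=rescue.map    # map file: records good/bad areas so runs can resume

# Pass 1: grab the easy data fast, skipping bad areas (-n: no scraping).
echo "ddrescue -f -n $DISK $IMG $MAP"

# Pass 2: come back to the bad areas only, retrying each a few times (-r3).
echo "ddrescue -f -r3 $DISK $IMG $MAP"
```

The map file is what implements the "remember all the jumps" behavior described above: interrupting and restarting loses nothing, and later passes only touch the areas still marked bad.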

Why do the jumps matter? Because, as I said, every time the drive tried to read a bad spot there was a multi-minute timeout. And there were lots of errors sprinkled across the disk, so any attempt to read it blindly became impossibly slow. Moreover, I ended up discovering that part of the problem was that, to improve speed (read cache / read-ahead), the OS and the drive itself normally try to read big blocks, instead of single sectors (which are only 512 bytes). Each bad sector really had a timeout of maybe 2 seconds; but bad sectors typically cluster together, so a read of, say, 8 MB (which seems to be a typical value) could be stumbling over hundreds of bad sectors, multiplying the waits. That's why approaching a damaged spot "from behind" can be useful: you trip over just one bad sector instead of many.

And not only that: it's also possible to disable the caches and read-ahead with hdparm on Linux. So, combining dd_rescue with an hdparm configuration of the drive that minimized the timeouts, I managed to save more than 80% of the disk in a few hours.
dd_rescue keeps retrying the damaged areas to extract everything it can, but there came a point where the drive simply stopped responding at all. Maybe if I hadn't wasted time with SpinRite I would have got more out of it, who knows.
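A sketch of that hdparm tuning (hypothetical device name; the script only prints the commands, since they need a real drive and root privileges):

```shell
#!/bin/sh
DISK=/dev/sdb   # hypothetical failing drive

# Disable the drive's own read look-ahead, so a read of one sector
# doesn't drag in a cluster of bad neighbors:
echo "hdparm -A0 $DISK"

# Disable the drive's write cache too, for predictable behavior:
echo "hdparm -W0 $DISK"

# Also drop the kernel's read-ahead for the block device to 0 sectors:
echo "blockdev --setra 0 $DISK"
```

With look-ahead off on both sides, each timeout costs one bad sector's worth of waiting instead of a whole 8 MB block's worth.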

After all this I ended up with a disk image, which I could mount in OS X with hdiutil (I seem to recall; these were the days of OS X 10.3 or 10.4) and half-repair with DiskWarrior. I think I also used Data Rescue II to scavenge for loose files, although that is a last-resort move with no guarantee of being worth it (the recovered files can be corrupted). Advisable if you had loose files you care about, be they documents or photos; but things that live in directories (applications, for example) are better given up for lost, although sometimes there's a surprise.
I think I also tried other tools besides Data Rescue II, but Data Rescue II turned out the best in the end. Photorec is an option, and free/open source; I believe I discovered it when I was already done with Data Rescue.

Things I learned from all this:
  • Hard drives fail. Period.
  • Backups are necessary. 
  • Linux gives you a priceless level of control for repairing failing hard drives. 
  • You can save a LOT of things with free tools. 
  • When a hard drive starts failing mechanically, accept that its hours are numbered (literally!), and think carefully about how you spend them to save what you can. 
  • Using SMART can warn you, with some lead time, that the drive is going to fail.

Regarding SMART: smartmontools (open source, available in MacPorts) does it all, from one-off checks to periodic in-depth tests. But beware: there have been a couple of studies (by Google and another company, analyzing failures across thousands of disks from several manufacturers) concluding that SMART only detects around 50% of failures. Although when it does warn you, the drive has less than 24 hours left.
And in any case, for SMART to check everything it can check, you need to run its self-tests periodically, as smartmontools does (if you configure it properly).
Still, since all this happened, 2 more hard drives have failed on me and SMART gave no warning despite the tests and so on. Oh well, it's free... better than nothing, I suppose.
Oh, and I have yet to see an external enclosure that gives access to SMART. And I own 5 (USB and FireWire, from different brands).
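The smartmontools checks mentioned above look like this with smartctl (the script only prints the commands; the disk identifier is hypothetical, and differs between OS X and Linux):

```shell
#!/bin/sh
DISK=/dev/disk0   # hypothetical; on Linux it would be e.g. /dev/sda

# Quick overall health verdict (PASSED/FAILED):
echo "smartctl -H $DISK"

# Full dump: attributes, error log, self-test log:
echo "smartctl -a $DISK"

# Kick off an extended self-test; it runs inside the drive itself,
# and the result shows up later in the self-test log:
echo "smartctl -t long $DISK"
```

For the periodic testing, smartd (the daemon half of smartmontools) can be configured to schedule these self-tests automatically.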

More things: when a "modern" hard drive (PATA drives already counted as "modern"!) detects that a sector is failing, it will try to recover the information on its own (for example by retrying a read that returns errors), and if it succeeds, it substitutes a healthy sector for the failing one; drives keep a few spare sectors hidden away for exactly this. This happens inside the drive by itself, without the OS doing anything. Only when a sector really can't be read after several attempts do we start getting read errors in the OS. In that situation, one way to help fix it is writing zeros to that sector, which can be done by overwriting the affected file with zeros... or simply by formatting the whole disk (with zeros! beware, some formatting tools don't write zeros! DOS's doesn't by default, and I think Windows XP's doesn't either; OS X has the option). When the drive sees zeros being written, it takes the chance to substitute the damaged sectors it had pending, so they will give no more trouble.
...but in any case the existence of those failures should already be a bad omen for the drive's future!
With smartmontools you can query the drive's internal count of sectors in that bad-read/awaiting-substitution state. If I remember correctly, if that value stays above 0 it may be because there are no spare sectors left for substitution... so the error count will only keep growing. Time to get a new disk ready.
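The zero-writing trick is just a dd write at the right offset; here is a runnable sketch against an ordinary file standing in for the device (the "sector" numbers are made up for the demo):

```shell
#!/bin/sh
# Simulate a 10-sector disk as a plain file filled with 0xFF bytes:
img=$(mktemp)
dd if=/dev/zero bs=512 count=10 2>/dev/null | tr '\0' '\377' > "$img"

# Pretend sector 4 is the one pending reallocation; overwrite just that
# sector with zeros (conv=notrunc keeps the rest of the "disk" intact).
# On a real drive, this write is what triggers the spare-sector swap.
dd if=/dev/zero of="$img" bs=512 seek=4 count=1 conv=notrunc 2>/dev/null

# Verify: sector 4 now reads back as zero bytes.
dd if="$img" bs=512 skip=4 count=1 2>/dev/null | od -An -tx1 | head -1

rm -f "$img"
```

On a real disk the same dd invocation (with of= pointing at the device and seek= at the bad sector's LBA) does the job, which is why a zero-fill format has the same healing side effect.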

And back to SpinRite: there are plenty of people online saying it works great. But plenty of people believe in horoscopes, and that doesn't make them true. And it's telling that the people arguing against SpinRite are usually the ones able to explain themselves technically. That is: SpinRite's official pitch sounds like infomercial garbage, and the arguments of the people defending it also sound, coincidentally, like infomercial "happy customers" (things like "trust me, I'm an IT guy at a big company and SpinRite has saved my bacon more times than I can remember, I recommend it, you won't regret it!": highly technical explanations indeed). I imagine it can sometimes help, because what SpinRite does is basically the same thing I described: repeated sector reads and writing zeros over failed sectors. But it does it in an absurdly crude way, triggering enormous timeouts, repeating each operation several times even when that is counterproductive, hammering healthy and damaged sectors alike... and on top of that it costs money, when you can get the same effect with free tools or even ones included with the system.

And finally, SpinRite does an astonishing number of silly things to give the impression that it's doing something truly magical. That seems to be the author's general style; he has made a spectacle of himself on other subjects before, like when he tried to convince the world that a bug in Windows' WMF format was a Microsoft conspiracy to control every PC. Luckily, every now and then someone provides a reference that puts him in his place... as when a Snort developer took apart the nonsense he was saying about network scanning.


Yet Another Nonsensical Javascript Benchmarking of Mostly Unreleased Browsers

After a couple of too-close-together performance-competition-between-browsers thingies, I decided to make my very own quick & dirty benchmark of current browsers. WebKit-based ones at least, although soon I'd like to give Firefox another try; it's been months since I abandoned it for WebKit itself, due to a multitude of small problems, some of them caused by myself (far more than a hundred tabs always open, lots of extensions to lessen the load) but magnified by Firefox itself (extensions failing, slowness even with much more reasonable numbers of tabs, sluggish Flash performance, problematic Java)...
In fact it's a great moment for this kind of test, since Safari 4.0.2 has just appeared, Stainless 0.6.5 too, and... well, WebKit has a recent nightly (r45641). Chrome is whatever is available right now: 3.0.192 (developer release).
A surprise has been to learn that Opera seems to have abandoned the race some time ago (ten times slower, JavaScript-wise, than any of the WebKit-based browsers???). It will be... interesting to throw that at the next person who claims that Opera is holier than thou, blah, blah.
The benchmarks are Google's V8 (version 4) and Webkit's Sunspider.
v8: (bigger is better)
  • webkit: Run it once: 2160. Open new tab, run it again in the new tab: goes down to less than 2050, which is about 5% less. Next tabs hover just over 2050. Closing tabs and starting anew (but w/o restarting Webkit) goes back to 2160. (later I closed everything and retried, and didn't go over 2100)
  • safari: 2080 -> 1980. Same behaviour.
  • stainless: starts at 2064 and with each tab improves up to 2120!
  • chrome: pretty stable around 2950. Wow.
sunspider: (smaller is better)
  • webkit: starts at 787, worsens to 818 after a couple of new tabs running the same benchmark.
  • safari: from 795 to 805
  • stainless: again, improves from 695 to 680!
  • chrome: again, pretty stable winning at 650...
Interesting: Sunspider continually loads something (something substantial) between subtests. V8 only loads on, well, the first load; afterwards the cache seems to be enough, since every new tab with V8 doesn't cause (much) traffic. BUT, in Stainless V8 seems to be reloaded in every new tab. Maybe Stainless is not sharing the cache between tabs?
And it looks like tabs are sometimes slow and sometimes fast: open tab, load V8, let it run. Let's say it gets a lowish score, 2050. Make new tab, load V8, let run. Let's say this one gets a somewhat high score, 2150. Now, run both tests again. The "slow" tab keeps being slow, and the "fast" tab keeps being fast... Strange.
(and looks like the first tab created in every window is "slow", and the next are "faster")
And with this, I forbid myself from wasting any more time benchmarking unreleased browsers. (But I sure will keep an eye on Stainless and Chrome... if they had session saving, maybe I would switch right now! But at least they will make an interesting debugger...)
The machine used was a 2007 white MacBook, Core2Duo 2.16 GHz, with 3 GB RAM and about 5% of background CPU load.


Smart Crash Reports, Input Managers and other vermin

Smart Crash Reports is an Input Manager created by Unsanity (makers of the "haxies").
The full story is at . The short version is that Input Managers are, in theory, a way to add text input methods to the system. In practice, though, Input Managers can do much more. They are loaded automatically into any program and can alter it (in memory, as it runs; programs are not altered on disk). A well-known Input Manager is (was?) Pith Helmet, which adds ad-blocking capabilities to Safari.
So Input Managers are very powerful, and they can also be very dangerous. Some say they are any malware writer's dream, and in fact Leopard introduced, as a security improvement, serious restrictions on the use of Input Managers.
Smart Crash Reports is another typical Input Manager, and a suspicious one, because various applications tend to install it without telling the user. What it does is modify Apple's Crash Reporter so that crash reports are sent not only to Apple but also to Unsanity, where other developers can see them, which is useful for finding the problems that caused the crash.
Buuut that goal can be achieved in other, more... hygienic ways than installing (covertly or not) an Input Manager that will meddle with every application you run. SCR seems well made and shouldn't cause problems, but even so, better not to play with fire.
If the pesky SCR is in your Library/Input Managers directory, you can start by deleting it by hand and keeping the folder open for a while to see when it shows up again (most likely when you launch one of the applications that install it).
Another thing you can do is attach a Folder Action so you get notified automatically when something new appears in the folder. But that only warns you, it doesn't prevent anything. (...unless you use an action that also deletes, of course...)
And yet another possibility is to create a folder named exactly like the Input Manager you want to block ("Smart Crash Reports" in this case); when other programs try to install it, they will believe it's already there and leave it alone. (Although messages will appear in system.log every time you launch a program, warning that the Input Manager could not be loaded...)
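The blocking trick itself is just one mkdir; a runnable sketch, using a temporary directory in place of the real Library/Input Managers folder:

```shell
#!/bin/sh
# Stand-in for ~/Library/Input Managers (a temp dir, for the demo):
LIB=$(mktemp -d)

# A directory with the exact name fools installers that only check
# "does something with this name already exist?":
mkdir "$LIB/Smart Crash Reports"

# On OS X you could additionally lock it so it can't be replaced
# without clearing the flag first:
#   chflags uchg "$LIB/Smart Crash Reports"

ls "$LIB"
rm -rf "$LIB"
```

The listing shows the decoy in place; pointing LIB at the real folder is all that changes for actual use.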
In my case, the program that had made SCR appear was Graphic Converter. It's a somewhat old version (6.2); I don't know whether the current one still uses it...
And it seems QuickSilver doesn't install it, although it was my first suspect.