PG doesn't shrink index files, nor does vacuum hand that space back to the operating system, but free space within index files is scavenged and reused, I believe.
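
A hedged sketch of what that means in practice (the index name here is made up): if an index has bloated badly, rebuilding it is, as far as I know, the only way to actually shrink the file on disk; plain vacuuming only makes the dead space inside it reusable.

    REINDEX INDEX accounts_pkey;   -- rebuilds the index from the table, compacting it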
Doing what Carl suggests would require a new index type, I think, and I don't think the PG folks would give up their existing index structure. While the fact that there's no MVCC information kept with individual index entries does indeed mean that visibility/validity information must be retrieved from the table, it also means that the indexes are smaller. Since PG stores integers as real binary integers (32 bits), the size overhead required to carry MVCC information in each index entry could be quite high relative to the key itself.
That's been the major argument against making indexes transaction-aware.
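
To put rough numbers on it (a back-of-the-envelope guess, not something I've measured): a btree entry for a 4-byte integer key is only on the order of a dozen bytes including its header and item pointer, so tacking a couple of 4-byte transaction IDs plus status bits onto every entry could come close to doubling the index size. You can see how big a table and its index currently are, in 8 KB pages, from pg_class (the table and index names below are made up):

    SELECT relname, relkind, relpages
      FROM pg_class
     WHERE relname IN ('accounts', 'accounts_pkey');
    -- relpages counts 8 KB blocks and is refreshed by VACUUM/ANALYZE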
Carl's actually talking about something different, I just realized: index-organized tables, rather than making the general btree indexes transaction-aware. Sort of a best-of-both-worlds in cases where it would work (and a worst-of-both-worlds if you don't know what you're doing!).
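
The closest thing PG has today, as far as I know, is CLUSTER, which physically reorders a table to match one of its indexes once but doesn't maintain that order as rows change, so it's not a true index-organized table. A minimal sketch (names made up; syntax and caveats vary by version, so check the docs for yours):

    CREATE TABLE accounts (id integer PRIMARY KEY, balance integer);
    -- the primary key creates an index, typically named accounts_pkey
    CLUSTER accounts_pkey ON accounts;   -- 7.x-era syntax; reorders the heap by that index, once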
One of the reasons the PG developer group is less than enthusiastic about some of these interesting ideas/proposals is that they're still working hard at improving some important nuts-and-bolts issues. For instance, the "lazy vacuum" in PG 7.2 (I think it's there?) will do space reclaiming in the background, which will help sites that do a lot of deleting/updating/inserting of data avoid the need for frequent vacuums. In other words, triage ...
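
If I've got the 7.2 behavior right, the distinction looks roughly like this (table name made up):

    VACUUM accounts;        -- "lazy" vacuum: marks dead rows' space reusable,
                            -- runs without locking out readers and writers
    VACUUM FULL accounts;   -- old-style vacuum: compacts and shrinks the files,
                            -- but takes an exclusive lock on the table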