You don't say whether you've already done this, so I'd recommend posting on the PostgreSQL mailing lists; the pgsql-performance list focuses on exactly this type of issue.
count(*) is always slow on PostgreSQL, because MVCC visibility rules force it to scan every row, so an index won't fix that. An index will, however, speed up lookups that filter on a single column.
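If you only need an approximate count, one common workaround is to read the planner's row estimate from the system catalog instead of running count(*). A minimal sketch (the table name is hypothetical, and the number is only as fresh as the last ANALYZE or VACUUM):

```sql
-- instant but approximate: the row estimate kept in pg_class
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'bigtable';
```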
I can think of a few parameters that you might want to look at; they helped us tune a pretty large PostgreSQL install.
- shared_buffers increased to 120000 (8kB buffers, so roughly 940MB) on a machine with, I think, 8GB of RAM. I tested this experimentally; increasing it further did not help much.
- max_fsm_pages set to 200000
- work_mem increased to 2048 (kB, so 2MB per sort or hash operation)
- effective_cache_size set to 32768 (8kB pages, about 256MB)
- default_statistics_target set to 100. This one is pretty important; the default is 10.
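Collected into postgresql.conf form, with the values above (the unit comments assume the pre-8.2 integer-style settings: 8kB buffers/pages for shared_buffers and effective_cache_size, kB for work_mem):

```
shared_buffers = 120000            # 8kB buffers, roughly 940MB
max_fsm_pages = 200000             # free-space map capacity
work_mem = 2048                    # kB per sort/hash operation, i.e. 2MB
effective_cache_size = 32768       # 8kB pages, about 256MB; a planner hint only
default_statistics_target = 100    # default is 10
```

Note that shared_buffers and max_fsm_pages require a server restart, and a new default_statistics_target only takes effect once you run ANALYZE again.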
These all helped; with these settings we got some slow queries to switch from sequential scans to index scans.
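You can verify that kind of plan change with EXPLAIN ANALYZE (the query here is just a placeholder for one of your slow ones):

```sql
-- prints the actual plan and timings; look for "Index Scan" vs "Seq Scan"
EXPLAIN ANALYZE SELECT * FROM bigtable WHERE id = 42;
```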
The big thing this sped up was full text search with tsearch2. (That is why shared_buffers is so big.)
I also noticed that in quite a few places we had to rewrite queries that ran fine on smaller databases or on PG 7.x but became very slow on PG 8.x with huge tables. This was mainly queries on views; joins across multiple views seem to be particularly slow. That is not an issue with your example, of course.
To summarize: you can't avoid a sequential scan on that table when selecting a single row unless you have an index. Unfortunately I don't know how to speed up the index creation itself, but I hope some of this information is useful. Definitely ask about the index creation on one of the PG mailing lists.
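For completeness, a minimal sketch of the index that lets a single-row select use an index scan (table and column names are hypothetical):

```sql
-- once this exists, WHERE id = ... lookups can use an index scan
CREATE INDEX bigtable_id_idx ON bigtable (id);
```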