I'm joining on parties instead of users because each user can have multiple parties.
You're right about the group by plan being different; I killed my old emacs buffer, so I'm not sure what I was doing differently. But here's what I'm getting now:
Subquery Scan won_percent_raw_v  (cost=0.00..94421.44 rows=10874 width=16)
  ->  Aggregate  (cost=0.00..94421.44 rows=10874 width=16)
        ->  Group  (cost=0.00..93334.01 rows=108743 width=16)
              ->  Nested Loop  (cost=0.00..93062.15 rows=108743 width=16)
                    ->  Index Scan using parties_pkey on parties p  (cost=0.00..258.47 rows=43 width=4)
                    ->  Seq Scan on battles b  (cost=0.00..1173.49 rows=65649 width=12)
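For reference, won_percent_raw_v is essentially this shape (the column names here are illustrative, not the exact schema):

    CREATE VIEW won_percent_raw_v AS
    SELECT p.user_id,
           count(*) AS battles_fought,
           sum(CASE WHEN b.winner_id = p.id THEN 1 ELSE 0 END) AS battles_won
      FROM parties p, battles b
     WHERE b.party_id = p.id
     GROUP BY p.user_id;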
In my experience nested loops are to be avoided when at all possible, and this one does seem to be somewhat slower. Although I'm not sure why it does a seq scan here instead of the index scan it does with the subquery. (I double-checked after vacuum analyze in case my playing with index creation/destruction was confusing it, and I get the same results in both cases.)
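To confirm whether the nested loop is really the slow part, I'll probably try turning it off for the session and timing both plans, something like:

    SET enable_nestloop = off;  -- planner avoids nested loops where it can
    EXPLAIN SELECT * FROM won_percent_raw_v;
    -- run and time the query here, compare against the original plan
    SET enable_nestloop = on;   -- restore the default afterwards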
But as you say, it's going to be less expensive in any case to track the info separately... I do an update to Parties after each battle anyway, so adding battles_won and battles_fought to that table won't add significant overhead. It is uglier, though. :(
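Roughly what I have in mind (column names still up in the air, and :party_id / :won are placeholders the application would supply):

    ALTER TABLE parties ADD COLUMN battles_fought integer;
    ALTER TABLE parties ADD COLUMN battles_won integer;
    UPDATE parties SET battles_fought = 0, battles_won = 0;  -- backfill existing rows

    -- folded into the existing per-battle update of Parties
    UPDATE parties
       SET battles_fought = battles_fought + 1,
           battles_won    = battles_won + CASE WHEN :won THEN 1 ELSE 0 END
     WHERE id = :party_id;

Then the won percentage becomes a trivial per-row expression instead of an aggregate over all of battles.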