The key is really the rows you pull out, not the rows you have in the table. Of course, if you join really big tables without indexes on the join keys you'll end up with really huge merge sorts. But if a sort gets too big (and "too big" is a configurable parameter in Postgres, the sort-memory setting) it will do a multiway merge sort on disk, so it won't bomb; it will just rattle your disk 'til it falls apart.
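To make that concrete, here's a hedged sketch in OpenACS-style Tcl. The `orders` table, the `user_id` column, and the index name are all hypothetical, and the call assumes the standard `db_dml` API:

```tcl
# Hypothetical schema: you're joining orders to users on user_id.
# Without an index on orders.user_id, Postgres may fall back to a
# sort-based join; once the sort exceeds the configured sort memory,
# it spills to a multiway merge on disk.
db_dml create_orders_user_id_idx {
    create index orders_user_id_idx on orders(user_id)
}
```

With the index in place the planner can use an index scan for the join instead of sorting the whole table.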
And after it falls apart you'll remember to create the proper indexes next time! :)
Rows out ... PG builds the entire result set at once, rather than setting up the plan and then stepping through the query building a row at a time. AOLserver then pulls these out a row at a time and hands them back to the client program.
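That row-at-a-time loop looks something like the following sketch, using AOLserver's standard `ns_db` Tcl API (the query and column names are placeholders):

```tcl
# Grab a handle from the default pool and run the query; by this point
# Postgres has already built the whole result set on its side.
set db  [ns_db gethandle]
set row [ns_db select $db "select first_names, email from users"]

# AOLserver steps through the result one row at a time.
while {[ns_db getrow $db $row]} {
    ns_write "[ns_set get $row first_names] ([ns_set get $row email])\n"
}

ns_db releasehandle $db
```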
However ... db_multirow builds the entire result set in your thread's memory, so it can be made available to the template engine.
But 50-500 rows at a time is no big deal. The db_multirow code assumes you're building pages of reasonable length, not munging out a bazillion rows at a time. Not only would that take a lot of memory, but shoving it back up the socket to the poor user and her modem would take a long time, so it's reasonable to assume you'll be pulling a reasonable amount of stuff out at a time.
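For example, a typical page-sized use of db_multirow looks something like this (the statement name, query, and column names are made up for illustration):

```tcl
# Builds one multirow datasource named "users" in this thread's memory;
# a LIMIT keeps the result to page-sized, which is what db_multirow expects.
db_multirow users users_query {
    select first_names, email
    from   users
    order  by first_names
    limit  100
}
```

The ADP template then loops over the `users` datasource with a `<multiple name="users">` tag, which is exactly why the whole set has to sit in the thread's memory before the page is rendered.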