"My final point here is that the behavior I've demonstrated is in fact only possible if a versioning database is used."
Not at all. The straightforward implementation is to generate the entire rowset for the query at cursor OPEN time, then FETCH rows from it. In fact the semantics of CURSOR, OPEN and FETCH demand that the implementation act as though this is true (A Guide to the SQL Standard, Date & Darwen).
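
To make the standard semantics concrete, here's a rough sketch in standard-SQL style; the employees table, its columns, and the host variables are made up for illustration:

    -- standard-style cursor usage; table and column names are hypothetical
    DECLARE emp_cur CURSOR FOR
        SELECT last_name, salary FROM employees;

    OPEN emp_cur;     -- conceptually, the full rowset is fixed right here

    FETCH emp_cur INTO :last_name, :salary;   -- first row
    FETCH emp_cur INTO :last_name, :salary;   -- second row, and so on

    CLOSE emp_cur;

Whether or not the server really builds the whole rowset at OPEN time, it has to behave as if it did.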
So Oracle's implementation - "lazy evaluation" of CURSORs, which delays work until FETCH time - is simply an optimization. The advantage of Oracle's approach is that you don't pay the cost of computing the full rowset associated with the CURSOR query unless you actually use all the rows. If you just FETCH a few rows then close the cursor, Oracle's approach saves you a lot of machine cycles.
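
Here's a rough PL/SQL sketch of the "fetch a few rows, then bail out" pattern; the orders table, its columns, and the cutoff of five rows are all made up for illustration:

    -- PL/SQL sketch; table and column names are hypothetical
    DECLARE
        CURSOR big_cur IS
            SELECT order_id, amount FROM orders;
        v_id     orders.order_id%TYPE;
        v_amount orders.amount%TYPE;
    BEGIN
        OPEN big_cur;                          -- Oracle does little work here
        FOR i IN 1 .. 5 LOOP
            FETCH big_cur INTO v_id, v_amount; -- rows produced only as they're asked for
            EXIT WHEN big_cur%NOTFOUND;
            dbms_output.put_line(v_id || ' ' || v_amount);
        END LOOP;
        CLOSE big_cur;                         -- the rest of the rowset is never computed
    END;
    /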
And it will save a lot of RAM, too, if the query returns a lot of rows, since the rowset doesn't need to be generated and stored in the db server and then FETCH'd a row at a time into your local variables.
PG implements MVCC but does not implement "lazy evaluation" of CURSOR SELECT statements. This is one area where Oracle is superior to PostgreSQL, though it's an area that's not of much importance to most web/db hackers. In every case I've looked at, cursors in the toolkit are used to traverse the entire rowset anyway. In PostgreSQL, the "right" way to get just a few rows from a query is to use the non-standard LIMIT and OFFSET clauses rather than manipulate a cursor.
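
For instance, to grab one "page" of rows in PostgreSQL you'd just say something like this (the bboard table and its columns are made up for illustration):

    -- PostgreSQL; returns rows 21-30 of the result, e.g. page 3 of a listing
    SELECT subject, posting_time
    FROM bboard
    ORDER BY posting_time DESC
    LIMIT 10 OFFSET 20;

No cursor needed, and the server only hands back the ten rows you asked for.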