Forum OpenACS Development: What is the best way to migrate data between OpenACS instances?

I am splitting an OpenACS website into two different websites. I would like to move a portion of the users, and some package data, to the new OpenACS instance. Both databases live in the same PostgreSQL installation on the same machine, and the OpenACS versions are nearly the same (5.1.2 and 5.1.5). The new database has been created with its acs_object_id sequence starting at 1,000,000 to leave some headroom (the maximum object_id in the source database is 24,610). I see a few ways to do it and would appreciate guidance on which is best.
  1. Import the data directly via pg_dump. This would presumably entail manually deleting all of the unwanted data from the dump file, which in turn would mean mapping out the dependencies among the involved tables (acs_objects, users, etc.) and possibly doing a fair number of manual lookups to determine which records to strip from the dump.
  2. Import the data into some temporary space in the new database and then use SQL to copy selected data over. This would require either renaming all of the imported tables to avoid conflicts, or restoring them into a separate PostgreSQL schema (CREATE SCHEMA), which provides exactly that kind of namespace: distinct sets of tables in the same database that can be queried together with plain SQL, which is not possible across different databases.
  3. Write migration scripts in OpenACS Tcl, using the -dbn switch to move between databases (see the sketch after this list). This would not require mucking with the data directly, but it would mean writing a lot of tedious code that walks the various relationships and copies the data record by record, field by field, in foreign-key dependency order (parents before children). One plus is that a reusable "user copier" would come out of it which others could use, though LDAP is probably a better route for that sort of thing.
  4. Write migration scripts in OpenACS Tcl at the API level. I don't think this is possible, because I don't see how to direct API calls at a particular database (beyond what the db_* procs offer).
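To make option #3 concrete, here is a minimal sketch of what the -dbn approach might look like. It assumes the target database has been added as a second pool in config.tcl (the database_names parameter) under the name "target", and that db_nextval honors -dbn the way the other db_* calls do. The column list is abbreviated; the real users data model (acs_objects, parties, persons, users) has more columns and constraints that would have to be copied in the same fashion.

    foreach user_id $user_ids_to_move {

        # Read the row from the default (source) database.
        db_1row get_object {
            select object_type, creation_date, creation_ip, last_modified
            from acs_objects
            where object_id = :user_id
        }

        # Allocate a new object_id from the target's own sequence
        # (which starts at 1,000,000) and remember the old -> new mapping.
        set new_id [db_nextval -dbn target acs_object_id_seq]
        set id_map($user_id) $new_id

        # Write the row into the target database.
        db_dml -dbn target insert_object {
            insert into acs_objects
                (object_id, object_type, creation_date, creation_ip, last_modified)
            values
                (:new_id, :object_type, :creation_date, :creation_ip, :last_modified)
        }

        # ... repeat for parties, persons, users, group memberships, and any
        # package tables that reference these users, remapping ids via id_map.
    }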
Having written this out, option #1 looks best to me, assuming the different bits of data are already well segregated in the dump file. That's probably true for the package data but not for the user data. Option #3 would be tidier because it would be reproducible, could be run against the latest data, and so on, but it would be more work unless the dump turns out to be really ugly. Thoughts?
Another possibility would be to completely clone the site, and then add in admin UI features that allowed you to prune out the part of the site you didn't want. Ideally, we should allow that degree of flexibility anyway.

Packages should unmount cleanly.
You should be able to mark huge sets of users as deleted (a rough sketch of that follows below).
Etc.
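For the "mark huge sets of users as deleted" part, something along these lines might already get most of the way there, assuming acs_user::delete behaves as it does in 5.1 (without the -permanent flag it only sets member_state to "deleted" rather than removing any rows). The selection query is purely illustrative; a real admin UI would let you choose the set interactively.

    # Hypothetical selection: every user who is not in the group we want
    # to keep. Adjust the query to whatever defines the part of the site
    # you didn't want.
    set doomed_users [db_list get_doomed_users {
        select u.user_id
        from users u
        where not exists (select 1
                          from group_member_map m
                          where m.member_id = u.user_id
                            and m.group_id = :group_id_to_keep)
    }]

    foreach user_id $doomed_users {
        acs_user::delete -user_id $user_id
    }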

I know this probably isn't practical, but I thought I'd throw it out there as a possibility.