Forum OpenACS Q&A: Importing Data: ctl files

Posted by Angel Hernandez on
I know this is a simple question, but I can't seem to find the answer in any of the PG documentation... How do you populate a table with a .ctl file?

Ok, we've already got Oracle 8: The Complete Reference by Koch and Loney. It's now time for PostgreSQL 7: The Complete Reference by Adida and Baccus 😊

Posted by Don Baccus on
Take a look at the load-geo-tables script in www/install.  It just does a "psql -f" on the .ctl file.  If you've loaded the data model from www/doc/sql before loading the geo tables, you'll have to drop the tables involved before running the script.
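In concrete terms, it amounts to something like this (the table and file names here are illustrative; the actual .ctl files and table names are the ones used in www/install):

```shell
# If you already loaded the full data model, drop the geo tables first
psql yourdb -c "DROP TABLE states;"

# A .ctl file is just a file of SQL commands, so feed it to psql
psql -f states.ctl yourdb
```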

Actually, Postgres documentation's not bad.  It's just a bit sparse and not always 100% up-to-date.  You can read about the COPY command, used in the load-geo-tables script, in the Postgres user's guide.
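As a minimal sketch of what COPY looks like (table layout and file path invented for illustration, not taken from the toolkit):

```sql
CREATE TABLE states (
    code  char(2),
    name  varchar(100)
);

-- Bulk-load tab-delimited rows from a file readable by the backend
COPY states FROM '/tmp/states.dat';
```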

Posted by Ben Adida on
The online documentation is decent indeed, but I agree that more would be useful. It'll have to be an online book first, though :)
Posted by Cezar Totth on
Hi,

Postgres has a pg_dump utility which can be used to dump an entire database (both structure and content) as a file containing SQL commands.

If the ACS/pg team already has an initialised Postgres database, with all the structure and config data in place, it would be simple for them to run pg_dump and then include the dumped file in the ACS/pg distribution.

To install it, we'd then only need to run that SQL script against our Postgres server to initialise the ACS/pg database.
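Something along these lines (the database name is hypothetical):

```shell
# On the ACS/pg team's side: dump schema and data as SQL commands
pg_dump acspg > acspg.sql

# On the installer's side: create a fresh database and replay the dump
createdb acspg
psql -f acspg.sql acspg
```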

Regards,
Cezar

Posted by Don Baccus on
Well, as distributed you only need to run two script files, and these could be merged into one.  I'm not sure why Ars Digita hasn't done so; probably because the geospatial stuff was added for a single client and, as of 3.1 at least, still wasn't cleanly integrated into the toolkit.

Now that we have installation instructions, this question's obsolete as the instructions tell you how to run the two install scripts.

The problem with pg_dump output is that it's rather unreadable: it's spewed forth in a legal but obtuse format that includes a lot of redundant information (emitted to minimize the amount of bookkeeping work done by pg_dump, it's easier to dump everything that other tables might need to reference rather than just the stuff actually used).

So we'd need to distribute the "clean", human-formatted and commented datamodel anyway.

For these reasons, I don't see any benefit to distributing a dump rather than the two initialization scripts - now that we have documentation!