Oh! Thanks Janine!!!
I wrote a small Python program to do the same, since I thought I might contribute something as well :)
Here is the code:
#!/usr/bin/python
#
# Author: Bruno Mattarollo bruno.mattarollo@diala.greenpeace.org
# Creation Date: 05th Feb 2003
# License: Free, do what you want with this!
# DISCLAIMER! This might corrupt your dump file.
# I AM NOT RESPONSIBLE FOR THE USE OF THIS! Don't blame me for any
# problem that might occur. DON'T USE THIS IN PRODUCTION!

import re

DUMP_FILE = "arsdigita-export-file-20021218-2.dmp"
OUTPUT_FILE = "dump-cleaned.dmp"
FROM_TABLESPACE = "ARSDIGITA"
TO_TABLESPACE = "DEVELOPMENT"

# Maybe reading bigger chunks of the file would be better!
# My file is 3.3GB!!!! :((((((
CHUNK_SIZE = 1024 * 1024
# Hold back the last few bytes of each chunk so that a match split
# across a chunk boundary is not missed.
OVERLAP = 64

compiled_re = re.compile(r'(TABLESPACE)(\s+)"' + FROM_TABLESPACE + r'"')
REPLACEMENT = r'\1\2"' + TO_TABLESPACE + r'"'

def main():
    print "About to start this miserable task ... :("
    fh = open(DUMP_FILE, 'rb')
    oh = open(OUTPUT_FILE, 'wb')
    tail = ""
    chunk = fh.read(CHUNK_SIZE)
    while chunk:
        buf = tail + chunk
        if compiled_re.search(buf):
            # We have a match!
            print "We have a match!"
            # Replace the FROM_TABLESPACE with the TO_TABLESPACE
            buf = compiled_re.sub(REPLACEMENT, buf)
        # Write everything except the last OVERLAP bytes; those are
        # rescanned together with the next chunk.
        oh.write(buf[:-OVERLAP])
        tail = buf[-OVERLAP:]
        chunk = fh.read(CHUNK_SIZE)
    oh.write(tail)
    # We are done
    fh.close()
    oh.close()
    print "Ouch! Finally!"

if __name__ == '__main__':
    main()
I coded this in 4 minutes and ran it on our export (as the code says, it's 3.3GB), and I'm importing the result right now without problems so far... 😊 ... crossing my fingers...
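For anyone who wants to sanity-check the substitution before pointing it at a real dump, here's a tiny standalone sketch of the same regex replacement, run on a made-up DDL line instead of an export file (the sample statement is invented for illustration):

```python
import re

FROM_TABLESPACE = "ARSDIGITA"
TO_TABLESPACE = "DEVELOPMENT"

# Same idea as in the script: capture the keyword and the whitespace,
# then swap the quoted tablespace name.
pattern = re.compile(r'(TABLESPACE)(\s+)"' + FROM_TABLESPACE + r'"')

# Hypothetical sample line, just to see the replacement in action.
sample = 'CREATE TABLE foo (id NUMBER) TABLESPACE "ARSDIGITA"'
result = pattern.sub(r'\1\2"' + TO_TABLESPACE + r'"', sample)
print(result)  # CREATE TABLE foo (id NUMBER) TABLESPACE "DEVELOPMENT"
```

Testing on a throwaway string like this is cheap insurance before letting the script chew through a multi-gigabyte file.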