Postgres 9 dump database to file

The idea behind this dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose.

The basic usage of this command is shown in the example below. pg_dump writes its result to the standard output; we will see below how this can be useful. Because pg_dump is a regular PostgreSQL client application, you can perform this backup procedure from any remote host that has access to the database. Keep in mind, however, that pg_dump does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in practice you almost always have to run it as a database superuser. To specify which database server pg_dump should contact, use the -h host and -p port command-line options. The default host is the local host or whatever the PGHOST environment variable specifies; similarly, the default port is indicated by the PGPORT environment variable or, failing that, by the compiled-in default. Conveniently, the server will normally have the same compiled-in default.
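For example, assuming a database named mydb (the database and file names here are illustrative), a plain-text dump can be written to a file like this:

    pg_dump mydb > mydb.sql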

If your database schema relies on OIDs (for instance, as foreign keys), you must instruct pg_dump to dump the OIDs as well; to do this, use the -o command-line option. The general command form to restore a dump is shown in the example below. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname). psql supports options similar to pg_dump's for specifying the database server to connect to and the user name to use; see the psql reference page for more information. Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist.
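A minimal sketch of the restore sequence, assuming the dump above was written to mydb.sql and is being loaded into a freshly created database (names are illustrative):

    createdb -T template0 mydb
    psql mydb < mydb.sql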

By default, the psql script will continue to execute after an SQL error is encountered. Sometimes this is what you want, but usually it is not; alternatively, you can run psql with the ON_ERROR_STOP variable set so that it aborts at the first error. Either way, you will only have a partially restored database.

PostgreSQL is frequently used to store and manipulate information related to websites and applications. As with any kind of valuable data, it is important to implement a backup scheme to protect against data loss. This guide will cover some practical ways that you can back up your PostgreSQL data.

We will be using an Ubuntu system, but most modern distributions and recent versions of PostgreSQL will operate in a similar way. The pg_dump command must be run by a user with privileges to read all of the database information, so it is run as the superuser most of the time.

For a real-world example, we can log in as the "postgres" user and execute the command on the default database, also called "postgres" (see the example below). This command is actually a PostgreSQL client program, so it can be run from a remote system as long as that system has access to the database.
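A sketch of that local backup, run as the postgres system user (the output file name is illustrative):

    sudo -u postgres pg_dump postgres > postgres_db.bak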

If you wish to back up a remote system, you can pass the "-h" flag to specify the remote host and the "-p" flag to give the remote port (see the example below). This means that you must ensure that your login credentials are valid for the systems you are trying to back up. To restore such a backup, the dump file is simply redirected back into psql. Note: this redirection operation does not create the database in question.
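A sketch of a remote backup and of the restore-by-redirection described above, with host, database, and file names as illustrative placeholders:

    pg_dump -h remote_host -p 5432 mydb > mydb.bak
    psql mydb < mydb.bak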

This must be done in a separate step prior to running the restore command. Another step that must be performed in order to restore correctly is to recreate any users who own or have grant permissions on objects within the database. By default, psql will attempt to continue restoring a database even when it encounters an error along the way. In many cases this is undesirable, and you can avoid it by setting the ON_ERROR_STOP variable so that the restore stops at the first error, as shown below.
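A sketch of a restore that stops at the first error, again with illustrative names:

    psql --set ON_ERROR_STOP=on mydb < mydb.bak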

Turning to pg_dump's options for controlling what is dumped: the -t switch dumps only the tables (or views or sequences) matching the given table pattern. The -n and -N switches, which select or exclude schemas, have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.

When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected tables might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database. Note: the behavior of the -t switch is not entirely upward compatible with pre-8.2 PostgreSQL versions. Formerly, writing -t tab would dump all tables named tab, but now it just dumps whichever one is visible in your default search path. Also, you must now write something like -t sch.tab to select a table in a particular schema, rather than the old locution of -n sch -t tab. The complementary -T switch does the opposite: do not dump any tables matching the given table pattern. The pattern is interpreted according to the same rules as for -t.
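A sketch of selective dumps using -t and -T; the schema, table, database, and file names are illustrative, and the patterns are quoted to keep the shell from expanding any wildcards:

    pg_dump -t 'public.orders' mydb > orders.sql
    pg_dump -t 'public.*' -T 'public.audit_log' mydb > public_tables.sql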

When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump. The -v (--verbose) switch specifies verbose mode. The -Z (--compress) switch specifies the compression level to use, from 0 to 9; zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level.

For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress.
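A sketch of a compressed plain-text dump and its restore (database and file names are illustrative):

    pg_dump -Z 6 mydb > mydb.sql.gz
    gunzip -c mydb.sql.gz | psql mydb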

The tar archive format currently does not support compression at all. The --binary-upgrade option is for use by in-place upgrade utilities; its use for other purposes is not recommended or supported, and the behavior of the option may change in future releases without notice. The --inserts option dumps data as INSERT commands rather than COPY. This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
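A sketch of an INSERT-style dump intended for loading into a non-PostgreSQL database (names are illustrative):

    pg_dump --inserts mydb > mydb_inserts.sql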

The --disable-dollar-quoting option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax. The --disable-triggers option is only relevant when creating a data-only dump; it emits commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload. Presently, the commands emitted for --disable-triggers must be done as superuser.

So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser. Returning to the INSERT-style dumps described above: note that the restore might fail altogether if you have rearranged column order; the --column-inserts option, which writes INSERT commands with explicit column names, is safe against column order changes, though even slower. The --lock-wait-timeout option tells pg_dump not to wait forever to acquire shared table locks at the beginning of the dump, but instead to fail if it is unable to lock a table within the specified timeout.

Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server. The --no-tablespaces option tells pg_dump not to output commands to select tablespaces.

With this option, all objects will be created in whichever tablespace is the default during restore. The --no-unlogged-table-data option tells pg_dump not to dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data. Data in unlogged tables is always excluded when dumping from a standby server. The --quote-all-identifiers option forces quoting of all identifiers. By default, pg_dump quotes only identifiers that are reserved words in its own major version, which sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words.

Using --quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script. Finally, the --serializable-deferrable option uses a serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; see Chapter 13 of the PostgreSQL documentation for more information about transaction isolation and concurrency control. This option is not beneficial for a dump which is intended only for disaster recovery.

It could be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated.
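To tie a few of these switches together, here are two sketches; all names are illustrative and the exact option mix depends on your needs. The first is a data-only dump that disables triggers during reload; the second dumps a live database to build a reporting copy:

    pg_dump -a --disable-triggers mydb > mydb_data.sql
    pg_dump --serializable-deferrable --quote-all-identifiers mydb > mydb_report.sql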


