diff --git a/docs/download.html b/docs/download.html index 8938be4..914b23b 100644 --- a/docs/download.html +++ b/docs/download.html @@ -96,7 +96,7 @@
diff --git a/docs/howto/pgloader.1.html b/docs/howto/pgloader.1.html index c75e3c3..0ce099a 100644 --- a/docs/howto/pgloader.1.html +++ b/docs/howto/pgloader.1.html @@ -121,7 +121,7 @@ pgloader --version
Use the postgresql:///pgloader?districts_longlat
Now the OS will take care of the streaming and buffering between the network and the commands, and pgloader will take care of streaming the data down to PostgreSQL.
The following command will open the SQLite database, discover its table definitions including indexes and foreign keys, migrate those definitions while casting the data type specifications to their PostgreSQL equivalents, and then migrate the data over:
createdb newdb
pgloader ./test/sqlite/sqlite.db postgresql:///newdb Just create a database in which to host the MySQL data and definitions and have pgloader do the migration for you in a single command line:
createdb pagila
pgloader mysql://user@localhost/sakila postgresql:///pagila It's possible for pgloader to download a file from HTTP, unarchive it, and only then open it to discover the schema and then load the data:
createdb foo
-pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo Here it's not possible for pgloader to guess the kind of data source it's being given, so it's necessary to use the --type command line switch.
To load data to PostgreSQL, pgloader uses the COPY streaming protocol. While this is the fastest way to load data, COPY has an important drawback: as soon as PostgreSQL emits an error with any bit of data sent to it, whatever the problem is, the whole data set is rejected by PostgreSQL.
To work around that, pgloader cuts the data into batches of 25000 rows each, so that when a problem occurs it only impacts that many rows of data. Each batch is kept in memory while the COPY streaming happens, in order to be able to handle errors should some happen.
When PostgreSQL rejects the whole batch, pgloader logs the error message then isolates the bad row(s) from the accepted ones by retrying the batched rows in smaller batches. To do that, pgloader parses the CONTEXT error message from the failed COPY, as the message contains the line number where the error was found in the batch, as in the following example:
CONTEXT: COPY errors, line 3, column b: "2006-13-11" Using that information, pgloader will reload all rows in the batch before the erroneous one, log the erroneous one as rejected, then try loading the remainder of the batch in a single attempt, which may or may not contain other erroneous data.
At the end of a load containing rejected rows, you will find two files in the root-dir location, under a directory named the same as the target database of your setup. The files are named after the target table, and their extensions are .dat for the rejected data and .log for the file containing the full PostgreSQL client-side logs about the rejected data.
The .dat file is formatted in the PostgreSQL text COPY format as documented in http://www.postgresql.org/docs/9.2/static/sql-copy.html#AEN66609.
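For instance, assuming a root-dir of /tmp/pgloader and a load targeting a database named newdb with a table named districts (names here are only illustrative), the rejected rows and their logs would be found in:
/tmp/pgloader/newdb/districts.dat
/tmp/pgloader/newdb/districts.log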
pgloader has been developed with performance in mind, to be able to cope with ever-growing needs in loading large amounts of data into PostgreSQL.
The basic architecture it uses is the old Unix pipe model, where a thread is responsible for loading the data (reading a CSV file, querying MySQL, etc.) and fills pre-processed data into a queue. Another thread feeds from the queue, applies some more transformations to the input data, and streams the end result to PostgreSQL using the COPY protocol.
When given a file that the PostgreSQL COPY command knows how to parse, and if the file contains no erroneous data, then pgloader will never be as fast as just using the PostgreSQL COPY command.
Note that while the COPY command is restricted to read either from its standard input or from a local file on the server's file system, the command line tool psql implements a \copy command that knows how to stream a file local to the client over the network and into the PostgreSQL server, using the same protocol as pgloader uses.
pgloader uses several concurrent tasks to process the data being loaded:
a reader task reads the data in,
at least one transformer task is responsible for applying the needed transformations to given data so that it fits PostgreSQL expectations; those transformations include CSV-like user-defined projections, database casting (default and user given), and PostgreSQL-specific formatting of the data for the COPY protocol and in unicode,
at least one writer task is responsible for sending the data down to PostgreSQL using the COPY protocol.
The idea behind having the transformer task do the formatting is so that in the event of bad rows being rejected by PostgreSQL the retry process doesn't have to do that step again.
At the moment, the number of transformer and writer tasks is forced into being the same, which allows for a very simple queueing model to be implemented: the reader task fills in one queue per transformer task, which then pops from that queue and pushes to a writer queue per COPY task.
The parameter workers allows controlling how many worker threads are allowed to be active at any time (that's the parallelism level); and the parameter concurrency allows controlling how many tasks are started to handle the data (they may not all run at the same time, depending on the workers setting).
We allow up to workers simultaneous workers to be active at the same time in the context of a single table. A single unit of work consists of several kinds of workers:
The N here is set to the concurrency parameter: with a CONCURRENCY of 2, we start (+ 1 2 2) = 5 concurrent tasks, with a concurrency of 4 we start (+ 1 4 4) = 9 concurrent tasks, of which only workers may be active simultaneously.
So with workers = 4, concurrency = 2, the parallel scheduler will maintain active only 4 of the 5 tasks that are started.
With workers = 8, concurrency = 1, we are then able to work on several units of work at the same time. In the database sources, a unit of work is a table, so those settings allow pgloader to be active on as many as 3 tables at any time in the load process.
The defaults are workers = 4, concurrency = 1 when loading from a database source, and workers = 8, concurrency = 2 when loading from something else (currently, a file). Those defaults are arbitrary and waiting for feedback from users, so please consider providing feedback if you play with the settings.
As the CREATE INDEX threads started by pgloader are only waiting until PostgreSQL is done with the real work, those threads are NOT counted into the concurrency levels as detailed here.
By default, pgloader starts as many CREATE INDEX threads as the maximum number of indexes per table found in your source schema. It is possible to set the max parallel create index WITH option to another number in case there are just too many of them to create.
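As a minimal sketch of those parallelism settings combined in a load command's WITH clause (the values are illustrative, not recommendations):
WITH workers = 8, concurrency = 2,
     max parallel create index = 4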
pgloader supports the following input formats:
csv, which also includes tsv and other common variants where you can change the separator, the quoting rules and how to escape the quotes themselves;
fixed columns file, where pgloader is flexible enough to accommodate source files with missing columns (ragged fixed-length column files do exist);
PostgreSQL COPY formatted files, following the COPY TEXT documentation of PostgreSQL, such as the reject files prepared by pgloader;
dBase files known as db3 or dbf files;
ixf formatted files, ixf being a binary storage format from IBM;
sqlite databases with fully automated discovery of the schema and advanced cast rules;
mysql databases with fully automated discovery of the schema and advanced cast rules;
MS SQL databases with fully automated discovery of the schema and advanced cast rules.
pgloader implements a Domain Specific Language allowing you to set up complex data loading scripts handling computed columns and on-the-fly sanitization of the input data. For more complex data loading scenarios, you will be required to learn that DSL's syntax. It's meant to look familiar to DBAs by being inspired by SQL where it makes sense, which is not that much after all.
The pgloader commands follow the same global grammar rules. Each of them might support only a subset of the general options and provide specific options.
LOAD <source-type>
+pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo Here it's not possible for pgloader to guess the kind of data source it's being given, so it's necessary to use the --type command line switch.
To load data to PostgreSQL, pgloader uses the COPY streaming protocol. While this is the fastest way to load data, COPY has an important drawback: as soon as PostgreSQL emits an error with any bit of data sent to it, whatever the problem is, the whole data set is rejected by PostgreSQL.
To work around that, pgloader cuts the data into batches of 25000 rows each, so that when a problem occurs it only impacts that many rows of data. Each batch is kept in memory while the COPY streaming happens, in order to be able to handle errors should some happen.
When PostgreSQL rejects the whole batch, pgloader logs the error message then isolates the bad row(s) from the accepted ones by retrying the batched rows in smaller batches. To do that, pgloader parses the CONTEXT error message from the failed COPY, as the message contains the line number where the error was found in the batch, as in the following example:
CONTEXT: COPY errors, line 3, column b: "2006-13-11" Using that information, pgloader will reload all rows in the batch before the erroneous one, log the erroneous one as rejected, then try loading the remainder of the batch in a single attempt, which may or may not contain other erroneous data.
At the end of a load containing rejected rows, you will find two files in the root-dir location, under a directory named the same as the target database of your setup. The files are named after the target table, and their extensions are .dat for the rejected data and .log for the file containing the full PostgreSQL client-side logs about the rejected data.
The .dat file is formatted in the PostgreSQL text COPY format as documented in http://www.postgresql.org/docs/9.2/static/sql-copy.html#AEN66609.
pgloader has been developed with performance in mind, to be able to cope with ever-growing needs in loading large amounts of data into PostgreSQL.
The basic architecture it uses is the old Unix pipe model, where a thread is responsible for loading the data (reading a CSV file, querying MySQL, etc.) and fills pre-processed data into a queue. Another thread feeds from the queue, applies some more transformations to the input data, and streams the end result to PostgreSQL using the COPY protocol.
When given a file that the PostgreSQL COPY command knows how to parse, and if the file contains no erroneous data, then pgloader will never be as fast as just using the PostgreSQL COPY command.
Note that while the COPY command is restricted to read either from its standard input or from a local file on the server's file system, the command line tool psql implements a \copy command that knows how to stream a file local to the client over the network and into the PostgreSQL server, using the same protocol as pgloader uses.
pgloader uses several concurrent tasks to process the data being loaded:
a reader task reads the data in and pushes it to a queue,
at least one writer task feeds from the queue and formats the raw data into the PostgreSQL COPY format in batches (so that it's possible to then retry a failed batch without reading the data from the source again), and then sends the data to PostgreSQL using the COPY protocol.
The parameter workers allows controlling how many worker threads are allowed to be active at any time (that's the parallelism level); and the parameter concurrency allows controlling how many tasks are started to handle the data (they may not all run at the same time, depending on the workers setting).
We allow up to workers simultaneous workers to be active at the same time in the context of a single table. A single unit of work consists of several kinds of workers:
The N here is set to the concurrency parameter: with a CONCURRENCY of 2, we start (+ 1 2) = 3 concurrent tasks, with a concurrency of 4 we start (+ 1 4) = 5 concurrent tasks, of which only workers may be active simultaneously.
The defaults are workers = 4, concurrency = 1 when loading from a database source, and workers = 8, concurrency = 2 when loading from something else (currently, a file). Those defaults are arbitrary and waiting for feedback from users, so please consider providing feedback if you play with the settings.
As the CREATE INDEX threads started by pgloader are only waiting until PostgreSQL is done with the real work, those threads are NOT counted into the concurrency levels as detailed here.
By default, pgloader starts as many CREATE INDEX threads as the maximum number of indexes per table found in your source schema. It is possible to set the max parallel create index WITH option to another number in case there are just too many of them to create.
pgloader supports the following input formats:
csv, which also includes tsv and other common variants where you can change the separator, the quoting rules and how to escape the quotes themselves;
fixed columns file, where pgloader is flexible enough to accommodate source files with missing columns (ragged fixed-length column files do exist);
PostgreSQL COPY formatted files, following the COPY TEXT documentation of PostgreSQL, such as the reject files prepared by pgloader;
dBase files known as db3 or dbf files;
ixf formatted files, ixf being a binary storage format from IBM;
sqlite databases with fully automated discovery of the schema and advanced cast rules;
mysql databases with fully automated discovery of the schema and advanced cast rules;
MS SQL databases with fully automated discovery of the schema and advanced cast rules.
pgloader implements a Domain Specific Language allowing you to set up complex data loading scripts handling computed columns and on-the-fly sanitization of the input data. For more complex data loading scenarios, you will be required to learn that DSL's syntax. It's meant to look familiar to DBAs by being inspired by SQL where it makes sense, which is not that much after all.
The pgloader commands follow the same global grammar rules. Each of them might support only a subset of the general options and provide specific options.
LOAD <source-type>
FROM <source-url> [ HAVING FIELDS <source-level-options> ]
INTO <postgresql-url> [ TARGET COLUMNS <columns-and-options> ]
@@ -131,7 +131,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
[ BEFORE LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]
[ AFTER LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]
-; The main clauses are the LOAD, FROM, INTO and WITH clauses that each command implements. Some commands then implement the SET command, or some specific clauses such as the CAST clause.
Some clauses are common to all commands:
The FROM clause specifies where to read the data from, and each command introduces its own variant of sources. For instance, the CSV source supports inline, stdin, a filename, a quoted filename, and a FILENAME MATCHING clause (see above); whereas the MySQL source only supports a MySQL database URI specification.
In all cases, the FROM clause is able to read its value from an environment variable when using the form GETENV 'varname'.
The PostgreSQL connection URI must contain the name of the target table into which to load the data. That table must have already been created in PostgreSQL, and the name might be schema qualified.
The INTO target database connection URI can be parsed from the value of an environment variable when using the form GETENV 'varname'.
The INTO option also supports an optional comma-separated list of target columns, which are either the name of an input field or a whitespace-separated list of the target column name, its PostgreSQL data type and a USING expression.
The USING expression can be any valid Common Lisp form and will be read with the current package set to pgloader.transforms, so that you can use functions defined in that package, such as functions loaded dynamically with the --load command line parameter.
Each USING expression is compiled at runtime to native code.
This feature allows pgloader to load any number of fields in a CSV file into a possibly different number of columns in the database, using custom code for that projection.
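As a minimal sketch of such a projection (file, table and field names are made up; the USING form is plain Common Lisp):
LOAD CSV
     FROM 'people.csv' HAVING FIELDS (id, first_name, last_name)
     INTO postgresql:///mydb?people
          TARGET COLUMNS ( id, full_name text using (concatenate 'string first_name " " last_name) );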
Set of options to apply to the command, using a global syntax of either:
See each specific command for details.
All data sources specific commands support the following options:
See the section BATCH BEHAVIOUR OPTIONS for more details.
In addition, the following settings are available:
See section A NOTE ABOUT PARALLELISM for more details.
This clause allows specifying session parameters to be set for all the sessions opened by pgloader. It expects a comma-separated list of entries, each made of a parameter name, the equal sign, then the single-quoted value.
The names and values of the parameters are not validated by pgloader, they are given as-is to PostgreSQL.
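A minimal sketch of the clause (the parameter values are illustrative):
SET work_mem to '12MB',
    maintenance_work_mem to '128MB',
    search_path to 'sakila'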
You can run SQL queries against the database before loading the data from the CSV file. The most common SQL queries are CREATE TABLE IF NOT EXISTS statements, so that the data can be loaded.
Each command must be dollar-quoted: it must begin and end with a double dollar sign, $$. Dollar-quoted queries are then comma separated. No extra punctuation is expected after the last SQL query.
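A minimal sketch (the table is hypothetical):
BEFORE LOAD DO
     $$ create table if not exists districts (id serial, name text); $$,
     $$ truncate table districts; $$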
Same behaviour as in the BEFORE LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).
Same format as BEFORE LOAD DO, the dollar-quoted queries found in that section are executed once the load is done. That's the right time to create indexes and constraints, or re-enable triggers.
Same behaviour as in the AFTER LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).
The <postgresql-url> parameter is expected to be given as a Connection URI as documented in the PostgreSQL documentation at http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING.
postgresql://[user[:password]@][netloc][:port][/dbname][?option=value&...] Where:
Can contain any character, including colon (:) which must then be doubled (::) and at-sign (@) which must then be doubled (@@).
When omitted, the user name defaults to the value of the PGUSER environment variable, and if it is unset, the value of the USER environment variable.
Can contain any character, including the at sign (@) which must then be doubled (@@). To leave the password empty, when the user name ends with an at sign, you then have to use the syntax user:@.
When omitted, the password defaults to the value of the PGPASSWORD environment variable if it is set, otherwise the password is left unset.
Can be either a hostname in dotted notation, an IPv4 address, or a Unix domain socket path. Empty is the default network location; on a system providing unix domain sockets that method is preferred, otherwise the netloc defaults to localhost.
It's possible to force the unix domain socket path by using the syntax unix:/path/to/where/the/socket/file/is, so to force a non default socket path and a non default port, you would have:
postgresql://unix:/tmp:54321/dbname The netloc defaults to the value of the PGHOST environment variable, and if it is unset, to either the default unix socket path when running on a Unix system, and localhost otherwise.
Should be a proper identifier (letter followed by a mix of letters, digits and the punctuation signs comma (,), dash (-) and underscore (_)).
When omitted, the dbname defaults to the value of the environment variable PGDATABASE, and if that is unset, to the user value as determined above.
The optional parameters must be supplied in the form name=value, and you may use several parameters by separating them with an ampersand (&) character.
Only some options are supported here: tablename (which might be qualified with a schema name), sslmode, host, port, dbname, user and password.
The sslmode parameter values can be one of disable, allow, prefer or require.
For backward compatibility reasons, it's possible to specify the tablename option directly, without spelling out the tablename= parts.
The options override the main URI components when both are given, and using the percent-encoded option parameters allows using passwords starting with a colon and bypassing other URI component parsing limitations.
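For instance, here's a sketch of a connection URI using a schema-qualified tablename option (the names are illustrative):
postgresql://user@localhost:5432/foo?tablename=geolite.blocks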
Several clauses listed below accept regular expressions, with the following input rules:
A regular expression begins with a tilde sign (~),
is then followed by an opening sign,
then any character is allowed and considered part of the regular expression, except for the closing sign,
then a closing sign is expected.
The opening and closing signs are allowed in pairs; here's the complete list of allowed delimiters:
~//
+; The main clauses are the LOAD, FROM, INTO and WITH clauses that each command implements. Some commands then implement the SET command, or some specific clauses such as the CAST clause.
Some clauses are common to all commands:
The FROM clause specifies where to read the data from, and each command introduces its own variant of sources. For instance, the CSV source supports inline, stdin, a filename, a quoted filename, and a FILENAME MATCHING clause (see above); whereas the MySQL source only supports a MySQL database URI specification.
In all cases, the FROM clause is able to read its value from an environment variable when using the form GETENV 'varname'.
The PostgreSQL connection URI must contain the name of the target table into which to load the data. That table must have already been created in PostgreSQL, and the name might be schema qualified.
The INTO target database connection URI can be parsed from the value of an environment variable when using the form GETENV 'varname'.
The INTO option also supports an optional comma-separated list of target columns, which are either the name of an input field or a whitespace-separated list of the target column name, its PostgreSQL data type and a USING expression.
The USING expression can be any valid Common Lisp form and will be read with the current package set to pgloader.transforms, so that you can use functions defined in that package, such as functions loaded dynamically with the --load command line parameter.
Each USING expression is compiled at runtime to native code.
This feature allows pgloader to load any number of fields in a CSV file into a possibly different number of columns in the database, using custom code for that projection.
Set of options to apply to the command, using a global syntax of either:
See each specific command for details.
All data sources specific commands support the following options:
See the section BATCH BEHAVIOUR OPTIONS for more details.
In addition, the following settings are available:
See section A NOTE ABOUT PARALLELISM for more details.
This clause allows specifying session parameters to be set for all the sessions opened by pgloader. It expects a comma-separated list of entries, each made of a parameter name, the equal sign, then the single-quoted value.
The names and values of the parameters are not validated by pgloader, they are given as-is to PostgreSQL.
You can run SQL queries against the database before loading the data from the CSV file. The most common SQL queries are CREATE TABLE IF NOT EXISTS statements, so that the data can be loaded.
Each command must be dollar-quoted: it must begin and end with a double dollar sign, $$. Dollar-quoted queries are then comma separated. No extra punctuation is expected after the last SQL query.
Same behaviour as in the BEFORE LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).
Same format as BEFORE LOAD DO, the dollar-quoted queries found in that section are executed once the load is done. That's the right time to create indexes and constraints, or re-enable triggers.
Same behaviour as in the AFTER LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).
The <postgresql-url> parameter is expected to be given as a Connection URI as documented in the PostgreSQL documentation at http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING.
postgresql://[user[:password]@][netloc][:port][/dbname][?option=value&...] Where:
Can contain any character, including colon (:) which must then be doubled (::) and at-sign (@) which must then be doubled (@@).
When omitted, the user name defaults to the value of the PGUSER environment variable, and if it is unset, the value of the USER environment variable.
Can contain any character, including the at sign (@) which must then be doubled (@@). To leave the password empty, when the user name ends with an at sign, you then have to use the syntax user:@.
When omitted, the password defaults to the value of the PGPASSWORD environment variable if it is set, otherwise the password is left unset.
Can be either a hostname in dotted notation, an IPv4 address, or a Unix domain socket path. Empty is the default network location; on a system providing unix domain sockets that method is preferred, otherwise the netloc defaults to localhost.
It's possible to force the unix domain socket path by using the syntax unix:/path/to/where/the/socket/file/is, so to force a non default socket path and a non default port, you would have:
postgresql://unix:/tmp:54321/dbname The netloc defaults to the value of the PGHOST environment variable, and if it is unset, to either the default unix socket path when running on a Unix system, and localhost otherwise.
Should be a proper identifier (letter followed by a mix of letters, digits and the punctuation signs comma (,), dash (-) and underscore (_)).
When omitted, the dbname defaults to the value of the environment variable PGDATABASE, and if that is unset, to the user value as determined above.
The optional parameters must be supplied in the form name=value, and you may use several parameters by separating them with an ampersand (&) character.
Only some options are supported here: tablename (which might be qualified with a schema name), sslmode, host, port, dbname, user and password.
The sslmode parameter values can be one of disable, allow, prefer or require.
For backward compatibility reasons, it's possible to specify the tablename option directly, without spelling out the tablename= parts.
The options override the main URI components when both are given, and using the percent-encoded option parameters allows using passwords starting with a colon and bypassing other URI component parsing limitations.
Several clauses listed below accept regular expressions, with the following input rules:
A regular expression begins with a tilde sign (~),
is then followed by an opening sign,
then any character is allowed and considered part of the regular expression, except for the closing sign,
then a closing sign is expected.
The opening and closing signs are allowed in pairs; here's the complete list of allowed delimiters:
~//
~[]
~{}
~()
@@ -139,7 +139,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
~""
~''
~||
-~## Pick the set of delimiters that don't collide with the regular expression you're trying to input. If your expression is such that none of the solutions allow you to enter it, the places where such expressions are allowed should allow for a list of expressions.
Any command may contain comments, following those input rules:
the -- delimiter begins a comment that ends with the end of the current line,
the delimiters /* and */ respectively start and end a comment, which can be found in the middle of a command or span several lines.
Any place where you could enter a whitespace will accept a comment too.
All pgloader commands have support for a WITH clause that allows for specifying options. Some options are generic and accepted by all commands, such as the batch behaviour options, and some options are specific to a data source kind, such as the CSV skip header option.
The global batch behaviour options are:
Takes a numeric value as argument, used as the maximum number of rows allowed in a batch. The default is 25 000 and can be changed to try having better performance characteristics or to control pgloader memory usage;
Takes a memory unit as argument, such as 20 MB, its default value. Accepted multipliers are kB, MB, GB, TB and PB. The case is important so as not to be confused about bits versus bytes; we're only talking bytes here.
Takes a numeric value as argument, defaults to 10. That's the number of batches that pgloader is allowed to build in memory in each reader thread. See the workers setting for how many reader threads are allowed to run at the same time: each of them is allowed as many as batch concurrency batches.
Other options are specific to each input source; please refer to the specific parts of the documentation for their listing and coverage.
A batch is then closed as soon as either the batch rows or the batch size threshold is crossed, whichever comes first. In cases when a batch has to be closed because of the batch size setting, a debug-level log message is printed with how many rows fit in the oversized batch.
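A minimal sketch combining the three batch behaviour options in a WITH clause (the values are illustrative):
WITH batch rows = 10000,
     batch size = 20 MB,
     batch concurrency = 3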
This command instructs pgloader to load data from a CSV file. Here's an example:
LOAD CSV
+~## Pick the set of delimiters that don't collide with the regular expression you're trying to input. If your expression is such that none of the solutions allow you to enter it, the places where such expressions are allowed should allow for a list of expressions.
Any command may contain comments, following those input rules:
the -- delimiter begins a comment that ends with the end of the current line,
the delimiters /* and */ respectively start and end a comment, which can be found in the middle of a command or span several lines.
Any place where you could enter a whitespace will accept a comment too.
All pgloader commands have support for a WITH clause that allows for specifying options. Some options are generic and accepted by all commands, such as the batch behaviour options, and some options are specific to a data source kind, such as the CSV skip header option.
The global batch behaviour options are:
Takes a numeric value as argument, used as the maximum number of rows allowed in a batch. The default is 25 000 and can be changed to try having better performance characteristics or to control pgloader memory usage;
Takes a memory unit as argument, such as 20 MB, its default value. Accepted multipliers are kB, MB, GB, TB and PB. The case is important so as not to be confused about bits versus bytes; we're only talking bytes here.
Takes a numeric value as argument, defaults to 100000. That's the number of rows that pgloader is allowed to read in memory in each reader thread. See the workers setting for how many reader threads are allowed to run at the same time.
Other options are specific to each input source; please refer to the specific parts of the documentation for their listing and coverage.
A batch is then closed as soon as either the batch rows or the batch size threshold is crossed, whichever comes first. In cases when a batch has to be closed because of the batch size setting, a debug-level log message is printed with how many rows fit in the oversized batch.
This command instructs pgloader to load data from a CSV file. Here's an example:
LOAD CSV
FROM 'GeoLiteCity-Blocks.csv' WITH ENCODING iso-646-us
HAVING FIELDS
(
@@ -285,18 +285,24 @@ MATCHING regexp
fields terminated by ','
FINALLY DO
- $$ create index blocks_ip4r_idx on geolite.blocks using gist(iprange); $$; The archive command accepts the following clauses and options:
Filename or HTTP URI from which to load the data. When given an HTTP URL the linked file will get downloaded locally before processing.
If the file is a zip file, the command line utility unzip is used to expand the archive into files in $TMPDIR, or /tmp if $TMPDIR is unset or set to a non-existing directory.
Then the following commands are used from the top level directory where the archive has been expanded.
A series of commands against the contents of the archive; at the moment only CSV, FIXED and DBF commands are supported.
Note that commands support the clause FROM FILENAME MATCHING, which allows the pgloader command not to depend on the exact names of the archive directories.
The same clause can also be applied to several files using the spelling FROM ALL FILENAMES MATCHING and a regular expression.
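For instance, a sketch of that spelling (the pattern is illustrative):
FROM ALL FILENAMES MATCHING ~/GeoLiteCity-Blocks.*[.]csv/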
The whole matching clause must follow the following rule:
FROM [ ALL FILENAMES | [ FIRST ] FILENAME ] MATCHING
SQL queries to run once the data is loaded, such as CREATE INDEX.
This command instructs pgloader to load data from a database connection. The only supported database source is currently MySQL, and pgloader supports dynamically converting the schema of the source database and building the indexes.
A default set of casting rules is provided and might be overloaded and appended to by the command.
Here's an example:
LOAD DATABASE
+ $$ create index blocks_ip4r_idx on geolite.blocks using gist(iprange); $$; The archive command accepts the following clauses and options:
Filename or HTTP URI from which to load the data. When given an HTTP URL the linked file will get downloaded locally before processing.
If the file is a zip file, the command line utility unzip is used to expand the archive into files in $TMPDIR, or /tmp if $TMPDIR is unset or set to a non-existing directory.
Then the following commands are used from the top level directory where the archive has been expanded.
A series of commands against the contents of the archive; at the moment only CSV, FIXED and DBF commands are supported.
Note that commands support the clause FROM FILENAME MATCHING, which allows the pgloader command not to depend on the exact names of the archive directories.
The same clause can also be applied to several files using the spelling FROM ALL FILENAMES MATCHING and a regular expression.
The whole matching clause must follow the following rule:
FROM [ ALL FILENAMES | [ FIRST ] FILENAME ] MATCHING
SQL queries to run once the data is loaded, such as CREATE INDEX.
This command instructs pgloader to load data from a database connection. The only supported database source is currently MySQL, and pgloader supports dynamically converting the schema of the source database and building the indexes.
A default set of casting rules is provided and might be overloaded and appended to by the command.
Here's an example using as many options as possible, some of them even being defaults. Chances are you don't need that complex a setup; don't copy and paste it, use it only as a reference!
LOAD DATABASE
FROM mysql://root@localhost/sakila
INTO postgresql://localhost:54393/sakila
WITH include drop, create tables, create indexes, reset sequences,
- workers = 8, concurrency = 1
+ workers = 8, concurrency = 1,
+ multiple readers per thread, rows per range = 50000
- SET maintenance_work_mem to '128MB',
+ SET PostgreSQL PARAMETERS
+ maintenance_work_mem to '128MB',
work_mem to '12MB',
- search_path to 'sakila'
+ search_path to 'sakila, public, "$user"'
- CAST type datetime to timestamptz drop default drop not null using zero-dates-to-null,
+ SET MySQL PARAMETERS
+ net_read_timeout = '120',
+ net_write_timeout = '120'
+
+ CAST type bigint when (= precision 20) to bigserial drop typemod,
type date drop not null drop default using zero-dates-to-null,
-- type tinyint to boolean using tinyint-to-boolean,
type year to integer
@@ -321,7 +327,7 @@ MATCHING regexp
$$ create schema if not exists pagila; $$,
$$ create schema if not exists mv; $$,
$$ alter database sakila set search_path to pagila, mv, public; $$;
-The database command accepts the following clauses and options:
Must be a connection URL pointing to a MySQL database.
If the connection URI contains a table name, then only this table is migrated from MySQL to PostgreSQL.
See the SOURCE CONNECTION STRING section above for details on how to write the connection string. The user name defaults to the USER environment variable value. The password can be provided with the environment variable MYSQL_PWD. The host can be provided with the environment variable MYSQL_HOST and otherwise defaults to localhost. The port can be provided with the environment variable MYSQL_TCP_PORT and otherwise defaults to 3306.
When loading from a MySQL database, the following options are supported, and the default WITH clause is: no truncate, create tables, include drop, create indexes, reset sequences, foreign keys, downcase identifiers.
WITH options:
When this option is listed, pgloader drops all the tables in the target PostgreSQL database whose names appear in the MySQL database. This option allows for using the same command several times in a row until you figure out all the options, starting automatically from a clean environment. Please note that CASCADE is used to ensure that tables are dropped even if there are foreign keys pointing to them. This is precisely what include drop is intended to do: drop all target tables and recreate them.
Great care needs to be taken when using include drop, as it will cascade to all objects referencing the target tables, possibly including other tables that are not being loaded from the source DB.
When this option is listed, pgloader will not include any DROP statement when loading the data.
When this option is listed, pgloader issues the TRUNCATE command against each PostgreSQL table just before loading data into it.
When this option is listed, pgloader issues no TRUNCATE command.
When this option is listed, pgloader issues an ALTER TABLE ... DISABLE TRIGGER ALL command against the PostgreSQL target table before copying the data, then the command ALTER TABLE ... ENABLE TRIGGER ALL once the COPY is done.
This option allows loading data into a pre-existing table ignoring the foreign key constraints and user defined triggers and may result in invalid foreign key constraints once the data is loaded. Use with care.
When this option is listed, pgloader creates the tables using the metadata found in the MySQL database, which must contain a list of fields with their data type. A standard data type conversion from MySQL to PostgreSQL is done.
When this option is listed, pgloader skips the creation of tables before loading data; target tables must then already exist.
Also, when using create no tables pgloader fetches the metadata from the current target database and checks type casting, then will remove constraints and indexes prior to loading the data and install them back again once the loading is done.
When this option is listed, pgloader gets the definitions of all the indexes found in the MySQL database and creates the same set of index definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating indexes.
MySQL index names are unique per-table whereas in PostgreSQL index names have to be unique per-schema. The default for pgloader is to change the index name by prefixing it with idx_OID where OID is the internal numeric identifier of the table the index is built against.
In some cases, such as when the DDL is entirely left to a framework, it might be sensible for pgloader to refrain from handling index unique names; that is achieved by using the preserve index names option.
The default is to uniquify index names.
Even when using the option preserve index names, MySQL primary key indexes named "PRIMARY" will get their names uniquified. Failing to do so would prevent the primary keys from being created again in PostgreSQL, where the index names must be unique per schema.
When this option is listed, pgloader gets the definitions of all the foreign keys found in the MySQL database and creates the same set of foreign key definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating foreign keys.
When this option is listed, at the end of the data loading and after the indexes have all been created, pgloader resets all the PostgreSQL sequences created to the current maximum value of the column they are attached to.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader skips resetting sequences after the load.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader converts all MySQL identifiers (table names, index names, column names) to lowercase, except for PostgreSQL reserved keywords.
The PostgreSQL reserved keywords are determined dynamically by using the system function pg_get_keywords().
When this option is listed, pgloader quotes all MySQL identifiers so that their case is respected. Note that you will then have to do the same thing in your application code queries.
When this option is listed, pgloader refrains from migrating the data over. Note that the schema in this context includes the indexes when the option create indexes has been listed.
When this option is listed, pgloader only issues the COPY statements, without doing any other processing.
The cast clause allows specifying custom casting rules, either to overload the default casting rules or to amend them with special cases.
A casting rule is expected to follow one of the forms:
type <mysql-type-name> [ <guard> ... ] to <pgsql-type-name> [ <option> ... ]
+The database command accepts the following clauses and options:
Must be a connection URL pointing to a MySQL database.
If the connection URI contains a table name, then only this table is migrated from MySQL to PostgreSQL.
See the SOURCE CONNECTION STRING section above for details on how to write the connection string. The user name defaults to the USER environment variable value. The password can be provided with the environment variable MYSQL_PWD. The host can be provided with the environment variable MYSQL_HOST and otherwise defaults to localhost. The port can be provided with the environment variable MYSQL_TCP_PORT and otherwise defaults to 3306.
When loading from a MySQL database, the following options are supported, and the default WITH clause is: no truncate, create schema, create tables, include drop, create indexes, reset sequences, foreign keys, downcase identifiers, uniquify index names.
WITH options:
When this option is listed, pgloader drops all the tables in the target PostgreSQL database whose names appear in the MySQL database. This option allows for using the same command several times in a row until you figure out all the options, starting automatically from a clean environment. Please note that CASCADE is used to ensure that tables are dropped even if there are foreign keys pointing to them. This is precisely what include drop is intended to do: drop all target tables and recreate them.
Great care needs to be taken when using include drop, as it will cascade to all objects referencing the target tables, possibly including other tables that are not being loaded from the source DB.
When this option is listed, pgloader will not include any DROP statement when loading the data.
When this option is listed, pgloader issues the TRUNCATE command against each PostgreSQL table just before loading data into it.
When this option is listed, pgloader issues no TRUNCATE command.
When this option is listed, pgloader issues an ALTER TABLE ... DISABLE TRIGGER ALL command against the PostgreSQL target table before copying the data, then the command ALTER TABLE ... ENABLE TRIGGER ALL once the COPY is done.
This option allows loading data into a pre-existing table ignoring the foreign key constraints and user defined triggers and may result in invalid foreign key constraints once the data is loaded. Use with care.
When this option is listed, pgloader creates the tables using the metadata found in the MySQL database, which must contain a list of fields with their data type. A standard data type conversion from MySQL to PostgreSQL is done.
When this option is listed, pgloader skips the creation of tables before loading data; target tables must then already exist.
Also, when using create no tables pgloader fetches the metadata from the current target database and checks type casting, then will remove constraints and indexes prior to loading the data and install them back again once the loading is done.
When this option is listed, pgloader gets the definitions of all the indexes found in the MySQL database and creates the same set of index definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating indexes.
When this option is listed, pgloader drops the indexes in the target database before loading the data, and creates them again at the end of the data copy.
MySQL index names are unique per-table whereas in PostgreSQL index names have to be unique per-schema. The default for pgloader is to change the index name by prefixing it with idx_OID where OID is the internal numeric identifier of the table the index is built against.
In some cases, such as when the DDL is entirely left to a framework, it might be sensible for pgloader to refrain from handling index unique names; that is achieved by using the preserve index names option.
The default is to uniquify index names.
Even when using the option preserve index names, MySQL primary key indexes named "PRIMARY" will get their names uniquified. Failing to do so would prevent the primary keys from being created again in PostgreSQL, where the index names must be unique per schema.
When this option is listed, pgloader gets the definitions of all the foreign keys found in the MySQL database and creates the same set of foreign key definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating foreign keys.
When this option is listed, at the end of the data loading and after the indexes have all been created, pgloader resets all the PostgreSQL sequences created to the current maximum value of the column they are attached to.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader skips resetting sequences after the load.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader converts all MySQL identifiers (table names, index names, column names) to lowercase, except for PostgreSQL reserved keywords.
The PostgreSQL reserved keywords are determined dynamically by using the system function pg_get_keywords().
When this option is listed, pgloader quotes all MySQL identifiers so that their case is respected. Note that you will then have to do the same thing in your application code queries.
When this option is listed, pgloader refrains from migrating the data over. Note that the schema in this context includes the indexes when the option create indexes has been listed.
When this option is listed, pgloader only issues the COPY statements, without doing any other processing.
The default is single reader per thread, meaning that each MySQL table is read as a whole by a single thread, with a single SELECT statement using no WHERE clause.
When using multiple readers per thread, pgloader may be able to divide the reading work into several threads, as many as the concurrency setting, which needs to be greater than 1 for this option to be activated.
For each source table, pgloader searches for a primary key over a single numeric column, or a multiple-column primary key index whose first column is of a numeric data type (one of integer or bigint). When such an index exists, pgloader runs a query to find the min and max values on this column, and then splits that range into many ranges, each containing at most rows per range rows.
When the range list we then obtain contains at least as many ranges as our concurrency setting, we distribute those ranges to each reader thread.
So when all the conditions are met, pgloader starts as many reader threads as the concurrency setting, and each reader thread issues several queries with a WHERE id >= x AND id < y clause, where y - x = rows per range or less (for the last range, depending on the max value just obtained).
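As a worked sketch with illustrative numbers: given a min id of 1, a max id of 200000 and rows per range = 50000, pgloader would build four ranges and issue queries such as SELECT ... WHERE id >= 1 AND id < 50001, distributing those ranges over the reader threads.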
Controls how many rows are fetched per SELECT query when using multiple readers per thread; see above for details.
The SET MySQL PARAMETERS clause allows setting MySQL parameters using the MySQL SET command each time pgloader connects to it.
The cast clause allows specifying custom casting rules, either to overload the default casting rules or to amend them with special cases.
A casting rule is expected to follow one of the forms:
type <mysql-type-name> [ <guard> ... ] to <pgsql-type-name> [ <option> ... ]
column <table-name>.<column-name> [ <guards> ] to ... It's possible for a casting rule to either match against a MySQL data type or against a given column name in a given table name. That flexibility allows coping with cases where the type tinyint might have been used as a boolean in some cases but as a smallint in others.
The casting rules are applied in order, the first match prevents the following rules from being applied, and user-defined rules are evaluated first.
The supported guards are:
The casting rule is only applied against MySQL columns of the source type that have the given value, which must be a single-quoted or a double-quoted string.
The casting rule is only applied against MySQL columns of the source type that have a typemod value matching the given typemod expression. The typemod is separated into its precision and scale components.
Example of a cast rule using a typemod guard:
type char when (= precision 1) to char keep typemod This expression casts MySQL char(1) columns to a PostgreSQL column of type char(1), while in the general case char(N) will be converted by the default cast rule into the PostgreSQL type varchar(N).
The casting rule is only applied against MySQL columns having the extra column auto_increment option set, so that it's possible to target e.g. serial rather than integer.
The default matching behavior, when this option isn't set, is to match both columns with the extra definition and without.
This means that if you want to implement a casting rule that targets either serial or integer from a smallint definition depending on the auto_increment extra bit of information from MySQL, then you need to spell out two casting rules as follows:
type smallint with extra auto_increment
to serial drop typemod keep default keep not null,
type smallint
@@ -336,7 +342,7 @@ ALTER TABLE NAMES MATCHING ~/./ SET (fillfactor='40') You can use with include drop, create tables, create indexes, reset sequences - set work_mem to '16MB', maintenance_work_mem to '512 MB';
The sqlite command accepts the following clauses and options:
Path or HTTP URL to a SQLite file, might be a .zip file.
When loading from a SQLite database, the following options are supported, and the default WITH clause is: no truncate, create tables, include drop, create indexes, reset sequences, downcase identifiers, encoding 'utf-8'.
When this option is listed, pgloader drops all the tables in the target PostgreSQL database whose names appear in the SQLite database. This option allows for using the same command several times in a row until you figure out all the options, starting automatically from a clean environment. Please note that CASCADE is used to ensure that tables are dropped even if there are foreign keys pointing to them. This is precisely what include drop is intended to do: drop all target tables and recreate them.
Great care needs to be taken when using include drop, as it will cascade to all objects referencing the target tables, possibly including other tables that are not being loaded from the source DB.
When this option is listed, pgloader will not include any DROP statement when loading the data.
When this option is listed, pgloader issues the TRUNCATE command against each PostgreSQL table just before loading data into it.
When this option is listed, pgloader issues no TRUNCATE command.
When this option is listed, pgloader issues an ALTER TABLE ... DISABLE TRIGGER ALL command against the PostgreSQL target table before copying the data, then the command ALTER TABLE ... ENABLE TRIGGER ALL once the COPY is done.
This option allows loading data into a pre-existing table ignoring the foreign key constraints and user defined triggers and may result in invalid foreign key constraints once the data is loaded. Use with care.
When this option is listed, pgloader creates the tables using the metadata found in the SQLite file, which must contain a list of fields with their data type. A standard data type conversion from SQLite to PostgreSQL is done.
When this option is listed, pgloader skips the creation of tables before loading data; target tables must then already exist.
Also, when using create no tables pgloader fetches the metadata from the current target database and checks type casting, then will remove constraints and indexes prior to loading the data and install them back again once the loading is done.
When this option is listed, pgloader gets the definitions of all the indexes found in the SQLite database and creates the same set of index definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating indexes.
When this option is listed, at the end of the data loading and after the indexes have all been created, pgloader resets all the PostgreSQL sequences created to the current maximum value of the column they are attached to.
When this option is listed, pgloader skips resetting sequences after the load.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader will refrain from migrating the data over. Note that the schema in this context includes the indexes when the option create indexes has been listed.
When this option is listed, pgloader only issues the COPY statements, without doing any other processing.
This option allows controlling which encoding to parse the SQLite text data with. Defaults to UTF-8.
The cast clause allows specifying custom casting rules, either to overload the default casting rules or to amend them with special cases.
Please refer to the MySQL CAST clause for details.
Introduces a comma-separated list of table name patterns used to limit the tables to migrate to a sublist.
Example:
INCLUDING ONLY TABLE NAMES LIKE 'Invoice%' Introduces a comma-separated list of table name patterns used to exclude table names from the migration. This filter only applies to the result of the INCLUDING filter.
EXCLUDING TABLE NAMES LIKE 'appointments' When migrating from SQLite the following Casting Rules are provided:
Numbers:
type integer to bigint using integer-to-string
type float to float using float-to-string
Texts:
Binary:
Date:
This command instructs pgloader to load data from a MS SQL database. Automatic discovery of the schema is supported, including building the indexes and the primary and foreign key constraints.
Here's an example:
load database
+ set work_mem to '16MB', maintenance_work_mem to '512 MB'; The sqlite command accepts the following clauses and options:
Path or HTTP URL to a SQLite file, might be a .zip file.
When loading from a SQLite database, the following options are supported, and the default WITH clause is: no truncate, create tables, include drop, create indexes, reset sequences, downcase identifiers, encoding 'utf-8'.
When this option is listed, pgloader drops all the tables in the target PostgreSQL database whose names appear in the SQLite database. This option allows for using the same command several times in a row until you figure out all the options, starting automatically from a clean environment. Please note that CASCADE is used to ensure that tables are dropped even if there are foreign keys pointing to them. This is precisely what include drop is intended to do: drop all target tables and recreate them.
Great care needs to be taken when using include drop, as it will cascade to all objects referencing the target tables, possibly including other tables that are not being loaded from the source DB.
When this option is listed, pgloader will not include any DROP statement when loading the data.
When this option is listed, pgloader issues the TRUNCATE command against each PostgreSQL table just before loading data into it.
When this option is listed, pgloader issues no TRUNCATE command.
When this option is listed, pgloader issues an ALTER TABLE ... DISABLE TRIGGER ALL command against the PostgreSQL target table before copying the data, then the command ALTER TABLE ... ENABLE TRIGGER ALL once the COPY is done.
This option allows loading data into a pre-existing table ignoring the foreign key constraints and user defined triggers and may result in invalid foreign key constraints once the data is loaded. Use with care.
When this option is listed, pgloader creates the tables using the metadata found in the SQLite file, which must contain a list of fields with their data type. A standard data type conversion from SQLite to PostgreSQL is done.
When this option is listed, pgloader skips the creation of tables before loading data; target tables must then already exist.
Also, when using create no tables pgloader fetches the metadata from the current target database and checks type casting, then will remove constraints and indexes prior to loading the data and install them back again once the loading is done.
When this option is listed, pgloader gets the definitions of all the indexes found in the SQLite database and creates the same set of index definitions against the PostgreSQL database.
When this option is listed, pgloader skips creating indexes.
When this option is listed, pgloader drops the indexes in the target database before loading the data, and creates them again at the end of the data copy.
When this option is listed, at the end of the data loading and after the indexes have all been created, pgloader resets all the PostgreSQL sequences created to the current maximum value of the column they are attached to.
When this option is listed, pgloader skips resetting sequences after the load.
The options schema only and data only have no effects on this option.
When this option is listed, pgloader will refrain from migrating the data over. Note that the schema in this context includes the indexes when the option create indexes has been listed.
When this option is listed, pgloader only issues the COPY statements, without doing any other processing.
This option allows controlling which encoding to parse the SQLite text data with. Defaults to UTF-8.
The cast clause allows specifying custom casting rules, either to overload the default casting rules or to amend them with special cases.
Please refer to the MySQL CAST clause for details.
Introduces a comma-separated list of table name patterns used to limit the tables to migrate to a sublist.
Example:
INCLUDING ONLY TABLE NAMES LIKE 'Invoice%' Introduces a comma-separated list of table name patterns used to exclude table names from the migration. This filter only applies to the result of the INCLUDING filter.
EXCLUDING TABLE NAMES LIKE 'appointments' When migrating from SQLite the following Casting Rules are provided:
Numbers:
type integer to bigint using integer-to-string
type float to float using float-to-string
Texts:
Binary:
Date:
This command instructs pgloader to load data from a MS SQL database. Automatic discovery of the schema is supported, including building the indexes and the primary and foreign key constraints.
Here's an example:
load database
from mssql://user@host/dbname
into postgresql:///dbname