From 154c74f85ef4c9ccebda2ed5d3110ddc0e605038 Mon Sep 17 00:00:00 2001
From: Dimitri Fontaine
Date: Thu, 6 Jul 2017 17:07:55 +0200
Subject: [PATCH] Update online docs with new release.

The docs/ directory goes to http://pgloader.io.
---
 docs/download.html         |  2 +-
 docs/howto/pgloader.1.html | 26 ++++++++++++++++----------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/download.html b/docs/download.html
index 8938be4..914b23b 100644
--- a/docs/download.html
+++ b/docs/download.html
@@ -96,7 +96,7 @@
 https://github.com/dimitri/pgloader Sources
-pgloader-latest.tgz
+pgloader-bundle-3.4.1.tgz

diff --git a/docs/howto/pgloader.1.html b/docs/howto/pgloader.1.html
index c75e3c3..0ce099a 100644
--- a/docs/howto/pgloader.1.html
+++ b/docs/howto/pgloader.1.html
@@ -121,7 +121,7 @@ pgloader --version

Loading from a complex command

Use the postgresql:///pgloader?districts_longlat connection string as the target.

Now the OS will take care of the streaming and buffering between the network and the commands, and pgloader will take care of streaming the data down to PostgreSQL.

Migrating from SQLite

The following command will open the SQLite database, discover its table definitions including indexes and foreign keys, migrate those definitions while casting the data type specifications to their PostgreSQL equivalents, and then migrate the data over:

createdb newdb
pgloader ./test/sqlite/sqlite.db postgresql:///newdb

Migrating from MySQL

Just create a database in which to host the MySQL data and definitions, and have pgloader do the migration for you in a single command line:

createdb pagila
pgloader mysql://user@localhost/sakila postgresql:///pagila

Fetching an archived DBF file from an HTTP remote location

It's possible for pgloader to download a file over HTTP, unarchive it, and only then open it to discover the schema before loading the data:

createdb foo
pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo

Here it's not possible for pgloader to guess the kind of data source it's being given, so it's necessary to use the --type command line switch.

BATCHES AND RETRY BEHAVIOUR

To load data into PostgreSQL, pgloader uses the COPY streaming protocol. While this is the fastest way to load data, COPY has an important drawback: as soon as PostgreSQL emits an error with any bit of data sent to it, whatever the problem is, the whole data set is rejected by PostgreSQL.

To work around that, pgloader cuts the data into batches of 25000 rows each, so that when a problem occurs it only impacts that many rows of data. Each batch is kept in memory while the COPY streaming happens, in order to be able to handle errors should some happen.

When PostgreSQL rejects the whole batch, pgloader logs the error message, then isolates the bad row(s) from the accepted ones by retrying the batched rows in smaller batches. To do that, pgloader parses the CONTEXT error message from the failed COPY, as the message contains the line number where the error was found in the batch, as in the following example:

CONTEXT: COPY errors, line 3, column b: "2006-13-11" 

Using that information, pgloader will reload all rows in the batch before the erroneous one, log the erroneous one as rejected, then try loading the remainder of the batch in a single attempt, which may or may not contain other erroneous data.

At the end of a load containing rejected rows, you will find two files in the root-dir location, under a directory named after the target database of your setup. The files are named after the target table; their extensions are .dat for the rejected data and .log for the full PostgreSQL client-side logs about the rejected data.

The .dat file is formatted in the PostgreSQL text COPY format as documented in http://www.postgresql.org/docs/9.2/static/sql-copy.html#AEN66609.

A NOTE ABOUT PERFORMANCE

pgloader has been developed with performance in mind, to be able to cope with ever-growing needs in loading large amounts of data into PostgreSQL.

The basic architecture it uses is the old Unix pipe model, where a thread is responsible for loading the data (reading a CSV file, querying MySQL, etc.) and fills pre-processed data into a queue. Another thread feeds from the queue, applies some more transformations to the input data, and streams the end result to PostgreSQL using the COPY protocol.

When given a file that the PostgreSQL COPY command knows how to parse, and if the file contains no erroneous data, then pgloader will never be as fast as just using the PostgreSQL COPY command.

Note that while the COPY command is restricted to read either from its standard input or from a local file on the server's file system, the command line tool psql implements a \copy command that knows how to stream a file local to the client over the network and into the PostgreSQL server, using the same protocol as pgloader uses.

A NOTE ABOUT PARALLELISM

pgloader uses several concurrent tasks to process the data being loaded:

The idea behind having the transformer task do the formatting is that, in the event of bad rows being rejected by PostgreSQL, the retry process doesn't have to do that step again.

At the moment, the number of transformer and writer tasks is forced to be the same, which allows for a very simple queueing model to be implemented: the reader task fills one queue per transformer task, which then pops from that queue and pushes to a writer queue per COPY task.

The workers parameter controls how many worker threads may be active at any time (that's the parallelism level), and the concurrency parameter controls how many tasks are started to handle the data (they may not all run at the same time, depending on the workers setting).

We allow up to workers simultaneous workers to be active at the same time in the context of a single table. A single unit of work consists of several kinds of workers:

The N here is set to the concurrency parameter: with a concurrency of 2 we start (+ 1 2 2) = 5 concurrent tasks, with a concurrency of 4 we start (+ 1 4 4) = 9 concurrent tasks, of which only workers may be active simultaneously.

So with workers = 4, concurrency = 2, the parallel scheduler will keep only 4 of the 5 started tasks active at any given time.

With workers = 8, concurrency = 1, we are then able to work on several units of work at the same time. For database sources, a unit of work is a table, so those settings allow pgloader to be active on as many as 3 tables at any time in the load process.

The defaults are workers = 4, concurrency = 1 when loading from a database source, and workers = 8, concurrency = 2 when loading from something else (currently, a file). Those defaults are arbitrary and waiting for feedback from users, so please consider providing feedback if you play with the settings.
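
For illustration only, here is a minimal sketch of setting those parameters from a load command's WITH clause; it reuses the sakila and pagila names from the MySQL example above, and the values shown are arbitrary:

LOAD DATABASE
     FROM mysql://user@localhost/sakila
     INTO postgresql:///pagila
     WITH workers = 8, concurrency = 2;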

As the CREATE INDEX threads started by pgloader are only waiting until PostgreSQL is done with the real work, those threads are NOT counted in the concurrency levels detailed here.

By default, pgloader starts as many CREATE INDEX threads as the maximum number of indexes per table found in your source schema. It is possible to set the max parallel create index WITH option to another number in case there are just too many of them to create.
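
Again as a sketch, with the same hypothetical source and target, the index build parallelism could be capped like this:

LOAD DATABASE
     FROM mysql://user@localhost/sakila
     INTO postgresql:///pagila
     WITH create indexes, max parallel create index = 4;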

SOURCE FORMATS

pgloader supports the following input formats:

PGLOADER COMMANDS SYNTAX

pgloader implements a Domain Specific Language that allows setting up complex data loading scripts handling computed columns and on-the-fly sanitization of the input data. For more complex data loading scenarios, you will need to learn that DSL's syntax. It's meant to look familiar to DBAs, being inspired by SQL where it makes sense, which is not that much after all.

The pgloader commands follow the same global grammar rules. Each of them might support only a subset of the general options and provide specific options.

LOAD <source-type>  
     FROM <source-url>     [ HAVING FIELDS <source-level-options> ]
     INTO <postgresql-url> [ TARGET COLUMNS <columns-and-options> ]
  
@@ -131,7 +131,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
  
 [ BEFORE LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
 [  AFTER LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
;

The main clauses are the LOAD, FROM, INTO and WITH clauses that each command implements. Some commands then implement the SET clause, or specific clauses such as the CAST clause.

COMMON CLAUSES

Some clauses are common to all commands:

Connection String

The <postgresql-url> parameter is expected to be given as a Connection URI as documented in the PostgreSQL documentation at http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING.

postgresql://[user[:password]@][netloc][:port][/dbname][?option=value&...] 

Where:
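
For instance, a concrete URI following the template above might look like the following; the user, password, host, port and database name are hypothetical:

postgresql://dbuser:secret@db.example.com:5433/inventory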

Regular Expressions

Several clauses listed in the following sections accept regular expressions with the following input rules:

The opening and closing signs are allowed in pairs; here's the complete list of allowed delimiters:

~//  
 ~[]  
 ~{}  
 ~()  
@@ -139,7 +139,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
 ~""  
 ~''  
 ~||  
~##

Pick the set of delimiters that doesn't collide with the regular expression you're trying to input. If your expression is such that none of the choices allows you to enter it, the places where such expressions are allowed should accept a list of expressions.
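
As an illustration, here is the same (hypothetical) filename-matching expression written with three of the delimiter pairs listed above:

~/GeoLiteCity-Blocks.*csv/
~[GeoLiteCity-Blocks.*csv]
~{GeoLiteCity-Blocks.*csv}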

Comments

Any command may contain comments, following these input rules:

Any place where you could enter whitespace will accept a comment too.
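
For instance, the CAST rules in the LOAD DATABASE example near the end of this document disable one rule with a double-dash comment. Here is a minimal sketch in the same style; the file, field and table names are hypothetical:

-- nightly CSV extract; adjust the fields and target to your data
LOAD CSV
     FROM 'daily-extract.csv' (id, value)
     INTO postgresql:///mydb?daily
     WITH fields terminated by ',';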

Batch behaviour options

All pgloader commands support a WITH clause that allows specifying options. Some options are generic and accepted by all commands, such as the batch behaviour options, and some options are specific to a data source kind, such as the CSV skip header option.

The global batch behaviour options are:

Other options are specific to each input source; please refer to the relevant parts of the documentation for their listing and coverage.

A batch is then closed as soon as either the batch rows or the batch size threshold is crossed, whichever comes first. In cases where a batch has to be closed because of the batch size setting, a debug-level log message is printed showing how many rows fit in the oversized batch.
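
As a sketch only, those thresholds can be tuned from the same WITH clause; the file, field and table names are hypothetical, the row threshold matches the 25000 mentioned above, and the size value is merely illustrative:

LOAD CSV
     FROM 'measurements.csv' (id, reading)
     INTO postgresql:///mydb?measurements
     WITH fields terminated by ',',
          batch rows = 25000,
          batch size = 20 MB;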

LOAD CSV

This command instructs pgloader to load data from a CSV file. Here's an example:

LOAD CSV  
    FROM 'GeoLiteCity-Blocks.csv' WITH ENCODING iso-646-us  
         HAVING FIELDS  
         (  
@@ -285,18 +285,24 @@ MATCHING regexp
              fields terminated by ','  
  
    FINALLY DO  
     $$ create index blocks_ip4r_idx on geolite.blocks using gist(iprange); $$;

The archive command accepts the following clauses and options:

LOAD MYSQL DATABASE

This command instructs pgloader to load data from a database connection. The only supported database source is currently MySQL, and pgloader supports dynamically converting the schema of the source database and building the indexes.

A default set of casting rules is provided and might be overloaded and appended to by the command.

Here's an example using as many options as possible, some of them even being defaults. Chances are you don't need that complex a setup, don't copy and paste it, use it only as a reference!

LOAD DATABASE  
      FROM      mysql://root@localhost/sakila  
      INTO postgresql://localhost:54393/sakila  
  
  WITH include drop, create tables, create indexes, reset sequences,  
-      workers = 8, concurrency = 1  
+      workers = 8, concurrency = 1,  
+      multiple readers per thread, rows per range = 50000  
  
-  SET maintenance_work_mem to '128MB',  
+  SET PostgreSQL PARAMETERS  
+      maintenance_work_mem to '128MB',  
       work_mem to '12MB',  
-      search_path to 'sakila'  
+      search_path to 'sakila, public, "$user"'  
  
- CAST type datetime to timestamptz drop default drop not null using zero-dates-to-null,  
+  SET MySQL PARAMETERS  
+      net_read_timeout  = '120',  
+      net_write_timeout = '120'  
+ 
+ CAST type bigint when (= precision 20) to bigserial drop typemod,  
       type date drop not null drop default using zero-dates-to-null,  
       -- type tinyint to boolean using tinyint-to-boolean,  
       type year to integer  
@@ -321,7 +327,7 @@ MATCHING regexp
    $$ create schema if not exists pagila; $$,  
    $$ create schema if not exists mv;     $$,  
    $$ alter database sakila set search_path to pagila, mv, public; $$;  
-

The database command accepts the following clauses and options: