diff --git a/docs/howto/geolite.html b/docs/howto/geolite.html
index 17cd683..c39b954 100644
--- a/docs/howto/geolite.html
+++ b/docs/howto/geolite.html
@@ -121,18 +121,18 @@ LOAD ARCHIVE
    LOAD CSV
         FROM FILENAME MATCHING ~/GeoLiteCity-Location.csv/
-             WITH ENCODING iso-8859-1
-             (
-                locId,
-                country,
-                region [ null if blanks ],
-                city [ null if blanks ],
-                postalCode [ null if blanks ],
-                latitude,
-                longitude,
-                metroCode [ null if blanks ],
-                areaCode [ null if blanks ]
-             )
+             WITH ENCODING iso-8859-1
+             (
+                locId,
+                country,
+                region null if blanks,
+                city null if blanks,
+                postalCode null if blanks,
+                latitude,
+                longitude,
+                metroCode null if blanks,
+                areaCode null if blanks
+             )
         INTO postgresql:///ip4r?geolite.location
         (
            locid,country,region,city,postalCode,

diff --git a/docs/howto/mysql.html b/docs/howto/mysql.html
index c9fac86..a82999a 100644
--- a/docs/howto/mysql.html
+++ b/docs/howto/mysql.html
@@ -82,7 +82,45 @@
-

Migrating from MySQL with pgloader

If you want to migrate your data over to PostgreSQL from MySQL then pgloader is the tool of choice!

Most tools around skip the main problem with migrating from MySQL: the type casting and data sanitizing that need to be done. pgloader will not leave you alone on those topics.

The Command

To load data with pgloader you need to define the operations to run in some detail, in a command. Here's our example command for loading the MySQL Sakila Sample Database:

load database  
+

Migrating from MySQL to PostgreSQL

If you want to migrate your data over to PostgreSQL from MySQL then pgloader is the tool of choice!

Most tools around skip the main problem with migrating from MySQL: the type casting and data sanitizing that need to be done. pgloader will not leave you alone on those topics.

In a Single Command Line

As an example, we will use the f1db database, which provides a historical record of motor racing data for non-commercial purposes. You can either use their API or download the whole database dump. Once you've done that, load the database in MySQL:

$ mysql -u root  
+> create database f1db;  
+> source f1db.sql 

Now let's migrate this database into PostgreSQL in a single command line:

$ createdb f1db  
+$ pgloader mysql://root@localhost/f1db pgsql:///f1db 

Done! All with the schema: table definitions, constraints, indexes, primary keys, auto_increment columns turned into bigserial, foreign keys and comments. And if you had MySQL default values such as ON UPDATE CURRENT_TIMESTAMP, they would have been translated to a PostgreSQL BEFORE UPDATE trigger automatically.
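As an illustration, here's a sketch of the kind of BEFORE UPDATE trigger that replaces MySQL's ON UPDATE CURRENT_TIMESTAMP; the table, column and function names here are hypothetical:

-- MySQL: updated_at timestamp ON UPDATE CURRENT_TIMESTAMP
CREATE FUNCTION on_update_current_timestamp_races()
RETURNS trigger AS $$
BEGIN
  NEW.updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_update_current_timestamp
  BEFORE UPDATE ON races
  FOR EACH ROW
  EXECUTE PROCEDURE on_update_current_timestamp_races();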

$ pgloader mysql://root@localhost/f1db pgsql:///f1db  
+2017-06-16T08:56:14.064000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'  
+2017-06-16T08:56:14.068000+02:00 LOG Data errors in '/private/tmp/pgloader/'  
+2017-06-16T08:56:19.542000+02:00 LOG report summary reset  
+               table name       read   imported     errors      total time  
+-------------------------  ---------  ---------  ---------  --------------  
+          fetch meta data         33         33          0          0.365s  
+           Create Schemas          0          0          0          0.007s  
+         Create SQL Types          0          0          0          0.006s  
+            Create tables         26         26          0          0.068s  
+           Set Table OIDs         13         13          0          0.012s  
+-------------------------  ---------  ---------  ---------  --------------  
+  f1db.constructorresults      11011      11011          0          0.205s  
+            f1db.circuits         73         73          0          0.150s  
+        f1db.constructors        208        208          0          0.059s  
+f1db.constructorstandings      11766      11766          0          0.365s  
+             f1db.drivers        841        841          0          0.268s  
+            f1db.laptimes     413578     413578          0          2.892s  
+     f1db.driverstandings      31420      31420          0          0.583s  
+            f1db.pitstops       5796       5796          0          2.154s  
+               f1db.races        976        976          0          0.227s  
+          f1db.qualifying       7257       7257          0          0.228s  
+             f1db.seasons         68         68          0          0.527s  
+             f1db.results      23514      23514          0          0.658s  
+              f1db.status        133        133          0          0.130s  
+-------------------------  ---------  ---------  ---------  --------------  
+  COPY Threads Completion         39         39          0          4.303s  
+           Create Indexes         20         20          0          1.497s  
+   Index Build Completion         20         20          0          0.214s  
+          Reset Sequences          0         10          0          0.058s  
+             Primary Keys         13         13          0          0.012s  
+      Create Foreign Keys          0          0          0          0.000s  
+          Create Triggers          0          0          0          0.001s  
+         Install Comments          0          0          0          0.000s  
+-------------------------  ---------  ---------  ---------  --------------  
+        Total import time     506641     506641          0          5.547s 

You may have special cases to take care of, though, or views that you want to materialize while doing the migration. In advanced cases you can use the pgloader command language.

The Command

To load data with pgloader you need to define the operations to run in some detail, in a command. Here's our example command for loading the MySQL Sakila Sample Database:

load database  
      from      mysql://root@localhost/sakila  
      into postgresql:///sakila  
  
diff --git a/docs/howto/pgloader.1.html b/docs/howto/pgloader.1.html
index af614cd..c75e3c3 100644
--- a/docs/howto/pgloader.1.html
+++ b/docs/howto/pgloader.1.html
@@ -121,7 +121,7 @@ pgloader --version 

Loading from a complex command

Use the postgresql:///pgloader?districts_longlat target, piping the data into pgloader from another command.
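A sketch of such a pipeline, with an illustrative source URL and options, reading from standard input with -:

curl http://example.com/districts.csv            \
    | pgloader --type csv                        \
               --with "fields terminated by ','" \
               -                                 \
               postgresql:///pgloader?districts_longlat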

Now the OS will take care of the streaming and buffering between the network and the commands, and pgloader will take care of streaming the data down to PostgreSQL.

Migrating from SQLite

The following command will open the SQLite database, discover its table definitions including indexes and foreign keys, migrate those definitions while casting the data type specifications to their PostgreSQL equivalents, and then migrate the data over:

createdb newdb  
 pgloader ./test/sqlite/sqlite.db postgresql:///newdb 

Migrating from MySQL

Just create a database in which to host the MySQL data and definitions and have pgloader do the migration for you in a single command line:

createdb pagila  
 pgloader mysql://user@localhost/sakila postgresql:///pagila 

Fetching an archived DBF file from an HTTP remote location

It's possible for pgloader to download a file over HTTP, unarchive it, and only then open it to discover the schema and load the data:

createdb foo  
-pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo 

+pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo 

Here it's not possible for pgloader to guess the kind of data source it's being given, so it's necessary to use the --type command line switch.

BATCHES AND RETRY BEHAVIOUR

To load data into PostgreSQL, pgloader uses the COPY streaming protocol. While this is the fastest way to load data, COPY has an important drawback: as soon as PostgreSQL emits an error with any bit of data sent to it, whatever the problem is, the whole data set is rejected by PostgreSQL.

To work around that, pgloader cuts the data into batches of 25000 rows each, so that when a problem occurs it impacts only that many rows of data. Each batch is kept in memory while the COPY streaming happens, in order to be able to handle errors should some happen.
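The batch size can be tuned in the load command's WITH clause, using the batch rows and batch size options; a minimal sketch with an illustrative CSV source and target:

LOAD CSV
     FROM 'data.csv'
     INTO postgresql:///mydb?mytable
     WITH batch rows = 25000,
          batch size = 20MB;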

When PostgreSQL rejects the whole batch, pgloader logs the error message then isolates the bad row(s) from the accepted ones by retrying the batched rows in smaller batches. To do that, pgloader parses the CONTEXT error message from the failed COPY, as the message contains the line number where the error was found in the batch, as in the following example:

CONTEXT: COPY errors, line 3, column b: "2006-13-11" 

Using that information, pgloader will reload all rows in the batch before the erroneous one, log the erroneous one as rejected, then try loading the remainder of the batch in a single attempt, which may or may not contain other erroneous data.

At the end of a load containing rejected rows, you will find two files in the root-dir location, under a directory named after the target database of your setup. The files are named after the target table, with the extension .dat for the rejected data and .log for the full PostgreSQL client-side logs about the rejected data.
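For instance, with root-dir set to /tmp/pgloader, a target database named f1db and rejected rows in the races table, you would find files such as these (illustrative paths):

/tmp/pgloader/f1db/races.dat
/tmp/pgloader/f1db/races.log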

The .dat file is formatted in the PostgreSQL text COPY format as documented in http://www.postgresql.org/docs/9.2/static/sql-copy.html#AEN66609.

A NOTE ABOUT PERFORMANCE

pgloader has been developed with performance in mind, to be able to cope with ever-growing needs in loading large amounts of data into PostgreSQL.

The basic architecture it uses is the old Unix pipe model, where one thread is responsible for loading the data (reading a CSV file, querying MySQL, etc.) and fills pre-processed data into a queue. Another thread feeds from the queue, applies some more transformations to the input data and streams the end result to PostgreSQL using the COPY protocol.

When given a file that the PostgreSQL COPY command knows how to parse, and if the file contains no erroneous data, then pgloader will never be as fast as just using the PostgreSQL COPY command.

Note that while the COPY command is restricted to reading either from its standard input or from a local file on the server's file system, the command line tool psql implements a \copy command that knows how to stream a file local to the client over the network and into the PostgreSQL server, using the same protocol as pgloader uses.
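For instance, here's how to stream a client-local file with psql alone; the database, table and file names are illustrative:

psql -d mydb -c "\copy mytable from 'mytable.csv' with csv"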

A NOTE ABOUT PARALLELISM

pgloader uses several concurrent tasks to process the data being loaded:

The idea behind having the transformer task do the formatting is that, in the event of bad rows being rejected by PostgreSQL, the retry process doesn't have to do that step again.

At the moment, the number of transformer and writer tasks is forced to be the same, which allows a very simple queueing model to be implemented: the reader task fills in one queue per transformer task, which then pops from that queue and pushes to a writer queue per COPY task.

The workers parameter controls how many worker threads are allowed to be active at any time (that's the parallelism level); the concurrency parameter controls how many tasks are started to handle the data (they may not all run at the same time, depending on the workers setting).

Up to workers simultaneous tasks may be active at the same time in the context of a single table. A single unit of work consists of several kinds of workers:

The N here is set to the concurrency parameter: with a concurrency of 2, we start (+ 1 2 2) = 5 concurrent tasks; with a concurrency of 4, we start (+ 1 4 4) = 9 concurrent tasks, of which only workers may be active simultaneously.

So with workers = 4, concurrency = 2, the parallel scheduler will maintain active only 4 of the 5 tasks that are started.

With workers = 8, concurrency = 1, we are then able to work on several units of work at the same time. In the database sources, a unit of work is a table, so those settings allow pgloader to be active on as many as 3 tables at any time in the load process.

The defaults are workers = 4, concurrency = 1 when loading from a database source, and workers = 8, concurrency = 2 when loading from something else (currently, a file). Those defaults are arbitrary and waiting for feedback from users, so please consider providing feedback if you play with the settings.
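Both parameters belong in the load command's WITH clause; a minimal sketch against an illustrative MySQL source:

load database
     from mysql://root@localhost/f1db
     into postgresql:///f1db
     with workers = 8, concurrency = 2;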

As the CREATE INDEX threads started by pgloader are only waiting until PostgreSQL is done with the real work, those threads are NOT counted into the concurrency levels as detailed here.

By default, pgloader starts as many CREATE INDEX threads as the maximum number of indexes per table found in your source schema. It is possible to set the max parallel create index WITH option to another number in case there are just too many of them to create.
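This option is set in the same WITH clause as workers and concurrency; extending the sketch above, with an illustrative value:

     with workers = 8, concurrency = 2,
          max parallel create index = 4;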

SOURCE FORMATS

pgloader supports the following input formats:

PGLOADER COMMANDS SYNTAX

pgloader implements a Domain Specific Language allowing the setup of complex data loading scripts, handling computed columns and on-the-fly sanitization of the input data. For more complex data loading scenarios, you will be required to learn that DSL's syntax. It's meant to look familiar to DBAs, being inspired by SQL where it makes sense, which is not that much after all.

The pgloader commands follow the same global grammar rules. Each of them might support only a subset of the general options and provide specific options.

LOAD <source-type>  
     FROM <source-url>     [ HAVING FIELDS <source-level-options> ]  
     INTO <postgresql-url> [ TARGET COLUMNS <columns-and-options> ]  
  
@@ -131,7 +131,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
  
 [ BEFORE LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
 [  AFTER LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
-; 

+; 

The main clauses are the LOAD, FROM, INTO and WITH clauses that each command implements. Some commands then implement the SET command, or some specific clauses such as the CAST clause.

COMMON CLAUSES

Some clauses are common to all commands:

Connection String

The <postgresql-url> parameter is expected to be given as a Connection URI as documented in the PostgreSQL documentation at http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING.

postgresql://[user[:password]@][netloc][:port][/dbname][?option=value&...] 
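For example, a fully spelled-out connection string, with illustrative values throughout:

postgresql://dbuser:secret@localhost:5432/f1db?sslmode=prefer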

Where:

Regular Expressions

Several clauses listed in the following accept regular expressions with the following input rules:

The opening and closing signs are allowed in pairs; here's the complete list of allowed delimiters:

~//  
 ~[]  
 ~{}  
 ~()  
@@ -309,15 +309,28 @@ MATCHING regexp
  -- ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films'  
  -- ALTER TABLE NAMES MATCHING ~/_list$/ SET SCHEMA 'mv'  
  
+ ALTER TABLE NAMES MATCHING ~/_list$/, 'sales_by_store', ~/sales_by/  
+  SET SCHEMA 'mv'  
+ 
+ ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films'  
+ ALTER TABLE NAMES MATCHING ~/./ SET (fillfactor='40')  
+ 
+ ALTER SCHEMA 'sakila' RENAME TO 'pagila'  
+ 
  BEFORE LOAD DO  
- $$ create schema if not exists sakila; $$; 

The database command accepts the following clauses and options:

LIMITATIONS

The database command currently only supports MySQL as a source database and has the following limitations:

DEFAULT MySQL CASTING RULES

When migrating from MySQL the following Casting Rules are provided:

Numbers:

Texts:

Binary:

Date:

Geometric:

Enum types are declared inline in MySQL, and separately with a CREATE TYPE command in PostgreSQL, so each column of an enum type is converted to a type named after the table and column names, defined with the same labels in the same order.
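For instance, a hypothetical MySQL column rating declared inline as ENUM('G','PG','R') on a table film would be converted along these lines:

-- MySQL declares the enum inline:
--   rating ENUM('G','PG','R')
-- PostgreSQL gets a separate type, named after the table and column:
CREATE TYPE film_rating AS ENUM ('G', 'PG', 'R');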

When the source type definition is matched neither in the default casting rules nor in the casting rules provided in the command, then the type name with its typemod is used.
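Casting rules are provided in the command's CAST clause; here's a sketch using the provided zero-dates-to-null transformation:

CAST type datetime to timestamptz drop default drop not null using zero-dates-to-null,
     type date drop not null drop default using zero-dates-to-null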

LOAD SQLite DATABASE

This command instructs pgloader to load data from a SQLite file. Automatic discovery of the schema is supported, including the build of indexes.

Here's an example:

load database  
      from sqlite:///Users/dim/Downloads/lastfm_tags.db  
      into postgresql:///tags  
  
@@ -331,7 +344,9 @@ including only table names like 'GlobalAccount' in schema 'dbo'
  
 set work_mem to '16MB', maintenance_work_mem to '512 MB'  
  
-before load do $$ drop schema if exists dbo cascade; $$; 

The mssql command accepts the following clauses and options:

DEFAULT MS SQL CASTING RULES

When migrating from MS SQL the following Casting Rules are provided:

Numbers:

Texts:

Binary:

Date:

Others:

TRANSFORMATION FUNCTIONS

Some data types are implemented in a different enough way that a transformation function is necessary. This function must be written in Common Lisp and is searched for in the pgloader.transforms package.

Some default transformation functions are provided with pgloader, and you can use the --load command line option to load and compile your own lisp file into pgloader at runtime. For your functions to be found, remember to begin your lisp file with the following form:

(in-package #:pgloader.transforms) 
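
Here's a minimal sketch of a custom transformation function; the name and behavior are illustrative and not part of the provided functions:

(in-package #:pgloader.transforms)

;; Illustrative custom transform: trim spaces and turn empty strings
;; into nil, which pgloader loads as SQL NULL.
(defun trim-or-null (string)
  (when string
    (let ((trimmed (string-trim " " string)))
      (unless (string= trimmed "") trimmed))))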

The provided transformation functions are: