From a222a82f66dbf4b2d10cc0bbd2f1cff15e1e83f9 Mon Sep 17 00:00:00 2001
From: Dimitri Fontaine
Date: Tue, 20 Jun 2017 16:24:25 +0200
Subject: [PATCH] Improve docs on pgloader.io.

In the SQLite and MySQL cases, expand on the simple case before detailing
the command language. With our solid defaults, most times a single command
line with the source and target connection strings is going to be all you
need.
---
 docs/howto/geolite.html    | 24 +++++++-------
 docs/howto/mysql.html      | 40 ++++++++++++++++++++++-
 docs/howto/pgloader.1.html | 25 +++++++++++---
 docs/howto/sqlite.html     | 38 ++++++++++++++++++++-
 docs/src/mysql.md          | 67 +++++++++++++++++++++++++++++++++++++-
 docs/src/sqlite.md         | 52 +++++++++++++++++++++++++++++
 6 files changed, 226 insertions(+), 20 deletions(-)

diff --git a/docs/howto/geolite.html b/docs/howto/geolite.html
index 17cd683..c39b954 100644
--- a/docs/howto/geolite.html
+++ b/docs/howto/geolite.html
@@ -121,18 +121,18 @@ LOAD ARCHIVE
    LOAD CSV
         FROM FILENAME MATCHING ~/GeoLiteCity-Location.csv/
-            WITH ENCODING iso-8859-1
-            (
-               locId,
-               country,
-               region     [ null if blanks ],
-               city       [ null if blanks ],
-               postalCode [ null if blanks ],
-               latitude,
-               longitude,
-               metroCode  [ null if blanks ],
-               areaCode   [ null if blanks ]
-            )
+            WITH ENCODING iso-8859-1
+            (
+               locId,
+               country,
+               region     null if blanks,
+               city       null if blanks,
+               postalCode null if blanks,
+               latitude,
+               longitude,
+               metroCode  null if blanks,
+               areaCode   null if blanks
+            )
         INTO postgresql:///ip4r?geolite.location
         (
            locid,country,region,city,postalCode,
diff --git a/docs/howto/mysql.html b/docs/howto/mysql.html
index c9fac86..a82999a 100644
--- a/docs/howto/mysql.html
+++ b/docs/howto/mysql.html
@@ -82,7 +82,45 @@
-

Migrating from MySQL with pgloader

If you want to migrate your data over to PostgreSQL from MySQL then pgloader is the tool of choice!

Most tools around skip the main problem with migrating from MySQL: the type casting and data sanitizing that need to be done. pgloader will not leave you alone on those topics.

The Command

To load data with pgloader you need to define the operations of the load in a command, in some detail. Here's our example for loading the MySQL Sakila Sample Database.

Here's our command:

load database  
+

Migrating from MySQL to PostgreSQL

If you want to migrate your data over to PostgreSQL from MySQL then pgloader is the tool of choice!

Most tools around skip the main problem with migrating from MySQL: the type casting and data sanitizing that need to be done. pgloader will not leave you alone on those topics.

In a Single Command Line

As an example, we will use the f1db database, which provides a historical record of motor racing data for non-commercial purposes. You can either use their API or download the whole database. Once you've done that, load the database into MySQL:

$ mysql -u root  
+> create database f1db;  
+> source f1db.sql 

Now let's migrate this database into PostgreSQL in a single command line:

$ createdb f1db  
+$ pgloader mysql://root@localhost/f1db pgsql:///f1db 

Done! All with the schema: table definitions, constraints, indexes, primary keys, auto_increment columns turned into bigserial, foreign keys, comments; and if you had MySQL default values such as ON UPDATE CURRENT_TIMESTAMP, they would have been translated to a PostgreSQL before update trigger automatically.

$ pgloader mysql://root@localhost/f1db pgsql:///f1db  
+2017-06-16T08:56:14.064000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'  
+2017-06-16T08:56:14.068000+02:00 LOG Data errors in '/private/tmp/pgloader/'  
+2017-06-16T08:56:19.542000+02:00 LOG report summary reset  
+               table name       read   imported     errors      total time  
+-------------------------  ---------  ---------  ---------  --------------  
+          fetch meta data         33         33          0          0.365s  
+           Create Schemas          0          0          0          0.007s  
+         Create SQL Types          0          0          0          0.006s  
+            Create tables         26         26          0          0.068s  
+           Set Table OIDs         13         13          0          0.012s  
+-------------------------  ---------  ---------  ---------  --------------  
+  f1db.constructorresults      11011      11011          0          0.205s  
+            f1db.circuits         73         73          0          0.150s  
+        f1db.constructors        208        208          0          0.059s  
+f1db.constructorstandings      11766      11766          0          0.365s  
+             f1db.drivers        841        841          0          0.268s  
+            f1db.laptimes     413578     413578          0          2.892s  
+     f1db.driverstandings      31420      31420          0          0.583s  
+            f1db.pitstops       5796       5796          0          2.154s  
+               f1db.races        976        976          0          0.227s  
+          f1db.qualifying       7257       7257          0          0.228s  
+             f1db.seasons         68         68          0          0.527s  
+             f1db.results      23514      23514          0          0.658s  
+              f1db.status        133        133          0          0.130s  
+-------------------------  ---------  ---------  ---------  --------------  
+  COPY Threads Completion         39         39          0          4.303s  
+           Create Indexes         20         20          0          1.497s  
+   Index Build Completion         20         20          0          0.214s  
+          Reset Sequences          0         10          0          0.058s  
+             Primary Keys         13         13          0          0.012s  
+      Create Foreign Keys          0          0          0          0.000s  
+          Create Triggers          0          0          0          0.001s  
+         Install Comments          0          0          0          0.000s  
+-------------------------  ---------  ---------  ---------  --------------  
+        Total import time     506641     506641          0          5.547s 

You may still have special cases to take care of, though, or views that you want to materialize while doing the migration. In advanced cases you can use the pgloader command language.

The Command

To load data with pgloader you need to define the operations of the load in a command, in some detail. Here's our example for loading the MySQL Sakila Sample Database.

Here's our command:

load database  
      from      mysql://root@localhost/sakila  
      into postgresql:///sakila  
  
diff --git a/docs/howto/pgloader.1.html b/docs/howto/pgloader.1.html
index af614cd..c75e3c3 100644
--- a/docs/howto/pgloader.1.html
+++ b/docs/howto/pgloader.1.html
@@ -121,7 +121,7 @@ pgloader --version 

Loading from a complex command

Use the command file as the pgloader command argument; pgloader will parse that file and execute the commands found in it. The CSV example pipes data fetched over HTTP into pgloader reading from its standard input, targeting postgresql:///pgloader?districts_longlat.

Now the OS will take care of the streaming and buffering between the network and the commands and pgloader will take care of streaming the data down to PostgreSQL.
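
For instance, here's a sketch of that kind of pipeline (the URL, field list and target table are illustrative placeholders, not the original example's values):

curl http://example.com/districts.csv                          \
    | pgloader --type csv                                      \
               --field "usps,geoid,longitude,latitude"         \
               --with "skip header = 1"                        \
               --with "fields terminated by '\t'"              \
               /dev/stdin                                      \
               postgresql:///pgloader?districts_longlat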

Migrating from SQLite

The following command will open the SQLite database, discover its table definitions including indexes and foreign keys, migrate those definitions while casting the data type specifications to their PostgreSQL equivalents, and then migrate the data over:

createdb newdb  
 pgloader ./test/sqlite/sqlite.db postgresql:///newdb 

Migrating from MySQL

Just create a database to host the MySQL data and definitions, and have pgloader do the migration for you in a single command line:

createdb pagila  
 pgloader mysql://user@localhost/sakila postgresql:///pagila 

Fetching an archived DBF file from an HTTP remote location

It's possible for pgloader to download a file over HTTP, unarchive it, and only then open it to discover the schema and load the data:

createdb foo  
-pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo 

+pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telechargement/2013/dbf/historiq2013.zip postgresql:///foo 

Here it's not possible for pgloader to guess the kind of data source it's being given, so it's necessary to use the --type command line switch.

BATCHES AND RETRY BEHAVIOUR

To load data into PostgreSQL, pgloader uses the COPY streaming protocol. While this is the fastest way to load data, COPY has an important drawback: as soon as PostgreSQL emits an error about any bit of data sent to it, whatever the problem is, the whole data set is rejected by PostgreSQL.

To work around that, pgloader cuts the data into batches of 25000 rows each, so that when a problem occurs it only impacts that many rows of data. Each batch is kept in memory while the COPY streaming happens, in order to be able to handle errors should some happen.

When PostgreSQL rejects the whole batch, pgloader logs the error message then isolates the bad row(s) from the accepted ones by retrying the batched rows in smaller batches. To do that, pgloader parses the CONTEXT error message from the failed COPY, as the message contains the line number where the error was found in the batch, as in the following example:

CONTEXT: COPY errors, line 3, column b: "2006-13-11" 

Using that information, pgloader reloads all rows in the batch before the erroneous one, logs the erroneous one as rejected, then tries loading the remainder of the batch in a single attempt, which may or may not contain other erroneous data.
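
To make that retry logic concrete, here's an illustrative sketch (not actual pgloader output) of a 5-row batch whose third row is bad:

    COPY rows 1-5  -> rejected, CONTEXT: COPY errors, line 3
    COPY rows 1-2  -> accepted
    row 3          -> logged and written to the reject file
    COPY rows 4-5  -> retried in a single attempt, split again if it fails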

At the end of a load containing rejected rows, you will find two files in the root-dir location, under a directory named after the target database of your setup. The files are named after the target table, and their extensions are .dat for the rejected data and .log for the full PostgreSQL client-side logs about the rejected data.

The .dat file is formatted in the PostgreSQL text COPY format as documented in http://www.postgresql.org/docs/9.2/static/sql-copy.html#AEN66609.
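
For example, if a load into the f1db database above had rejected rows in the races table, then with the default root-dir you would find files along these lines (the paths are illustrative):

/tmp/pgloader/f1db/races.dat
/tmp/pgloader/f1db/races.log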

A NOTE ABOUT PERFORMANCE

pgloader has been developed with performance in mind, to be able to cope with ever-growing needs in loading large amounts of data into PostgreSQL.

The basic architecture it uses is the old Unix pipe model, where a thread is responsible for loading the data (reading a CSV file, querying MySQL, etc.) and fills pre-processed data into a queue. Another thread feeds from the queue, applies some more transformations to the input data, and streams the end result to PostgreSQL using the COPY protocol.

When given a file that the PostgreSQL COPY command knows how to parse, and if the file contains no erroneous data, then pgloader will never be as fast as just using the PostgreSQL COPY command.

Note that while the COPY command is restricted to read either from its standard input or from a local file on the server's file system, the command line tool psql implements a \copy command that knows how to stream a file local to the client over the network and into the PostgreSQL server, using the same protocol as pgloader uses.

A NOTE ABOUT PARALLELISM

pgloader uses several concurrent tasks to process the data being loaded:

  • a reader task reads the data in,

  • at least one transformer task is responsible for applying the needed transformations to given data so that it fits PostgreSQL expectations; those transformations include CSV-like user-defined projections, database casting (default and user-given), and PostgreSQL-specific formatting of the data for the COPY protocol, in Unicode,

  • at least one writer task is responsible for sending the data down to PostgreSQL using the COPY protocol.

The idea behind having the transformer task do the formatting is so that in the event of bad rows being rejected by PostgreSQL the retry process doesn't have to do that step again.

At the moment, the number of transformer and writer tasks is forced to be the same, which allows for a very simple queueing model to be implemented: the reader task fills in one queue per transformer task, which then pops from that queue and pushes to a writer queue per COPY task.

The workers parameter controls how many worker threads may be active at any time (that's the parallelism level), and the concurrency parameter controls how many tasks are started to handle the data (they may not all run at the same time, depending on the workers setting).

We allow up to workers simultaneous workers to be active at the same time in the context of a single table. A single unit of work consists of several kinds of workers:

  • a reader getting raw data from the source,
  • N transformers preparing raw data for PostgreSQL COPY protocol,
  • N writers sending the data down to PostgreSQL.

The N here is set by the concurrency parameter: with a concurrency of 2, we start (+ 1 2 2) = 5 concurrent tasks; with a concurrency of 4, we start (+ 1 4 4) = 9 concurrent tasks, of which only workers may be active simultaneously.

So with workers = 4, concurrency = 2, the parallel scheduler will maintain active only 4 of the 5 tasks that are started.

With workers = 8, concurrency = 1, we are then able to work on several units of work at the same time. In the database sources, a unit of work is a table, so those settings allow pgloader to be active on as many as 3 tables at any time in the load process.

The defaults are workers = 4, concurrency = 1 when loading from a database source, and workers = 8, concurrency = 2 when loading from something else (currently, a file). Those defaults are arbitrary and awaiting feedback from users, so please consider providing feedback if you play with the settings.

As the CREATE INDEX threads started by pgloader are only waiting until PostgreSQL is done with the real work, those threads are NOT counted into the concurrency levels as detailed here.

By default, pgloader starts as many CREATE INDEX threads as the maximum number of indexes per table found in your source schema. It is possible to set the max parallel create index WITH option to another number in case there are just too many of them to create.
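
All three settings belong in the WITH clause of a load command. A minimal sketch (connection strings and values are placeholders):

load database
     from mysql://user@localhost/sourcedb
     into postgresql:///targetdb
     with workers = 8, concurrency = 2,
          max parallel create index = 4;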

SOURCE FORMATS

pgloader supports the following input formats:

  • csv, which also includes tsv and other common variants where you can change the separator and the quoting rules and how to escape the quotes themselves;

  • fixed columns files, where pgloader is flexible enough to accommodate source files with missing columns (ragged fixed-length column files do exist);

  • PostgreSQL COPY formatted files, following the COPY TEXT documentation of PostgreSQL, such as the reject files prepared by pgloader;

  • dBase files known as db3 or dbf files;

  • ixf formatted files, ixf being a binary storage format from IBM;

  • sqlite databases with fully automated discovery of the schema and advanced cast rules;

  • mysql databases with fully automated discovery of the schema and advanced cast rules;

  • MS SQL databases with fully automated discovery of the schema and advanced cast rules.

PGLOADER COMMANDS SYNTAX

pgloader implements a Domain Specific Language allowing the setup of complex data loading scripts handling computed columns and on-the-fly sanitization of the input data. For more complex data loading scenarios, you will need to learn that DSL's syntax. It's meant to look familiar to DBAs, being inspired by SQL where it makes sense, which is not that much after all.

The pgloader commands follow the same global grammar rules. Each of them might support only a subset of the general options and provide specific options.

LOAD <source-type>
     FROM <source-url>     [ HAVING FIELDS <source-level-options> ]
     INTO <postgresql-url> [ TARGET COLUMNS <columns-and-options> ]
  
@@ -131,7 +131,7 @@ pgloader --type dbf http://www.insee.fr/fr/methodes/nomenclatures/cog/telecharge
  
 [ BEFORE LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
 [  AFTER LOAD [ DO <sql statements> | EXECUTE <sql file> ] ... ]  
-; 

+; 

The main clauses are the LOAD, FROM, INTO and WITH clauses that each command implements. Some commands then implement the SET clause, or specific clauses such as the CAST clause.

COMMON CLAUSES

Some clauses are common to all commands:

  • FROM

    The FROM clause specifies where to read the data from, and each command introduces its own variant of sources. For instance, the CSV source supports inline, stdin, a filename, a quoted filename, and a FILENAME MATCHING clause (see above); whereas the MySQL source only supports a MySQL database URI specification.

    In all cases, the FROM clause is able to read its value from an environment variable when using the form GETENV 'varname' (see the sketch after this list).

  • INTO

    The PostgreSQL connection URI must contain the name of the target table to load the data into. That table must have already been created in PostgreSQL, and the name might be schema-qualified.

    The INTO target database connection URI can be parsed from the value of an environment variable when using the form GETENV 'varname'.

    The INTO option also supports an optional comma-separated list of target columns, which are either the name of an input field or the whitespace-separated list of the target column name, its PostgreSQL data type and a USING expression.

    The USING expression can be any valid Common Lisp form and will be read with the current package set to pgloader.transforms, so that you can use functions defined in that package, such as functions loaded dynamically with the --load command line parameter.

    Each USING expression is compiled at runtime to native code.

    This feature allows pgloader to load any number of fields in a CSV file into a possibly different number of columns in the database, using custom code for that projection.

  • WITH

    Set of options to apply to the command, using a global syntax of either:

    • key = value
    • use option
    • do not use option
    See each specific command for details.

    All data source specific commands support the following options:

    • on error stop
    • batch rows = R
    • batch size = ... MB
    • batch concurrency = ...

    See the section BATCH BEHAVIOUR OPTIONS for more details.

    In addition, the following settings are available:

    • workers = W
    • concurrency = C
    • max parallel create index = I

    See section A NOTE ABOUT PARALLELISM for more details.

  • SET

    This clause allows specifying session parameters to be set for all the sessions opened by pgloader. It expects a list of parameter name, the equal sign, then the single-quoted value, as a comma-separated list.

    The names and values of the parameters are not validated by pgloader; they are given as-is to PostgreSQL.

  • BEFORE LOAD DO

    You can run SQL queries against the database before loading the data from the CSV file. The most common SQL queries are CREATE TABLE IF NOT EXISTS statements, so that the data can be loaded.

    Each command must be dollar-quoted: it must begin and end with a double dollar sign, $$. Dollar-quoted queries are then comma separated. No extra punctuation is expected after the last SQL query.

  • BEFORE LOAD EXECUTE

    Same behaviour as in the BEFORE LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).

  • AFTER LOAD DO

    Same format as BEFORE LOAD DO, the dollar-quoted queries found in that section are executed once the load is done. That's the right time to create indexes and constraints, or re-enable triggers.

  • AFTER LOAD EXECUTE

    Same behaviour as in the AFTER LOAD DO clause. Allows you to read the SQL queries from a SQL file. Implements support for PostgreSQL dollar-quoting and the \i and \ir include facilities as in psql batch mode (where they are the same thing).
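
Putting several of those clauses together, here's a minimal sketch following the grammar template above (the environment variable, table, fields and transform are illustrative placeholders, not a tested command; string-upcase is plain Common Lisp):

LOAD CSV
     FROM GETENV 'CSVFILE' HAVING FIELDS (code, label)
     INTO postgresql:///mydb?public.labels
          TARGET COLUMNS (code, label text using (string-upcase label))
     WITH workers = 8, concurrency = 2, batch rows = 10000
      SET work_mem = '64MB'
   BEFORE LOAD DO
   $$ create table if not exists public.labels (code text, label text); $$;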

Connection String

The <postgresql-url> parameter is expected to be given as a Connection URI as documented in the PostgreSQL documentation at http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING.

postgresql://[user[:password]@][netloc][:port][/dbname][?option=value&...] 

Where:

  • user

    Can contain any character, including colon (:) which must then be doubled (::) and at-sign (@) which must then be doubled (@@).

    When omitted, the user name defaults to the value of the PGUSER environment variable, and if it is unset, the value of the USER environment variable.

  • password

    Can contain any character, including the at sign (@) which must then be doubled (@@). To leave the password empty when the user name ends with an at sign, you then have to use the syntax user:@.

    When omitted, the password defaults to the value of the PGPASSWORD environment variable if it is set, otherwise the password is left unset.

  • netloc

    Can be either a hostname in dotted notation, an ipv4 address, or a Unix domain socket path. Empty is the default network location; on a system providing unix domain sockets that method is preferred, otherwise the netloc defaults to localhost.

    It's possible to force the unix domain socket path by using the syntax unix:/path/to/where/the/socket/file/is, so to force a non-default socket path and a non-default port, you would have:

    postgresql://unix:/tmp:54321/dbname 

    The netloc defaults to the value of the PGHOST environment variable, and if it is unset, to either the default unix socket path when running on a Unix system, and localhost otherwise.

  • dbname

    Should be a proper identifier (letter followed by a mix of letters, digits and the punctuation signs comma (,), dash (-) and underscore (_)).

    When omitted, the dbname defaults to the value of the environment variable PGDATABASE, and if that is unset, to the user value as determined above.

  • options

    The optional parameters must be supplied in the form name=value, and you may use several parameters by separating them with an ampersand (&) character.

    Only some options are supported here: tablename (which might be qualified with a schema name), sslmode, host, port, dbname, user and password.

    The sslmode parameter values can be one of disable, allow, prefer or require.

    For backward compatibility reasons, it's possible to specify the tablename option directly, without spelling out the tablename= parts.

    The options override the main URI components when both are given, and using the percent-encoded option parameters allows using passwords starting with a colon and bypassing other URI component parsing limitations.
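
    For instance, given the tablename option and its backward-compatible short form, the first two URIs below designate the same target table, while the third adds explicit credentials, host and port (all values are illustrative):

    postgresql:///mydb?tablename=public.mytable
    postgresql:///mydb?public.mytable
    postgresql://dba:secret@localhost:5432/mydb?tablename=public.mytable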

Regular Expressions

Several clauses listed in the following sections accept regular expressions, with the following input rules:

  • A regular expression begins with a tilde sign (~),

  • is then followed with an opening sign,

  • then any character is allowed and considered part of the regular expression, except for the closing sign,

  • then a closing sign is expected.

The opening and closing signs are allowed by pair; here's the complete list of allowed delimiters:

~//  
 ~[]  
 ~{}  
 ~()  
@@ -309,15 +309,28 @@ MATCHING regexp
  -- ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films'  
  -- ALTER TABLE NAMES MATCHING ~/_list$/ SET SCHEMA 'mv'  
  
+ ALTER TABLE NAMES MATCHING ~/_list$/, 'sales_by_store', ~/sales_by/  
+  SET SCHEMA 'mv'  
+ 
+ ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films'  
+ ALTER TABLE NAMES MATCHING ~/./ SET (fillfactor='40')  
+ 
+ ALTER SCHEMA 'sakila' RENAME TO 'pagila'  
+ 
  BEFORE LOAD DO  
- $$ create schema if not exists sakila; $$; 

    +   $$ create schema if not exists pagila; $$,  
    +   $$ create schema if not exists mv;     $$,  
    +   $$ alter database sakila set search_path to pagila, mv, public; $$;  
    +

    The database command accepts the following clauses and options:

    • FROM

      Must be a connection URL pointing to a MySQL database.

      If the connection URI contains a table name, then only this table is migrated from MySQL to PostgreSQL.

      See the SOURCE CONNECTION STRING section above for details on how to write the connection string. The environment variables described in the MySQL documentation can be used as default values too. If the user is not provided, then it defaults to the USER environment variable's value. The password can be provided with the environment variable MYSQL_PWD. The host can be provided with the environment variable MYSQL_HOST and otherwise defaults to localhost. The port can be provided with the environment variable MYSQL_TCP_PORT and otherwise defaults to 3306.
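
      As an illustration, a hedged sketch of relying on those environment variables (the host, password and database names are placeholders, and it assumes the user and host may be omitted from the URI so the environment defaults apply):

      MYSQL_HOST=db.example.com MYSQL_PWD=secret \
          pgloader mysql:///sourcedb pgsql:///targetdb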

    • WITH

      When loading from a MySQL database, the following options are supported, and the default WITH clause is: no truncate, create tables, include drop, create indexes, reset sequences, foreign keys, downcase identifiers.

      WITH options:

      • include drop

        When this option is listed, pgloader drops all the tables in the target PostgreSQL database whose names appear in the MySQL database. This option allows for using the same command several times in a row until you figure out all the options, starting automatically from a clean environment. Please note that CASCADE is used to ensure that tables are dropped even if there are foreign keys pointing to them. This is precisely what include drop is intended to do: drop all target tables and recreate them.

        Great care needs to be taken when using include drop, as it will cascade to all objects referencing the target tables, possibly including other tables that are not being loaded from the source DB.

      • include no drop

        When this option is listed, pgloader will not include any DROP statement when loading the data.

      • truncate

        When this option is listed, pgloader issues the TRUNCATE command against each PostgreSQL table just before loading data into it.

      • no truncate

        When this option is listed, pgloader issues no TRUNCATE command.

      • disable triggers

        When this option is listed, pgloader issues an ALTER TABLE ... DISABLE TRIGGER ALL command against the PostgreSQL target table before copying the data, then the command ALTER TABLE ... ENABLE TRIGGER ALL once the COPY is done.

        This option allows loading data into a pre-existing table ignoring the foreign key constraints and user defined triggers and may result in invalid foreign key constraints once the data is loaded. Use with care.

      • create tables

        When this option is listed, pgloader creates the tables using the metadata found in the MySQL database, which must contain a list of fields with their data types. A standard data type conversion from MySQL to PostgreSQL is done.

      • create no tables

        When this option is listed, pgloader skips the creation of tables before loading data; target tables must then already exist.

        Also, when using create no tables, pgloader fetches the metadata from the current target database and checks type casting, then removes constraints and indexes prior to loading the data and installs them back again once the loading is done.

      • create indexes

        When this option is listed, pgloader gets the definitions of all the indexes found in the MySQL database and creates the same set of index definitions against the PostgreSQL database.

      • create no indexes

        When this option is listed, pgloader skips creating indexes.

      • uniquify index names, preserve index names

        MySQL index names are unique per-table whereas in PostgreSQL index names have to be unique per-schema. The default for pgloader is to change the index name by prefixing it with idx_OID, where OID is the internal numeric identifier of the table the index is built against.

        In some cases, like when the DDL is entirely left to a framework, it might be sensible for pgloader to refrain from handling unique index names; that is achieved by using the preserve index names option.

        The default is to uniquify index names.

        Even when using the option preserve index names, MySQL primary key indexes named "PRIMARY" will get their names uniquified. Failing to do so would prevent the primary keys from being created again in PostgreSQL, where index names must be unique per schema.
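
        As an illustration, with the default uniquify behaviour an index named title on a table whose OID is 16389 would be created in PostgreSQL as idx_16389_title (the OID value here is made up).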

      • foreign keys

        When this option is listed, pgloader gets the definitions of all the foreign keys found in the MySQL database and creates the same set of foreign key definitions against the PostgreSQL database.

      • no foreign keys

        When this option is listed, pgloader skips creating foreign keys.

      • reset sequences

        When this option is listed, at the end of the data loading and after the indexes have all been created, pgloader resets all the PostgreSQL sequences created to the current maximum value of the column they are attached to.

        The options schema only and data only have no effect on this option.

      • reset no sequences

        When this option is listed, pgloader skips resetting sequences after the load.

        The options schema only and data only have no effect on this option.

      • downcase identifiers

        When this option is listed, pgloader converts all MySQL identifiers (table names, index names, column names) to downcase, except for PostgreSQL reserved keywords.

        The PostgreSQL reserved keywords are determined dynamically by using the system function pg_get_keywords().

      • quote identifiers

        When this option is listed, pgloader quotes all MySQL identifiers so that their case is respected. Note that you will then have to do the same thing in your application code queries.

      • schema only

        When this option is listed, pgloader refrains from migrating the data over. Note that the schema in this context includes the indexes when the option create indexes has been listed.

      • data only

        When this option is listed, pgloader only issues the COPY statements, without doing any other processing.

    • CAST

      The cast clause allows specifying custom casting rules, either to overload the default casting rules or to amend them with special cases.

      A casting rule is expected to follow one of the forms:

      type <mysql-type-name> [ <guard> ... ] to <pgsql-type-name> [ <option> ... ]  
       column <table-name>.<column-name> [ <guards> ] to ... 

      It's possible for a casting rule to match either against a MySQL data type or against a given column name in a given table name. That flexibility allows coping with cases where the type tinyint might have been used as a boolean in some cases but as a smallint in others.

      The casting rules are applied in order, the first match prevents following rules from being applied, and user-defined rules are evaluated first.

      The supported guards are:

      • when default 'value'

        The casting rule is only applied against MySQL columns of the source type that have the given value, which must be a single-quoted or a double-quoted string.

      • when typemod expression

        The casting rule is only applied against MySQL columns of the source type that have a typemod value matching the given typemod expression. The typemod is separated into its precision and scale components.

        Example of a cast rule using a typemod guard:

        type char when (= precision 1) to char keep typemod 

        This expression casts a MySQL char(1) column to a PostgreSQL column of type char(1), while the general case char(N) will be converted by the default cast rule into the PostgreSQL type varchar(N).

      • with extra auto_increment

        The casting rule is only applied against MySQL columns having the extra auto_increment option set, so that it's possible to target e.g. serial rather than integer.

        The default matching behavior, when this option isn't set, is to match both columns with the extra definition and without.

        This means that if you want to implement a casting rule that targets either serial or integer from a smallint definition, depending on the auto_increment extra bit of information from MySQL, then you need to spell out two casting rules as follows:

        type smallint with extra auto_increment
          to serial drop typemod keep default keep not null,
        type smallint
          to integer drop typemod keep default keep not null

      The supported casting options are:

      • drop default, keep default

        When the option drop default is listed, pgloader drops any existing default expression in the MySQL database for columns of the source type from the CREATE TABLE statement it generates.

        The spelling keep default explicitly prevents that behaviour and can be used to overload the default casting rules.

      • drop not null, keep not null, set not null

        When the option drop not null is listed, pgloader drops any existing NOT NULL constraint associated with the given source MySQL datatype when it creates the tables in the PostgreSQL database.

        The spelling keep not null explicitly prevents that behaviour and can be used to overload the default casting rules.

        When the option set not null is listed, pgloader sets a NOT NULL constraint on the target column regardless of whether it has been set in the source MySQL column.

      • drop typemod, keep typemod

        When the option drop typemod is listed, pgloader drops any existing typemod definition (e.g. precision and scale) from the datatype definition found in the MySQL columns of the source type when it creates the tables in the PostgreSQL database.

        The spelling keep typemod explicitly prevents that behaviour and can be used to overload the default casting rules.

      • using

        This option takes as its single argument the name of a function to be found in the pgloader.transforms Common Lisp package. See above for details.

        It's possible to augment a default cast rule (such as one that applies against ENUM data type for example) with a transformation function by omitting entirely the type parts of the casting rule, as in the following example:

        column enumerate.foo using empty-string-to-null

    • MATERIALIZE VIEWS

      This clause allows you to implement custom data processing at the data source by providing a view definition against which pgloader will query the data. It's not possible to just allow for plain SQL because we want to know a lot about the exact data types of each column involved in the query output.

      This clause expects a comma-separated list of view definitions, each one being either the name of an existing view in your database or the following expression:

      name AS $$ sql query $$

      The name and the sql query will be used in a CREATE VIEW statement at the beginning of the data loading, and the resulting view will then be dropped at the end of the data loading.
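
      For instance, a sketch mixing both forms (the view names and the query are illustrative placeholders):

      MATERIALIZE VIEWS film_list, sales_summary AS
      $$ select store_id, sum(amount) as total from payment group by store_id $$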

    • MATERIALIZE ALL VIEWS

      Same behaviour as MATERIALIZE VIEWS using the dynamic list of views as returned by MySQL rather than asking the user to specify the list.

    • INCLUDING ONLY TABLE NAMES MATCHING

      Introduces a comma-separated list of table names or regular expressions used to limit the tables to migrate to a sublist.

      Example:

      INCLUDING ONLY TABLE NAMES MATCHING ~/film/, 'actor' 
    • EXCLUDING TABLE NAMES MATCHING

      Introduces a comma-separated list of table names or regular expressions used to exclude table names from the migration. This filter only applies to the result of the INCLUDING filter.

      EXCLUDING TABLE NAMES MATCHING ~<ory> 
    • DECODING TABLE NAMES MATCHING

      Introduces a comma-separated list of table names or regular expressions used to force the encoding to use when processing data from MySQL. If the data encoding known to you is different from MySQL's idea about it, this is the option to use.

      DECODING TABLE NAMES MATCHING ~/messed/, ~/encoding/ AS utf8 
      You can use as many such rules as you need, all with possibly different encodings.

    • ALTER TABLE NAMES MATCHING

      Introduces a comma-separated list of table names or regular expressions that you want to target in the pgloader ALTER TABLE command. The only two available actions are SET SCHEMA and RENAME TO, both taking a quoted string as parameter:

      ALTER TABLE NAMES MATCHING ~/_list$/, 'sales_by_store', ~/sales_by/  
        SET SCHEMA 'mv'  
        
      -ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films' 

      You can use as many such rules as you need. The list of tables to be migrated is searched in pgloader memory against the ALTER TABLE matching rules, and for each command pgloader stops at the first matching criteria (regexp or string).

      No ALTER TABLE command is sent to PostgreSQL, the modification happens at the level of the pgloader in-memory representation of your source database schema. In case of a name change, the mapping is kept and reused in the foreign key and index support.

    LIMITATIONS

    The database command currently only supports a MySQL source database and has the following limitations:

    • Views are not migrated,

      Supporting views might require implementing a full SQL parser for the MySQL dialect with a porting engine to rewrite the SQL against PostgreSQL, including renaming functions and changing some constructs.

    • While it's not theoretically impossible, don't hold your breath.

    • Triggers are not migrated
    • The difficulty of doing so is not yet assessed.

    • ON UPDATE CURRENT_TIMESTAMP is currently not migrated
    • It's simple enough to implement, just not on the priority list yet.

    • Of the geometric datatypes, only the POINT database has been covered. The other ones should be easy enough to implement now, it's just not done yet.

    DEFAULT MySQL CASTING RULES

    When migrating from MySQL the following Casting Rules are provided:

    Numbers:

    • type int with extra auto_increment to serial when (< precision 10)
    • type int with extra auto_increment to bigserial when (<= 10 precision)
    • type int to int when (< precision 10)
    • type int to bigint when (<= 10 precision)
    • type tinyint with extra auto_increment to serial
    • type smallint with extra auto_increment to serial
    • type mediumint with extra auto_increment to serial
    • type bigint with extra auto_increment to bigserial

    • type tinyint to boolean when (= 1 precision) using tinyint-to-boolean

    • type tinyint to smallint drop typemod

    • type smallint to smallint drop typemod
    • type mediumint to integer drop typemod
    • type integer to integer drop typemod
    • type float to float drop typemod
    • type bigint to bigint drop typemod
    • type double to double precision drop typemod

    • type numeric to numeric keep typemod

    • type decimal to decimal keep typemod

    Texts:

    • type char to char keep typemod using remove-null-characters
    • type varchar to varchar keep typemod using remove-null-characters
    • type tinytext to text using remove-null-characters
    • type text to text using remove-null-characters
    • type mediumtext to text using remove-null-characters
    • type longtext to text using remove-null-characters

    Binary:

    • type binary to bytea
    • type varbinary to bytea
    • type tinyblob to bytea
    • type blob to bytea
    • type mediumblob to bytea
    • type longblob to bytea

    Date:

    • type datetime when default "0000-00-00 00:00:00" and not null to timestamptz drop not null drop default using zero-dates-to-null

    • type datetime when default "0000-00-00 00:00:00" to timestamptz drop default using zero-dates-to-null

    • type timestamp when default "0000-00-00 00:00:00" and not null to timestamptz drop not null drop default using zero-dates-to-null

    • type timestamp when default "0000-00-00 00:00:00" to timestamptz drop default using zero-dates-to-null

    • type date when default "0000-00-00" to date drop default using zero-dates-to-null

    • type date to date

    • type datetime to timestamptz
    • type timestamp to timestamptz
    • type year to integer drop typemod

    Geometric:

    • type point to point using pgloader.transforms::convert-mysql-point

    Enum types are declared inline in MySQL and separately with a CREATE TYPE command in PostgreSQL, so each column of Enum Type is converted to a type named after the table and column names defined with the same labels in the same order.

    When the source type definition is not matched in the default casting rules nor in the casting rules provided in the command, then the type name with the typemod is used.

    LOAD SQLite DATABASE

    This command instructs pgloader to load data from a SQLite file. Automatic discovery of the schema is supported, including build of the indexes.

    Here's an example:

    load database  
      ALTER TABLE NAMES MATCHING 'film' RENAME TO 'films'  

      ALTER TABLE NAMES MATCHING ~/./ SET (fillfactor='40') 

    You can use as many such rules as you need. The list of tables to be migrated is searched in pgloader memory against the ALTER TABLE matching rules, and for each command pgloader stops at the first matching criteria (regexp or string).

    No ALTER TABLE command is sent to PostgreSQL; the modification happens at the level of the pgloader in-memory representation of your source database schema. In case of a name change, the mapping is kept and reused in the foreign key and index support.

    The SET () action takes effect as a WITH clause for the CREATE TABLE command that pgloader will run when it has to create a table.
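
    For instance, with the fillfactor rule above, pgloader would create tables along the lines of this hypothetical sketch (table and column names invented for illustration):

    CREATE TABLE films (title text) WITH (fillfactor=40); 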

LIMITATIONS

The database command currently only supports MySQL as a source database and has the following limitations:

  • Views are not migrated.

    Supporting views might require implementing a full SQL parser for the MySQL dialect with a porting engine to rewrite the SQL against PostgreSQL, including renaming functions and changing some constructs. While it's not theoretically impossible, don't hold your breath.

  • Triggers are not migrated.

    The difficulty of doing so is not yet assessed.

  • ON UPDATE CURRENT_TIMESTAMP is currently not migrated.

    It's simple enough to implement, just not on the priority list yet.

  • Of the geometric datatypes, only the POINT datatype has been covered. The other ones should be easy enough to implement now; it's just not done yet.

DEFAULT MySQL CASTING RULES

When migrating from MySQL the following Casting Rules are provided:

Numbers:

  • type int with extra auto_increment to serial when (< precision 10)
  • type int with extra auto_increment to bigserial when (<= 10 precision)
  • type int to int when (< precision 10)
  • type int to bigint when (<= 10 precision)
  • type tinyint with extra auto_increment to serial
  • type smallint with extra auto_increment to serial
  • type mediumint with extra auto_increment to serial
  • type bigint with extra auto_increment to bigserial

  • type tinyint to boolean when (= 1 precision) using tinyint-to-boolean

  • type tinyint to smallint drop typemod

  • type smallint to smallint drop typemod
  • type mediumint to integer drop typemod
  • type integer to integer drop typemod
  • type float to float drop typemod
  • type bigint to bigint drop typemod
  • type double to double precision drop typemod

  • type numeric to numeric keep typemod

  • type decimal to decimal keep typemod
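
  In practice the difference shows up in the migrated column definitions, for instance:

      MySQL smallint(5)   becomes   PostgreSQL smallint        (drop typemod)
      MySQL decimal(8,2)  becomes   PostgreSQL decimal(8,2)    (keep typemod)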

Texts:

  • type char to char keep typemod using remove-null-characters
  • type varchar to varchar keep typemod using remove-null-characters
  • type tinytext to text using remove-null-characters
  • type text to text using remove-null-characters
  • type mediumtext to text using remove-null-characters
  • type longtext to text using remove-null-characters

Binary:

  • type binary to bytea
  • type varbinary to bytea
  • type tinyblob to bytea
  • type blob to bytea
  • type mediumblob to bytea
  • type longblob to bytea

Date:

  • type datetime when default "0000-00-00 00:00:00" and not null to timestamptz drop not null drop default using zero-dates-to-null

  • type datetime when default "0000-00-00 00:00:00" to timestamptz drop default using zero-dates-to-null

  • type timestamp when default "0000-00-00 00:00:00" and not null to timestamptz drop not null drop default using zero-dates-to-null

  • type timestamp when default "0000-00-00 00:00:00" to timestamptz drop default using zero-dates-to-null

  • type date when default "0000-00-00" to date drop default using zero-dates-to-null

  • type date to date

  • type datetime to timestamptz
  • type timestamp to timestamptz
  • type year to integer drop typemod

Geometric:

  • type point to point using pgloader.transforms::convert-mysql-point

Enum types are declared inline in MySQL and separately with a CREATE TYPE command in PostgreSQL, so each column of an ENUM type is converted to a new PostgreSQL type named after the table and column, defined with the same labels in the same order.

When the source type definition is not matched in the default casting rules nor in the casting rules provided in the command, then the type name with the typemod is used.
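
Putting it together, a load command may override the default rules with its own CAST clause; here's a sketch reusing connection strings, rules, and names from this document:

    load database
         from mysql://root@localhost/f1db
         into postgresql:///f1db

    CAST type tinyint to boolean drop typemod using tinyint-to-boolean,
         column enumerate.foo using empty-string-to-null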

LOAD SQLite DATABASE

This command instructs pgloader to load data from a SQLite file. Automatic discovery of the schema is supported, including building the indexes.

Here's an example:

load database  
      from sqlite:///Users/dim/Downloads/lastfm_tags.db  
      into postgresql:///tags  
  
including only table names like 'GlobalAccount' in schema 'dbo'  
  
 set work_mem to '16MB', maintenance_work_mem to '512 MB'  
  
before load do $$ drop schema if exists dbo cascade; $$; 

    The mssql command accepts the following clauses and options:

    • FROM

      Connection string to an existing MS SQL database server that listens and welcomes external TCP/IP connections. As pgloader currently piggybacks on the FreeTDS driver, to change the port of the server please export the TDSPORT environment variable.
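
      As the port is read from the environment, a non-default port can be given on the command line like this (connection strings here are hypothetical):

      $ TDSPORT=1433 pgloader mssql://user@host/dbname pgsql:///dbname 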

    • WITH

      When loading from a MS SQL database, the same options as when loading a MySQL database are supported. Please refer to the MySQL section. The following options are added:

      • create schemas

        When this option is listed, pgloader creates the same schemas as found on the MS SQL instance. This is the default.

      • create no schemas

        When this option is listed, pgloader refrains from creating any schemas at all; you must then ensure that the target schemas do exist.
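
      For example, a WITH clause that skips schema creation while still creating tables and indexes could read:

      WITH create no schemas, create tables, create indexes 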

    • CAST

      The cast clause allows you to specify custom casting rules, either to overload the default casting rules or to amend them with special cases.

      Please refer to the MySQL CAST clause for details.

    • INCLUDING ONLY TABLE NAMES LIKE '...' [, '...'] IN SCHEMA '...'

      Introduce a comma separated list of table name patterns used to limit the tables to migrate to a sublist. More than one such clause may be used; they will be accumulated together.

      Example:

      including only table names like 'GlobalAccount' in schema 'dbo' 
    • EXCLUDING TABLE NAMES LIKE '...' [, '...'] IN SCHEMA '...'

      Introduce a comma separated list of table name patterns used to exclude table names from the migration. This filter only applies to the result of the INCLUDING filter.

      EXCLUDING TABLE NAMES LIKE 'LocalAccount' in schema 'dbo' 
    • ALTER SCHEMA '...' RENAME TO '...'

      Allows you to rename a schema on the fly, so that for instance the tables found in the schema 'dbo' in your source database will get migrated into the schema 'public' in the target database with this command:

      ALTER SCHEMA 'dbo' RENAME TO 'public' 
    • ALTER TABLE NAMES MATCHING ... IN SCHEMA '...'

      See the MySQL explanation for this clause above. It works the same in the context of migrating from MS SQL, only with the added option to specify the name of the schema where to find the definition of the target tables.

      The matching is done in pgloader itself, with a Common Lisp regular expression lib, so it doesn't depend on the LIKE implementation of MS SQL, nor on the lack of support for regular expressions in the engine.

    Driver setup and encoding

    pgloader uses the FreeTDS driver and internally expects the data to be sent in utf-8. To achieve that, you can configure the FreeTDS driver with these defaults in the file ~/.freetds.conf:

    [global]  
        tds version = 7.4  
        client charset = UTF-8 

    DEFAULT MS SQL CASTING RULES

    When migrating from MS SQL the following Casting Rules are provided:

    Numbers:

    • type tinyint to smallint

    • type float to float using float-to-string

    • type real to real using float-to-string
    • type double to double precision using float-to-string
    • type numeric to numeric using float-to-string
    • type decimal to numeric using float-to-string
    • type money to numeric using float-to-string
    • type smallmoney to numeric using float-to-string

    Texts:

    • type char to text drop typemod
    • type nchar to text drop typemod
    • type varchar to text drop typemod
    • type nvarchar to text drop typemod
    • type xml to text drop typemod

    Binary:

    • type binary to bytea using byte-vector-to-bytea
    • type varbinary to bytea using byte-vector-to-bytea

    Date:

    • type datetime to timestamptz
    • type datetime2 to timestamptz

    Others:

    • type bit to boolean
    • type hierarchyid to bytea
    • type geography to bytea
    • type uniqueidentifier to uuid using sql-server-uniqueidentifier-to-uuid

    TRANSFORMATION FUNCTIONS

    Some data types are implemented in a different enough way that a transformation function is necessary. This function must be written in Common Lisp and is searched for in the pgloader.transforms package.

    Some default transformation functions are provided with pgloader, and you can use the --load command line option to load and compile your own lisp file into pgloader at runtime. For your functions to be found, remember to begin your lisp file with the following form:

    (in-package #:pgloader.transforms) 
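
    As a hypothetical sketch (not one of the provided functions), a transformation that trims spaces and loads empty strings as NULL could look like this:

    (in-package #:pgloader.transforms)

    ;; Hypothetical example: trim spaces around STRING, returning nil,
    ;; which gets loaded as a PostgreSQL NULL, when nothing is left.
    (defun trim-to-null (string)
      (when string
        (let ((trimmed (string-trim " " string)))
          (unless (string= trimmed "") trimmed))))

    Such a function could then be referenced from a cast rule, as in column foo.bar using trim-to-null, where foo.bar stands for a table and column of yours.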

    The provided transformation functions are:

    • zero-dates-to-null

      When the input date is all zeroes, return nil, which gets loaded as a PostgreSQL NULL value.

    • date-with-no-separator

      Applies zero-dates-to-null, then transforms the given date into a format that PostgreSQL will actually process:

      In:  "20041002152952"  
       Out: "2004-10-02 15:29:52" 
    • time-with-no-separator

      Transforms the given time into a format that PostgreSQL will actually process:

      In:  "08231560"  
       Out: "08:23:15.60" 
    • tinyint-to-boolean

      As MySQL lacks a proper boolean type, tinyint is often used to implement that. This function transforms 0 to 'false' and anything else to 'true'.
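
      This is how the default MySQL casting rules shown above apply it:

      type tinyint to boolean when (= 1 precision) using tinyint-to-boolean 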

    • bits-to-boolean

      As MySQL lacks a proper boolean type, BIT is often used to implement that. This function transforms 1-bit bit vectors from 0 to f and any other value to t.

    • int-to-ip

      Converts an integer into a dotted representation of an ip4.

      In:  18435761  
       Out: "1.25.78.177" 
    • ip-range

      Converts a couple of integers given as strings into a range of ip4.

      In:  "16825344" "16825599"  
       Out: "1.0.188.0-1.0.188.255" 
      diff --git a/docs/howto/sqlite.html b/docs/howto/sqlite.html
      index 81c73ec..e9ec1a9 100644
      --- a/docs/howto/sqlite.html
      +++ b/docs/howto/sqlite.html
      @@ -82,7 +82,43 @@
             

      Loading SQLite files with pgloader

      The SQLite database is a respected solution to manage your data with. Its embedded nature makes it a source of migrations when a project now needs to handle more concurrency, which PostgreSQL is very good at. pgloader can help you there.

      In a Single Command Line

      You can migrate a SQLite database file to PostgreSQL in a single command line:

      $ createdb chinook  
      +$ pgloader https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite pgsql:///chinook 

      Done! All with the schema, data, constraints, primary keys and foreign keys, etc. We also see an error with the Chinook schema that contains several primary key definitions against the same table, which is not accepted by PostgreSQL:

      2017-06-20T16:18:59.019000+02:00 LOG Data errors in '/private/tmp/pgloader/'  
      +2017-06-20T16:18:59.236000+02:00 LOG Fetching 'https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite'  
      +2017-06-20T16:19:00.664000+02:00 ERROR Database error 42P16: multiple primary keys for table "playlisttrack" are not allowed  
      +QUERY: ALTER TABLE playlisttrack ADD PRIMARY KEY USING INDEX idx_66873_sqlite_autoindex_playlisttrack_1;  
      +2017-06-20T16:19:00.665000+02:00 LOG report summary reset  
      +             table name       read   imported     errors      total time  
      +-----------------------  ---------  ---------  ---------  --------------  
      +                  fetch          0          0          0          0.877s  
      +        fetch meta data         33         33          0          0.033s  
      +         Create Schemas          0          0          0          0.003s  
      +       Create SQL Types          0          0          0          0.006s  
      +          Create tables         22         22          0          0.043s  
      +         Set Table OIDs         11         11          0          0.012s  
      +-----------------------  ---------  ---------  ---------  --------------  
      +                  album        347        347          0          0.023s  
      +                 artist        275        275          0          0.023s  
      +               customer         59         59          0          0.021s  
      +               employee          8          8          0          0.018s  
      +                invoice        412        412          0          0.031s  
      +                  genre         25         25          0          0.021s  
      +            invoiceline       2240       2240          0          0.034s  
      +              mediatype          5          5          0          0.025s  
      +          playlisttrack       8715       8715          0          0.040s  
      +               playlist         18         18          0          0.016s  
      +                  track       3503       3503          0          0.111s  
      +-----------------------  ---------  ---------  ---------  --------------  
      +COPY Threads Completion         33         33          0          0.313s  
      +         Create Indexes         22         22          0          0.160s  
      + Index Build Completion         22         22          0          0.027s  
      +        Reset Sequences          0          0          0          0.017s  
      +           Primary Keys         12          0          1          0.013s  
      +    Create Foreign Keys         11         11          0          0.040s  
      +        Create Triggers          0          0          0          0.000s  
      +       Install Comments          0          0          0          0.000s  
      +-----------------------  ---------  ---------  ---------  --------------  
      +      Total import time      15607      15607          0          1.669s 

      You may need to have special cases to take care of, though. In advanced cases you can use the pgloader command.

      The Command

      To load data with pgloader you need to describe in a command the operations in some detail. Here's our command:

      load database  
            from 'sqlite/Chinook_Sqlite_AutoIncrementPKs.sqlite'  
            into postgresql:///pgloader  
        
      diff --git a/docs/src/mysql.md b/docs/src/mysql.md
      index ec97113..3d50579 100644
      --- a/docs/src/mysql.md
      +++ b/docs/src/mysql.md
      @@ -1,4 +1,4 @@
      -# Migrating from MySQL with pgloader
      +# Migrating from MySQL to PostgreSQL
       
       If you want to migrate your data over to
       [PostgreSQL](http://www.postgresql.org) from MySQL then pgloader is the tool
      @@ -8,6 +8,71 @@ Most tools around are skipping the main problem with migrating from MySQL,
       which is to do with the type casting and data sanitizing that needs to be
       done. pgloader will not leave you alone on those topics.
       
      +## In a Single Command Line
      +
+As an example, we will use the f1db database, which provides a historical
+record of motor racing data for non-commercial purposes. You can either use
+their API or download the whole database. Once you've done that, load the
+database in MySQL:
      +
      +    $ mysql -u root
      +    > create database f1db;
      +    > source f1db.sql
      +
      +Now let's migrate this database into PostgreSQL in a single command line:
      +
      +    $ createdb f1db
+    $ pgloader mysql://root@localhost/f1db pgsql:///f1db
      +
      +Done! All with schema, table definitions, constraints, indexes, primary
+keys, *auto_increment* columns turned into *bigserial*, foreign keys,
      +comments, and if you had some MySQL default values such as *ON UPDATE
      +CURRENT_TIMESTAMP* they would have been translated to
      +a
      +[PostgreSQL before update trigger](https://www.postgresql.org/docs/current/static/plpgsql-trigger.html) automatically.
      +
      +    $ pgloader mysql://root@localhost/f1db pgsql:///f1db
      +    2017-06-16T08:56:14.064000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
      +    2017-06-16T08:56:14.068000+02:00 LOG Data errors in '/private/tmp/pgloader/'
      +    2017-06-16T08:56:19.542000+02:00 LOG report summary reset
      +                   table name       read   imported     errors      total time
      +    -------------------------  ---------  ---------  ---------  --------------
      +              fetch meta data         33         33          0          0.365s 
      +               Create Schemas          0          0          0          0.007s 
      +             Create SQL Types          0          0          0          0.006s 
      +                Create tables         26         26          0          0.068s 
      +               Set Table OIDs         13         13          0          0.012s 
      +    -------------------------  ---------  ---------  ---------  --------------
      +      f1db.constructorresults      11011      11011          0          0.205s 
      +                f1db.circuits         73         73          0          0.150s 
      +            f1db.constructors        208        208          0          0.059s 
      +    f1db.constructorstandings      11766      11766          0          0.365s 
      +                 f1db.drivers        841        841          0          0.268s 
      +                f1db.laptimes     413578     413578          0          2.892s 
      +         f1db.driverstandings      31420      31420          0          0.583s 
      +                f1db.pitstops       5796       5796          0          2.154s 
      +                   f1db.races        976        976          0          0.227s 
      +              f1db.qualifying       7257       7257          0          0.228s 
      +                 f1db.seasons         68         68          0          0.527s 
      +                 f1db.results      23514      23514          0          0.658s 
      +                  f1db.status        133        133          0          0.130s 
      +    -------------------------  ---------  ---------  ---------  --------------
      +      COPY Threads Completion         39         39          0          4.303s 
      +               Create Indexes         20         20          0          1.497s 
      +       Index Build Completion         20         20          0          0.214s 
      +              Reset Sequences          0         10          0          0.058s 
      +                 Primary Keys         13         13          0          0.012s 
      +          Create Foreign Keys          0          0          0          0.000s 
      +              Create Triggers          0          0          0          0.001s 
      +             Install Comments          0          0          0          0.000s 
      +    -------------------------  ---------  ---------  ---------  --------------
      +            Total import time     506641     506641          0          5.547s 
      +
+You may need to have special cases to take care of, though, or views that
+you want to materialize while doing the migration. In advanced cases you can
+use the pgloader command.
      +
       ## The Command
       
       To load data with [pgloader](http://pgloader.tapoueh.org/) you need to
      diff --git a/docs/src/sqlite.md b/docs/src/sqlite.md
      index 3747002..a2777c1 100644
      --- a/docs/src/sqlite.md
      +++ b/docs/src/sqlite.md
      @@ -5,6 +5,58 @@ embeded nature makes it a source of migrations when a projects now needs to
       handle more concurrency, which [PostgreSQL](http://www.postgresql.org/) is
       very good at. pgloader can help you there.
       
      +## In a Single Command Line
      +
+You can migrate a SQLite database file to PostgreSQL in a single command line:
      +
      +    $ createdb chinook
      +    $ pgloader https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite pgsql:///chinook
      +
      +Done! All with the schema, data, constraints, primary keys and foreign keys,
      +etc. We also see an error with the Chinook schema that contains several
      +primary key definitions against the same table, which is not accepted by
      +PostgreSQL:
      +
      +    2017-06-20T16:18:59.019000+02:00 LOG Data errors in '/private/tmp/pgloader/'
      +    2017-06-20T16:18:59.236000+02:00 LOG Fetching 'https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite'
      +    2017-06-20T16:19:00.664000+02:00 ERROR Database error 42P16: multiple primary keys for table "playlisttrack" are not allowed
      +    QUERY: ALTER TABLE playlisttrack ADD PRIMARY KEY USING INDEX idx_66873_sqlite_autoindex_playlisttrack_1;
      +    2017-06-20T16:19:00.665000+02:00 LOG report summary reset
      +                 table name       read   imported     errors      total time
      +    -----------------------  ---------  ---------  ---------  --------------
      +                      fetch          0          0          0          0.877s 
      +            fetch meta data         33         33          0          0.033s 
      +             Create Schemas          0          0          0          0.003s 
      +           Create SQL Types          0          0          0          0.006s 
      +              Create tables         22         22          0          0.043s 
      +             Set Table OIDs         11         11          0          0.012s 
      +    -----------------------  ---------  ---------  ---------  --------------
      +                      album        347        347          0          0.023s 
      +                     artist        275        275          0          0.023s 
      +                   customer         59         59          0          0.021s 
      +                   employee          8          8          0          0.018s 
      +                    invoice        412        412          0          0.031s 
      +                      genre         25         25          0          0.021s 
      +                invoiceline       2240       2240          0          0.034s 
      +                  mediatype          5          5          0          0.025s 
      +              playlisttrack       8715       8715          0          0.040s 
      +                   playlist         18         18          0          0.016s 
      +                      track       3503       3503          0          0.111s 
      +    -----------------------  ---------  ---------  ---------  --------------
      +    COPY Threads Completion         33         33          0          0.313s 
      +             Create Indexes         22         22          0          0.160s 
      +     Index Build Completion         22         22          0          0.027s 
      +            Reset Sequences          0          0          0          0.017s 
      +               Primary Keys         12          0          1          0.013s 
      +        Create Foreign Keys         11         11          0          0.040s 
      +            Create Triggers          0          0          0          0.000s 
      +           Install Comments          0          0          0          0.000s 
      +    -----------------------  ---------  ---------  ---------  --------------
      +          Total import time      15607      15607          0          1.669s 
      +
+You may need to have special cases to take care of, though. In advanced
+cases you can use the pgloader command.
      +
       ## The Command
       
       To load data with [pgloader](http://pgloader.io/) you need to