Improve docs on pgloader.io.

In the SQLite and MySQL cases, expand on the simple case before detailing
the command language. With our solid defaults, most times a single command
line with the source and target connection strings are going to be all you
need.
Dimitri Fontaine 2017-06-20 16:24:25 +02:00
parent cae86015a0
commit a222a82f66
6 changed files with 226 additions and 20 deletions


@ -121,18 +121,18 @@ LOAD ARCHIVE
LOAD CSV
FROM FILENAME MATCHING ~/GeoLiteCity-Location.csv/
WITH ENCODING iso-8859-1
(
locId,
country,
region [ null if blanks ],
city [ null if blanks ],
postalCode [ null if blanks ],
latitude,
longitude,
metroCode [ null if blanks ],
areaCode [ null if blanks ]
)
WITH ENCODING iso-8859-1
(
locId,
country,
region null if blanks,
city null if blanks,
postalCode null if blanks,
latitude,
longitude,
metroCode null if blanks,
areaCode null if blanks
)
INTO postgresql:///ip4r?geolite.location
(
locid,country,region,city,postalCode,


@ -82,7 +82,45 @@
<div class="row">
<div class="col-md-2"> </div>
<div class="col-md-8">
<h1>Migrating from MySQL with pgloader</h1><p>If you want to migrate your data over to <a href="http://www.postgresql.org">PostgreSQL</a> from MySQL then pgloader is the tool of choice! </p><p>Most tools around skip the main problem with migrating from MySQL, which is the type casting and data sanitizing that needs to be done. pgloader will not leave you alone on those topics. </p><h2>The Command</h2><p>To load data with <a href="http://pgloader.tapoueh.org/">pgloader</a> you need to define the operations in some detail in a <em>command</em>. Here's our example for loading the <a href="http://dev.mysql.com/doc/sakila/en/">MySQL Sakila Sample Database</a>: </p><pre><code>load database
<h1>Migrating from MySQL to PostgreSQL</h1><p>If you want to migrate your data over to <a href="http://www.postgresql.org">PostgreSQL</a> from MySQL then pgloader is the tool of choice! </p><p>Most tools around skip the main problem with migrating from MySQL, which is the type casting and data sanitizing that needs to be done. pgloader will not leave you alone on those topics. </p><h2>In a Single Command Line</h2><p>As an example, we will use the f1db database from <http://ergast.com/mrd/> which provides a historical record of motor racing data for non-commercial purposes. You can either use their API or download the whole database at <http://ergast.com/downloads/f1db.sql.gz>. Once you've done that, load the database in MySQL: </p><pre><code>$ mysql -u root
&gt; create database f1db;
&gt; source f1db.sql </code></pre><p>Now let's migrate this database into PostgreSQL in a single command line: </p><pre><code>$ createdb f1db
$ pgloader mysql://root@localhost/f1db pgsql:///f1db </code></pre><p>Done! All with schema, table definitions, constraints, indexes, primary keys, <em>auto_increment</em> columns turned into <em>bigserial</em>, foreign keys, comments, and if you had some MySQL default values such as <em>ON UPDATE CURRENT_TIMESTAMP</em> they would have been translated to a <a href="https://www.postgresql.org/docs/current/static/plpgsql-trigger.html">PostgreSQL before update trigger</a> automatically. </p><pre><code>$ pgloader mysql://root@localhost/f1db pgsql:///f1db
2017-06-16T08:56:14.064000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2017-06-16T08:56:14.068000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-06-16T08:56:19.542000+02:00 LOG report summary reset
table name read imported errors total time
------------------------- --------- --------- --------- --------------
fetch meta data 33 33 0 0.365s
Create Schemas 0 0 0 0.007s
Create SQL Types 0 0 0 0.006s
Create tables 26 26 0 0.068s
Set Table OIDs 13 13 0 0.012s
------------------------- --------- --------- --------- --------------
f1db.constructorresults 11011 11011 0 0.205s
f1db.circuits 73 73 0 0.150s
f1db.constructors 208 208 0 0.059s
f1db.constructorstandings 11766 11766 0 0.365s
f1db.drivers 841 841 0 0.268s
f1db.laptimes 413578 413578 0 2.892s
f1db.driverstandings 31420 31420 0 0.583s
f1db.pitstops 5796 5796 0 2.154s
f1db.races 976 976 0 0.227s
f1db.qualifying 7257 7257 0 0.228s
f1db.seasons 68 68 0 0.527s
f1db.results 23514 23514 0 0.658s
f1db.status 133 133 0 0.130s
------------------------- --------- --------- --------- --------------
COPY Threads Completion 39 39 0 4.303s
Create Indexes 20 20 0 1.497s
Index Build Completion 20 20 0 0.214s
Reset Sequences 0 10 0 0.058s
Primary Keys 13 13 0 0.012s
Create Foreign Keys 0 0 0 0.000s
Create Triggers 0 0 0 0.001s
Install Comments 0 0 0 0.000s
------------------------- --------- --------- --------- --------------
Total import time 506641 506641 0 5.547s </code></pre><p>You may still have special cases to take care of, though, or views that you want to materialize while doing the migration. In advanced cases you can use the pgloader command. </p><h2>The Command</h2><p>To load data with <a href="http://pgloader.tapoueh.org/">pgloader</a> you need to define the operations in some detail in a <em>command</em>. Here's our example for loading the <a href="http://dev.mysql.com/doc/sakila/en/">MySQL Sakila Sample Database</a>: </p><pre><code>load database
from mysql://root@localhost/sakila
into postgresql:///sakila

File diff suppressed because one or more lines are too long


@ -82,7 +82,43 @@
<div class="row">
<div class="col-md-2"> </div>
<div class="col-md-8">
<h1>Loading SQLite files with pgloader</h1><p>The SQLite database is a respected solution to manage your data with. Its embedded nature makes it a source of migrations when a project now needs to handle more concurrency, which <a href="http://www.postgresql.org/">PostgreSQL</a> is very good at. pgloader can help you there. </p><h2>The Command</h2><p>To load data with <a href="http://pgloader.io/">pgloader</a> you need to define the operations in some detail in a <em>command</em>. Here's our command: </p><pre><code>load database
<h1>Loading SQLite files with pgloader</h1><p>The SQLite database is a respected solution to manage your data with. Its embedded nature makes it a source of migrations when a project now needs to handle more concurrency, which <a href="http://www.postgresql.org/">PostgreSQL</a> is very good at. pgloader can help you there. </p><h2>In a Single Command Line</h2><p>You can migrate a whole SQLite database to PostgreSQL in a single command line: </p><pre><code>$ createdb chinook
$ pgloader https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite pgsql:///chinook </code></pre><p>Done! All with the schema, data, constraints, primary keys and foreign keys, etc. We also see an error with the Chinook schema that contains several primary key definitions against the same table, which is not accepted by PostgreSQL: </p><pre><code>2017-06-20T16:18:59.019000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-06-20T16:18:59.236000+02:00 LOG Fetching 'https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite'
2017-06-20T16:19:00.664000+02:00 ERROR Database error 42P16: multiple primary keys for table "playlisttrack" are not allowed
QUERY: ALTER TABLE playlisttrack ADD PRIMARY KEY USING INDEX idx_66873_sqlite_autoindex_playlisttrack_1;
2017-06-20T16:19:00.665000+02:00 LOG report summary reset
table name read imported errors total time
----------------------- --------- --------- --------- --------------
fetch 0 0 0 0.877s
fetch meta data 33 33 0 0.033s
Create Schemas 0 0 0 0.003s
Create SQL Types 0 0 0 0.006s
Create tables 22 22 0 0.043s
Set Table OIDs 11 11 0 0.012s
----------------------- --------- --------- --------- --------------
album 347 347 0 0.023s
artist 275 275 0 0.023s
customer 59 59 0 0.021s
employee 8 8 0 0.018s
invoice 412 412 0 0.031s
genre 25 25 0 0.021s
invoiceline 2240 2240 0 0.034s
mediatype 5 5 0 0.025s
playlisttrack 8715 8715 0 0.040s
playlist 18 18 0 0.016s
track 3503 3503 0 0.111s
----------------------- --------- --------- --------- --------------
COPY Threads Completion 33 33 0 0.313s
Create Indexes 22 22 0 0.160s
Index Build Completion 22 22 0 0.027s
Reset Sequences 0 0 0 0.017s
Primary Keys 12 0 1 0.013s
Create Foreign Keys 11 11 0 0.040s
Create Triggers 0 0 0 0.000s
Install Comments 0 0 0 0.000s
----------------------- --------- --------- --------- --------------
Total import time 15607 15607 0 1.669s </code></pre><p>You may still have special cases to take care of, though. In advanced cases you can use the pgloader command. </p><h2>The Command</h2><p>To load data with <a href="http://pgloader.io/">pgloader</a> you need to define the operations in some detail in a <em>command</em>. Here's our command: </p><pre><code>load database
from 'sqlite/Chinook_Sqlite_AutoIncrementPKs.sqlite'
into postgresql:///pgloader


@ -1,4 +1,4 @@
# Migrating from MySQL with pgloader
# Migrating from MySQL to PostgreSQL
If you want to migrate your data over to
[PostgreSQL](http://www.postgresql.org) from MySQL then pgloader is the tool
@ -8,6 +8,71 @@ Most tools around are skipping the main problem with migrating from MySQL,
which is to do with the type casting and data sanitizing that needs to be
done. pgloader will not leave you alone on those topics.
## In a Single Command Line
As an example, we will use the f1db database from <http://ergast.com/mrd/>
which provides a historical record of motor racing data for
non-commercial purposes. You can either use their API or download the whole
database at <http://ergast.com/downloads/f1db.sql.gz>. Once you've done that,
load the database in MySQL:
$ mysql -u root
> create database f1db;
> source f1db.sql
Now let's migrate this database into PostgreSQL in a single command line:
$ createdb f1db
$ pgloader mysql://root@localhost/f1db pgsql:///f1db
Done! All with schema, table definitions, constraints, indexes, primary
keys, *auto_increment* columns turned into *bigserial*, foreign keys,
comments, and if you had some MySQL default values such as *ON UPDATE
CURRENT_TIMESTAMP* they would have been translated to a
[PostgreSQL before update trigger](https://www.postgresql.org/docs/current/static/plpgsql-trigger.html) automatically.
$ pgloader mysql://root@localhost/f1db pgsql:///f1db
2017-06-16T08:56:14.064000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2017-06-16T08:56:14.068000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-06-16T08:56:19.542000+02:00 LOG report summary reset
table name read imported errors total time
------------------------- --------- --------- --------- --------------
fetch meta data 33 33 0 0.365s
Create Schemas 0 0 0 0.007s
Create SQL Types 0 0 0 0.006s
Create tables 26 26 0 0.068s
Set Table OIDs 13 13 0 0.012s
------------------------- --------- --------- --------- --------------
f1db.constructorresults 11011 11011 0 0.205s
f1db.circuits 73 73 0 0.150s
f1db.constructors 208 208 0 0.059s
f1db.constructorstandings 11766 11766 0 0.365s
f1db.drivers 841 841 0 0.268s
f1db.laptimes 413578 413578 0 2.892s
f1db.driverstandings 31420 31420 0 0.583s
f1db.pitstops 5796 5796 0 2.154s
f1db.races 976 976 0 0.227s
f1db.qualifying 7257 7257 0 0.228s
f1db.seasons 68 68 0 0.527s
f1db.results 23514 23514 0 0.658s
f1db.status 133 133 0 0.130s
------------------------- --------- --------- --------- --------------
COPY Threads Completion 39 39 0 4.303s
Create Indexes 20 20 0 1.497s
Index Build Completion 20 20 0 0.214s
Reset Sequences 0 10 0 0.058s
Primary Keys 13 13 0 0.012s
Create Foreign Keys 0 0 0 0.000s
Create Triggers 0 0 0 0.001s
Install Comments 0 0 0 0.000s
------------------------- --------- --------- --------- --------------
Total import time 506641 506641 0 5.547s
You may still have special cases to take care of, though, or views that you
want to materialize while doing the migration. In advanced cases you can use
the pgloader command.
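
As a sketch of what such a command file could look like for the f1db
migration, here the *with* options just spell out common pgloader defaults,
and `seasonraces` is a hypothetical view name standing in for a view you
would want materialized during the migration (adjust both to your schema):

```
load database
     from mysql://root@localhost/f1db
     into postgresql:///f1db

 with include drop, create tables, create indexes, reset sequences

 materialize views seasonraces;
```

A materialized view here means pgloader creates the view in MySQL, migrates
its contents as if it were a table, and drops the view again when done.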
## The Command
To load data with [pgloader](http://pgloader.tapoueh.org/) you need to


@ -5,6 +5,58 @@ embedded nature makes it a source of migrations when a project now needs to
handle more concurrency, which [PostgreSQL](http://www.postgresql.org/) is
very good at. pgloader can help you there.
## In a Single Command Line
You can migrate a whole SQLite database to PostgreSQL in a single command line:
$ createdb chinook
$ pgloader https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite pgsql:///chinook
Done! All with the schema, data, constraints, primary keys and foreign keys,
etc. We also see an error with the Chinook schema that contains several
primary key definitions against the same table, which is not accepted by
PostgreSQL:
2017-06-20T16:18:59.019000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-06-20T16:18:59.236000+02:00 LOG Fetching 'https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite'
2017-06-20T16:19:00.664000+02:00 ERROR Database error 42P16: multiple primary keys for table "playlisttrack" are not allowed
QUERY: ALTER TABLE playlisttrack ADD PRIMARY KEY USING INDEX idx_66873_sqlite_autoindex_playlisttrack_1;
2017-06-20T16:19:00.665000+02:00 LOG report summary reset
table name read imported errors total time
----------------------- --------- --------- --------- --------------
fetch 0 0 0 0.877s
fetch meta data 33 33 0 0.033s
Create Schemas 0 0 0 0.003s
Create SQL Types 0 0 0 0.006s
Create tables 22 22 0 0.043s
Set Table OIDs 11 11 0 0.012s
----------------------- --------- --------- --------- --------------
album 347 347 0 0.023s
artist 275 275 0 0.023s
customer 59 59 0 0.021s
employee 8 8 0 0.018s
invoice 412 412 0 0.031s
genre 25 25 0 0.021s
invoiceline 2240 2240 0 0.034s
mediatype 5 5 0 0.025s
playlisttrack 8715 8715 0 0.040s
playlist 18 18 0 0.016s
track 3503 3503 0 0.111s
----------------------- --------- --------- --------- --------------
COPY Threads Completion 33 33 0 0.313s
Create Indexes 22 22 0 0.160s
Index Build Completion 22 22 0 0.027s
Reset Sequences 0 0 0 0.017s
Primary Keys 12 0 1 0.013s
Create Foreign Keys 11 11 0 0.040s
Create Triggers 0 0 0 0.000s
Install Comments 0 0 0 0.000s
----------------------- --------- --------- --------- --------------
Total import time 15607 15607 0 1.669s
You may still have special cases to take care of, though. In advanced cases
you can use the pgloader command.
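
As a sketch, a command file for the Chinook case above could start like
this; the source can be the same HTTP(S) URL used in the one-liner, and the
*with* options shown just spell out common pgloader defaults (tweak as
needed):

```
load database
     from https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite
     into postgresql:///chinook

 with include drop, create tables, create indexes, reset sequences;
```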
## The Command
To load data with [pgloader](http://pgloader.io/) you need to