In case of an exceptional condition leading to termination of the pgloader
program, we tried to use log-message after the monitor should already have
been closed. Also, the fixed 0.3s delay used to let the latest messages out
looks like poor design.
This patch attempts to remedy the situation on both counts: refrain from
using a closed-down monitoring thread, and properly wait until it's done
before returning to the shell.
See #583.
pgloader has had support for PostgreSQL SET parameters (GUCs) from the
beginning, and in the same vein it might be necessary to tweak MySQL
connection parameters and allow pgloader users to control them.
See #337 and #420 where net_read_timeout and net_write_timeout might need to
be set in order to be able to complete the migration, due to high volumes of
data being processed.
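A sketch of the new clause in a load command (the timeout values here are
arbitrary examples):
SET MySQL PARAMETERS net_read_timeout = '600', net_write_timeout = '600'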
In the SQLite and MySQL cases, expand on the simple case before detailing
the command language. With our solid defaults, most times a single command
line with the source and target connection strings is going to be all you
need.
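For instance (an illustrative invocation, with made-up database names):
$ pgloader mysql://user@localhost/dbname pgsql:///dbname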
Upon GitHub's suggestion, add a LICENSE file to make it clear we are using
The PostgreSQL License. Assign the copyright to The PostgreSQL Global
Development Group as is done for PostgreSQL, since that seems to be the
right thing to do.
To properly handle the on-error-stop condition, make it a specific pgloader
condition with a specific handling behavior. In passing, add some more log
messages for surprising conditions.
Fix#546.
In the prepare-pgsql-database method we were logging too much detail, such
as DDL warnings about if-not-exists clauses for successful queries. Those
logs are to be found in the PostgreSQL server logs anyway.
Also fix trying to create or drop a "nil" schema.
In this patch we hard-code some cases when we know the log message won't be
displayed anywhere so as to avoid sending it to the monitor thread. It
certainly is a modularity violation, but given the performance impact...
The code used to take the content-length HTTP header into account to load
that number of bytes in memory from the remote server. Not only is it better
to use a fixed-size, allocated-once buffer for that (now 4k), but doing so
also allows downloading content whose content-length you don't know.
In passing tell the HTTP-URI parser rule that we also accept https:// as a
prefix, not just http://.
This allows running pgloader in such cases:
$ pgloader https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite_AutoIncrementPKs.sqlite pgsql:///chinook
And it just works!
Get back in line with what the documentation says, and also fix the case for
default MySQL migrations now that we target a PostgreSQL schema with the
same name as the MySQL database name.
Still an open question: should we also register the new schema on the
search_path by default?
ALTER DATABASE ... SET search_path TO public, newschema, ...;
Is it more of a POLA violation to alter the search_path or to not do it?
Fix#582.
The “magic” options --batch and --heap-reserve are processed by CCL itself
before pgloader gets to see them, so exercise that in the testing
environment.
Currently the default value getdate() is replaced by 'now', which creates
statements like:
CREATE TABLE ... created timestamp default 'now' ...
which leads to table definitions like:
default '2017-06-12 17:54:04.890129'::timestamp without time zone
This is because 'now' is evaluated when creating the table.
This commit fixes the issue by using CURRENT_TIMESTAMP as default
instead of 'now'.
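With the fix, the generated DDL keeps the default dynamic, along these
lines:
CREATE TABLE ... created timestamp default CURRENT_TIMESTAMP ...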
* Uninstall PostgreSQL during Travis setup
Recent build images start with PostgreSQL running. Uninstall it so we
can install and run a specific version.
* Skip authentication in Travis builds
The wrong combination of connection parameters will cause psql and other
utilities to prompt for a password, stalling the Travis build
indefinitely.
* Move PostgreSQL setup for Travis builds into script
* Use a newer version of SBCL in Travis builds
Recent versions of bordeaux-threads require ASDF >= 3.1.
The previous commit makes it possible to convert typemods for various
text types. In MSSQL it's possible to create a column like varchar(max).
Internally this is reported as varchar(-1), which results in a CREATE
TABLE statement containing e.g. varchar(-1).
This patch drops the typemod when it's -1 (max).
It's based on Dimitri's patch, slightly modified by myself.
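For illustration (made-up table and column names), a source column declared
varchar(max) should now come out as plain varchar on the PostgreSQL side:
CREATE TABLE t (v varchar);   -- rather than the invalid v varchar(-1)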
If a custom CAST rule is defined (e.g. CAST type varchar to varchar), the
original typemod gets lost. This commit is based on Dimitri's patch from
#571 and adds typemod support for "char", "nchar", "varchar", "nvarchar"
and "binary".
The previous coding would discard any work done at the apply-casting-rules
step when adding source-specific smarts about handling defaults, because of
what looks like negligence and bad tests. A test case scenario exists but
was not exercised :(
Fix that by defaulting the default value to the one given back at the
apply-casting-rules stage, where we apply the "drop default" clause.
It turns out that in some cases SQLite will not list its primary key indexes
in the pragma index_list(), and that's related to an implementation detail
about using its internal btree structure when rowid can be exploited as the
integer pkey.
Reverse engineer that behavior in pgloader so that the PostgreSQL schema
contains the primary keys even when they are implicit. In doing that we must
be careful that the pkeys might also appear explicitly, in which case we
refrain from declaring them twice.
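For instance (an illustrative table), the following SQLite table uses the
rowid as its implicit integer pkey and lists no primary key index in pragma
index_list():
create table track (id integer primary key, name text);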
SQLite catalogs are not my favorite, not to say hackish as hell.
Fixes#563.
Given the new SQLite test case from issue #563 we see that pgloader doesn't
handle errors gracefully in the post-copy stage. That's because the API was
not properly defined: we should use pgsql-execute-with-timing rather than
another construct here, because it allows the "on error resume next"
behavior we want with after-load DDL statements.
See #563.
It turns out that this catalog table in SQLite may or may not exist
depending on whether the current schema actually uses the feature. So we
need to learn to query the catalogs about the catalogs before querying the
catalogs to learn about the current schema. Thanks SQLite.
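A minimal sketch of such a pre-check, using sqlite_master (which always
exists):
select count(*) from sqlite_master where type = 'table' and name = 'sqlite_sequence';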
Let's make it easy for interested parties to actually buy a license, and
offer several levels of “partnership” with the project. Be a Sponsor, a
Partner or even consider having pgloader custom development made for you.
Oracle™ support anyone?
As reported, we still have some complex situations to deal with when
building from sources in some environments: Windows and Mac OS X MacPorts
are not in very good shape at the moment.
Add some good content to the README to better help people in those
environments complete their build.
Fix#444.
In order to support custom SQL files with several queries and the psql-like
advanced \i feature, we have our own internal SQL parser in pgloader. The
PostgreSQL variant of SQL is pretty complex and allows dollar-quoting and
other subtleties that we need to take care of.
Here we fix the case of a dollar sign ($) appearing inside single-quoted
text (such as a regexp), so that it is read as part of the quoted text
rather than as the start of a new dollar-quoting tag.
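For example (an illustrative query), the parser now reads the following
without opening a dollar quote:
select regexp_replace(name, '\d+$', '') from tracks;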
In passing we also fix reading double-quoted identifiers, even when they
contain a dollar sign. After all the following is totally supported by
PostgreSQL:
create table dollar("$id" serial, foo text);
select "$id", foo from dollar;
Fix#561.
It turns out that the SQLite catalogs have a way of registering whether a
column is an auto_increment. You have to look into the sqlite_sequence
catalog for the name of the table the column is found in, and when there's a
non-zero entry there and the table has a single pk-id column of integer
type, then it's safe to assume that we have an "auto_increment" column.
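A sketch of that lookup (table name made up):
select seq from sqlite_sequence where name = 'tracks';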
The current way to handle this property is suboptimal in pgloader too, where
we need to attach an "extra" slot to our column representation with a value
of the "auto_increment" string. But at least it's easy to hack the code
here... anyway.
Fix#556.
It turns out we forgot to add support for internal catalog munging clauses
to SQLite support. The catalogs being normalized means there's no extra work
here other than allowing the parser to accept those clauses and then pass
them over to our generic `copy-database' method implementation.
It is to be noted that SQLite has no support for schemas as per the standard
and PostgreSQL, so when we inspect the database schema we create a nil entry
here. It's then not possible to ALTER SCHEMA nil RENAME TO 'target',
unfortunately, but it's easy enough to SET search_path to 'target' anyway,
as shown in the modified test case.
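In the load command that's a one-liner along these lines (a sketch, the
target schema name made up):
SET search_path to 'target'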
Fix#552.
When the source is not recognized, pgloader would not always output a useful
error message with the hint about using --type, as per #248. Here we fix the
conditions applying to this error message.
In #539 it appears that we fail to honor some actions, and it seems to be
because of sloppy reasoning about how to make sense of all of them. It could
well be that we have too many options for them to make sense in all possible
combinations, but for now it looks safe to bet on an easy fix...
The concurrent nature of pgloader made it non-obvious where to implement
the timers properly, and as a result the tracking of how long it took to
actually transfer the data was... just wrong.
Rather than trying to measure the time spent in any particular piece of the
code, we now emit "start" and "stop" stats messages to the monitor thread at
the right places (which are way easier to find, in the worker threads) and
have the monitor figure out how long it really took.
Fix#506.
Now that we have fixed the output of the per-table total timing, we can show
only that timing by default. With more verbosity pgloader adds the extra
columns, and in computer-oriented formats (json, csv, copy) all the details
are always provided, of course.
See #506.
The handler-case form installed would catch any non-fatal warning and would
also fail to display any error to the user. Both are wrong behaviors, which
this patch fixes by using *error-output* (that's stderr) explicitly for
anything that may happen while loading the user-provided code.
Fix#526.
When the intarray extension is installed, our PostgreSQL catalog query fails
because we then have more than one operator resolving smallint[] <@
smallint[]. It is easy to avoid that problem by casting to integer[],
smallint being an implementation detail here anyway.
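The shape of the workaround, on a self-contained example (the array literals
are arbitrary):
select '{1,2}'::smallint[]::integer[] <@ '{1,2,3}'::smallint[]::integer[];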
Fix#532.
Force double-quoting of object names in DROP INDEX commands by using the
format directive ~s. The names of the objects we are dropping usually come
from a PostgreSQL catalog, but might still contain force-quote conditions
such as starting with a number, as shown in #530.
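For instance (index name made up), an index whose name starts with a digit
only parses when quoted:
DROP INDEX "1_idx";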
This fix certainly means we will have to review all the DDL formatting we do
in pgloader and apply a single quoting method throughout. The simpler one is
of course to force-quote every object name in "", but it might not be the
smartest one (what if some sources send already-quoted object names? that
needs a check), and it's certainly not the prettiest way to go about it:
people usually like to avoid unnecessary quotes, calling them clutter.
Fix#530.
As documented in https://github.com/github/linguist#overrides and from the
conversation in https://github.com/github/linguist/issues/3540 we add a
.gitattributes file to the project wherein we pretend that the sql files are
all vendored-in.
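The override amounts to a one-line rule along these lines (per the linguist
documentation):
*.sql linguist-vendored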
This should allow GitHub to realize that pgloader is all about Common Lisp
and not a single line of PLpgSQL...
Given a test case and some reading of the FreeTDS source code, it appears
that the XML data type is sent on the wire as (unicode) text. This patch
makes pgloader aware of that and also revisits the choice of casting XML to
the PostgreSQL XML data type (thanks to the test case, where we see it just
works without surprise).
Fix#503.
Add a cast rule to support tinyint being an “identity” type in MS SQL, which
means it derives its values from a sequence. We didn't address the whole
MS SQL integer type tower here, and suspect we will have to add more in the
future.
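On the PostgreSQL side that means a sequence-backed column; a minimal
sketch, assuming the identity column lands as a smallint (table and column
names made up):
CREATE TABLE t (id smallserial);   -- smallint with a sequence-backed default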
Fix#528 where I had access to a test case and could easily reproduce the
bug, thanks!
In advanced projections it could be that we call the transformation function
for some input fields twice. This is a bug that manifests in particular when
the output of the transformation can't be used/parsed again by the same
function, as shown in the reported bug.
Fix#523.