The casting support for DB3 was hand-crafted and didn't get upgraded to
using the current CAST grammar and facilities, for no other reason than
lack of time and interest. It so happens that implementing it now fixes two
bug reports.
Bug #938 is about conversions defaulting to "not null" columns, and that's
due to using the internal pgloader catalogs, where the target column's
nullable field is NIL by default, which doesn't make much sense. With
support for user-defined casting rules, the default is nullable columns, so
that's kind of a free fix.
Fixes #927.
Fixes #938.
When adding support for Mat Views to MS SQL, we added support for the view
names to be fully qualified (with their schema), using a cons to host
the (schema . name) data.
Well, turns out the MySQL side of things didn't get the memo.
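The shape of the problem can be sketched in Python (pgloader itself is
Common Lisp; the function name and the tuple standing in for the cons are
illustrative only): a table name may arrive either as a plain string or as
a (schema . name) pair, and every consumer has to handle both shapes.

```python
def split_qualified(name):
    """Normalize a table name that may be either a plain string or a
    (schema, name) pair, mirroring pgloader's (schema . name) cons.
    Returns a (schema, table) tuple, with schema None when unqualified."""
    if isinstance(name, tuple):
        schema, table = name
        return schema, table
    return None, name
```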
Blind attempt at fixing #932, see also #918.
In some cases when migrating from MySQL we want to transform data from a
binary representation to a hexadecimal number. One such case is going from
the MySQL binary(16) type to the PostgreSQL UUID data type.
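As a sketch of that transform in Python (pgloader's own transforms are
written in Common Lisp; this is only an illustration), the 16 raw bytes map
directly onto the hex text form of a UUID:

```python
import uuid

def bin16_to_uuid(raw: bytes) -> str:
    """Render a 16-byte binary value as a PostgreSQL-compatible UUID
    string, reading the bytes in network (big-endian) order."""
    assert len(raw) == 16
    return str(uuid.UUID(bytes=raw))
```

For example, `bin16_to_uuid(bytes(range(16)))` gives
`00010203-0405-0607-0809-0a0b0c0d0e0f`.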
Fixes #904.
When interactively recompiling the code in Emacs/SLIME, extra closing
parens are silently ignored by Emacs before the current form is sent to the
CL compiler. When compiling from the source files, of course, that doesn't
work.
See #910.
It turns out that MS SQL Server uses its own representation for GUIDs,
with mixed endianness. In this blind patch we attempt to parse the binary
vector the right way when building our internal representation of a UUID,
before making a string out of it for Postgres, which doesn't use the same
mixed-endianness format.
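A Python sketch of the same idea (names here are illustrative, and
Python's standard uuid module happens to already know this layout: the
first three GUID fields are stored little-endian, the rest big-endian):

```python
import uuid

def mssql_guid_to_uuid(raw: bytes) -> str:
    """Parse a 16-byte MS SQL GUID, whose first three fields are stored
    little-endian, into a standard UUID string for PostgreSQL."""
    assert len(raw) == 16
    return str(uuid.UUID(bytes_le=raw))
```

Note how the first three groups come out byte-swapped compared to a
straight big-endian read: `mssql_guid_to_uuid(bytes(range(16)))` yields
`03020100-0504-0706-0809-0a0b0c0d0e0f`.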
Fixes #910. Maybe?
When calling the create_distributed_table() function, the column name is
given as a literal parameter to the function and should be quoted that way,
with single quotes. In particular, if our column-name is already
double-quoted, we need to get rid of those extra quotes.
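A minimal sketch of that quoting rule in Python (the helper name is made
up; pgloader does this in Common Lisp): strip any identifier
double-quotes, then wrap the name in single quotes as a literal.

```python
def column_name_literal(column_name: str) -> str:
    """create_distributed_table() wants its distribution column as a
    string literal: drop surrounding identifier double-quotes, then wrap
    in single quotes, doubling any embedded single quote."""
    name = column_name.strip('"')
    return "'" + name.replace("'", "''") + "'"
```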
Also, the source-table-name might be a cons object when qualified, or a
plain string when not schema-qualified. Adjust the citus-find-table code to
take that into account.
pgloader parses the COPY error messages to find the line number where
we have a problem in the batch, allowing for a quite efficient recovery
mechanism where it's easy enough to just skip the known faulty input.
Now, some error messages do not contain a COPY line number, such as fkey
violation messages:
  Database error 23503: insert or update on table "produtos" violates
  foreign key constraint "produtos_categorias_produtos_fk"
In that case, rather than failing the whole batch at once (thanks to the
previous commit; we used to just fail badly before that), we can retry the
batch one row at a time until we find our culprit, and then continue one
input row at a time.
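The retry strategy can be sketched like this in Python (the names and the
copy callback are illustrative, not pgloader's actual API):

```python
def retry_batch(rows, copy_rows):
    """Fallback when a COPY error carries no line number: resend the
    batch one row at a time, keep the good rows, and collect the
    culprit rows together with their errors."""
    rejected = []
    for row in rows:
        try:
            copy_rows([row])
        except Exception as error:
            rejected.append((row, error))
    return rejected
```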
Fixes #836.
We don't know how to parse the PostgreSQL condition sent when there is a
fkey error... and the message would not contain the row number where that
error happened anyway.
At the moment it means that the retry-batch facility errors out for failing
to realize that NIL isn't a number on which we can do arithmetic, which in
itself is a little sad.
In this patch we install a condition handler that knows how to deal with
retry-batch failing, so that pgloader may try and continue rather than
appear locked up to the user while, I suspect, the debugger is waiting for
input.
See #836, where that's the first half of the fix. The real fix is to handle
foreign key errors correctly of course.
In some cases we have to quote column names and it's not been done yet, for
instance when dealing with PostgreSQL as a source database.
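For illustration, PostgreSQL identifier quoting boils down to wrapping the
name in double quotes and doubling any embedded ones; a Python sketch (a
hypothetical helper, not pgloader's code):

```python
def quote_ident(name: str) -> str:
    """Double-quote a PostgreSQL identifier, doubling embedded quotes,
    so mixed-case or reserved-word column names survive."""
    return '"' + name.replace('"', '""') + '"'
```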
Patch mostly from @m0n5t3r, only cosmetic changes applied. Thanks!
Fixes #905.
When migrating from PostgreSQL, pgloader takes the index and foreign key
definitions from the server directly, using pg_get_indexdef() and other
catalog functions. That's very useful in that it embeds all the necessary
quoting of the objects, and schema-qualifies them.
Of course we can't use the SQL definition as on the source system when we
target a schema name that is different from the source system, which the
code didn't realize before this patch. Here we simply invalidate the
pre-computed SQL statement and resort to using the classic machinery to
build the statement from pieces again.
Fixes #903.
When using a CSV header, we might find fields in a different order than the
target table columns, and maybe not all of the fields are going to be read.
Take account of the header we read rather than expecting the header to look
like the target table definition.
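A Python sketch of reading by header rather than by position (using the
standard csv module; the function name and shapes are illustrative):

```python
import csv
import io

def project_fields(csv_text, target_columns):
    """Read rows according to the CSV header line, then project and
    reorder the fields onto the target column list, ignoring any extra
    fields the file may carry."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [[row.get(col) for col in target_columns] for row in reader]
```

With a header `b,a,c` and target columns `["a", "b"]`, the values come
back reordered and the `c` field is dropped.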
Fixes #888.
Killing tasks in the error handling must be done carefully, and given this
testing session it seems better to refrain from doing it when erroring out
at COPY init time (a missing column is an example of that). The approach
around that is still very much ad hoc rather than systematic.
In passing, improve the `make save` option for producing a binary image:
have the make recipe respect the CL variable. The command-line option
differences were already accounted for.
This allows creating tables in any target tablespace rather than the default
one, and is supported for the various sources that already have support for
the ALTER TABLE clause.
Some MySQL schema level features (on update current_timestamp) are migrated
to stored procedures and triggers. We would log the CREATE PROCEDURE
statements as LOG level entries instead of SQL level entries, most likely a
stray devel/debug choice.
It turns out that SQLite only creates an entry in its sqlite_sequence
catalog when some data makes it to a table using a sequence, not at create
table time. It means that pgloader must do some more catalog querying to
figure out whether a column is "autoincrement", and apparently the only way
to get at the information is to parse the SQL statement stored in the
sqlite_master table.
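The detection can be sketched in Python (a hypothetical helper; pgloader's
actual parsing lives in its Common Lisp SQLite source), matching the
AUTOINCREMENT keyword in the statement kept in sqlite_master:

```python
import re

def is_autoincrement(create_table_sql: str) -> bool:
    """Detect AUTOINCREMENT by inspecting the CREATE TABLE statement kept
    in sqlite_master, since sqlite_sequence is only populated lazily."""
    return re.search(r"\bautoincrement\b", create_table_sql,
                     re.IGNORECASE) is not None
```

The CREATE TABLE text itself comes from a query such as
`select sql from sqlite_master where type = 'table' and name = ?`.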
Fixes #882.
It's fair game to handle errors and issue log entries instead when using
the pgloader binary image, as the interactive debugger distracts users a
lot. That said, as a developer the interactive debugger is very useful.
In passing, install some experimental thread-killing behavior in case of
errors when using the on-error-stop setting (the default for database
migrations).
Materialized views without an explicit schema name are supported, but would
then raise an error when trying to use destructuring-bind on a string
rather than the expected (cons schema-name table-name). This patch fixes
that.
That helps both to give an overview of what pgloader is capable of doing
with a database migration, and to document that some sources don't have
full support for some features yet.
The latter is not tested yet, but should have no impact if not used. Given
how rare it is that I get a chance to play around with a MS SQL instance
anyway, it might be better to push blind changes for it when it doesn't
impact existing features…
We have a lot of new features to document. This is a first patch about that,
some more work is to be done. That said, it's better than nothing already.
The previous fix was wrong because it missed the point: rather than
unquoting column names in the table definition when matching the column
names in the index definition, we should have quoted the index column names
when needed in the first place.
Fixes #872 for real this time.