It turns out recent changes (adding support for MS SQL partial/filtered
indexes) broke the SQLite index support, so fix it by using the
pgsql-index structure rather than the specific sqlite-idx one.
In passing, improve detection of PRIMARY KEY indexes, which was still
lacking. This work showed that the introspection done by pgloader was
wrong; it's way crazier than we thought, so adjust the code to loop
over PRAGMA calls for each object we inspect.
While adding PRAGMA calls, add support for foreign keys too: the code
infrastructure now makes that easy.
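As a rough illustration of the per-object PRAGMA loop described above (pgloader itself is written in Common Lisp; this Python sketch only mirrors the idea, and the table names and schema are invented for the demo):

```python
import sqlite3

# Introspect a SQLite database by looping over PRAGMA calls for each
# table: table_info for PRIMARY KEY detection, index_list/index_info
# for indexes, and foreign_key_list for foreign keys.
db = sqlite3.connect(":memory:")
db.executescript("""
  create table parent (id integer primary key, name text);
  create table child  (id integer primary key,
                       parent_id integer references parent(id));
  create index idx_child_parent on child (parent_id);
""")

catalog = {}
tables = [r[0] for r in
          db.execute("select name from sqlite_master where type = 'table'")]
for table in tables:
    # PRAGMA table_info: the last column (pk) marks PRIMARY KEY members
    pkeys = [r[1] for r in db.execute(f"pragma table_info({table})") if r[5]]
    # PRAGMA index_list, then one extra index_info call per index
    indexes = {}
    for row in db.execute(f"pragma index_list({table})"):
        idx_name = row[1]
        cols = [r[2] for r in db.execute(f"pragma index_info('{idx_name}')")]
        indexes[idx_name] = cols
    # PRAGMA foreign_key_list: (from column, referenced table, to column)
    fkeys = [(r[3], r[2], r[4]) for r in
             db.execute(f"pragma foreign_key_list({table})")]
    catalog[table] = {"pkey": pkeys, "indexes": indexes, "fkeys": fkeys}
```

Note how every piece of metadata needs its own PRAGMA call per object, which is why the introspection has to be a loop rather than a single catalog query.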
Make it work on the second run, when the triggers and functions have
already been deployed, by doing the DROP of the function and trigger
before we CREATE the table, then CREATE them again: we need to split
the list again.
The newid() function seems to be equivalent to the newsequentialid() one
if I'm to believe issue #204, so let's just add that assumption in the
code.
Fix#204.
The WIP branch about better error messages made its way into the
mainline code, so switch back to the mainline version as available
directly in Quicklisp.
See https://github.com/nikodemus/esrap/issues/26.
The first error of a batch was lost somewhere in the recent changes. My
current best guess is that the rewrite of the copy-batch function made
the handler-bind form set up by the handling-pgsql-notices macro
ineffective, but I can't see why that is.
See #85.
The Common Lisp default printer is nice enough to know how to print
symbols as strings, but that won't cut it when the symbol :is-not-null
needs to be printed out as "is not null", without the dashes.
See #365.
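The transformation itself is trivial; here is a sketch of the idea in Python (pgloader does this in Lisp, and the function name here is made up):

```python
def keyword_to_sql(symbol):
    """Turn a keyword-style symbol name such as ':is-not-null' into its
    SQL spelling 'is not null': drop the leading colon, then replace
    the dashes with spaces. A sketch of the idea, not pgloader's code."""
    return symbol.lstrip(":").replace("-", " ").lower()
```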
We should not block all processing just because we can't parse an index.
The best we can do tonight is to try creating the index without the
filter; ideally we would skip building the index entirely.
That's for a later effort though, it's running late here.
See #365.
The only case covered by a test is "([deleted]=(0))", which showed
through a tad too much in the current implementation of the MS SQL
index filter parsing. Try to prepare better for the next filters.
Next step: adding some test cases.
See #365.
MS SQL has a notion of a "filtered index" that matches the notion of a
PostgreSQL partial index: the index only applies to the rows matching
the index WHERE clause, or filter.
The WHERE clause in both cases is limited to simple expressions over a
single row of the base table at a time, so we implement a limited WHERE
clause parser for MS SQL filters and a transformation routine to
rewrite the clause in PostgreSQL slang.
In passing, we transform the filter constants using the same
transformation functions as in the CAST rules, so that e.g. a MS SQL
bit(1) value that got transformed into a PostgreSQL boolean is properly
translated, as in the following example:
MS SQL: "([deleted]=(0))" (that's from the catalogs)
PostgreSQL: deleted = 'f'
Of course the parser is still very badly tested, let's see what happens
in the wild now.
(Should) Fix#365.
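The two-step rewrite described above (parse the MS SQL catalog decoration, then run the same constant transformation as the matching CAST rule) can be sketched like this; the function names and the single-comparison pattern are made up for illustration, the real parser handles more shapes:

```python
import re

def mssql_bit_to_pg_bool(value):
    # mirrors a bit(1) -> boolean cast rule: 0 becomes 'f', 1 becomes 't'
    return "'t'" if value == "1" else "'f'"

def translate_filter(filter_text, cast_fns):
    """Rewrite a catalog filter like "([deleted]=(0))" into PostgreSQL
    syntax, e.g. "deleted = 'f'", applying the per-column constant
    transformation when one is registered (hypothetical helper)."""
    m = re.fullmatch(r"\(\[(\w+)\]=\((\w+)\)\)", filter_text)
    if m is None:
        return None  # unparseable filter: the caller may drop it
    column, constant = m.groups()
    if column in cast_fns:
        constant = cast_fns[column](constant)
    return f"{column} = {constant}"
```

Keeping the constant transformation shared with the CAST rules is what makes the bit(1) value land as a proper PostgreSQL boolean literal.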
The implementation uses the dynamic binding *on-error-stop* so it's also
available when pgloader is used as a Common Lisp library.
The (not-all-that-) recent changes made to the error handling make that
implementation straightforward enough, so let's finally do it!
Fix#85.
The PostgreSQL search_path allows multiple schemas, and referencing
types and other tables might even require several. Allow setting more
than one schema by using the fact that PostgreSQL schema names don't
need to be individually quoted, and by passing the exact content of
the SET search_path option down to PostgreSQL.
Fix#359.
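The pass-through approach amounts to something as simple as this sketch (the function name is invented; the point is only that the user-provided value is forwarded verbatim, since PostgreSQL itself parses the comma-separated schema list):

```python
def set_search_path_sql(search_path):
    """Build the SET statement from the user-provided search_path value
    without quoting or splitting it: PostgreSQL accepts a
    comma-separated list of schema names as-is."""
    return "SET search_path TO %s;" % search_path
```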
The previous code required a non-zero data length for all values
returned by MS SQL, which makes no sense for text-like values (an empty
string is fine). Also, the code was trimming spaces from both ends of
syb-char data, and testing shows that return type is used for varchar too.
Fix#366. Fix#368.
Once more we can't use an aggregate over a text column in MS SQL to
build the index definition from its catalog structure, so we have to do
that in the lisp part of the code.
Multi-column indexes are now supported, but filtered indexes still are a
problem: the WHERE clause in MS SQL is not compatible with the
PostgreSQL syntax (because of [names] and type casting).
For example we cast MS SQL bit to PostgreSQL boolean, so
WHERE ([deleted]=(0))
should be translated to
WHERE not deleted
And the code to do that is not included yet.
The following documentation page offers more examples of WHERE
expressions we might want to support:
https://technet.microsoft.com/en-us/library/cc280372.aspx
WHERE EndDate IS NOT NULL
AND ComponentID = 5
AND StartDate > '01/01/2008'
EndDate IN ('20000825', '20000908', '20000918')
It might be worth automating the translation to PostgreSQL syntax and
operators, but it's not done in this patch.
See #365, where the created index will now be as follows, which is a
problem because of being UNIQUE: some existing data won't reload fine.
CREATE UNIQUE INDEX idx_<oid>_foo_name_unique ON dbo.foo (name, type, deleted);
Add a pretty-print option where we try to be nice to the reader, and
don't use it in the CAST debug messages. Also allow working with the
real maximum length of column names rather than hardcoding it to 22...
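Computing the real width is a one-liner; a sketch of the idea (the function name is made up):

```python
def column_name_width(columns):
    """Compute the widest column name so aligned output can use the
    actual maximum instead of a hardcoded width."""
    return max((len(name) for name in columns), default=0)
```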
Having been given a test instance of a MS SQL database made it possible
to quickly fix a series of assorted bugs related to schema handling of
MS SQL databases. As it's the only source with a proper notion of
schemas that pgloader currently supports, it's no surprise we had them.
Fix#343. Fix#349. Fix#354.
It turns out sloppy SQL code made its way into pgloader, wherein the
GROUP BY clause of the foreign key listing didn't reference the whole
set of non-aggregated output columns.
Thanks to thiagokronig for the new query, which fixes#345.
The new ALTER TABLE facility allows acting on tables found in the MySQL
database before the migration happens. In this patch the only provided
actions are RENAME TO and SET SCHEMA, which fixes#224.
In order to be able to provide the same option for MS SQL users, we will
have to make it work at the SCHEMA level (ALTER SCHEMA ... RENAME TO
...) and modify the internal schema struct so that the schema slot of
our table instances holds a schema instance rather than its name.
Lacking MS SQL test database and instance, the facility is not yet
provided for that source type.
That means we no longer eagerly load it when we think we will need it,
and also refrain from unloading it from the binary at image saving time.
In my local tests, doing so fixes#330 by avoiding the error entirely in
the docker image, where obviously the libs found at build time are found
again at the same place at run time.
SQLite types include "text nocase" apparently, so add "nocase" as one of
the managed noise words. It might be time we handle those the other way
round, with a whitelist of expected tokens somewhere in the type
definition rather than a blacklist of unknown words to exclude...
Anyway, fix#350.
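The blacklist approach mentioned above amounts to something like this sketch (the word list here is illustrative, not pgloader's actual one):

```python
# Strip known noise words from a SQLite type declaration before
# matching it against the base type; "text nocase" should cast the
# same as plain "text".
NOISE_WORDS = {"nocase", "unsigned", "short", "varying", "native"}

def normalize_sqlite_type(typename):
    words = typename.lower().split()
    return " ".join(w for w in words if w not in NOISE_WORDS)
```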
It should return the fetched catalog rather than the count of objects,
which is only used for statistics purposes. Fix#349.
This problem once again shows that we lack a proper testing environment
for the MS SQL source :/
We target CURRENT_TIMESTAMP as the PostgreSQL default value for such
columns even when the source default was spelled differently, on the
grounds that the type casting in PostgreSQL is doing the job, as in the
following example:
pgloader# create table test_ts(ts timestamptz(6) not null default CURRENT_TIMESTAMP);
CREATE TABLE
pgloader# insert into test_ts VALUES(DEFAULT);
INSERT 0 1
pgloader# table test_ts;
ts
-------------------------------
2016-02-24 18:32:22.820477+01
(1 row)
pgloader# drop table test_ts;
DROP TABLE
pgloader# create table test_ts(ts timestamptz(0) not null default CURRENT_TIMESTAMP);
CREATE TABLE
pgloader# insert into test_ts VALUES(DEFAULT);
INSERT 0 1
pgloader# table test_ts;
ts
------------------------
2016-02-24 18:32:44+01
(1 row)
Fix#341.
The PostgreSQL COPY protocol requires an explicit initialization phase
that may fail, and in this case the Postmodern driver transaction is
already dead, so there's no way we can even send ABORT to it.
Review the error handling of our copy-batch function to cope with that
fact, and add some logging of non-retryable errors we may have.
Also improve the thread error reporting when using a binary image, from
where it might be difficult to open an interactive debugger, while still
keeping the full-blown Common Lisp debugging experience for the project
developers.
Add a test case for a missing column as in issue #339.
Fix#339, see #337.
Someday I should either stop working on pgloader in between other things
or have a better test suite, including MS SQL and all. Probably both.
And read compiler notes and warnings too, while at that...
This fixes#141 again, for users forcing MySQL bigint(20) into
PostgreSQL bigint types so that foreign keys can be installed. To this
effect, a cast rule such as the following is needed:
cast type bigint when (= 20 precision) to bigint drop typemod
Before this patch, this user provided cast rule would also match against
MySQL types "with extra auto_increment", and it should not.
If you're having the problem that this patch fixes on an older pgloader
that you can't or won't upgrade, consider the following user provided
set of cast rules to achieve the same effect:
cast type bigint with extra auto_increment to bigserial drop typemod,
type bigint when (= 20 precision) to bigint drop typemod
It turns out that the MySQL catalog always stores default values as
strings, even when the column itself is of type bytea. In some cases
it's then impossible to derive the expected bytea from the string.
In passing, move some code around to fix dependencies and make it
possible to issue log warnings from the default value printing code.
We just tagged the repository as version 3.3.0.50 to be able to release
an experimental pgloader bundle. The first commit after that tag should
then change the version string.
Using the Quicklisp bundle facility it is possible to prepare a
self-contained archive of all the code needed to build pgloader.
Doing that should allow users to easily build pgloader when they are
behind a restrictive proxy, and packagers to work from a source tarball
that has very limited build dependencies.
The decision to use lots of different packages in pgloader has quite
strong downsides at times, and the manual management of dependencies is
one of them, in particular how to avoid circular ones.
On the theory that it's a better service to the user to refuse to do
anything at all rather than ignore their commands, print out FATAL
errors when options are used that are incompatible with a load command
file.
See #327 for a case where this did happen.
In passing, tweak our report code to avoid printing the footer when we
didn't print anything at all previously.
See #328 where we are lacking useful stack trace in a --debug run
because of the previous talk-handler-bind coding, that was there to
avoid sinking the users into too many details. Let's try another
approach here.
In the issue #328 the --debug level output is not helpful because of an
encoding error in the logfile. Let's see about forcing the log file
external format to utf-8 then.