That means we no longer eagerly load it when we think we will need it,
and also refrain from unloading it from the binary at image saving time.
In my local tests, doing so fixes #330 by avoiding the error entirely in
the docker image, where the libs found at build time are obviously found
again at the same place at run time.
SQLite types include "text nocase" apparently, so add "nocase" as one of
the managed noise words. It might be time we handle those the other way
round, with a whitelist of expected tokens somewhere in the type
definition rather than a blacklist of unknown words to exclude...
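For illustration, the blacklist approach amounts to filtering known noise
words out of the tokenized type name; here is a minimal Common Lisp
sketch, with a made-up helper name and a non-exhaustive word list, not
the actual casting machinery:

;; sketch only: requires the split-sequence library
(defparameter *type-noise-words* '("nocase" "unsigned")
  "Illustrative, non-exhaustive list of tokens to ignore in type names.")

(defun strip-noise-words (ctype)
  "Return the first token of CTYPE that is not a known noise word."
  (find-if-not (lambda (token)
                 (member token *type-noise-words* :test #'string=))
               (split-sequence:split-sequence #\Space (string-downcase ctype))))

With that, (strip-noise-words "text nocase") returns "text", which is the
part we want to map to a PostgreSQL type.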
Anyway, fix #350.
It should return the fetched catalog rather than the count of objects,
which is only used for statistics purposes. Fix #349.
This problem once again shows that we lack a proper testing environment
for the MS SQL source :/
We target CURRENT_TIMESTAMP as the PostgreSQL default value for columns
even when the source default was different, on the grounds that the type
casting in PostgreSQL does the job, as in the following example:
pgloader# create table test_ts(ts timestamptz(6) not null default CURRENT_TIMESTAMP);
CREATE TABLE
pgloader# insert into test_ts VALUES(DEFAULT);
INSERT 0 1
pgloader# table test_ts;
ts
-------------------------------
2016-02-24 18:32:22.820477+01
(1 row)
pgloader# drop table test_ts;
DROP TABLE
pgloader# create table test_ts(ts timestamptz(0) not null default CURRENT_TIMESTAMP);
CREATE TABLE
pgloader# insert into test_ts VALUES(DEFAULT);
INSERT 0 1
pgloader# table test_ts;
ts
------------------------
2016-02-24 18:32:44+01
(1 row)
Fix #341.
The PostgreSQL COPY protocol requires an explicit initialization phase
that may fail, and in this case the Postmodern driver transaction is
already dead, so there's no way we can even send ABORT to it.
Review the error handling of our copy-batch function to cope with that
fact, and add some logging of non-retryable errors we may have.
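As a rough sketch of the idea, not the actual copy-batch code: wrap the
COPY initialization and treat a failure there as non-retryable, since the
transaction is already dead (the function name and logging here are
illustrative):

(defun start-copy-or-fail (table-name columns)
  "Open a COPY writer with Postmodern; on failure, log the condition and
   return NIL rather than trying to send anything on the dead transaction."
  (handler-case
      (cl-postgres:open-db-writer pomo:*database* table-name columns)
    (cl-postgres:database-error (condition)
      (format *error-output* "non-retryable error in COPY init: ~a~%" condition)
      nil)))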
Also improve the thread error reporting when using a binary image, from
which it might be difficult to open an interactive debugger, while still
keeping the full-blown Common Lisp debugging experience for the project
developers.
Add a test case for a missing column as in issue #339.
Fix #339, see #337.
Someday I should either stop working on pgloader in between other things
or have a better test suite, including MS SQL and all. Probably both.
And read compiler notes and warnings too, while I'm at it...
This fixes #141 again when users are forcing MySQL bigint(20) into
PostgreSQL bigint types so that foreign keys can be installed. To this
effect, a cast rule such as the following is needed:
cast type bigint when (= 20 precision) to bigint drop typemod
Before this patch, this user-provided cast rule would also match against
MySQL types "with extra auto_increment", which it should not.
If you're having the problem that this patch fixes on an older pgloader
that you can't or won't upgrade, consider the following user-provided
set of cast rules to achieve the same effect:
cast type bigint with extra auto_increment to bigserial drop typemod,
type bigint when (= 20 precision) to bigint drop typemod
It turns out that the MySQL catalog always stores default values as
strings, even when the column itself is of type bytea. In some cases,
it's then impossible to transform the expected bytea from a string.
In passing, move some code around to fix dependencies and make it
possible to issue log warnings from the default value printing code.
We just tagged the repository as version 3.3.0.50 to be able to release
an experimental pgloader bundle. The first commit after that tag should
then change the version string.
Using the Quicklisp bundle facility it is possible to prepare a
self-contained archive of all the code needed to build pgloader.
Doing that should allow users to easily build pgloader when they are
behind a restrictive proxy, and packagers to work from a source tarball
that has very limited build dependencies.
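For reference, preparing such a bundle is a one-liner from a REPL with
Quicklisp loaded; the system list and target directory below are only
illustrative:

;; copies the named systems and all their dependencies into a
;; self-contained source tree that then builds without network access
(ql:bundle-systems '("postmodern" "cl-csv" "lparallel")
                   :to #p"build/bundle/")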
The decision to use lots of different packages in pgloader has quite
strong downsides at times, and the manual management of dependencies is
one of them, in particular how to avoid circular ones.
On the theory that it's a better service to the user to refuse to do
anything at all rather than ignore their commands, print out FATAL
errors when options are used that are incompatible with a load command
file.
See #327 for a case where this did happen.
In passing, tweak our report code to avoid printing the footer when we
didn't print anything at all previously.
See #328 where we are lacking a useful stack trace in a --debug run
because of the previous handler-bind coding, which was there to avoid
drowning the users in too many details. Let's try another approach here.
In issue #328 the --debug level output is not helpful because of an
encoding error in the logfile. Let's see about forcing the log file
external format to utf-8 then.
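Something along these lines, opening the log stream with an explicit
external format (the path and stream handling here are illustrative, not
the actual logger code):

(with-open-file (logfile #p"/tmp/pgloader/pgloader.log"
                         :direction :output
                         :if-exists :append
                         :if-does-not-exist :create
                         :external-format :utf-8)
  (format logfile "log file opened with a utf-8 external format~%"))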
Thanks to a reproducible test case we can see that the MySQL default for
a varbinary column is an empty string, so tweak the transform function
byte-vector-to-bytea in order to cope with that.
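The gist of the tweak, as a simplified sketch: the real transform
produces a PostgreSQL bytea representation, here only the empty-string
guard is shown and the conversion helper is hypothetical:

(defun byte-vector-to-bytea (value)
  "Cope with MySQL's empty-string default for varbinary columns."
  (if (and (stringp value) (string= value ""))
      nil                          ; no sensible bytea default: use NULL
      (convert-to-bytea value)))   ; hypothetical: the unchanged conversion path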
Rather than trying hard to have PostgreSQL fully qualify the index name
with tricks around search_path setting at the time ::regclass is
executed, simply join on pg_namespace to retrieve that schema in a new
slot in our pgsql-index structure so that we can then reuse it when
needed.
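The catalog join itself is straightforward; an illustrative version of
the query, wrapped in a Postmodern call (the Lisp function name is
hypothetical):

(defun list-index-schemas (table-name)
  "Return (schema, index name) pairs for the indexes of TABLE-NAME."
  (pomo:query "select n.nspname, c.relname
                 from pg_index x
                 join pg_class c on c.oid = x.indexrelid
                 join pg_namespace n on n.oid = c.relnamespace
                where x.indrelid = $1::regclass"
              table-name))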
Also add a test case for the scenario, including both a UNIQUE
constraint and a classic index, because the DROP and CREATE/ALTER
instructions differ.
More than the syntax and API tweaks, this patch also makes it so that a
multi-file specification (using e.g. ALL FILENAMES IN DIRECTORY) can be
loaded with several files in the group in parallel.
To that effect, tweak again the md-connection and md-copy
implementations.
In the recent refactoring and improvements of parallelism, the index
creation would kick in before we know that the data is done being copied
over to the target table.
Fix that by maintaining a writers-count hashtable and only starting to
create indexes when that count reaches zero, meaning all the concurrent
tasks started to handle the COPY of the data are now done.
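A minimal sketch of that bookkeeping, assuming bordeaux-threads for
locking and a callback to trigger index creation; all names are
illustrative, and the count is assumed to have been incremented when
each COPY task started:

(defvar *writers-count* (make-hash-table :test 'equal))
(defvar *writers-lock*  (bt:make-lock "writers-count"))

(defun notify-writer-done (table-name create-indexes-fn)
  "Decrement the writer count for TABLE-NAME; when it reaches zero, all
   COPY tasks for that table are done and indexes may be created."
  (bt:with-lock-held (*writers-lock*)
    (when (zerop (decf (gethash table-name *writers-count* 0)))
      (funcall create-indexes-fn table-name))))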
It was worker-count and it's now exposed as workers in the WITH clause,
but we can actually keep it as worker-count in the internal API, and it
feels better that way.
Add the workers and concurrency settings to the LOAD commands for
database sources so that users can tweak them now, and add mentions of
them in the documentation too.
From the documentation string of the copy-from method as found in
src/sources/common/methods.lisp:
We allow WORKER-COUNT simultaneous workers to be active at the same time
in the context of this COPY object. A single unit of work consists of
several kinds of workers:
- a reader getting raw data from the COPY source with `map-rows',
- N transformers preparing raw data for PostgreSQL COPY protocol,
- N writers sending the data down to PostgreSQL.
The N here is set to the CONCURRENCY parameter: with a CONCURRENCY of
2, we start (+ 1 2 2) = 5 concurrent tasks, with a CONCURRENCY of 4 we
start (+ 1 4 4) = 9 concurrent tasks, of which only WORKER-COUNT may be
active simultaneously.
Those options should find their way into the remaining sources; that's
for a follow-up patch, though.
Have PostgreSQL always fully qualify the index related objects and SQL
definition statements when fetching the list of indexes of a table, by
playing with an empty search_path.
Also improve the whole index creation by passing the table object as the
context where to derive the table-name from, so that schema qualified
tables are taken into account properly.
In a previous commit the typemod matching code had been broken, and we
failed to notice that until now. Thanks to bug report #322 we just got
the memo...
Add a test case in the local-only MySQL database.
The regression testing facilities should be improved to be able to test
a full database, and then to dynamically create said database from code
or something to ease test coverage of those cases.
When creating the primary keys on top of the unique indexes, we might
still have errors (e.g. with NULL values). Make it so that a failure in
one pkey doesn't fail every other one, by having them all run within a
single connection rather than a single transaction.
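The shape of it, sketched with Postmodern and an illustrative statement
list: each ALTER TABLE runs as its own statement on the shared
connection, so a failure is logged and the loop simply moves on:

(defun create-primary-keys (pkey-statements)
  "Run each ALTER TABLE ... ADD PRIMARY KEY separately so that one
   failure (e.g. because of NULL values) does not abort the others."
  (dolist (sql pkey-statements)
    (handler-case (pomo:execute sql)
      (cl-postgres:database-error (condition)
        (format *error-output* "skipping pkey: ~a~%" condition)))))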
In order to avoid all concurrently prepared batches of rows getting sent
to the PostgreSQL COPY command at exactly the same time, randomly vary
the size of each batch between -30% and +30% of the batch rows parameter.
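In effect something like the following, where batch-rows is the
configured batch size and the helper name is made up:

(defun jittered-batch-size (batch-rows)
  "Return BATCH-ROWS adjusted by a random factor between -30% and +30%."
  (let ((jitter (floor (* batch-rows 3) 10)))
    (+ (- batch-rows jitter) (random (1+ (* 2 jitter))))))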
pgloader's parallel workload is still hardcoded, but at least the code
now uses clear parameters as input so that it will be possible in a later
patch to expose them to the end user.
The notions of workers and concurrency are now handled as follows:
- concurrency is how many tasks are allowed to happen at once; by
default we have a reader thread, a transformer thread and a COPY
thread all active for each table being loaded,
- worker-count is how many parallel threads are allowed to run
simultaneously and currently defaults to 8, which means that in a
typical migration from a database source, given the default
concurrency of 1 (3 threads), we might be loading up to 3 different
tables at any time.
The idea is to expose those settings to the user in the load file and as
command line options (such as --jobs) and see what it gives us. It might
help, e.g., to use more cores when loading a single CSV file.
As of this patch, there can still be only one reader thread, and the
number of transformer threads must be the same as the number of COPY
threads.
Finally, the user-defined projections for CSV-like files are now handled
in the transformation threads rather than in the reader thread...
Thanks to the Common Lisp character data type, it's easy for pgloader to
enforce always speaking to PostgreSQL in utf-8, and that's what has been
done from the beginning actually.
Now, without good reason for that, the first example of a SET clause
added to the docs was about how to set client_encoding, which should NOT
be done.
Fix that at the user level by removing the bad example from the docs and
adding a WARNING whenever the client_encoding is set to a known bad
value. It's a WARNING because we then simply force 'utf-8' anyway.
Also, completely review the format-vector-row function to avoid doing
double work with the Postmodern facilities we piggyback on. Previously
this was only done halfway, and the utf-8 conversion was actually done
twice.
Various Linux distributions provide SBCL without core-compression
enabled. On the other hand, Mac OSX (at least via `homebrew`) provides
SBCL with core-compression enabled. To make installation easier, teach
the make process to detect core-compression, and use it if possible.
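The detection amounts to asking the running SBCL whether the
:sb-core-compression feature is present, and only then passing
:compression to save-lisp-and-die; a simplified sketch of that build-time
step (how the Makefile actually drives it is left out):

;; only pass :compression when this SBCL was built with core compression
(apply #'sb-ext:save-lisp-and-die "pgloader.exe"
       :executable t
       (when (member :sb-core-compression *features*)
         (list :compression t)))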
We convert the default value call to newsequentialid() into a call to
uuid_generate_v1() from the PostgreSQL uuid-ossp extension, which seems
like the equivalent function.
The extension "uuid-ossp" needs to be installed in the target database.
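Installing it beforehand is a single statement on the target database,
for instance via Postmodern from an already established connection:

(pomo:execute "create extension if not exists \"uuid-ossp\"")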
(Blind) Fix #246.
When building from sources within the git environment, the version
number is ok, but it was wrong when building in the docker image. Fix
the version number to 3.3.0.50 to show that we're talking about a
development snapshot that is leading to version 3.3.1.
Yeah, 4-part version numbers. That happens, apparently.
Apparently it's quite common nowadays for people to use docker to build
and run software in a contained way, so provide users with the facility
they need in order to do that.
Following up on the recent refactoring effort, the IXF and DB3 source
classes didn't get the memo that they could piggyback on the generic
copy-database implementation. This patch implements that.
In passing, also simplify the instanciate-table-copy-object method for
copy subclasses that need specialization here, by using change-class and
call-next-method so as to reuse the generic code as much as possible.
In the previous refactoring patch that option mistakenly went away,
although it is still needed for MS SQL and it is planned to make use of
it in the other source types too...
See #316 for reference.
In order to share more code in between the different source types,
finally have a go at the quite horrible mess of anonymous data
structures floating around.
Having catalog and schema instances not only allows for code cleanup,
but will also allow us to implement some bug fixes and wishlist items
such as mapping tables from one schema to another.
Also, supporting database sources having a notion of "schema" (in
between "catalog" and "table") should get easier, including getting
on par with MySQL in the MS SQL support (materialized views have been
asked for already).
See #320, #316, #224 for references and a notion of progress being made.
In passing, also clean up the copy-database methods for database source
types, so that they all use a fetch-metadata generic function, a
prepare-pgsql-database generic function and a complete-pgsql-database
generic function. Actually, a single method does the job here.
The responsibility of introspecting the source to populate the internal
catalog/schema representation is now held by the fetch-metadata generic
function, which in turn calls the specialized implementations of
list-all-columns and friends. Once the catalog has been fetched, an
explicit CAST call is then needed before we can continue.
Finally, the fields/columns/transforms slots in the copy objects are
still being used by the operative code, so the internal catalog
representation is only used up to starting the data copy step, where the
copy class instances are then all that's used.
This might be refactored again in a follow-up patch.