When targeting an already existing PostgreSQL database, the columns
might have been reordered. Add the column name list to the COPY command
we send, so that the column mapping is figured out automatically.
Fixes #509.
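For example, rather than a bare COPY, we now list the target columns
explicitly (table and column names here are illustrative):

    COPY target_table (id, name, created_at) FROM STDIN;

PostgreSQL then maps the incoming data to the named columns, whatever
their position in the table definition.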
It turns out that it's possible, and not too complex, when using the
FreeTDS driver, to enforce UTF-8 as the client encoding for MS SQL.
Document how to tweak ~/.freetds.conf to that end.
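A minimal sketch of the relevant setting, in the [global] section of
~/.freetds.conf (a per-server section works as well):

    [global]
        client charset = UTF-8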
When the column is known to be non-nullable, refrain from adding a null
default value to it. This also fixes the case of casting from an
[int] IDENTITY(1,1) NOT NULL
which otherwise got transformed into a
bigserial not null default NULL
causing the error
Database error 42601: multiple default values specified for column ... of table ...
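With the fix, the same column definition is rendered without the bogus
default:

    bigserial not null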
When updating the catalog support, we forgot to fix the references to the
index and fkey name slots that are now provided centrally for the
catalog of all database source types.
Again, we don't have unit test cases for MS SQL, so this is a blind
fix (but at least it compiles).
See #343.
Avoid double-quoting schema names when they are used in PostgreSQL
catalog queries, where the identifiers appear as literal values and
need to be single-quoted.
Fix #476, again.
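For reference, the schema name must appear in those queries as a
single-quoted literal, not as a double-quoted identifier (the schema
name here is illustrative):

    SELECT c.relname
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'myschema';  -- not "myschema"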
See #476, where it would have been helpful to see the PostgreSQL
catalog queries (via `--log-min-messages sql`) in the bug report. The
facility is also more generally useful.
The code comment displayed in the release notes for 3.3.1 is reported to
be better at explaining the concurrency control than what we had in the
main documentation, so add it there.
Fix #496.
Now that we have a proper flush system for reporting the summary at the
right time (see 7c5396f0975be405910d66f4b5aedc89acd75c1d), refrain from
also taking care of the reporting when stopping the monitor.
Adapt the regression driver code to flush the summary after loading the
expected data, which also provides better output.
Previously, when the summary output was sent to a file, that reporting
would also create a backup file and replace our summary with an empty
new file at monitor stop...
Fixes #499.
This level sits between NOTICE and INFO, allowing a complete log of the
SQL queries sent to the server while avoiding the very verbose traffic
of the DEBUG log level.
See #498.
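Usage is then as simple as adding the new level on the command line,
here with a hypothetical command file my.load:

    pgloader --log-min-messages sql my.load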
This pgloader command allows migrating tables while changing the schema
they are found in between their MySQL source database and their
PostgreSQL target database.
This changes the default behavior of pgloader with MySQL from always
targeting the 'public' schema to targeting, by default, a schema named
the same as the MySQL database. You can revert to the old behavior by
adding a rule:

    ALTER SCHEMA 'dbname' RENAME TO 'public'
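For instance, a complete load command using that rule might look like
this (a sketch; the connection strings are placeholders):

    LOAD DATABASE
         FROM mysql://user@localhost/dbname
         INTO postgresql:///targetdb
    ALTER SCHEMA 'dbname' RENAME TO 'public';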
We might want to add a patch to re-install the previous default
behavior later.
Also see #489, where it used not to be possible to rename the schema at
migration time, causing strange errors (you need to spot NIL as the
schema name in the "failed to find target table" messages).
A PostgreSQL index is always created in the same schema as the table it
is defined against, and the CREATE INDEX command doesn't accept
schema-qualified index names.
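For example (illustrative names):

    CREATE INDEX foo_id_idx ON myschema.foo (id);  -- created in myschema
    -- CREATE INDEX myschema.foo_id_idx ON myschema.foo (id);  -- rejected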
Make it so that fatal errors are printed only once, and when possible
included in the usual log format as handled by our monitoring thread.
Also, improve error and summary reporting when we load from several
sources on the same command line.
All this work was triggered by an edge case where the OS return code of
the pgloader command was 0 (zero, success) even though the file given
on the command line does not exist.
Fixes #486.
When the option "drop indexes" is in use while loading data from a file, we
collect the indexes from the PostgreSQL catalogs and then issue DROP
commands against them before the load, then CREATE commands when it's
done.
The CREATE is done in parallel, and we create an lparallel kernel for
that. The kernel must have a worker-count of at least 1, and we were
not considering the case of 0 indexes on the target table.
Fix #484.
As shown in #476, it is sometimes needed to be able to quote the
identifier names even when loading from a file, that is, when
specifying the target table name in the database URI.
To that end, allow the option "identifier case" to be used in the
file-based cases too. Fixes #476.
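A sketch of such a file-based load, assuming the quote identifiers
spelling of that option in the command language and a hypothetical
data.csv:

    LOAD CSV
         FROM 'data.csv'
         INTO postgresql:///target?tablename=CamelCaseTable
         WITH quote identifiers,
              fields terminated by ',';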
The example was still using a very old syntax for per-field options,
and even the current Debian package doesn't support this syntax
anymore... Update the docs to use the current syntax.
Fix #475.
I'm not sure if anyone is using those scripts anymore, but I suppose
keeping them known broken isn't helping anyone either. This is a blind
fix in reaction to the latest comment in bug #131.
This was introduced recently when refactoring the match rules: we
forgot to update all call sites, and the bug went unnoticed for a
while, oops. I'm not sure the fix is all we need to get back a working
feature (alter schema rename to), but it allows the code to compile,
and that's all I have the time to handle today.
See #466.
We added some confusion about who's responsible for quoting the SQL
object names between src/utils/quoting.lisp and
src/pgsql/pgsql-ddl.lisp, and as a result some migrations from MySQL
with identifier case set to quote were broken, as in #439.
To fix, remove any use of the format directive ~s in the PostgreSQL DDL
output methods: we consider that the quoting decision is to be made in
apply-identifier-case. We then use ~a instead of ~s.
Fix #439.
In the MySQL source we have explicit support for both string equality
and regexps for the INCLUDING and EXCLUDING clauses. This got broken
when the code was moved to be shared with the ALTER TABLE
implementation, because we were no longer using the type system in the
same way in all places.
To fix, create new abstractions for strings and regexps and use those
new structs in the proper way (thanks to defstruct and CLOS).
Fixes #441.
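For instance, both kinds can be mixed in a single clause, where ~/film/
is a regular expression and 'actor' a plain string match (a sketch):

    LOAD DATABASE
         FROM mysql://root@localhost/sakila
         INTO postgresql:///sakila
         INCLUDING ONLY TABLE NAMES MATCHING ~/film/, 'actor';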
In cases where we have a WITH include drop option, we generate lots of
SQL DROP statements. We may be running against an empty target
database, or in other situations where the target object of the DROP
command might not exist. Add support for that case.
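The DROP statements can be made harmless in that case with PostgreSQL's
IF EXISTS clause (a sketch with illustrative object names):

    DROP TABLE IF EXISTS public.foo CASCADE;
    DROP INDEX IF EXISTS public.foo_id_idx;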
In the FILENAME MATCHING case it might be good to have that information
in the logs, as it can also explain some of the time spent. The example
in test/bossa.load currently loads data from 296 files total...
The internal catalog representation is deeply recursive, in order to
make it easy to traverse the catalog both downwards (catalog to schema
to tables) and upwards (table to its schema to its catalog).
As a consequence we need to set *print-circle* to non-nil when we're
going to log the catalogs, so bind it to non-nil before generating the
log messages.
While at it, add logging of such catalogs in the :data log verbosity
mode. The catalog output is very verbose, but it's easy to copy/paste
it from a bug report into a live object we can inspect in the REPL,
thanks to Common Lisp's notion of a reader and a readable printer!
Calling with-timing from within a with-stats-collection macro is
redundant and will have the numbers counted twice. In this case that
didn't happen because the stats label was manually copied, but one copy
was borked by a typo.
When loading data into an existing PostgreSQL catalog, we DROP the
indexes for better performance of the data loading. Some of the indexes
are UNIQUE or even PRIMARY KEYS, and some FOREIGN KEYS might depend on
them in the PostgreSQL dependency tracking of the catalog.
We used to use the CASCADE option when dropping the indexes, which
hides a bug: if tables excluded from the load have foreign keys
pointing to tables we target, then we would DROP those foreign keys
because of the CASCADE option, but fail to install them again at the
end of the load.
To prevent that from happening, pgloader now queries the PostgreSQL
pg_depend system catalog to list the “missing” foreign keys and adds
them to our internal catalog representation, from which we know to DROP
then CREATE the SQL objects at the proper times.
See #400, as this was an oversight in the fix for that issue.
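The kind of pg_depend lookup involved might look like this (a sketch,
not the exact query pgloader runs; the index name is illustrative):

    SELECT c.conname, c.conrelid::regclass AS fk_table
      FROM pg_depend d
      JOIN pg_constraint c ON c.oid = d.objid
     WHERE d.classid = 'pg_constraint'::regclass
       AND d.refobjid = 'some_unique_index'::regclass
       AND c.contype = 'f';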
When we do have a condef (a constraint definition, in PostgreSQL
catalog slang), use it rather than trying to invent it again from bits
and pieces. See #400, which it actually fixes now...
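Fetching the ready-made definition is a single catalog call (the table
name is illustrative):

    SELECT conname, pg_get_constraintdef(oid)
      FROM pg_constraint
     WHERE conrelid = 'myschema.mytable'::regclass;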
We used to enforce overly strict rules for a quoted field name in a CSV
load file; now accept any character but a quote as part of the field
name.
Fixes #416.