When updating the catalog support we forgot to fix the references to the
index and fkey name slots that are now provided centrally in the catalog
for all database source types.
Again, we don't have unit test cases for MS SQL, so this is a blind
fix (but at least it compiles).
See #343.
Avoid double-quoting the schema names when used in PostgreSQL catalog
queries, where the identifiers are used as literal values and need to be
single-quoted.
Fix #476, again.
See #476, where it would have been helpful to see the PostgreSQL catalog
queries with `--log-min-messages sql` in the bug report. That is also
more generally useful.
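For the record, a minimal sketch of the rule at work here: in a catalog
query the schema name is a literal value, so it takes single quotes;
double quotes would make it an identifier. The query is simplified and
schema-name stands for whatever variable holds the user-provided name:

    (format nil "select tablename from pg_tables where schemaname = '~a'"
            schema-name)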
The code comment displayed in the release notes for 3.3.1 is reported to
be better at explaining the concurrency control than what we had in the
main documentation, so add it there.
Fix #496.
Now that we have a proper flush system for reporting the summary at the
right time (see 7c5396f097), refrain from also taking care of the
reporting when stopping the monitor.
Adapt the regression driver code to flush the summary after loading the
expected data, which also provides better output.
When the summary output was sent to a file, that would also create a
backup file and replace our summary with an empty new one at monitor
stop...
Fixes #499.
This new log level sits between NOTICE and INFO, allowing a complete log
of the SQL queries sent to the server while avoiding the very verbose
traffic of the DEBUG log level.
See #498.
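For instance, assuming a command file named my.load (a placeholder
name), the new level is selected with the existing option:

    pgloader --log-min-messages sql my.load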
This pgloader command allows migrating tables while changing the schema
they are found in, between their MySQL source database and their
PostgreSQL target database.
This changes the default behavior of pgloader with MySQL from always
targeting the 'public' schema to targeting by default a schema named
the same as the MySQL database. You can revert to the old behavior by
adding a rule:
ALTER SCHEMA 'dbname' RENAME TO 'public'
We might want to add a patch to restore the previous default behavior
later.
Also see #489 where it used to be impossible to rename the schema at
migration time, causing strange errors (you need to spot NIL as the
schema name in the "failed to find target table" messages).
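As a sketch of how the rule reads in a complete load command, with
placeholder connection strings:

    LOAD DATABASE
         FROM mysql://user@localhost/dbname
         INTO postgresql:///dbname
    ALTER SCHEMA 'dbname' RENAME TO 'public';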
A PostgreSQL index is always created in the same schema as the table it
is defined against, and the CREATE INDEX command doesn't accept
schema-qualified index names.
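As an illustration, a hedged sketch of the DDL output rule, with a
hypothetical function name: only the table reference gets
schema-qualified, never the index name.

    ;; CREATE INDEX rejects "schema"."index" names, so the index name
    ;; stays unqualified and lands in the schema of its table.
    (defun format-create-index (index-name schema table columns)
      (format nil "CREATE INDEX ~a ON ~a.~a (~{~a~^, ~});"
              index-name schema table columns))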
Make it so that fatal errors are printed only once, and when possible
included in the usual log format as handled by our monitoring thread.
Also, improve error and summary reporting when we load from several
sources on the same command line.
All this work was triggered by an edge case where the OS return value
of the pgloader command was 0 (zero, success) although the file given
on the command line does not exist.
Fixes #486.
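The intended behavior boils down to something like this sketch (not
pgloader's actual code; filename is a placeholder and uiop provides the
portable exit):

    ;; A missing command-line file must show up both in the logs and
    ;; in the process exit code.
    (unless (probe-file filename)
      (format *error-output* "file does not exist: ~a~%" filename)
      (uiop:quit 1))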
When the option "drop indexes" is in use in loading data from a file, we
collect the indexes from the PostgreSQL catalogs and then issue DROP
commands against them before the load, then CREATE commands when it's
done.
The CREATE is done in parallel, and we create an lparallel kernel for
that. The kernel must have a worker-count of at least 1, and we were
not considering the case of 0 indexes on the target table.
Fix #484.
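The fix amounts to clamping the worker count, along these lines (a
sketch against the lparallel API, where indexes holds the list
collected from the catalogs):

    ;; lparallel:make-kernel requires a worker-count of at least 1,
    ;; even when there is no index to re-create in parallel.
    (lparallel:make-kernel (max 1 (length indexes)))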
As shown in #476, it is sometimes necessary to quote identifier names
even when loading from a file, that is, when specifying the target
table name in the database URI.
To that end, allow the option "identifier case" to be used in the
file-based cases too. Fixes #476.
The example was still using a very old syntax for per-field options,
and even the current Debian package doesn't support this syntax
anymore...
Update the docs to use the current syntax.
Fix #475.
I'm not sure if anyone is using those scripts anymore, but I suppose
keeping them known-broken isn't helping anyone either. This is a blind
fix in reaction to the latest comment in bug #131.
Introduced recently when refactoring the match rules: we forgot to
update all call sites, and the bug went unnoticed for a while, oops.
Not sure the fix is all we need to get the feature (alter schema rename
to) working again, but it allows the code to compile, and that's all I
have time to handle today.
See #466.
We added some confusion about who's responsible for quoting the SQL
object names between src/utils/quoting.lisp and
src/pgsql/pgsql-ddl.lisp, and as a result some migrations from MySQL
with identifier case set to quote were broken, as in #439.
To fix, remove any use of the format directive ~s in the PostgreSQL DDL
output methods: quoting is to be decided in apply-identifier-case, so
we use ~a instead of ~s.
Fix #439.
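The difference is easy to see at the REPL; the second line is a sketch
assuming the one-argument form of apply-identifier-case:

    ;; ~s prints the string readably, adding double quotes by itself:
    (format nil "ALTER TABLE ~s ..." "Foo") ; => "ALTER TABLE \"Foo\" ..."
    ;; ~a prints it as-is, leaving quoting up to apply-identifier-case:
    (format nil "ALTER TABLE ~a ..." (apply-identifier-case "Foo"))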
In the MySQL source we have explicit support for both string equality
and regexps for the INCLUDING and EXCLUDING clauses. This got broken
when the code was moved to be shared with the ALTER TABLE
implementation, because we were no longer using the type system in the
same way in all places.
To fix, create new abstractions for strings and regexps and use those
new structs in the proper way (thanks to defstruct and CLOS).
Fixes #441.
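A minimal sketch of the idea, with hypothetical names rather than the
actual pgloader structs:

    (defstruct string-matcher target)
    (defstruct regex-matcher pattern)

    (defgeneric matches-p (rule name)
      (:documentation "Does NAME satisfy the matching RULE?"))

    (defmethod matches-p ((rule string-matcher) name)
      (string= (string-matcher-target rule) name))

    (defmethod matches-p ((rule regex-matcher) name)
      (and (cl-ppcre:scan (regex-matcher-pattern rule) name) t))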
In cases where we have a WITH include drop option, we generate lots of
SQL DROP statements. We may be running against an empty target
database, or in other situations where the target object of the DROP
command might not exist. Add support for that case.
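The gist of the change, as a sketch with a hypothetical function name:

    ;; Emit IF EXISTS so dropping a missing object is a no-op rather
    ;; than an error against an empty target database.
    (defun format-drop-table (table-name &key if-exists)
      (format nil "DROP TABLE ~:[~;IF EXISTS ~]~a;" if-exists table-name))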
In the FILENAME MATCHING case it might be good to have the information,
which can also explain some of the time spent. The example in
test/bossa.load currently loads data from 296 files in total...
The internal catalog representation is deeply recursive, in order to
make it easy to traverse the catalog both downwards (catalog to schema
to tables) and upwards (table to its schema to its catalog).
In consequence we need to set *print-circle* to non-nil when we're
going to log the catalogs, so bind it to a non-nil value before
generating the log messages.
While at it, add logging of such catalogs in the :data log verbosity
mode. The catalog output is very verbose, but it's easy to copy/paste
it from a bug report into being a live object we can inspect in the
REPL, thanks to Common Lisp's notion of a reader and readable printer!
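A small REPL example of why the binding matters; the plist shape is
made up, the real catalog uses structs:

    ;; A child that points back to its parent makes the structure
    ;; circular; *print-circle* prints it with #1=/#1# labels instead
    ;; of looping forever.
    (let* ((schema (list :name "public" :tables nil))
           (table  (list :name "foo" :schema schema)))
      (setf (getf schema :tables) (list table))
      (let ((*print-circle* t))
        (prin1 schema)))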
Calling a -with-timing from within a with-stats-collection macro is
redundant and would have the numbers counted twice. In this case that
didn't happen, because the stats label was manually copied and a typo
borked one of the copies.
When loading data into an existing PostgreSQL catalog, we DROP the
indexes for better performance of the data loading. Some of the indexes
are UNIQUE or even PRIMARY KEYS, and some FOREIGN KEYS might depend on
them in the PostgreSQL dependency tracking of the catalog.
We used to use the CASCADE option when dropping the indexes, which hid
a bug: if we exclude from the load tables with foreign keys pointing to
tables we target, then we would DROP those foreign keys because of the
CASCADE option, but fail to install them again at the end of the load.
To prevent that from happening, pgloader now queries the PostgreSQL
pg_depend system catalog to list the “missing” foreign keys and adds
them to our internal catalog representation, from which we know to DROP
then CREATE the SQL objects at the proper times.
See #400 as this was an oversight in fixing this issue.
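As a hedged sketch of the kind of catalog query involved, assuming
Postmodern and simplifying the real join (index-name is a placeholder,
and the actual pgloader query is more involved):

    ;; Foreign keys record a pg_depend entry on the index that backs
    ;; the unique or primary key constraint they reference.
    (pomo:query
     "select c.conname, pg_get_constraintdef(c.oid)
        from pg_depend d
        join pg_constraint c on c.oid = d.objid and c.contype = 'f'
       where d.refclassid = 'pg_class'::regclass
         and d.refobjid = $1::regclass"
     index-name)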
When we do have a condef (a constraint definition, in PostgreSQL
catalog slang), use it rather than trying to reinvent it from bits and
pieces. See #400, which it actually fixes now...
We used to enforce overly strict rules for a quoted field name in a CSV
load file; now accept any character but a quote as part of the field
name.
Fixes #416.
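Sketched as an esrap rule, with a hypothetical rule name, the relaxed
grammar reads:

    ;; Accept any character but the double-quote inside a quoted field
    ;; name, instead of the old overly strict character set.
    (esrap:defrule quoted-field-name
        (and #\" (* (not #\")) #\")
      (:lambda (parts)
        (esrap:text (second parts))))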
Also known as the ORM case, it happens that other tools are used to
create the target schema. In that case pgloader's job is to fill in the
existing target tables with the data from the source tables.
We still focus on load speed and pgloader will now DROP the
constraints (Primary Key, Unique, Foreign Keys) and indexes before
running the COPY statements, and re-install the schema it found in the
target database once the data load is done.
This behavior is activated when using the “create no tables” option as
in the following test-case setup:
with create no tables, include drop, truncate
Fixes #400, for which I got a test-case to play with!
Replace the ad-hoc code used before in the load-from-file code path
with our full internal catalog representation, and adjust APIs to that
end.
The goal is to use catalogs everywhere in the PostgreSQL target API,
allowing the code to reason explicitly about source and target
catalogs; see #400 for the main use case.
First, add indexes and foreign keys to the list of objects supported by
the shared catalog facility, where they were previously only found in
the pgsql schema-specific package for historical reasons.
Then also add to our catalog internal structures the notion of a trigger
and a stored procedure, allowing for cleaner advanced default values
support in the MySQL cast functions.
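The shape of those catalog additions, sketched with hypothetical slot
lists; each level keeps a back-pointer to its parent, which is what
makes the representation traversable both ways:

    (defstruct catalog name schemas)
    (defstruct schema catalog name tables)
    (defstruct table schema name columns indexes fkeys triggers)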
Now that we have a proper and complete catalog, review the pgsql
module's DDL output functions in terms of the catalog and rewrite the
schema creation support so that it takes direct benefit of our internal
catalog representation.
In passing, clean up the code organisation of the pgsql target support
module to make it easier to work with.
Next step consists of getting rid of src/pgsql/queries.lisp: this
facility should be replaced by the usage of a target catalog that we
fetch the usual way, thanks to the new src/pgsql/pgsql-schema.lisp file
and list-all-* functions.
That will in turn allow for an explicit step of merging the pre-existing
PostgreSQL catalog when it's been created by other tools than pgloader,
that is when migrating with the help of an ORM. See #400 for details.
The MSSQL index filters parser needs to parse digits and keep them as
text, but was piggybacking on the main parsers and the fixed file format
positions parser by re-using the rule name "number".
My understanding was that by calling `defrule' in different packages one
would create a separate set of rules. It might have been wrong from the
beginning or just changed in newer versions of esrap. Will have to
investigate more.
This fixes #434 while not applying the suggested code: the comment
about where to fix the bug is spot on.
Also, it should be noted that the regression test framework seems to be
failing us and returns success in that error case, despite code
installed to properly handle the situation. This will also need to be
investigated.
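For reference, a sketch of the suspected mechanism: esrap rules are
keyed by symbol, so two packages only get two distinct "number" rules
when the name reads as two distinct symbols. With a symbol inherited
from a shared package, the second defrule silently replaces the first:

    ;; In a package that inherits the symbol NUMBER from a shared
    ;; parser package:
    (esrap:defrule number (+ (digit-char-p character))
      (:lambda (digits) (parse-integer (esrap:text digits))))

    ;; In another package inheriting that same NUMBER symbol, this
    ;; does not create a second rule: it redefines the one above.
    (esrap:defrule number (+ (digit-char-p character))
      (:text t))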
The other user-provided names (schema and table) were already quoted
using the quote_ident() PostgreSQL function, but the column name
(attname in the catalogs) was not.
Blind attempt to fix #425.
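A hedged sketch of the intended shape of the fix, with a simplified
query, assuming Postmodern (table-name is a placeholder):

    ;; Quote the column name the same way as the schema and table
    ;; names, using PostgreSQL's quote_ident().
    (pomo:query
     "select quote_ident(attname)
        from pg_attribute
       where attrelid = $1::regclass and attnum > 0"
     table-name)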