The max function requires at least one argument, so in the case where we
have no table to load it fails badly, as shown here:
CL-USER> (handler-case
             (reduce #'max nil)
           (condition (c)
             (format nil "~a" c)))
"invalid number of arguments: 0"
Of course Common Lisp comes with a very easy way around that problem:
CL-USER> (reduce #'max nil :initial-value 0)
0
Fix #381.
This was broken by a recent commit that forced the internal table
representation to always be an instance of the table structure, which
wasn't yet true for the regression tests.
In passing, re-indent a large portion of the function, which accounts
for most of the diff.
Once more we can't use an aggregate over a text column in MS SQL to
build the index definition from its catalog structure, so we have to do
that in the Lisp part of the code.
Multi-column indexes are now supported, but filtered indexes are still a
problem: the WHERE clause in MS SQL is not compatible with the
PostgreSQL syntax (because of [names] and type casting).
For example we cast MS SQL bit to PostgreSQL boolean, so
WHERE ([deleted]=(0))
should be translated to
WHERE not deleted
And the code to do that is not included yet.
The following documentation page offers more examples of WHERE
expressions we might want to support:
https://technet.microsoft.com/en-us/library/cc280372.aspx
WHERE EndDate IS NOT NULL
AND ComponentID = 5
AND StartDate > '01/01/2008'
EndDate IN ('20000825', '20000908', '20000918')
It might be worth automating the translation to PostgreSQL syntax and
operators, but it's not done in this patch.
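A rough sketch of what that translation step could look like; the
function name and the very limited pattern matching here are
illustrative only, not pgloader code:

```lisp
;; Recognize bit-column comparisons such as "([deleted]=(0))" and turn
;; them into PostgreSQL boolean syntax; any expression we don't
;; recognize passes through unchanged.
(defun translate-mssql-where (expr)
  (let ((open  (position #\[ expr))
        (close (position #\] expr)))
    (if (and open close (< open close))
        (let ((name (subseq expr (1+ open) close))
              (tail (subseq expr (1+ close))))
          (cond ((string= tail "=(0))") (format nil "not ~a" name))
                ((string= tail "=(1))") name)
                (t expr)))
        expr)))
```

A real implementation would have to cover nested expressions, other
operators, and date literals from the examples above.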
See #365, where the created index will now be as follows, which is a
problem because it's UNIQUE: some existing data won't reload fine.
CREATE UNIQUE INDEX idx_<oid>_foo_name_unique ON dbo.foo (name, type, deleted);
Having been given a test instance of an MS SQL database allows us to
quickly fix a series of assorted bugs related to schema handling of MS
SQL databases. As it's currently the only source with a proper notion of
schemas that pgloader supports, it's no surprise we had them.
Fix #343. Fix #349. Fix #354.
It turns out that the MySQL catalog always stores default values as
strings, even when the column itself is of type bytea. In some cases
it's then impossible to derive the expected bytea value from its string
representation.
In passing, move some code around to fix dependencies and make it
possible to issue log warnings from the default value printing code.
The decision to use lots of different packages in pgloader has quite
strong downsides at times, and the manual management of dependencies is
one of them, in particular having to avoid circular ones.
With the recent refactoring and improvements of parallelism, index
creation would kick in before we knew that the data was done being
copied over to the target table.
Fix that by maintaining a writers-count hash table and only starting to
create indexes when that count reaches zero, meaning all the concurrent
tasks started to handle the COPY of the data are done.
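The counting idea can be sketched as follows; the names are illustrative
only, not pgloader's actual API, and real code would also need locking
around the hash-table updates:

```lisp
;; One entry per target table, incremented as COPY tasks start and
;; decremented as they finish; index creation fires on the last one.
(defvar *writers-count* (make-hash-table :test 'equal))

(defun register-writer (table)
  ;; called once per concurrent COPY task targeting TABLE
  (incf (gethash table *writers-count* 0)))

(defun finish-writer (table create-indexes-fn)
  ;; when the last writer for TABLE is done, trigger index creation
  (when (zerop (decf (gethash table *writers-count*)))
    (funcall create-indexes-fn table)))
```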
We convert the default value call to newsequentialid() into a call to
the PostgreSQL uuid-ossp extension's uuid_generate_v1(), which seems to
be the equivalent function.
The extension "uuid-ossp" needs to be installed in the target database.
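For reference, a plain-SQL sketch of what the target side then needs
(table and column names are just for illustration):

```sql
-- the uuid-ossp extension must exist before the translated default
-- value can be used in the target database
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- a column that had DEFAULT newsequentialid() in MS SQL then becomes:
CREATE TABLE example (
  id uuid DEFAULT uuid_generate_v1()
);
```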
(Blind) Fix #246.
In order to share more code in between the different source types,
finally have a go at the quite horrible mess of anonymous data
structures floating around.
Having catalog and schema instances not only allows for code cleanup,
but will also make it possible to implement some bug fixes and wishlist
items, such as mapping tables from one schema to another.
Also, supporting database sources that have a notion of "schema" (in
between "catalog" and "table") should get easier, including getting
on par with MySQL in the MS SQL support (materialized views have
already been asked for).
See #320, #316, #224 for references and a notion of progress being made.
In passing, also clean up the copy-databases methods for the database
source types, so that they all use a fetch-metadata generic function and
prepare-pgsql-database and complete-pgsql-database generic functions.
Actually, a single method does the job here.
The responsibility of introspecting the source to populate the internal
catalog/schema representation is now held by the fetch-metadata generic
function, which in turn will call the specialized versions of
list-all-columns and friends implementations. Once the catalog has been
fetched, an explicit CAST call is then needed before we can continue.
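The split can be pictured with a minimal sketch; the class hierarchy,
method body, and catalog representation here are placeholders, not
pgloader's actual definitions:

```lisp
;; Placeholder classes standing in for the copy-object hierarchy.
(defclass db-copy () ())
(defclass mssql-copy (db-copy) ())

(defgeneric fetch-metadata (copy catalog)
  (:documentation
   "Introspect the source database behind COPY and populate CATALOG."))

(defmethod fetch-metadata ((copy mssql-copy) catalog)
  ;; a real method would call list-all-columns and friends here, then
  ;; return the filled-in catalog for the explicit CAST step
  (cons :mssql-schemas catalog))
```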
Finally, the fields/columns/transforms slots in the copy objects are
still being used by the operative code, so the internal catalog
representation is only used up to starting the data copy step, where the
copy class instances are then all that's used.
This might be refactored again in a follow-up patch.