As per the PostgreSQL documentation on connection strings, allow overriding
the main URI components in the options part, with a percent-encoded
syntax for parameters. This makes it possible to work around the main URI
parser limitations seen in #199 (how do you have a password start with a
colon?).
See:
http://www.postgresql.org/docs/9.3/interactive/libpq-connect.html#LIBPQ-CONNSTRING
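For instance, a password starting with a colon can be percent-encoded and
passed in the options part rather than in the userinfo part of the URI.
The following is only an illustrative sketch of the intended syntax, with
a made-up user, database and password:

    postgresql://user@localhost:5432/dbname?password=%3Asecret

Here %3A decodes to a colon, so the effective password is ":secret".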
To allow importing JSON one-liners as-is into the database, it can be
useful to leverage the CSV parser in a compatible setup. That setup
requires being able to use any separator character as the escape
character.
Some CSV files come with a header line containing the list of their
column names; use that when given the option "csv header".
Note that when both the "skip header" and "csv header" options are used,
pgloader first skips the requested number of lines and then uses the next
one as the CSV header.
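Here is a minimal sketch of how the two options are meant to combine in a
load command; the file name, connection string and target table are made
up, and the LOAD CSV documentation remains the reference for the exact
grammar:

    LOAD CSV
         FROM 'data.csv'
         INTO postgresql://user@localhost/dbname?tablename
         WITH skip header = 2,
              csv header,
              fields terminated by ',';

With that setup pgloader skips the first two lines of the file and reads
the column names from the third one.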
Because of a temporary failure to install the `ronn` documentation tool,
this patch only commits the changes to the source docs and does not
update the man page (pgloader.1). A follow-up patch will fix that.
See #236, which uses shell tricks to retrieve the field list from the
CSV file itself and motivated this patch to finally get written.
The database connection code needed to switch to the "new" connection
facilities, and there was a bug in the processing of template sections
wherein the template user would inherit the template property.
It turns out that SQLite3 data type handling is back to kick us wherever
it hurts, this time by the driver deciding to return blob data (a vector
of unsigned bytes) when we expect properly encoded text data.
In the Wikipedia data test case used to reproduce the bug, we're lucky
enough that the byte vectors actually map to properly encoded strings.
Of course doing the proper thing costs some performance.
I'd like to be able to decide if I should blame the SQLite driver or the
whole product on this one. The per-value data type handling still is a
disaster in my book, though, which means it's crucially important for
pgloader to get it right and allow users to seamlessly migrate away from
using such a system.
pgloader used to have a single database name parsing rule that was meant
to be compliant with PostgreSQL identifier rules. Of course it turns out
that MySQL naming rules are different (a MySQL database name may start
with a digit, which an unquoted PostgreSQL identifier may not), so adjust
the parser so that the following connection string is accepted:
mysql://root@localhost/3scale_system_development
MS SQL default values can be quite... sophisticated, so work around that
by using a more complex expression in the SQL query that retrieves the
default values.
The query and implementation were largely provided by the GitHub users
luqelinux and jstans; I finally merged their combined efforts on this
front manually.
When given a file in the COPY format, we should expect that its content
is already properly escaped as expected by PostgreSQL. Rather than
unescaping the data and then escaping it again, add a new mode of
operation to format-vector-row in which it won't even try to reformat
the data.
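As an illustration (assuming PostgreSQL's text COPY format), a single
field containing a tab and a backslash is already written out as

    foo\tbar\\baz

and escaping that content a second time would turn it into
foo\\tbar\\\\baz, which no longer matches the original data; hence the
pass-through mode.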
In passing, fix an off-by-one bug in dealing with non-ASCII characters.
We used to parse qualified table names as a simple string, which breaks
attempts to be smart about how to quote identifiers. Some sources are
known to accept dots in quoted table names, and we need to be able to
process that properly without tripping on qualified table names too
late.
The current code might not be the best approach, as it's just using
either a cons or a string for table names internally, rather than
defining a proper data structure with a schema and a name slot.
Well, that's for a later cleanup patch; I happen to be lazy tonight.
Define a bunch of OS return codes and use them wisely, or at least in a
better way than just doing (uiop:quit) whenever there's something wrong,
without any difference whatsoever to the caller.
Now we return a non-zero error code when we know something wrong did
happen, which is more useful.
Per a gripe from Marcos, who argues that for a human-readable format,
breaking when table names are wider than expected at compile time is
quite a strange position to defend.
See test/parse/hans.goeuro.load for an example usage of the new option.
In passing, any error when creating indexes is now properly reported and
logged, which was missing previously. Oops.
This option is dangerous and allows skipping ALL triggers when loading
data against PostgreSQL. This includes foreign key constraint
definitions and will allow loading data out of order.
When using both the "create no table" and "disable triggers" options, it
is possible to load data into a schema prepared by your favorite
external tool, at the cost of not validating FK constraints. Use with
care.
Fix #167.
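Here is a minimal sketch of the combination, assuming a MySQL source, a
made-up pair of connection strings, and the option spellings used above
(check the reference documentation for the exact grammar):

    LOAD DATABASE
         FROM mysql://user@localhost/sourcedb
         INTO postgresql://user@localhost/targetdb
         WITH create no table, disable triggers;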
The default for MS SQL float types is to only have a precision defined,
as described in https://msdn.microsoft.com/en-us/library/ms173773.aspx,
but the pgloader code didn't know what to do with a float without scale.
It appears that db3 files are not limited to the ASCII character
encoding that they were designed with, so let's clue pgloader in about
that.
This commit builds on 770cbe3526, and the pgloader Makefile has been
updated to fetch cl-db3 from GitHub rather than Quicklisp for the time
being, so that it's possible to enjoy the new feature immediately.