When declaring the types of arguments (mainly done to hint the Common
Lisp compiler into generating more efficient code), it's important to
account for the possibility of an argument being NIL, of type NULL.
That's now visible in the way the projection function is generated by
the project-fields function in src/sources/source.lisp, where all the
arguments are now &optional so that we are able to cope with ragged CSV
files.
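Here's a minimal sketch of the idea, not pgloader's exact generated
code (the field names are illustrative): the arguments are &optional so
that a short (ragged) row still applies, and the declared types must
include null to cover the fields that row lacks.

    ;; missing &optional arguments default to NIL, so the declared
    ;; type has to admit NIL for the projection to stay type-safe
    (lambda (&optional field-1 field-2)
      (declare (type (or null simple-string) field-1 field-2))
      (list field-1 field-2))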
The only expected change from this patch is that some warnings go
missing in test cases such as test/reformat.load, test/fixed.load and
test/archive.load.
For the generated binary to be really portable, we need to be able to
load OpenSSL 1.0.1 even when the binary was built against OpenSSL
1.0.0. A way to achieve that with SBCL is to force the unloading of the
lib at image-saving time and to register a hook that loads it again at
image-init time. Using the proper API, CFFI will happily load whatever
file is available for the lib rather than insisting on the exact same
one found on the build machine.
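A minimal sketch of that technique, assuming CFFI and SBCL (the library
definition here is illustrative, not pgloader's exact code):

    (cffi:define-foreign-library libssl
      (:unix (:or "libssl.so.1.0.1" "libssl.so.1.0.0" "libssl.so"))
      (t (:default "libssl")))

    ;; close the lib right before the image gets saved...
    (push (lambda () (cffi:close-foreign-library 'libssl))
          sb-ext:*save-hooks*)

    ;; ...and load whichever matching file is available at image init
    (push (lambda () (cffi:load-foreign-library 'libssl))
          sb-ext:*init-hooks*)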
Try harder not to mix Common Lisp developers' profiles with pgloader's
main expected audience, who will not have a Quicklisp system ready, by
doing our magic Quicklisp calls in a pgloader-specific directory.
That might impact #45 badly, as the Homebrew formula will now need to
install its Quicklisp distributions elsewhere, but it should be easy
enough to fix.
Also, systematically `git pull` the local-projects dependencies we need
to work with, to avoid bad surprises for our users.
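A minimal sketch of the setup step, assuming the stock quicklisp.lisp
bootstrap file (the target path is illustrative):

    ;; install and load Quicklisp under a pgloader-specific directory
    ;; instead of touching any existing ~/quicklisp setup
    (load "quicklisp.lisp")
    (quicklisp-quickstart:install :path "build/quicklisp/")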
Allows packaging systems like Homebrew to install manual pages
automatically for users without introducing a dependency on the manual
page build system.
The babel character-decoding-error condition exposes both its internal
BUFFER and the current OCTETS, and it seems we should refer to the
BUFFER in our error reporting...
When it's not possible to decode a MySQL value in the given encoding,
automatically replace the value with nil and be quite verbose about it
by logging an error.
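A minimal sketch of that error handling, assuming Babel and a
log-message facility in the style of pgloader's logging (octets and
encoding here are illustrative):

    (handler-case
        (babel:octets-to-string octets :encoding :utf-8)
      (babel-encodings:character-decoding-error (e)
        ;; keep loading: log the failure loudly and return nil
        (log-message :error "Couldn't decode ~s in utf-8: ~a" octets e)
        nil))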
Not everyone has SBCL installed in /usr/bin/sbcl, and sometimes one
wants to test other versions of SBCL, or just not install the official
Debian/Ubuntu packages. I have tested this on Ubuntu precise and it
worked for me.
I got the info that this would work from here:
https://bugs.launchpad.net/sbcl/+bug/1284148
This rule overrode the default rule for `tinyint(1)`: instead of
placing `boolean`, it kept the typemod and placed `boolean(1)` into the
resulting query.
The patch from pull request #30 was hard-coding the PostgreSQL-side
quoting; we are using the quote_ident() function instead, as it's now
available in every PostgreSQL production release (8.4 included).
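A minimal sketch, assuming a Postmodern connection: let the server
apply its own quoting rules instead of hard-coding them client side.

    ;; PostgreSQL quotes the identifier only when it has to
    (pomo:query "select quote_ident($1)" "MyTable" :single)
    ;; => "\"MyTable\""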
With the new internal setting *copy-batch-size* it's now possible to
instruct pgloader to close batches early (before the *copy-batch-rows*
limit) when crossing a byte-count threshold.
When set to 20 MB it allows the new test case (exhausted) to pass under
SBCL and CCL, and there's no measurable cost when *copy-batch-size* is
set to nil (its default value) in the testing done so far.
This patch is published without any way to tune the values from the
command language yet; that's the next step, once it's been proven
effective.
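A minimal sketch of the early-close check (the parameter names come
from this patch; the helper, counters and default values here are
illustrative):

    (defparameter *copy-batch-rows* 25000)
    (defparameter *copy-batch-size* nil) ; e.g. (* 20 1024 1024) for 20 MB

    (defun batch-full-p (rows bytes)
      ;; close the batch at the row limit, or early when a byte
      ;; threshold is set and we've just crossed it
      (or (<= *copy-batch-rows* rows)
          (and *copy-batch-size*
               (<= *copy-batch-size* bytes))))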
With this patch, the whole data massaging and final formatting into the
PostgreSQL COPY TEXT format is done by the reader thread, which
publishes a batch at a time into the communication channel: a
lparallel.queue object.
Before that, the raw vectors were pushed directly into the queue,
offering more flexibility to adjust to the reader and writer IO rates
and capabilities, but impeding the Garbage Collector: data still in the
queue could not be collected even when no longer needed.
The new model also uses less memory and allows better control over how
much data stays in memory. The new *concurrent-batches* parameter
should be key to being able to process huge rows.
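A minimal sketch of that channel, assuming lparallel.queue (the helper
name and capacity value are illustrative): the fixed capacity bounds
how many formatted batches sit in memory between reader and writer.

    (defparameter *concurrent-batches* 10)

    (defun make-batch-channel ()
      ;; push-queue blocks the reader when the channel is full,
      ;; pop-queue blocks the writer when it's empty
      (lparallel.queue:make-queue :fixed-capacity *concurrent-batches*))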
The intent is to offer a way for users to tune *concurrent-batches*
down to 1 for sources with a massive per-row memory footprint. Even
better would be to find a way to adjust the setting automatically,
without spending too much time counting the bytes we're batching.
Preliminary tests show no noticeable performance impact from this
patch, and even some improvements in some cases.