Mirror of https://github.com/dimitri/pgloader.git (synced 2025-08-07 23:07:00 +02:00)
few typos in docs
commit 9754a809c6
parent 999791d013
@@ -21,7 +21,7 @@ You will note in particular:
     freetds-dev
 
 We need a recent enough [SBCL](http://sbcl.org/) version and that means
-backporting the one found in `sid` rather than using the very old one found
+back-porting the one found in `sid` rather than using the very old one found
 in current *stable* debian release. See `bootstrap-debian.sh` for details
 about how to backport a recent enough SBCL here (1.2.5 or newer).
 
@@ -86,7 +86,7 @@ they can be loaded correctly.
 
 ### Compiling SBCL by yourself
 
-If you ended up building SBCL yourself or you just want to do that, you can
+If you ended up building SBCL yourself, or you just want to do that, you can
 download the source from http://www.sbcl.org/ .
 
 You will need to build SBCL with the following command and options:
@@ -98,7 +98,7 @@ NOTE: You could also remove the --compress-core option.
 
 ## Building pgloader
 
-Now that the dependences are installed, just type make.
+Now that the dependencies are installed, just type make.
 
     make
 
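For context, the build step documented in this hunk is just `make` from the top of a pgloader checkout. A minimal session might look like the sketch below; the `./build/bin/pgloader` output path is an assumption about the Makefile defaults and may differ in your checkout:

    # sketch, assuming SBCL and the listed dependencies are already installed
    $ make
    # assumed binary location; adjust if the Makefile places it elsewhere
    $ ./build/bin/pgloader --version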
@@ -3,12 +3,12 @@ Reporting Bugs
 
 pgloader is a software and as such contains bugs. Most bugs are easy to
 solve and taken care of in a short delay. For this to be possible though,
-bug reports need to follow those recommandations:
+bug reports need to follow those recommendations:
 
 - include pgloader version,
 - include problematic input and output,
 - include a description of the output you expected,
-- explain the difference between the ouput you have and the one you expected,
+- explain the difference between the output you have and the one you expected,
 - include a self-reproducing test-case
 
 Test Cases to Reproduce Bugs
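To make the last bullet concrete, a self-reproducing test case can be as small as a load command plus a couple of data lines. The sketch below is hypothetical (file name, table name, and connection string are placeholders) and only follows the documented `LOAD CSV` clauses:

    -- bug.load: hypothetical minimal test case to attach to a bug report
    LOAD CSV
         FROM 'bug.csv' (a, b, c)
         INTO postgresql:///bugreport?sample (a, b, c)
         WITH truncate,
              fields terminated by ',';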
@@ -114,7 +114,7 @@ Schema discovery
 
 User defined casting rules
 Some source database have ideas about their data types that might not be
-compatible with PostgreSQL implementaion of equivalent data types.
+compatible with PostgreSQL implementation of equivalent data types.
 
 For instance, SQLite since version 3 has a `Dynamic Type System
 <https://www.sqlite.org/datatype3.html>`_ which of course isn't
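The casting rules this hunk refers to are expressed as `CAST` clauses in the load command. The sketch below only illustrates the general shape; the connection strings are hypothetical and the exact rules needed depend on the source schema:

    LOAD DATABASE
         FROM sqlite:///path/to/source.db
         INTO postgresql:///target

    WITH include drop, create tables

    CAST type datetime to timestamptz drop default drop not null
              using zero-dates-to-null,
         type date drop not null drop default using zero-dates-to-null;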
@@ -240,7 +240,7 @@ PostgreSQL <http://mysqltopgsql.com/project/>`_ webpage.
 2. Fork a Continuous Integration environment that uses PostgreSQL
 3. Migrate the data over and over again every night, from production
 4. As soon as the CI is all green using PostgreSQL, schedule the D-Day
-5. Migrate without suprise and enjoy!
+5. Migrate without surprise and enjoy!
 
 In order to be able to follow this great methodology, you need tooling to
 implement the third step in a fully automated way. That's pgloader.
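Step 3 of that list is typically automated as a scheduled pgloader run that takes a source and a target connection string; the host and database names below are hypothetical:

    # hypothetical nightly job: refresh the CI database from production
    $ pgloader mysql://ci_ro@prod-db.internal/appdb \
               postgresql://ci@ci-db.internal/appdb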
@@ -175,7 +175,7 @@ the support for that Operating System:
 
 __ https://github.com/dimitri/pgloader/issues?utf8=✓&q=label%3A%22Windows%20support%22%20>
 
-If you need ``pgloader.exe`` on windows please condider contributing fixes
+If you need ``pgloader.exe`` on windows please consider contributing fixes
 for that environment and maybe longer term support then. Specifically, a CI
 integration with a windows build host would allow ensuring that we continue
 to support that target.
@@ -44,7 +44,7 @@ Also note that some file formats require describing some implementation
 details such as columns to be read and delimiters and quoting when loading
 from csv.
 
-For more complex loading scenarios, you will need to write a full fledge
+For more complex loading scenarios, you will need to write a full fledged
 load command in the syntax described later in this document.
 
 Target Connection String
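As a sketch of what such a full fledged load command can look like, using the documented `LOAD CSV` clauses with hypothetical file, table, and connection names:

    LOAD CSV
         FROM 'path/to/data.csv' (id, name, created_at)
         INTO postgresql:///target?mytable (id, name, created_at)

         WITH truncate,
              skip header = 1,
              fields optionally enclosed by '"',
              fields terminated by ','

       BEFORE LOAD DO
        $$ create table if not exists mytable
           (id int, name text, created_at timestamptz); $$;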
@@ -216,7 +216,7 @@ keys*, *downcase identifiers*, *uniquify index names*.
 index name by prefixing it with `idx_OID` where `OID` is the internal
 numeric identifier of the table the index is built against.
 
-In somes cases like when the DDL are entirely left to a framework it
+In some cases like when the DDL are entirely left to a framework it
 might be sensible for pgloader to refrain from handling index unique
 names, that is achieved by using the *preserve index names* option.
 
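In the framework-managed DDL situation described here, the behaviour is selected in the `WITH` clause of the load command; the connection strings below are hypothetical and the option names are the ones discussed in the surrounding text:

    LOAD DATABASE
         FROM mysql://user@source-host/appdb
         INTO postgresql:///appdb

    WITH preserve index names, include drop, create tables, create indexes;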
@@ -32,7 +32,7 @@ another:
     ;
 
 Everything works exactly the same way as when doing a PostgreSQL to
-PostgreSQL migration, with the added fonctionality of this new `distribute`
+PostgreSQL migration, with the added functionality of this new `distribute`
 command.
 
 Distribute Command
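A minimal sketch of such a command, with hypothetical connection strings and a hypothetical `companies(id)` distribution key; everything else works as in a plain PostgreSQL to PostgreSQL migration:

    LOAD DATABASE
         FROM postgresql://user@source-host/appdb
         INTO postgresql://user@citus-coordinator/appdb

         DISTRIBUTE companies USING id;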
@@ -140,7 +140,7 @@ The ``impressions`` table has an indirect foreign key reference to the
 ``company`` table, which is the table where the distribution key is
 specified. pgloader will discover that itself from walking the PostgreSQL
 catalogs, and you may also use the following specification in the pgloader
-command to explicitely add the indirect dependency:
+command to explicitly add the indirect dependency:
 
 ::
 
@@ -50,7 +50,7 @@ This command allows loading the following CSV file content::
 Loading the data
 ^^^^^^^^^^^^^^^^
 
-Here's how to start loading the data. Note that the ouput here has been
+Here's how to start loading the data. Note that the output here has been
 edited so as to facilitate its browsing online::
 
     $ pgloader csv.load
@@ -3,7 +3,7 @@ Loading MaxMind Geolite Data with pgloader
 
 `MaxMind <http://www.maxmind.com/>`_ provides a free dataset for
 geolocation, which is quite popular. Using pgloader you can download the
-lastest version of it, extract the CSV files from the archive and load their
+latest version of it, extract the CSV files from the archive and load their
 content into your database directly.
 
 The Command
@@ -94,7 +94,7 @@ in some details. Here's our example for loading the Geolite data::
      $$ create index blocks_ip4r_idx on geolite.blocks using gist(iprange); $$;
 
 Note that while the *Geolite* data is using a pair of integers (*start*,
-*end*) to represent *ipv4* data, we use the very poweful `ip4r
+*end*) to represent *ipv4* data, we use the very powerful `ip4r
 <https://github.com/RhodiumToad/ip4r>`_ PostgreSQL Extension instead.
 
 The transformation from a pair of integers into an IP is done dynamically by
@@ -109,7 +109,7 @@ the fly to use the appropriate data type and its input representation.
 Loading the data
 ^^^^^^^^^^^^^^^^
 
-Here's how to start loading the data. Note that the ouput here has been
+Here's how to start loading the data. Note that the output here has been
 edited so as to facilitate its browsing online::
 
     $ pgloader archive.load