Fix minor typo in docs (#2119)

* Fix minor typo

* Some spelling and grammar fixes
James McMahon 2022-02-18 00:13:55 +00:00 committed by GitHub
parent 383252d3cd
commit cd30f20296
GPG Key ID: 4AEE18F83AFDEB23
6 changed files with 62 additions and 62 deletions


@@ -9,7 +9,7 @@
 Also, please create PRs to DEV branch, !!NOT!! MASTER branch.
 If necessary, when we review or such,
 make a comment if you feel this needs to go to master directly,
-otherwise we will merge to master when necesssary.
+otherwise, we will merge to master when necessary.
 -->
 ## Breaking change
 <!--
@@ -17,7 +17,7 @@
 to tell them what breaks, how to make it work again and why we did this.
 This piece of text is published with the release notes, so it helps if you
 write it towards our users, not us.
-Note: Leave this section emtpy if this PR is NOT a breaking change.
+Note: Leave this section empty if this PR is NOT a breaking change.
 -->
 ```txt
 <placeholder>
@@ -45,13 +45,13 @@
 -->
 - [ ] Bugfix (non-breaking change which fixes an issue)
-- [ ] New feature (which adds functionality to a this container)
+- [ ] New feature (which adds functionality to this container)
 - [ ] Breaking change (fix/feature causing existing functionality to break)
 ## Additional information
 <!--
-Details are important, and help maintainers processing your PR.
+Details are important, and help maintainers process your PR.
 Please be sure to fill out additional details, if applicable.
 -->


@@ -12,7 +12,7 @@ OpenVPN configs to make it run as fast and secure as possible.
 ## It goes like this
-To understand how it works, this is the most important events
+To understand how it works, these are the most important events
 and who/what starts them.
 1. You start the container
@@ -21,7 +21,7 @@ and who/what starts them.
 When you start the container it is instructed to run a script
 to start OpenVPN. This is defined in [the Dockerfile](https://github.com/haugene/docker-transmission-openvpn/blob/master/Dockerfile).
-This script is responsible for doing initial setup and prepare what is needed for OpenVPN to run successfully.
+This script is responsible for doing initial setup and preparing what is needed for OpenVPN to run successfully.
 ## Starting OpenVPN
@@ -30,12 +30,12 @@ OpenVPN itself can be started with a single argument, and that is the config file.
 We also add a few more to tell it to start Transmission when the VPN tunnel is
 started and to stop Transmission when OpenVPN is stopped. That's it.
-Apart from that the script does some firewall config, vpn interface setup and possibly other
+Apart from that, the script does some firewall config, vpn interface setup and possibly other
 things based on your settings. There are also some reserved script names that a user can mount/add to
 the container to include their own scripts as a part of the setup or teardown of the container.
 Anyways! You have probably seen the docker run and docker-compose configuration examples
-and you've put two and two together: This is where environment variables comes in.
+and you've put two and two together: This is where environment variables come in.
 Setting environment variables is a common way to pass configuration options to containers
 and it is the way we have chosen to do it here.
 So far we've explained the need for `OPENVPN_PROVIDER` and `OPENVPN_CONFIG`. We use the
@@ -67,13 +67,13 @@ script which in turn will call the start scripts for
 The up script will be called with a number of parameters from OpenVPN, and among them is the IP of the tunnel interface.
 This IP is the one we've been assigned by DHCP from the OpenVPN server we're connecting to.
-We use this value to override Transmissions bind address, so we'll only listen for traffic from peers on the VPN interface.
+We use this value to override Transmission's bind address, so we'll only listen for traffic from peers on the VPN interface.
-The startup script checks to see if one of the [alternative web ui's](config-options.md#alternative_web_uis) should be used for Transmission.
+The startup script checks to see if one of the [alternative web UIs](config-options.md#alternative_web_uis) should be used for Transmission.
 It also sets up the user that Transmission should be run as, based on the PUID and PGID passed by the user
 along with selecting preferred logging output and a few other tweaks.
-Before starting Transmission we also need to see if there are any settings that should be overridden.
+Before starting Transmission we also need to see if any settings should be overridden.
 One example of this is binding Transmission to the IP we've gotten from our VPN provider.
 Here we check if we find any environment variables that match a setting that we also see in settings.json.
 This is described in the [config section](config-options/#transmission_configuration_options).
@@ -89,9 +89,9 @@ After starting Transmission there is an optional step that some providers have;
 to get an open port and set it in Transmission. **Opening a port in your local router does not work**.
 I made that bold because it's a recurring theme. It's not intuitive until it is I guess.
 Since all your traffic is going through the VPN, which is kind of the point, the port you have to open is not on your router.
-Your router's external IP address is the destination of those packets. It is on your VPN providers end that it has to be opened.
-Some providers support this, other don't. We try to write scripts for those that do and that script will be executed
+Your router's external IP address is the destination of those packets. It is on your VPN provider's end that it has to be opened.
+Some providers support this, others don't. We try to write scripts for those that do and that script will be executed
 after starting Transmission if it exists for your provider.
-At this point Transmission is running and everything is great!
-But you might not be able to access it, and that's the topic of the [networking section](vpn-networking.md).
+At this point, Transmission is running and everything is great!
+But you might not be able to access it, and that's the topic of the [networking section](vpn-networking.md).
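As an aside on the up-script mechanism this file describes: OpenVPN passes the tunnel details to its `--up` script as positional arguments, with the local tunnel IP in `$4`. A minimal sketch with illustrative values (this is not the project's actual script):

```shell
#!/bin/sh
# Simulate the positional arguments OpenVPN gives an --up script:
# device, tun MTU, link MTU, local tunnel IP, remote IP, init flag.
# These values are illustrative, not taken from a real connection.
set -- tun0 1500 1500 10.8.0.2 10.8.0.1 init
dev="$1"
local_ip="$4"
# The real start script would write this IP into Transmission's
# settings as the bind address; here we only print it.
echo "bind-address-ipv4=${local_ip} via ${dev}"
```

Because the script only sees whatever OpenVPN hands it, this is why the bind address always follows the tunnel IP, whatever the VPN server assigns.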


@@ -31,7 +31,7 @@ secrets:
 | `OPENVPN_CONFIG` | Sets the OpenVPN endpoint to connect to. | `OPENVPN_CONFIG=UK Southampton` |
 | `OPENVPN_OPTS` | Will be passed to OpenVPN on startup | See [OpenVPN doc](https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html) |
 | `LOCAL_NETWORK` | Sets the local network that should have access. Accepts comma separated list. | `LOCAL_NETWORK=192.168.0.0/24` |
-| `CREATE_TUN_DEVICE` | Creates /dev/net/tun device inside the container, mitigates the need mount the device from the host | `CREATE_TUN_DEVICE=true` |
+| `CREATE_TUN_DEVICE` | Creates /dev/net/tun device inside the container, mitigates the need to mount the device from the host | `CREATE_TUN_DEVICE=true` |
 | `PEER_DNS` | Controls whether to use the DNS provided by the OpenVPN endpoint. | To use your host DNS rather than what is provided by OpenVPN, set `PEER_DNS=false`. This allows for potential DNS leakage. |
 | `PEER_DNS_PIN_ROUTES` | Controls whether to force traffic to peer DNS through the OpenVPN tunnel. | To disable this default, set `PEER_DNS_PIN_ROUTES=false`. |
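For orientation, the core variables from the table above are usually supplied together; a hypothetical docker-compose sketch using example values from these docs (provider, endpoint and credentials are placeholders):

```yaml
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN                     # required for the VPN interface setup
    environment:
      - OPENVPN_PROVIDER=PIA          # placeholder provider
      - OPENVPN_CONFIG=france         # placeholder endpoint
      - OPENVPN_USERNAME=user         # placeholder credentials
      - OPENVPN_PASSWORD=pass
      - LOCAL_NETWORK=192.168.0.0/24  # example value from the table above
```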
@@ -66,7 +66,7 @@ Because your VPN connection can sometimes fail, Docker will run a health check o
 ### Permission configuration options
-By default the startup script applies a default set of permissions and ownership on the transmission download, watch and incomplete directories. The GLOBAL_APPLY_PERMISSIONS directive can be used to disable this functionality.
+By default, the startup script applies a default set of permissions and ownership on the transmission download, watch and incomplete directories. The GLOBAL_APPLY_PERMISSIONS directive can be used to disable this functionality.
 | Variable | Function | Example |
 | -------------------------- | -------------------------------------- | -------------------------------- |
@@ -91,8 +91,8 @@ to either `combustion`, `kettu`, `transmission-web-control`, `flood-for-transmis
 ### User configuration options
-By default everything will run as the root user. However, it is possible to change who runs the transmission process.
-You may set the following parameters to customize the user id that runs transmission.
+By default, everything will run as the root user. However, it is possible to change who runs the transmission process.
+You may set the following parameters to customize the user id that runs Transmission.
 | Variable | Function | Example |
 | -------- | ------------------------------------------- | ----------- |
@@ -127,12 +127,12 @@ A full list of variables can be found in the Transmission documentation [here](h
 All variables overridden by environment variables will be logged during startup.
 PS: `TRANSMISSION_BIND_ADDRESS_IPV4` will automatically be overridden to the IP assigned to your OpenVPN tunnel interface.
-This ensures that Transmission only listens for torrent traffic on the VPN interface and is part of the fail safe mechanisms.
+This ensures that Transmission only listens for torrent traffic on the VPN interface and is part of the fail-safe mechanisms.
 ### Dropping default route from iptables (advanced)
 Some VPNs do not override the default route, but rather set other routes with a lower metric.
-This might lead to the default route (your untunneled connection) to be used.
+This might lead to the default route (your untunneled connection) being used.
 To drop the default route set the environment variable `DROP_DEFAULT_ROUTE` to `true`.
@@ -140,15 +140,15 @@ _Note_: This is not compatible with all VPNs. You can check your iptables routin
 ### Changing logging locations
-By default Transmission will log to a file in `TRANSMISSION_HOME/transmission.log`.
+By default, Transmission will log to a file in `TRANSMISSION_HOME/transmission.log`.
 To log to stdout instead set the environment variable `LOG_TO_STDOUT` to `true`.
-_Note_: By default stdout is what container engines read logs from. Set this to true to have Tranmission logs in commands like `docker logs` and `kubectl logs`. OpenVPN currently only logs to stdout.
+_Note_: By default, stdout is what container engines read logs from. Set this to true to have Transmission logs in commands like `docker logs` and `kubectl logs`. OpenVPN currently only logs to stdout.
 ### Custom scripts
-If you ever need to run custom code before or after transmission is executed or stopped, you can use the custom scripts feature.
+If you ever need to run custom code before or after Transmission is executed or stopped, you can use the custom scripts feature.
 Custom scripts are located in the /scripts directory which is empty by default.
 To enable this feature, you'll need to mount the /scripts directory.
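The custom-scripts mount described above can be expressed as a volume in docker-compose; a sketch (the host path and service name are assumptions):

```yaml
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    volumes:
      # Host directory holding your custom scripts, mounted over /scripts
      - /path/to/my-scripts:/scripts
```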


@@ -1,12 +1,12 @@
 # Debugging your setup
 The goal of this page is to provide a common set of tests that can be run to try to narrow down
-an issue with the container before you actually create a new issue for it. We see a lot of repeat
+an issue with the container before you create a new issue for it. We see a lot of repeat
 business in the issues section and spending time answering questions for individual setups takes
 away from improving the container and making it more stable in the first place.
 This guide should be improved over time but can hopefully help you point out the most common errors
-and provide some pointers on how to proceed. A short summary of what you've tried should be added to
+and provide some pointers on how to proceed. A summary of what you've tried should be added to
 the description if you can't figure out what's wrong with your setup and create an issue for it.
 ## Introduction and assumptions
@@ -16,13 +16,13 @@ commands to test it. If you're a docker-compose user then you can make a similar
 If you are using any of the NAS container orchestration UIs then you just have to mimic this behaviour
 as best you can. Note that you can ssh into the NAS and run commands directly.
-NOTE: The commands listed here uses the --rm flag which will remove the container from the host when it
+NOTE: The commands listed here use the --rm flag which will remove the container from the host when it
 shuts down. And as we're not mounting any volumes here, your host system will not be altered from running
 any of these commands. If any command breaks with this principle it will be noted.
 ## Checking that Docker works properly
-In order for this container to work you have to have a working Docker installation on your host.
+For this container to work, you have to have a working Docker installation on your host.
 We'll begin very simple with this command that will print a welcome message if Docker is properly installed.
 ```
@@ -30,17 +30,17 @@ docker run --rm hello-world
 ```
 Then we can try to run an alpine image, install curl and run curl to get your public IP.
-This verifies that Docker containers on your host has a working internet access
+This verifies that Docker containers on your host have working internet access
 and that they can look up hostnames with DNS.
 ```
 docker run --rm -it alpine sh -c "apk add curl && curl ipecho.net/plain"
 ```
-If you get an error with "Could not resolve host" then you have to look at the dns options in
+If you get an error with "Could not resolve host" then you have to look at the DNS options in
 [the Docker run reference](https://docs.docker.com/engine/reference/run/#network-settings).
-Finally we will check that your Docker daemon runs with a bridge network as the default network driver.
-The following command runs an alpine container and prints it's iptable routes. It probably outputs two
+Finally, we will check that your Docker daemon runs with a bridge network as the default network driver.
+The following command runs an alpine container and prints its iptable routes. It probably outputs two
 lines and one of them starts with `172.x.0.0/16 dev eth0` and the other one also references `172.x.0.1`.
 The 172 addresses are a sign that you're on a Docker bridge network. If your local IP like `192.168.x.y`
 shows up your container is running with host networking and the VPN container would affect the entire host
@@ -54,16 +54,16 @@ help getting Docker to work.
 ## Try running the container with an invalid setup
-We'll keep this brief because it's not the most useful step, but you can actually verify a bit anyways.
+We'll keep this brief because it's not the most useful step, but you can verify a bit anyways.
 Run this command (even if PIA is not your provider) and do not insert your real username/password:
 ```
 docker run --rm -it -e OPENVPN_PROVIDER=PIA -e OPENVPN_CONFIG=france -e OPENVPN_USERNAME=donald -e OPENVPN_PASSWORD=duck haugene/transmission-openvpn
 ```
-At this point the commands are getting longer. I'll start breaking them up into lines using \ to escape the line
+At this point, the commands are getting longer. I'll start breaking them up into lines using \ to escape the line
 breaks. For those that are new to shell commands; a \ at the end of the line will tell the shell to keep on
-reading as if it was on the same line. You can copy-paste this somewhere and put everythin on the same line
+reading as if it was on the same line. You can copy-paste this somewhere and put everything on the same line
 and remove the \ characters if you want to. The same command then becomes:
 ```
 docker run --rm -it \
@@ -123,7 +123,7 @@ LOCAL_NETWORK variable you cannot access Transmission that is running in the con
 any ports on your host either so Transmission is not reachable from outside of the container as of now.
 You don't need to expose Transmission outside of the container to contact it though. You can get another shell
-inside the same container that you are running and try to curl Transmission web ui from there.
+inside the same container that you are running and try to curl Transmission web UI from there.
 If this gets too complicated then you can skip to the next point, but please try to come back here
 if the next point fails. One thing is not being able to access Transmission which might be network related,
@@ -141,14 +141,14 @@ apparently doing well.
 ## Accessing Transmission Web UI
 If you've come this far we hopefully will be able to connect to the Transmission Web UI from your browser.
-In order to do this we have to know what LAN IP your system is on. The reason for this is a bit complex and
-is described in the [VPN networking](vpn-networking.md) section. The short version is that OpenVPN need to
+To do this we have to know what LAN IP your system is on. The reason for this is a bit complex and
+is described in the [VPN networking](vpn-networking.md) section. The short version is that OpenVPN needs to
 be able to differentiate between what traffic to tunnel and what to let go. Since the VPN is running on
 the Docker bridge network it is not able to detect computers on your LAN as actually being local devices.
-We'll base ourselves on the command from the previous sections, but to access Transmission we need to
+We'll base ourselves on the command from the previous sections, but to access Transmission, we need to
 expose the 9091 port to the host and tell the containers what IP ranges NOT to tunnel. Whatever you put
-in LOCAL_NETWORK will be trusted as a local network and traffic to those IPs will not be tunneled.
+in LOCAL_NETWORK will be trusted as a local network and traffic to those IPs will not be tunnelled.
 Here we will assume that you're on one of the common 192.168.x.y subnets.
 The command then becomes:
@@ -164,16 +164,16 @@ docker run --rm -it --cap-add=NET_ADMIN \
 haugene/transmission-openvpn
 ```
-With any luck you should now be able to access Transmission at [http://localhost:9091](http://localhost:9091)
+With any luck, you should now be able to access Transmission at [http://localhost:9091](http://localhost:9091)
 or whatever server IP where you have started the container.
-NOTE: If you're trying to run this beside another container you can use `-p 9092:9091` to bind 9092
+NOTE: If you're trying to run this alongside another container you can use `-p 9092:9091` to bind 9092
 on the host instead of 9091 and avoid port conflict.
 ## Now what?
 If this guide has failed at some point then you should create an issue for it. Please add the command
-that you ran and the logs that was produced.
+that you ran and the logs that were produced.
 If you're now able to access Transmission and it seems to work correctly then you should add a volume mount
 to the `/data` folder in the container. You'll then have a setup like what's shown on the
@@ -183,11 +183,11 @@ If you have another setup that does not work then you now have two versions to c
 that will lead you to find the error in your old setup. If the setup is the same but this version works then
 the error is in your state. Transmission stores its state in /data/transmission-home by default and
 it might have gotten corrupt. One simple thing to try is to delete the settings.json file that is found here.
-We do mess with that file and we might have corrupted it. Apart from that we do not change anything within
+We do mess with that file and we might have corrupted it. Apart from that, we do not change anything within
 the Transmission folder and any issues should be asked in Transmission forums.
 ## Conclusion
 I hope this has helped you to solve your problem or at least narrow down where it's coming from.
 If you have suggestions for improvements do not hesitate to create an issue or even better open
-a PR with your proposed changes.
+a PR with your proposed changes.


@@ -1,6 +1,6 @@
-* [The container runs, but I can't access the web ui](#the_container_runs_but_i_cant_access_the_web_ui)
-* [How do I enable authentication in the web ui](#how_do_i_enable_authentication_in_the_web_ui)
+* [The container runs, but I can't access the web ui](#the_container_runs_but_i_cant_access_the_web_UI)
+* [How do I enable authentication in the web ui](#how_do_i_enable_authentication_in_the_web_UI)
 * [How do I verify that my traffic is using VPN](#how_do_i_verify_that_my_traffic_is_using_vpn)
 * [RTNETLINK answers: File exists](#rtnetlink_answers_file_exists)
 * [RTNETLINK answers: Invalid argument](#rtnetlink_answers_invalid_argument)
@@ -12,11 +12,11 @@
 * [Send Username Password via file](#send_username_password_via_file)
 * [AUTH: Received control message: AUTH_FAILED](#auth_received_control_message_auth_failed)
-## The container runs, but I can't access the web ui
+## The container runs, but I can't access the web UI
 [TODO](https://github.com/haugene/docker-transmission-openvpn/issues/1558): Short explanation and link to [networking](vpn-networking.md)
-## How do I enable authentication in the web ui
+## How do I enable authentication in the web UI
 You can do this either by setting the appropriate fields in `settings.json` which is
 found in TRANSMISSION_HOME which defaults to `/config/transmission-home` so it will be available
@@ -88,11 +88,11 @@ This is an error where we haven't got too much information. If the hints above g
 ## Error resolving host address
-This error can happen multiple places in the scripts. The most common is that it happens with `curl` trying to download the latest .ovpn
-config bundle for those providers that has an update script, or that OpenVPN throws the error when trying to connect to the VPN server.
+This error can happen at multiple places in the scripts. The most common is that it happens with `curl` trying to download the latest .ovpn
+config bundle for those providers that have an update script, or that OpenVPN throws the error when trying to connect to the VPN server.
 The curl error looks something like `curl: (6) Could not resolve host: ...` and OpenVPN says `RESOLVE: Cannot resolve host address: ...`.
-Either way the problem is that your container does not have a valid DNS setup. We have two recommended ways of addressing this.
+Either way, the problem is that your container does not have a valid DNS setup. We have two recommended ways of addressing this.
 The first solution is to use the `dns` option offered by Docker. This is available in
 [Docker run](https://docs.docker.com/engine/reference/run/#network-settings) as well as
@@ -117,7 +117,7 @@ or more of these servers and they will be sorted alphabetically.
 **What is the difference between these solutions?**
-A good question as they both seem to override what DNS servers the container should use. However they are not equal.
+A good question as they both seem to override what DNS servers the container should use. However, they are not equal.
 The first solution uses the dns flags from Docker. This will mean that we instruct Docker to use these DNS servers for the container,
 but the resolv.conf file in the container will still point to the Docker DNS service. Docker might have many reasons for this but one of
@@ -126,8 +126,8 @@ and you want to be able to lookup the other containers based on their service na
 By using the `--dns` flags you should have both control of what DNS servers are used for external requests as well as container DNS lookup.
 The second solution is more direct. It rewrites the resolv.conf file so that it no longer refers to the Docker DNS service.
-The effects of this is that you lose Docker service discovery from the container (other containers in the same network can still resolve it)
-but you have cut out a middleman and potential point of error. I'm not sure why this some times is necessary but it has proven to fix
+The effect of this is that you lose Docker service discovery from the container (other containers in the same network can still resolve it)
+but you have cut out a middleman and potential point of error. I'm not sure why this sometimes is necessary but it has proven to fix
 the issue in some cases.
 **A possible third option**
@@ -139,7 +139,7 @@ solve the problem for you if it is your local network that in some way is blocki
 ## Container loses connection after some time
-For some users, on some platforms, apparently this is an issue. I have not encountered this myself - but there is no doubt that it's recurring.
+For some users, on some platforms, apparently, this is an issue. I have not encountered this myself - but there is no doubt that it's recurring.
 Why does the container lose connectivity? That we don't know and it could be many different reasons that manifest the same symptoms.
 We do however have some possible solutions.
@@ -150,7 +150,7 @@ The problem is that if the container has lost internet connection restarting Ope
 this option using `OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 60`. This will tell OpenVPN to exit when it cannot ping the server for 1 minute.
 When OpenVPN exits, the container will exit. And if you've then set `restart=always` or `restart=unless-stopped` in your Docker config then Docker will
-restart the container and that could/should restore connectivity. VPN providers sometime push options to their clients after they connect. This is visible
+restart the container and that could/should restore connectivity. VPN providers sometimes push options to their clients after they connect. This is visible
 in the logs if they do. If they push ping-restart that can override your settings. So you could consider adding `--pull-filter ignore ping` to the options above.
 This approach will probably work, especially if you're seeing logs like these from before:
@@ -159,7 +159,7 @@ Inactivity timeout (--ping-restart), restarting
 SIGUSR1[soft,ping-restart] received, process restarting
 ```
-### Use a third party tool to monitor and restart the container
+### Use a third-party tool to monitor and restart the container
 The container has a health check script that is run periodically. It will report the health status to Docker and the container will show as "unhealthy"
 if basic network connectivity is broken. You can write your own script and add it to cron, or you can use a tool like [https://github.com/willfarrell/docker-autoheal](https://github.com/willfarrell/docker-autoheal) to look for and restart unhealthy containers.
@@ -168,7 +168,7 @@ This container has the `autoheal` label by default so it is compatible with the
 ## Send Username Password via file
-Depending on your setup, you may not want to send your vpn user/pass via environment variables (main reason being, it is accessible via docker inspect). If you prefer, there is a way to configure the container to use a file instead.
+Depending on your setup, you may not want to send your VPN user/pass via environment variables (the main reason being, it is accessible via docker inspect). If you prefer, there is a way to configure the container to use a file instead.
 *Procedure*
 1. create a text file with username and password in it, each on a separate line: eg:
@@ -237,9 +237,9 @@ up your credentials. We have had challenges with special characters. Having "?=
 **NOTE** Some providers have multiple sets of credentials. Some for OpenVPN, others for web login, proxy solutions, etc.
 Make sure that you use the ones intended for OpenVPN. **PIA users:** this has recently changed. It used to be a separate pair, but now
-you should use the same login as you do in the web control panel. Before you were supposed to use a username like x12345, now its the p12345 one. There is also a 99 character limit on password length.
+you should use the same login as you do in the web control panel. Before you were supposed to use a username like x12345, now it's the p12345 one. There is also a 99 character limit on password length.
-First check that your credentials are correct. Some providers have separate credentials for OpenVPN so it might not be the same as for their apps.
+First, check that your credentials are correct. Some providers have separate credentials for OpenVPN so it might not be the same as for their apps.
 Secondly, test a few different servers just to make sure that it's not just a faulty server. If this doesn't resolve it, it's probably the container.
 To verify this you can mount a volume to `/config` in the container. So for example `/temporary/folder:/config`. Your credentials will be written to


@@ -12,7 +12,7 @@ If the VPN connection fails or the container for any other reason loses connecti
 [TODO](https://github.com/haugene/docker-transmission-openvpn/issues/1558): Relevant issues...
 #### Reach sleep or hybernation on your host if no torrents are active
-By befault Transmission will always [scrape](https://en.wikipedia.org/wiki/Tracker_scrape) trackers, even if all torrents have completed their activities, or they have been paused manually. This will cause Transmission to be always active, therefore never allow your host server to be inactive and go to sleep/hybernation/whatever. If this is something you want, you can add the following variable when creating the container. It will turn off a hidden setting in Tranmsission which will stop the application to scrape trackers for paused torrents. Transmission will become inactive, and your host will reach the desidered state.
+By default, Transmission will always [scrape](https://en.wikipedia.org/wiki/Tracker_scrape) trackers, even if all torrents have completed their activities, or they have been paused manually. This will cause Transmission to always be active, never allowing your host server to be inactive and go to sleep/hibernation/whatever. If this is something you want, you can add the following variable when creating the container. It will turn off a hidden setting in Transmission which will stop the application from scraping trackers for paused torrents. Transmission will become inactive, and your host will reach the desired state.
 ```
 -e "TRANSMISSION_SCRAPE_PAUSED_TORRENTS_ENABLED=false"
 ```
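For docker-compose users, the same `-e` flag translates to an environment entry; a sketch assuming a service named `transmission-openvpn`:

```yaml
services:
  transmission-openvpn:
    environment:
      # Stop scraping trackers for paused torrents so the host can sleep
      - TRANSMISSION_SCRAPE_PAUSED_TORRENTS_ENABLED=false
```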