One of the key requirements of pursuing Good Digital Hygiene is using strong passwords, and a different strong password for every application. This is relatively easy to do in theory, with the aid of clever software, but it's something desperately few people do well in practice. I'm going to explain how I've addressed this issue of digital hygiene for myself, and how you can do it for yourself, and your entire family, social circle, or community.
Password Managers (or keepers or safes) have emerged as that 'clever software'. A good password manager has to do a bunch of things to be really useful:
- It needs to store your passwords somewhere in an encrypted form (so if someone gets your password database, they can't work out your entire collection of passwords). You only need to remember one really strong password/phrase to unlock all of them.
- It needs to work in whatever context you need a password, like
- your desktop/laptop, where you need to remember logins for a variety of apps and services,
- in your browser (for web apps that require authentication), and
- on your mobile platforms (because most services you use via apps or browsers require authentication)
- It needs to be cross-platform
- must support Windows, MacOS, and Linux OSs,
- must support extensions for many browsers like Firefox, Chrome/Chromium, Safari, and others, and
- must support mobile OSs like iOS and Android.
- It needs to sync data in a timely manner among all the different contexts in which a given user needs it.
That's a lot of requirements. There're quite a few efforts that have had a crack at solving this.
The KeePassX community has been addressing this for ages and has created a comprehensive (if variable) ecosystem of apps which work across all of the required platforms, but only with a lot of work.
In the proprietary world, there're many options, with a few front runners like 1Password and LastPass. The former doesn't work on Linux, so it only gets a passing reference and no link :) (update 2019-05-31 - 1Password has added Linux support). The latter, which I used (grudgingly, mostly because I couldn't get KeePassX to work for me) for a few years, works across all the platforms relevant to me, but it was becoming progressively more invasive and annoying to use. Also, because it has a lot of users, and stores everything (albeit, encrypted) in a centralised cloud repository, it's a big target. Also, with its largely proprietary code, I wasn't happy trusting it.
Then I heard about BitWarden. They offered a commercial service (with a free tier) that I could quickly try... they supported all the OSs, mobile and desktop, and browsers that I use... and they release their entire codebase (server and clients) under open source licenses. I tried it, it worked for me, I was sold!
Update 2020-12-20: here's a nice explanation of why you'd want a password manager and even a comparison between widely used (proprietary) LastPass and (open source) BitWarden. People reading this might also be interested in learning how websites check your password... without storing a copy of your password! Thanks for providing your CC-BY-SA licensed works for us all Kev!
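To give the gist of that last link: a site never needs to keep your password, only a salted hash of it. As a quick illustration (using the openssl command as a stand-in for the website's code - the salt and password here are made-up examples):
# store only this salted hash, never the password itself
openssl passwd -6 -salt 'x8P2mQ' 'correct horse battery staple'
# at login, recompute the hash from the submitted password using the same salt;
# if the resulting strings match, the password was correct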
Then I decided I wanted to run my own BitWarden server, rather than use their commercial centralised cloud platform (because, as with LastPass, it's a tempting target). That's when I found out that BitWarden's server was written using Microsoft technologies: C# (yeah, it's mostly open source, but it's dirty to me due to its Microsoft legacy), and MS SQL Server, which is a nasty proprietary dependency (especially given how basic the database requirements for this sort of application are).
So I was devastated that I couldn't set up my own server without compromising my iron-clad anti-Microsoft position (I've managed to maintain it for the past 25 years)... until another Free and Open Source Software aficionado pointed me at Daniel Garcia's work! Daniel has implemented a full (unofficial) BitWarden work-alike using a fully FOSS stack: the Rust language, storing data in SQLite, and (quite thoughtfully) re-using other open source licensed components of the BitWarden system that don't have proprietary dependencies, including the website code and layout (which is part of the server).
Daniel's server implementation also unlocks all the 'premium' services that BitWarden offers through their hosted service, too... so that's a nice bonus.
Another open source developer, mprasil, has created a 'fork' of Daniel's project from which he maintains an up-to-date Docker container on hub.docker.com. Thanks to both Daniel Garcia's and mprasil's efforts, it turns out to be quite straightforward to set up your own Docker-based BitWarden-compatible service! Here's how...
Creating your own BitWarden Service
Set up a Virtual Server
The first step is to get yourself an entry-level virtual server or compute instance somewhere. I generally use DigitalOcean (I have no affiliation with the company), but there are many other commodity hosting services (check out Vultr or Linode, for example) around the world which offer comparably (or better) spec'd servers for USD5.00/month, or USD60.00/year - I encourage you to do a bit of research. For that you get a Gigabyte (GB) of RAM, a processor, and 40GB of SSD (Solid State Drive = faster) storage. That's oodles of grunt for what this application requires.
I suggest you create an account for yourself (and I encourage you to use Two Factor Authentication, aka 2FA) and create an Ubuntu 18.04 virtual server (or the most recent LTS version - the next will be 20.04, in April 2020 :) ) in the zone nearest to you. You'll need to note the server's IP address (it'll be a series of 4 numbers, 0-255, separated by full stops, e.g. 103.99.72.244). With that, you can log into it via SSH.
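For example, using the illustrative address above:
ssh root@103.99.72.244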
Get your Domain lined up
You will want to have a domain to point at your server, so you don't have to remember the IP number. There are thousands of domain 'registrars' in the world who'll help you do that... You just need to 'register' a name and pay a yearly fee, usually between USD10-30 depending on the country and the 'TLD' (Top Level Domain). There are national TLDs like .nz, .au, .uk, .tv, .sa, .za, etc., and international domains (mostly associated with the US) like .com, .org, .net, and a myriad of others. Countries decide how much their domains wholesale for, and registrars add a margin for the registration service.
Here in NZ, I use the services of Metaname (they're local to me in Christchurch, and I know them personally and trust their technical capabilities). If you're not sure who to use, ask your friends. Someone's bound to have recommendations (either positive or negative, in which case you'll know who to avoid).
If you want to use your domain for other things besides your BitWarden instance, I'd encourage you to use a subdomain, like (my usual choice) 'safe.domainname', namely the subdomain 'safe' of 'domainname'.
Once you have selected and registered your domain, you can set up (usually through a web interface provided by the registrar) an 'A Record' which associates your website's name to the IP address of your server. So you should just be able to enter your server's IP address, the domain name (or sub-domain) you want to use for your BitWarden service, and that's it. For a password safe, I tend to use the subdomain 'safe', so, for example, safe.mydomain.nz or similar.
You might be asked to set a 'Time-to-live' (which has to do with the length of time Domain Name Servers are asked to 'cache' the association that the A Record specifies) in which case you can put in 3600 seconds or an hour depending on the time units your interface requests... but in most cases that'll be set to a default of an hour automatically.
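The end result, in DNS zone-file terms, is a record along these lines (illustrative values):
safe.mydomain.nz. 3600 IN A 103.99.72.244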
You should be able to test that your A Record has been set correctly by SSHing to your domain name rather than the IP address. It should (after you accept the SSH warning that the server's name has changed) work the same way your original SSH login did.
Set up a Docker Server
Once I've logged into the server for the first time as the 'root' (full admin) user, here's what I usually do:
- I create an 'unprivileged user', either with my name 'dave' or sometimes an 'ubuntu' user (some hosting providers create a default unprivileged user of 'ubuntu' when you create an Ubuntu-based virtual machine. Some create a 'debian' user for Debian-based VMs, etc.) via
adduser ubuntu
- I install a few core applications: my preferred editor, vim (nano is another easy option and comes pre-installed on Ubuntu), the version control system git, and a very handy configuration tracker, etckeeper:
apt-get update && apt-get install vim git etckeeper
- I do some basic configuration of git (replace the [tokens] with the real values for you, minus the []):
git config --global user.email '[your email]'
git config --global user.name '[your full name, e.g. Jane Doe]'
- Initialise etckeeper - it will track configuration changes you make to your system, which can be invaluable in replicating a server or working out what's changed if something breaks:
etckeeper init
etckeeper commit -m 'initial commit of BitWarden host'
- Install Docker dependencies:
apt-get install apt-transport-https ca-certificates curl software-properties-common pwgen
- Install the secure key needed to add the docker.com package repository to your system:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Confirm the key is valid:
apt-key fingerprint 0EBFCD88
(you should see something like 'uid [ unknown] Docker Release (CE deb)' among the 4 lines of output)
- Add the repository for your Ubuntu version (this will pick it automatically):
add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable'
- Update the package repository to include the packages from docker.com
apt-get update
- Install the Community Edition of the Docker service
apt-get install docker-ce
- Add your unprivileged user ('ubuntu' in this case - substitute the unprivileged user you created!) to a new 'docker' group and add that user to other useful groups:
groupadd docker
adduser ubuntu docker
adduser ubuntu sudo
adduser ubuntu admin
- Create an SSH key for your unprivileged user and allow logins for that user from external connections:
sudo -Hu ubuntu ssh-keygen -t rsa
cp /root/.ssh/authorized_keys /home/ubuntu/.ssh/
chown -R ubuntu:ubuntu /home/ubuntu/.ssh/
adduser ubuntu ssh
- Install the Python packaging system, 'pip', to allow you to install and maintain the Docker Compose framework for managing collections of Docker containers:
apt install python-pip
pip install -U pip
pip install docker-compose
- Set a couple of convenience variables here (note: they'll only be recognised for this session, i.e. until you log out):
DOMAIN=[your domain]
USER=[unprivileged user, e.g. ubuntu]
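For example, with the illustrative values used elsewhere in this post, that might look like:
DOMAIN=safe.mydomain.nz
USER=ubuntu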
Below, anytime you see $DOMAIN in a command, it'll be replaced by whatever you put in for [your domain], and similarly $USER...
- Create directories to hold both the Docker Compose configurations and the persistent data you don't want to lose if you remove your Docker containers (namely your password database and configuration information):
mkdir -p /home/docker/$DOMAIN && mkdir -p /home/data/$DOMAIN
chown -R ${USER}:${USER} /home/data /home/docker/
- Install the NGINX (pronounced 'Engine X') webserver which will act as a reverse proxy for the BitWarden service and terminate the encryption via HTTPS:
apt-get install nginx-full
- Configure the server's firewall and make exceptions for the SSH and NGINX services:
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw enable
Check that it's running via:
ufw status
- Create a directory for including files for NGINX
cd /etc/nginx
mkdir includes
Choose your text editor for editing files. Here're options for Vim or Nano - you can install and select others. Setting the EDIT shell variable allows you to copy and paste these commands regardless of which editor you prefer, as it'll replace $EDIT with the full path to your preferred editor:
EDIT=`which nano`
or
EDIT=`which vim`
- To support encrypted data transfer between external devices and your server using HTTPS, you need a valid SSL certificate. Until recently, these were costly and hard to get. With Let's Encrypt, they've become a straightforward and essential part of any good (user-respecting) web site or service. To facilitate getting and periodically renewing your SSL certificate, you need to create the file letsencrypt.conf:
$EDIT includes/letsencrypt.conf
and enter the following content:
#############################################################################
# Configuration file for Let's Encrypt ACME Challenge location
# This file is already included in listen_xxx.conf files.
# Do NOT include it separately!
#############################################################################
#
# This config enables to access /.well-known/acme-challenge/xxxxxxxxxxx
# on all our sites (HTTP), including all subdomains.
# This is required by ACME Challenge (webroot authentication).
# You can check that this location is working by placing ping.txt here:
# /var/www/letsencrypt/.well-known/acme-challenge/ping.txt
# And pointing your browser to:
# http://xxx.domain.tld/.well-known/acme-challenge/ping.txt
#
# Sources:
# https://community.letsencrypt.org/t/howto-easy-cert-generation-and-renewal-with-nginx/3491
#
# Rule for legitimate ACME Challenge requests
location ^~ /.well-known/acme-challenge/ {
default_type 'text/plain';
# this can be any directory, but this name keeps it clear
root /var/www/letsencrypt;
}
# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
return 404;
}
Now you need to create the directory described in the letsencrypt.conf file:
mkdir /var/www/letsencrypt
Create 'forward secrecy & Diffie Hellman ephemeral parameters' to make your server more secure... The result will be a Diffie-Hellman parameter file stored in
/etc/ssl/certs/dhparam.pem
(note: gathering enough 'entropy' to generate sufficient randomness to calculate this will take a few minutes!):
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
and then you need to create the reverse proxy configuration file as follows:
cd ../sites-available
$EDIT bitwarden
and fill it with this content, replacing all [tokens] with your relevant values:
#
# HTTP does *soft* redirect to HTTPS
#
server {
# add [IP-Address:]80 in the next line if you want to limit this to a single interface
listen 0.0.0.0:80;
server_name [your domain];
root /home/data/[your domain];
index index.php;
# change the file name of these logs to include your server name
# if hosting many services...
access_log /var/log/nginx/[your domain]_access.log;
error_log /var/log/nginx/[your domain]_error.log;
include includes/letsencrypt.conf;
# redirect all HTTP traffic to HTTPS.
location / {
return 302 https://[your domain]$request_uri;
}
}
and make the configuration available to NGINX by linking the file from sites-available into sites-enabled (you can disable the site by removing the link and reloading NGINX):
ln -sf /etc/nginx/sites-available/bitwarden /etc/nginx/sites-enabled/bitwarden
Check to make sure NGINX is happy with the configuration:
nginx -t
If you don't get any errors, you can restart NGINX:
service nginx restart
and it should be configured properly to respond to requests at
http://[your domain]/.well-known/acme-challenge/
which is required for creating a Let's Encrypt certificate.
So now we can create the certificate. You'll need to install the letsencrypt scripts:
apt-get install letsencrypt
You will be asked to enter some information about yourself, including an email address - this is necessary so that the letsencrypt service can email you if any of your certificates are not successfully renewed (they need renewing every couple of months - normally this happens automatically!) so that your site and users aren't affected by an expired SSL certificate (a bad look!). Trust me, these folks are the good guys.
You create a certificate for [your domain] with the following command (with relevant substitutions):
letsencrypt certonly --webroot -w /var/www/letsencrypt -d $DOMAIN
If the process works, you should see a 'Congratulations!' message. Edit the NGINX configuration file for the BitWarden service again:
$EDIT sites-available/bitwarden
and add the following to the bottom of the file (starting the line below the final '}'):
#
# HTTPS
#
# This assumes you're using Let's Encrypt for your SSL certs (and why wouldn't
# you!?)... https://letsencrypt.org
server {
# add [IP-Address:]443 ssl in the next line if you want to limit this to a single interface
listen 0.0.0.0:443 ssl;
ssl on;
ssl_certificate /etc/letsencrypt/live/[your domain]/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/[your domain]/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# to create this, see https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
ssl_dhparam /etc/ssl/certs/dhparam.pem;
keepalive_timeout 20s;
server_name [your domain];
root /home/data/[your domain];
index index.php;
# change the file name of these logs to include your server name
# if hosting many services...
access_log /var/log/nginx/[your domain]_access.log;
error_log /var/log/nginx/[your domain]_error.log;
location /notifications/hub/negotiate {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 2400;
proxy_read_timeout 2400;
proxy_send_timeout 2400;
}
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 2400;
proxy_read_timeout 2400;
proxy_send_timeout 2400;
}
location /notifications/hub {
proxy_pass http://127.0.0.1:3012;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
}
#
# These 'harden' your security
add_header 'Access-Control-Allow-Origin' '*';
}
- You should now be able to run
nginx -t
again, and if you haven't made any accidental errors in the files, it should return no errors. You can restart NGINX to make sure it picks up your SSL certificates:
service nginx restart
Now everything is ready to set up your BitWarden Docker containers!
Setting up your BitWarden 'rust' service
Before we start this part, you'll need a few bits of information. First, you'll need a 64 character random string to be your 'admin token'... you can create that like this:
pwgen -y 64 1
copy the result (highlight the text and hit CTRL+SHIFT+C) and paste it somewhere so you can copy-and-paste it into the file below later.
Also, if you want your BitWarden server to be able to send out emails, like for password recovery, you'll need an 'authenticating SMTP email account'... I would recommend setting one up specifically for this purpose. You can use a random gmail account or any other email account that lets you send mail by logging into an SMTP (Simple Mail Transfer Protocol) server, i.e. most mail servers. You'll need to know the SMTP [host name], the [port] (usually 465 or 587), the [login security] (usually 'true' or 'TLS'), and your authenticating [username] (possibly this is also the email address) and [password]. You'll also need a [from email], like bitwarden@[your domain] or similar, which will be the sender of email from your server.
You're going to be setting up your configuration in the directory we created earlier, so run
cd /home/docker/$DOMAIN
and there
$EDIT docker-compose.yml
copy-and-pasting in the following, replacing the [tokens] appropriately:
version: '3'

services:
  app:
    image: bitwardenrs/server
    environment:
      - DOMAIN=https://[your domain]
      - WEBSOCKET_ENABLED=true
      - SIGNUPS_ALLOWED=false
      - LOG_FILE=/data/bitwarden.log
      - INVITATIONS_ALLOWED=true
      - ADMIN_TOKEN=[admin token]
      - SMTP_HOST=[host name]
      - SMTP_FROM=[from email]
      - SMTP_PORT=[port]
      - SMTP_SSL=[login security]
      - SMTP_USERNAME=[username]
      - SMTP_PASSWORD=[password]
    volumes:
      - /home/data/[your domain]/data/:/data/
    ports:
      - '127.0.0.1:8080:80'
      - '127.0.0.1:3012:3012'
    restart: unless-stopped
Note that the indentation has to be exact in this file - Docker Compose will complain otherwise.
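If you want to double-check the file before going further, Docker Compose can parse it and echo back the merged configuration (it will complain if the indentation is off):
docker-compose config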
With the docker-compose file completed, you're ready to 'pull' your package!
docker-compose pull
This will download the BitWarden Docker container from hub.docker.com. Then all you need to do is start it:
docker-compose up -d && docker-compose logs -f
the 'up -d' option starts the container called 'app' - your BitWarden rust server - in 'daemon' mode, which means it'll keep running unless you tell it to stop. If that's successful, the second command then shows you the logs of that container. You can exit the logs at any time with CTRL-C, which will put you back on the command prompt. If you do want the container to stop, just run
docker-compose stop
If your start up was successful, you should see a message like this (albeit your version number could be higher - 1.9.0 is the current version of the Rust implementation at the time of writing):
/--------------------------------------------------------------------
| Starting Bitwarden_RS |
| Version 1.9.0 |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the |
| official channels to report bugs/features, regardless of client. |
| Report URL: https://github.com/dani-garcia/bitwarden_rs/issues/new |
--------------------------------------------------------------------/
You should now be able to point your browser at http://[your domain]
which, in turn, should automatically redirect you to https://[your domain]
and you should see the BitWarden web front end similar to that shown in the attached screen shot!
First Login!
Do your initial login by going to https://[your domain]/admin/ - you'll be asked to provide your 'admin token' (the random string you created earlier for your docker-compose.yml file, where you should be able to find it) to create a first user with administration privileges. That will allow you to create your initial personal user and other useful stuff.
For additional info on setting up these services - and new options as Daniel and his co-developers add them in - consult the repository pages and issues, and for Docker-specific questions, look at mprasil's pages.
Sending Emails
It'll be worth testing whether your email services work, for example by requesting a password hint! You should be able to see what the server's doing via the logs:
docker-compose logs -f
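If there's a lot of log output, you can filter for the mail-related lines, e.g.:
docker-compose logs | grep -i smtp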
Tips
I recommend not including your login credentials to your BitWarden instance in your BitWarden database ;) that's the one thing you need to remember. If you need to write it down somewhere, then do so (but make sure you don't include all the info needed to log in on the same piece of paper, that's just asking for trouble).
Also, you can easily configure all the BitWarden clients - browser plugins, mobile apps, or the desktop app - to use your server rather than BitWarden's default hosted service. Just click the 'gear' settings icon on each app's interface, and set the 'Self-Hosted Environment' Server URL to be your server, i.e. https://[your domain]
Backing it all up
I've created a SQLite backup script (which maintains automatic versioned hourly, daily, weekly, monthly, and yearly database dumps, the content in which is encrypted) described in more detail in another post...
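As a taste of what's involved, the heart of such a script is SQLite's online backup command. Here's a minimal sketch - it assumes the /home/data path and $DOMAIN variable used above, plus bitwarden_rs's default database file name (db.sqlite3); the real script adds the versioning and encryption on top:
apt-get install sqlite3
mkdir -p /home/data/$DOMAIN/backup
sqlite3 /home/data/$DOMAIN/data/db.sqlite3 ".backup /home/data/$DOMAIN/backup/db-$(date +%Y%m%d%H%M).sqlite3"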
Two Factor Authentication
This configuration should allow you to simply turn on Two Factor Authentication for any given BitWarden user.
Keeping it up-to-date
One of the best things about this Docker configuration is that it's very straightforward to upgrade your installation to Daniel's latest server version (via mprasil's Docker work). Just log into the server as your unprivileged user, then run:
cd /home/docker/[your domain]
docker-compose pull
docker-compose up -d && docker-compose logs -f
The whole process shouldn't take much more than a minute, with a few seconds downtime only as your new Docker BitWarden container is being created...
Hope this helps a few folks! If you find any of the above doesn't work, please let me know in the comments. I'll do my best to make sure this how-to is accurate and up-to-date, and I'll do my best to assist people if they're having trouble.
Have (secure and private) fun!
Tutorial
Introduction
Docker is a great tool, but to really take full advantage of its potential it's best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become unwieldy.
The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team eventually decided to make their own version based on the Fig source. They called it Docker Compose. In short, it makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.
By the end of this article, you will have Docker and Docker Compose installed and have a basic understanding of how Docker Compose works.
Docker and Docker Compose Concepts
Using Docker Compose requires a combination of a bunch of different Docker concepts in one, so before we get started let's take a minute to review the various concepts involved. If you're already familiar with Docker concepts like volumes, links, and port forwarding then you might want to go ahead and skip on to the next section.
Docker Images
Each Docker container is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Usually a minimal installation contains only the bare minimum of packages needed to run the image. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it's perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa).
Most Docker images are distributed via the Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible it's best to grab 'official' images, since they are guaranteed by the Docker team to follow Docker best practices.
Communication Between Docker Containers
Docker containers are isolated from the host machine by default, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. Needless to say, this makes configuring and working with the image running inside a Docker container difficult by default.
Docker has three primary ways to work around this. The first and most common is to have Docker specify environment variables that will be set inside the Docker container. The code running inside the Docker container will then check the values of these environment variables on startup and use them to configure itself properly.
Another commonly used method is a Docker data volume. Docker volumes come in two flavors — internal and shared.
Specifying an internal volume just means that for a folder you specify for a particular Docker container, the data will be persisted when the container is removed. For example, if you wanted to make sure your log files hung around you might specify an internal /var/log volume.
A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine, which we'll explore in the Docker data volume article.
The third way to communicate with a Docker container is via the network. Docker allows communication between different Docker containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to talk to each other and port-forwarding to expose WordPress to the outside world so that users can connect to it.
Prerequisites
To follow this article, you will need the following:
- Ubuntu 14.04 Droplet
- A non-root user with sudo privileges (Initial Server Setup with Ubuntu 14.04 explains how to set this up.)
Step 1 — Installing Docker
First, install Docker if you haven't already. The quickest way to install Docker is to download and install their installation script (you'll be prompted for a sudo password).
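# Docker's convenience install script (it invokes sudo internally)
wget -qO- https://get.docker.com/ | sh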
The above command downloads and executes a small installation script written by the Docker team. If you don't trust third party scripts or want more details about what the script is doing check out the instructions in the DigitalOcean Docker tutorial or Docker's own installation documentation.
Working with Docker is a pain if your user is not configured correctly, so add your user to the docker group with the following command.
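sudo usermod -aG docker $(whoami)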
Log out of your server and log back in to activate your new groups.
Note: To learn more about how to use Docker, read the How to Use Docker section of How To Install and Use Docker: Getting Started.
Step 2 — Installing Docker Compose
Now that you have Docker installed, let's go ahead and install Docker Compose. First, install python-pip as a prerequisite:
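sudo apt-get -y install python-pip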
Then you can install Docker Compose:
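sudo pip install docker-compose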
Step 3 — Running a Container with Docker Compose
The public Docker registry, Docker Hub, includes a simple Hello World image. Now that we have Docker Compose installed, let's test it with this really simple example.
First, create a directory for our YAML file:
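mkdir hello-world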
Then change into the directory:
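cd hello-world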
Now create the YAML file using your favorite text editor (we will use nano):
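nano docker-compose.yml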
Put the following contents into the file, save the file, and exit the text editor:
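my-test:
  image: hello-world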
The first line will be used as part of the container name. The second line specifies which image to use to create the container. The image will be downloaded from the official Docker Hub repository.
While still in the ~/hello-world directory, execute the following command to create the container:
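docker-compose up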
The output should start with Compose pulling the hello-world image and creating a container named helloworld_my-test_1. The output then explains what Docker is doing:
- The Docker client contacted the Docker daemon.
- The Docker daemon pulled the 'hello-world' image from the Docker Hub.
- The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
- The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
If the process doesn't exit on its own, press CTRL-C.
This simple test does not show one of the main benefits of Docker Compose — being able to bring a group of Docker containers up and down all at the same time. The How To Install Wordpress and PhpMyAdmin with Docker Compose on Ubuntu 14.04 article shows how to use Docker Compose to run three containers as one application group.
Step 4 — Learning Docker Compose Commands
Let's go over the commands the docker-compose tool supports.
The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine — just make one directory for each container and one docker-compose.yml file for each container inside its directory.
So far we've been running docker-compose up on our own and using CTRL-C to shut it down. This allows debug messages to be displayed in the terminal window. This isn't ideal though; when running in production you'll want to have docker-compose act more like a service. One simple way to do this is to just add the -d option when you up your session:
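docker-compose up -d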
docker-compose will now fork to the background.
To show your group of Docker containers (both stopped and currently running), use the following command:
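docker-compose ps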
For example, once the hello-world container has exited, docker-compose ps will list helloworld_my-test_1 with an exited State; a running container will show the Up state.
To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file used to start the Docker group:
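docker-compose stop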
Note: docker-compose kill is also available if you need to shut things down more forcefully.
In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:
file, it will complain and not show you your containers:
Step 5 — Accessing the Docker Container Filesystem (Optional)
If you need to work on the command prompt inside a container, you can use the docker exec command.
The Hello World! example exits after it is run, so we need to start a container that will keep running so we can then use docker exec to access the filesystem for the container. Let's take a look at the Nginx image from Docker Hub.
Create a new directory for it and change into it:
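mkdir ~/nginx    # the directory name here is illustrative
cd ~/nginx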
Create a docker-compose.yml file in our new directory:
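nano docker-compose.yml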
and paste in the following:
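nginx:    # the service name 'nginx' is illustrative; the image name is per Docker Hub
  image: nginx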
Save the file and exit. We just need to start the Nginx container as a background process with the following command:
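docker-compose up -d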
The Nginx image will be downloaded and then the container will be started in the background.
Now we need the CONTAINER ID for the container. List all of the running containers:
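docker ps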
You will see the running Nginx container listed, along with its CONTAINER ID (a short hexadecimal string).
Note: Only running containers are listed with the docker ps command.
If we wanted to make a change to the filesystem inside this container, we'd take its ID (in this example e90e12f70418) and use docker exec to start a shell inside the container:
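docker exec -ti e90e12f70418 /bin/bash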
The -t option opens up a terminal, and the -i option makes it interactive. The /bin/bash option opens a bash shell in the running container. Be sure to use the ID for your container.
You will see a bash prompt for the container similar to root@e90e12f70418:/#.
From here, you can work from the command prompt. Keep in mind, however, that unless you are in a directory that is saved as part of a data volume, your changes will disappear as soon as the container is restarted. Another caveat is that most Docker images are created with very minimal Linux installs, so some of the command line utilities and tools you are used to may not be present.
Conclusion
Great, so that covers the basic concepts of Docker Compose and how to get it installed and running. Check out the Deploying Wordpress and PHPMyAdmin with Docker Compose on Ubuntu 14.04 tutorial for a more complicated example of how to deploy an application with Docker Compose.
For a complete list of configuration options for the docker-compose.yml file, refer to the Compose file reference.