1. Multiple containers (recommended), each with a single process inside, using a Compose file. This lets you easily choose which services you want to run, and it simplifies scaling and monitoring (see the startup sketch below).
2. One container with all the processes inside. Easy but not recommended for production. This is the legacy behaviour.
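If you go with the Compose-based setup, starting everything is a single command. A minimal sketch, assuming you have cloned the repository containing the official `docker-compose.yml` and are in that directory:

```bash
# Pull/build the images and start all services in the background
docker-compose up -d

# Optionally follow the logs while the stack boots
docker-compose logs -f
```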
After some time, you will be able to access OpenProject on http://localhost:8080. The default credentials are username `admin` and password `admin`.
Note that the official `docker-compose.yml` file in the repository can be adjusted to suit your needs. For instance, you could mount specific configuration files, override environment variables, or switch off services you don't need. Please refer to the official Docker Compose documentation for more details.
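As an illustration, here is a hypothetical `docker-compose.override.yml` sketch (Compose merges it with `docker-compose.yml` automatically). The service name `web`, the variable, and the mounted path are assumptions; check them against the official file and the OpenProject documentation:

```
# docker-compose.override.yml -- all names below are illustrative
version: "3.7"

services:
  web:
    environment:
      # example of overriding an environment variable
      OPENPROJECT_HTTPS: "false"
    volumes:
      # example of mounting a custom configuration file (target path is an assumption)
      - ./configuration.yml:/app/config/configuration.yml:ro
```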
**Note**: Make sure to replace `secret` with a random string. One way to generate one is to run `head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo ''` if you are on Linux.
If you need to serve a very large number of users, it's time to scale out horizontally.
One way to do that is to use your orchestration tool of choice such as [Kubernetes](../kubernetes/) or [Swarm](https://docs.docker.com/engine/swarm/).
Here we'll cover how to scale up using the latter.
### 1) Setup Swarm
Here we will go through a simple setup of a Swarm with a single manager.
For more advanced setups and more information please consult the [docker swarm documentation](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/).
First, [initialize your swarm](https://docs.docker.com/get-started/swarm-deploy/) on the host you wish to be the swarm manager.
```bash
docker swarm init
# You may need or want to specify the advertise address.
# Say your node manager host's IP is 10.0.2.77:
#
# docker swarm init --advertise-addr=10.0.2.77
```
The manager host will also automatically join the swarm as a node, so it can host containers itself.
**Add nodes**
To add worker nodes, run `docker swarm join-token worker` on the manager.
This will print the command (including the join token) that you need to run
on each host you wish to add as a worker. For instance (the token below is a placeholder; use the exact command printed for your swarm):
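```bash
# Run this on the worker host. Replace <worker-join-token> with the token
# printed by `docker swarm join-token worker` on the manager.
docker swarm join --token <worker-join-token> 10.0.2.77:2377
```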
Where `10.0.2.77` is your swarm manager's (advertise) IP address.
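Once the workers have joined, you can verify the swarm membership from the manager:

```bash
# Run on the manager: lists all nodes with their status and availability
docker node ls
```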
### 2) Setup shared storage
**Note:** This is only relevant if you have more than 1 node in your swarm.
If your containers run distributed on multiple nodes, you will need shared network storage to store OpenProject's attachments.
The easiest way to achieve this is to set up an NFS share that is mounted to the same path on all nodes.
Say `/mnt/openproject/attachments`.
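For illustration, a sketch of mounting such a share on each node; the NFS server address and export path are placeholders for your environment:

```bash
# On every swarm node: create the mount point and mount the NFS export.
# 10.0.2.10:/exports/openproject is a placeholder for your NFS server and export.
sudo mkdir -p /mnt/openproject/attachments
sudo mount -t nfs 10.0.2.10:/exports/openproject /mnt/openproject/attachments

# Add a matching /etc/fstab entry to make the mount permanent.
```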
Alternatively, if using S3 is an option, you can use S3 attachments instead.
We will show both possibilities later in the configuration.
### 3) Create stack
To create a stack you need a stack file. The easiest way is to copy OpenProject's [docker-compose.yml](https://github.com/opf/openproject/blob/release/10.6/docker-compose.yml): just download it and save it as, say, `openproject-stack.yml`.
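For example, you can download it directly; the raw URL is derived from the repository link above and should be checked against the branch you actually want to deploy:

```bash
# Save the Compose file from the release/10.6 branch as the stack file
curl -L -o openproject-stack.yml \
  https://raw.githubusercontent.com/opf/openproject/release/10.6/docker-compose.yml
```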
#### Configuring storage
**Note:** This is only necessary if your swarm runs on multiple nodes.
**NFS**
If you are using NFS to share attachments, bind a Docker volume to the mounted NFS path and use it for the attachments folder.
By default, the YAML file starts with the following section:
```
version: "3.7"

networks:
  frontend:
  backend:

volumes:
  pgdata:
  opdata:
```
Adjust this so that the previously created, mounted NFS drive is used.
```
version: "3.7"
networks:
  frontend:
  backend:
volumes:
  pgdata:
  opdata:
    driver: local
    driver_opts:
      # bind the opdata volume to the NFS-mounted attachments path
      type: none
      o: bind
      device: /mnt/openproject/attachments
```
**S3**
If you want to use S3, you will have to add the respective configuration to the environment section for the `app` in your `openproject-stack.yml`.
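A sketch of what that could look like; the variable names follow OpenProject's fog-based attachment settings, and the bucket, region, and credentials are placeholders that you should verify against the documentation of the version you deploy:

```
# Excerpt: environment entries for the app service(s) in openproject-stack.yml.
environment:
  OPENPROJECT_ATTACHMENTS__STORAGE: fog
  OPENPROJECT_FOG_DIRECTORY: my-openproject-bucket
  OPENPROJECT_FOG_CREDENTIALS_PROVIDER: AWS
  OPENPROJECT_FOG_CREDENTIALS_REGION: eu-central-1
  OPENPROJECT_FOG_CREDENTIALS_AWS__ACCESS__KEY__ID: "<access-key-id>"
  OPENPROJECT_FOG_CREDENTIALS_AWS__SECRET__ACCESS__KEY: "<secret-access-key>"
```

After adjusting the stack file you can deploy (or update) the stack, for example with `docker stack deploy -c openproject-stack.yml openproject`.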
Docker swarm handles the networking necessary to distribute the load among the nodes.
The application will still be accessible as before on port 8080 of every node, e.g. `http://10.0.2.77:8080` using the manager node's IP.
#### Load balancer setup
As mentioned earlier, you can simply use the manager node's endpoint in a reverse proxy setup and the load will be balanced among the nodes.
However, that endpoint becomes a single point of failure if the manager node goes down.
To add redundancy, you can use a load balancer directive in your proxy configuration.
For instance, for Apache (using `mod_proxy_balancer`) a sketch could look like the one below; the node IPs are placeholders for your swarm nodes:
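```
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and mod_lbmethod_byrequests.
# The member addresses are placeholders for your swarm nodes.
<Proxy balancer://openproject>
    BalancerMember http://10.0.2.77:8080
    BalancerMember http://10.0.2.78:8080

    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass / balancer://openproject/
ProxyPassReverse / balancer://openproject/
```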