Move development docker-compose file to the main directory (#8798)

* Move development docker-compose file to the main directory

This requires us to move some other files around, like the pullpreview and the example docker-compose files for production
setups. This commit also does some housekeeping, like removing some old files and deduplicating configuration.
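
In practice this changes the day-to-day invocation roughly as follows (a sketch based on the removed `bin/compose` wrapper and the updated docker development README; paths and variable names are taken from this commit):

```
# Before: the dev compose file lived under docker/dev and was driven by a wrapper script
export DEV_UID=$(id -u) DEV_GID=$(id -g)
docker-compose -f ./docker/dev/compose.yml up frontend

# After: docker-compose.yml sits in the repository root, so plain docker-compose works
cp .env.example .env        # set DEV_UID, DEV_GID and the DB_* variables here
docker-compose up frontend
```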

* Updated to selenium grid

* Fix in-Docker Selenium tests

The Selenium tests now run inside a Docker Chrome container. Backwards compatibility with non-Docker setups is not
guaranteed, though it should not be hard to restore with a couple of small fixes.
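
A rough sketch of the intended workflow against the grid, pieced together from docker-compose.yml and the new setup-tests script (your exact invocation may differ):

```
# Bring up backend-test together with its dependencies (db-test, selenium-hub, browser nodes)
docker-compose up -d backend-test

# Once setup-tests reports it is ready, run specs inside the container;
# SELENIUM_GRID_URL points the remote WebDriver at http://selenium-hub:4444/wd/hub
docker-compose exec backend-test bundle exec rspec
```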

* Updated docker development documentation

* Improved test timings, changed the documentation

* Updated docker testing again

* Run npm in the frontend directory

* Really run npm in the frontend directory

* Also run npm in frontend when setting up travis cache

* Change directory for one command only

* Change default test driver name

* CI test change fixes

* Fixed syntax error

* Added dev check

* Trying to fix firefox resizing

* Trying to get tests running

* Stop resizing firefox

* Fixed apple icon spec

* fix host in url helpers for omniauth spec

* Fix omniauth specs

* Fix docs

* Small fixes to docker tests

* Added package.json back in

* Change env variables

Co-authored-by: Markus Kahl <machisuji@gmail.com>
Branch: pull/8845/head
Benjamin Bädorf committed 4 years ago (via GitHub)
Parent: 1453e28a6d
Commit: 5f45ee07ab
62 changed files (number of changed lines per file):

 37  .env.sample
  3  .github/workflows/docker.yml
  4  .github/workflows/pullpreview.yml
  9  .gitignore
  2  app/helpers/frontend_asset_helper.rb
  8  bin/compose
  2  bin/cucumber
  2  bin/setup_dev
  2  config/routes.rb
218  docker-compose.yml
 21  docker/dev/backend/Dockerfile
 14  docker/dev/backend/scripts/run-test
 19  docker/dev/backend/scripts/setup-tests
125  docker/dev/compose.yml
  5  docker/dev/frontend/Dockerfile
 12  docker/prod/Dockerfile
  0  docker/prod/console
  0  docker/prod/cron
  4  docker/prod/entrypoint.sh
  0  docker/prod/gosu
  0  docker/prod/mysql-to-postgres/Dockerfile
  0  docker/prod/mysql-to-postgres/bin/build
  0  docker/prod/mysql-to-postgres/bin/migrate-mysql-to-postgres
  0  docker/prod/proxy
  0  docker/prod/proxy.conf.erb
  0  docker/prod/seeder
  0  docker/prod/setup/postinstall-common.sh
  0  docker/prod/setup/postinstall.sh
  0  docker/prod/setup/preinstall-common.sh
  0  docker/prod/setup/preinstall-on-prem.sh
  0  docker/prod/setup/preinstall.sh
  2  docker/prod/supervisord
  8  docker/prod/supervisord.conf
  0  docker/prod/web
  0  docker/prod/webpack-watch
  2  docker/prod/worker
  0  docker/pullpreview/docker-compose.yml
 54  docs/development/development-environment-docker/README.md
  2  docs/development/development-environment-osx/README.md
  2  docs/development/development-environment-ubuntu/README.md
  2  features/support/env.rb
  0  frontend/browserslist
  2  frontend/cli_to_rails_proxy.js
  9  frontend/package.json
  0  frontend/tslint.json
  5  lib/tasks/assets.rake
  2  modules/bim/spec/features/bim_revit_add_in_navigation_spec.rb
  2  script/ci/cache_prepare.sh
  2  script/ci/setup.sh
  2  spec/features/admin/enterprise/enterprise_trial_spec.rb
  7  spec/features/auth/omniauth_spec.rb
  2  spec/features/work_packages/details/custom_fields/custom_field_spec.rb
  2  spec/features/work_packages/table/configuration_modal/column_spec.rb
  2  spec/features/wysiwyg/macros/code_block_macro_spec.rb
 56  spec/support/browsers/chrome.rb
 65  spec/support/browsers/firefox.rb
 14  spec/support/capybara.rb
  2  spec/support/pages/page.rb
  2  spec/support/puffing_billy_proxy.rb
  2  spec/support/shared/with_direct_uploads.rb
  2  spec/views/layouts/base.html.erb_spec.rb
BIN  table formatting.png

@ -1,37 +0,0 @@
#-- copyright
# OpenProject is an open source project management software.
# Copyright (C) 2012-2020 the OpenProject GmbH
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License version 3.
#
# OpenProject is a fork of ChiliProject, which is a fork of Redmine. The copyright follows:
# Copyright (C) 2006-2013 Jean-Philippe Lang
# Copyright (C) 2010-2013 the ChiliProject Team
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# See docs/COPYRIGHT.rdoc for more details.
#++
# If you place a .env file into the root directory of OpenProject
# you can override some default settings that foreman will use
# to start OpenProject
# override the default bind address
HOST=0.0.0.0
# override the default port
PORT=1337

@ -14,6 +14,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Prepare docker files
run: |
cp ./docker/prod/Dockerfile ./Dockerfile
- name: Build & Push
id: build_and_push
uses: elgohr/Publish-Docker-Github-Action@master

@ -22,6 +22,10 @@ jobs:
if: contains(github.ref, 'bim/') || contains(github.head_ref, 'bim/')
run: |
echo "OPENPROJECT_EDITION=bim" >> .env.pullpreview
- name: Prepare docker-compose files
run: |
cp ./docker/pullpreview/docker-compose.yml ./docker-compose.pullpreview.yml
cp ./docker/prod/Dockerfile ./Dockerfile
- uses: pullpreview/action@v4
with:
admins: crohr,HDinger,machisuji,oliverguenther,ulferts,wielinde


@ -51,6 +51,8 @@ npm-debug.log*
/backup
/.project
/.loadpath
# Generated files
/app/assets/javascripts/editor/*
/app/assets/javascripts/locales/*.*
/frontend/src/locales/*.js
@ -87,16 +89,23 @@ npm-debug.log*
/.env*
.DS_Store
.rspec
# coverage in plugins
/lib/plugins/*/coverage
# asset cache
/.sass-cache/
# Frontend debug log
/frontend/npm-debug.log*
/frontend/dist/
/frontend/tests/*.gif
node_modules/
# Ignore global package-lock.json that generates
/package-lock.json
plaintext.yml
structure.sql
# Local development docker
/.env

@ -31,7 +31,7 @@ module FrontendAssetHelper
CLI_DEFAULT_PROXY = 'http://localhost:4200'.freeze
def self.assets_proxied?
!Rails.env.production? && cli_proxy?
!ENV['OPENPROJECT_DISABLE_DEV_ASSET_PROXY'].present? && !Rails.env.production? && cli_proxy?
end
def self.cli_proxy

@ -1,8 +0,0 @@
#!/bin/sh
set -e
export DEV_UID=$(id -u)
export DEV_GID=$(id -g)
docker-compose -f ./docker/dev/compose.yml $*

@ -1,4 +1,4 @@
#!/bin/sh
#!/usr/bin/env bash
#
# Runs cucumber while requiring all plugin feature folders to
# make sure all steps are defined. Using this you can then run

@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Deletes bundled javascript assets and rebuilds them.
# Useful for when your frontend doesn't work (jQuery not defined etc.) for seemingly no reason at all.

@ -62,7 +62,7 @@ OpenProject::Application.routes.draw do
# forward requests to the proxy
if FrontendAssetHelper.assets_proxied?
match '/assets/frontend/*appendix',
to: redirect("http://localhost:4200/assets/frontend/%{appendix}", status: 307),
to: redirect(FrontendAssetHelper.cli_proxy + "/assets/frontend/%{appendix}", status: 307),
format: false,
via: :all
end

@ -1,30 +1,35 @@
version: "3.7"
networks:
frontend:
backend:
network:
testing:
volumes:
pgdata:
tmp:
opdata:
bundle:
pgdata-test:
tmp-test:
fedata-test:
x-op-restart-policy: &restart_policy
restart: unless-stopped
x-op-build: &build
context: .
dockerfile: ./docker/dev/backend/Dockerfile
args:
DEV_UID: $DEV_UID
DEV_GID: $DEV_GID
x-op-image: &image
image: openproject/community:${TAG:-11}
x-op-app: &app
<<: *image
<<: *restart_policy
environment:
- "RAILS_CACHE_STORE=memcache"
- "OPENPROJECT_CACHE__MEMCACHE__SERVER=cache:11211"
- "OPENPROJECT_RAILS__RELATIVE__URL__ROOT=${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
- "DATABASE_URL=postgres://postgres:p4ssw0rd@db/openproject"
- "USE_PUMA=true"
# set to true to enable the email receiving feature. See ./docker/cron for more options
- "IMAP_ENABLED=false"
volumes:
- "opdata:/var/openproject/assets"
image:
openproject/dev:latest
x-op-frontend-build: &frontend-build
context: .
dockerfile: ./docker/dev/frontend/Dockerfile
args:
DEV_UID: $DEV_UID
DEV_GID: $DEV_GID
services:
db:
@ -34,65 +39,168 @@ services:
volumes:
- "pgdata:/var/lib/postgresql/data"
environment:
- POSTGRES_PASSWORD=p4ssw0rd
- POSTGRES_DB=openproject
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_DATABASE}
networks:
- backend
- network
cache:
image: memcached
<<: *restart_policy
networks:
- backend
- network
proxy:
backend:
build:
<<: *build
target: develop
<<: *image
<<: *restart_policy
command: "./docker/proxy"
command: run-app
ports:
- "8080:80"
- "3000:3000"
environment:
- APP_HOST=web
- "OPENPROJECT_RAILS__RELATIVE__URL__ROOT=${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
LOCAL_DEV_CHECK: "${LOCAL_DEV_CHECK:?The docker-compose file for OpenProject has moved to https://github.com/opf/openproject-deploy}"
RAILS_ENV: development
RAILS_CACHE_STORE: memcache
OPENPROJECT_CACHE__MEMCACHE__SERVER: cache:11211
OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
DATABASE_URL: postgresql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}
volumes:
- ".:/home/dev/openproject"
- "opdata:/var/openproject/assets"
- "bundle:/usr/local/bundle"
- "tmp:/home/dev/openproject/tmp"
depends_on:
- web
- db
- cache
networks:
- frontend
- network
web:
<<: *app
command: "./docker/web"
frontend:
build:
<<: *frontend-build
command: "npm run serve"
volumes:
- ".:/home/dev/openproject"
ports:
- "4200:4200"
environment:
PROXY_HOSTNAME: backend
networks:
- frontend
- backend
- network
depends_on:
- db
- cache
- seeder
- backend
worker:
<<: *app
command: "./docker/worker"
######### Testing stuff below ############
db-test:
image: postgres:10
stop_grace_period: "3s"
volumes:
- "pgdata-test:/var/lib/postgresql/data"
environment:
POSTGRES_DB: openproject
POSTGRES_USER: openproject
POSTGRES_PASSWORD: openproject
networks:
- backend
depends_on:
- db
- cache
- seeder
- testing
cron:
<<: *app
command: "./docker/cron"
frontend-test:
build:
<<: *frontend-build
command: "npm run build-test"
volumes:
- ".:/home/dev/openproject"
- "fedata-test:/home/dev/openproject/public/assets/frontend"
environment:
PROXY_HOSTNAME: backend-test
networks:
- backend
- testing
backend-test:
build:
<<: *build
target: test
command: setup-tests
hostname: backend-test
networks:
- testing
depends_on:
- db
- cache
- seeder
- db-test
- selenium-hub
- frontend-test
environment:
RAILS_ENV: test
OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
DATABASE_URL: postgresql://openproject:openproject@db-test/openproject
DATABASE_CLEANER_ALLOW_REMOTE_DATABASE_URL: "true"
SELENIUM_GRID_URL: http://selenium-hub:4444/wd/hub
CAPYBARA_SERVER_PORT: 3000
CAPYBARA_DYNAMIC_HOSTNAME: 0
CAPYBARA_APP_HOSTNAME: backend-test
OPENPROJECT_DISABLE_DEV_ASSET_PROXY: 1
OPENPROJECT_TESTING_NO_HEADLESS: "true"
volumes:
- ".:/home/dev/openproject"
- "fedata-test:/home/dev/openproject/public/assets/frontend"
- "opdata:/var/openproject/assets"
- "bundle:/usr/local/bundle"
- "tmp-test:/home/dev/openproject/tmp"
seeder:
<<: *app
command: "./docker/seeder"
restart: on-failure
selenium-hub:
image: selenium/hub:latest
container_name: selenium-hub
hostname: selenium-hub
depends_on:
- chrome
- firefox
- opera
networks:
- backend
- testing
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome-debug:latest
volumes:
- /dev/shm:/dev/shm
networks:
- testing
ports:
- 5900:5900
environment:
HUB_HOST: selenium-hub
HUB_PORT: 4444
SCREEN_WIDTH: 1920
SCREEN_HEIGHT: 1080
firefox:
image: selenium/node-firefox-debug:latest
volumes:
- /dev/shm:/dev/shm
networks:
- testing
ports:
- 5901:5900
environment:
HUB_HOST: selenium-hub
HUB_PORT: 4444
SCREEN_WIDTH: 1920
SCREEN_HEIGHT: 1080
opera:
image: selenium/node-opera-debug:latest
volumes:
- /dev/shm:/dev/shm
networks:
- testing
ports:
- 5902:5900
environment:
HUB_HOST: selenium-hub
HUB_PORT: 4444
SCREEN_WIDTH: 1920
SCREEN_HEIGHT: 1080

@ -15,24 +15,28 @@ RUN groupmod -g $DEV_GID $USER
WORKDIR /home/$USER
RUN gem install bundler --version "${bundler_version}" --no-document
RUN apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
postgresql-client
COPY ./backend/scripts/setup /usr/sbin/setup
COPY ./backend/scripts/run-app /usr/sbin/run-app
COPY ./docker/dev/backend/scripts/setup /usr/sbin/setup
COPY ./docker/dev/backend/scripts/run-app /usr/sbin/run-app
# The following lines are needed to make sure the file permissions are setup correctly after the volumes are mounted
RUN mkdir -p /home/$USER/openproject/tmp
RUN mkdir -p /usr/local/bundle
RUN chown $USER:$USER /usr/local/bundle
RUN chown $USER:$USER /home/$USER/openproject/tmp
EXPOSE 3000
VOLUME ["/usr/local/bundle", "/home/$USER/openproject"]
VOLUME [ "/usr/local/bundle", "/home/$USER/openproject", "/home/$USER/openproject/tmp" ]
WORKDIR /home/$USER/openproject
USER $USER
ENTRYPOINT ["/bin/sh", "-c"]
RUN gem install bundler --version "${bundler_version}" --no-document
####### Testing image below #########
@ -42,6 +46,9 @@ USER root
RUN apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
chromium
jq
USER $USER
COPY ./docker/dev/backend/scripts/run-test /usr/sbin/run-test
COPY ./docker/dev/backend/scripts/setup-tests /usr/sbin/setup-tests
ENTRYPOINT [ "/usr/sbin/run-test" ]

@ -0,0 +1,14 @@
#!/bin/sh
set -e
set -u
cmd="$@"
echo 'Waiting for the Grid...'
while ! curl -sSL "${SELENIUM_GRID_URL}/status" 2>&1 \
| jq -r '.value.ready' 2>&1 | grep "true" > /dev/null; do
sleep 1
done
exec $cmd

@ -0,0 +1,19 @@
#!/bin/sh
set -e
bundle binstubs parallel_tests
bundle exec rake db:migrate
bundle exec rake i18n:js:export openproject:plugins:register_frontend assets:rebuild_manifest assets:clean
cp -rp config/frontend_assets.manifest.json public/assets/frontend_assets.manifest.json
echo ""
echo ""
echo "Ready for tests. Run"
echo " docker-compose exec backend-test bundle exec rspec"
echo "to start the full suite, or "
echo " docker-compose exec backend-test bundle exec rspec $tests"
echo "to run a subset"
# Keep this container online
while true; do sleep 1000; done;

@ -1,125 +0,0 @@
version: "3.7"
networks:
frontend:
backend:
test:
volumes:
pgdata:
tmp:
opdata:
bundle:
pgdata-test:
tmp-test:
x-op-restart-policy: &restart_policy
restart: unless-stopped
x-op-build: &build
context: .
dockerfile: ./backend/Dockerfile
args:
DEV_UID: $DEV_UID
DEV_GID: $DEV_GID
x-op-image: &image
image:
openproject/dev:latest
services:
db:
image: postgres:9
<<: *restart_policy
stop_grace_period: "3s"
volumes:
- "pgdata:/var/lib/postgresql/data"
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: openproject
networks:
- backend
cache:
image: memcached
<<: *restart_policy
networks:
- backend
backend:
build:
<<: *build
target: develop
<<: *image
<<: *restart_policy
command: run-app
ports:
- "3000:3000"
environment:
RAILS_ENV: development
RAILS_CACHE_STORE: memcache
OPENPROJECT_CACHE__MEMCACHE__SERVER: cache:11211
OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
OPENPROJECT_STORAGE_TMP__PATH: /tmp/op
DATABASE_URL: postgresql://postgres:postgres@db/openproject
volumes:
- "${OPENPROJECT_HOME:?Please set OPENPROJECT_HOME to the OpenProject root folder}:/home/dev/openproject"
- "opdata:/var/openproject/assets"
- "bundle:/usr/local/bundle"
- "tmp:/tmp/op"
depends_on:
- db
- cache
networks:
- backend
frontend:
build:
context: .
dockerfile: ./frontend/Dockerfile
args:
DEV_UID: $DEV_UID
DEV_GID: $DEV_GID
command: "npm run serve"
volumes:
- "${OPENPROJECT_HOME:?Please set OPENPROJECT_HOME to the OpenProject root folder}:/home/dev/openproject"
ports:
- "4200:4200"
environment:
PROXY_HOSTNAME: backend
networks:
- frontend
- backend
depends_on:
- backend
# The containers below are for testing
db-test:
image: postgres:9
stop_grace_period: "3s"
volumes:
- "pgdata-test:/var/lib/postgresql/data"
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: openproject
networks:
- test
backend-test:
build:
<<: *build
target: test
<<: *image
<<: *restart_policy
command: run-app
networks:
- test
environment:
RAILS_ENV: test
OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
DATABASE_URL: postgresql://postgres:postgres@db-test/openproject
DATABASE_CLEANER_ALLOW_REMOTE_DATABASE_URL: "true"
OPENPROJECT_STORAGE_TMP__PATH: /tmp/op
volumes:
- "${OPENPROJECT_HOME:?Please set OPENPROJECT_HOME to the OpenProject root folder}:/home/dev/openproject"
- "opdata:/var/openproject/assets"
- "bundle:/usr/local/bundle"
- "tmp-test:/tmp/op"

@ -13,7 +13,10 @@ RUN groupmod -g $DEV_GID $USER
EXPOSE 4200
VOLUME ["/home/$USER/openproject"]
RUN mkdir -p /home/$USER/openproject/public/assets/frontend
RUN chown $USER:$USER -R /home/$USER/openproject/public
VOLUME ["/home/$USER/openproject", "/home/$USER/openproject/public/assets/frontend"]
WORKDIR /home/$USER/openproject/frontend

@ -1,6 +1,6 @@
FROM ruby:2.7.1-buster AS pgloader
RUN apt-get update -qq && apt-get install -y libsqlite3-dev make curl gawk freetds-dev libzip-dev
COPY docker/mysql-to-postgres/bin/build /tmp/build-pgloader
COPY docker/prod/mysql-to-postgres/bin/build /tmp/build-pgloader
RUN /tmp/build-pgloader && rm /tmp/build-pgloader
FROM ruby:2.7.1-buster
@ -40,8 +40,8 @@ COPY --from=pgloader /usr/local/bin/pgloader-ccl /usr/local/bin/
WORKDIR $APP_PATH
COPY docker/setup ./docker/setup
RUN ./docker/setup/preinstall.sh
COPY docker/prod/setup ./docker/prod/setup
RUN ./docker/prod/setup/preinstall.sh
COPY Gemfile ./Gemfile
COPY Gemfile.* ./
@ -57,7 +57,7 @@ RUN bundle install --quiet --deployment --path vendor/bundle --no-cache \
# Finally, copy over the whole thing
COPY . .
RUN ./docker/setup/postinstall.sh
RUN ./docker/prod/setup/postinstall.sh
# Expose ports for apache and postgres
EXPOSE 80 5432
@ -66,7 +66,7 @@ EXPOSE 80 5432
VOLUME ["$PGDATA", "$APP_DATA_PATH"]
# Set a custom entrypoint to allow for privilege dropping and one-off commands
ENTRYPOINT ["./docker/entrypoint.sh"]
ENTRYPOINT ["./docker/prod/entrypoint.sh"]
# Set default command to launch the all-in-one configuration supervised by supervisord
CMD ["./docker/supervisord"]
CMD ["./docker/prod/supervisord"]

@ -77,11 +77,11 @@ if [ "$(id -u)" = '0' ]; then
exec "$@"
fi
if [ "$1" = "./docker/supervisord" ] || [ "$1" = "./docker/proxy" ]; then
if [ "$1" = "./docker/prod/supervisord" ] || [ "$1" = "./docker/prod/proxy" ]; then
exec "$@"
fi
exec $APP_PATH/docker/gosu $APP_USER "$BASH_SOURCE" "$@"
exec $APP_PATH/docker/prod/gosu $APP_USER "$BASH_SOURCE" "$@"
fi
exec "$@"

@ -136,4 +136,4 @@ echo "-----> Database setup finished."
echo " On first installation, the default admin credentials are login: admin, password: admin"
echo "-----> Launching supervisord..."
exec /usr/bin/supervisord -c $APP_PATH/docker/supervisord.conf -e ${SUPERVISORD_LOG_LEVEL}
exec /usr/bin/supervisord -c $APP_PATH/docker/prod/supervisord.conf -e ${SUPERVISORD_LOG_LEVEL}

@ -12,7 +12,7 @@ priority=4
user=app
environment=HOME="/home/%(ENV_APP_USER)s",USER="%(ENV_APP_USER)s"
directory=%(ENV_APP_PATH)s
command=./docker/web
command=./docker/prod/web
autorestart=true
stderr_logfile = /dev/stderr
stdout_logfile = /dev/stdout
@ -24,7 +24,7 @@ priority=5
user=app
environment=HOME="/home/%(ENV_APP_USER)s",USER="%(ENV_APP_USER)s"
directory=%(ENV_APP_PATH)s
command=./docker/worker
command=./docker/prod/worker
startretries=10
autorestart=true
stderr_logfile = /dev/stderr
@ -48,7 +48,7 @@ priority=100
user=app
environment=HOME="/home/%(ENV_APP_USER)s",USER="%(ENV_APP_USER)s"
directory=%(ENV_APP_PATH)s
command=./docker/cron
command=./docker/prod/cron
autostart=false
autorestart=true
stderr_logfile = /dev/stderr
@ -59,7 +59,7 @@ stderr_logfile_maxbytes = 0
[program:apache2]
priority=2
directory=%(ENV_APP_PATH)s
command=./docker/proxy
command=./docker/prod/proxy
stderr_logfile = /dev/stderr
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0

@ -2,6 +2,6 @@
if [ "$1" = "--seed" ]; then
shift
$APP_PATH/docker/seeder "$@"
$APP_PATH/docker/prod/seeder "$@"
fi
exec bundle exec rake jobs:work

@ -17,12 +17,42 @@ This will checkout the dev branch in `openproject`. **Change into that directory
If you have OpenProject checked out already, make sure that you do not have a `config/database.yml`,
as that will interfere with the database connection inside of the docker containers.
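If such a file exists, moving it out of the way is enough (the backup name here is only an example):
```
mv config/database.yml config/database.yml.bak
```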
### 2) Execute the setup
### 3) Configure environment
Copy the env example to `.env`
```
cp .env.example .env
```
Afterwards, set the environment variables to your liking. `DEV_UID` and `DEV_GID` are required to be set so your project
directory will not end up with files owned by root.
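For example, one way to fill them in (a sketch; the values simply mirror your local user and group ids):
```
echo "DEV_UID=$(id -u)" >> .env
echo "DEV_GID=$(id -g)" >> .env
```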
### 2) Setup database and install dependencies
```
# Start the database. It needs to be running to run migrations and seeders
docker-compose up -d db
# Install frontend dependencies
docker-compose run frontend npm i
# Install backend dependencies, migrate, and seed
docker-compose run backend setup
```
### 3) Start the stack
The docker-compose file also has the test containers defined. The easiest way to start only the development stack is to run
```
export OPENPROJECT_HOME=`pwd`
docker-compose up frontend
```
To see the backend logs as well, use
bin/compose up frontend backend
```
docker-compose up frontend backend
```
This starts only the frontend and backend containers and their dependencies. This excludes the testing containers, which
@ -33,7 +63,7 @@ However, these are cached in a docker volume. Meaning that from the 2nd run onwa
Wait until you see `frontend_1 | : Compiled successfully.` and `backend_1 | => Rails 6.0.2.2 application starting in development http://0.0.0.0:3000` in the logs.
This means both frontend and backend have come up successfully.
You can now access OpenProject under http://localhost:3000.
You can now access OpenProject under http://localhost:3000, and via the live-reloaded frontend under http://localhost:4200.
Again the first request to the server can take some time too.
But subsequent requests will be a lot faster.
@ -62,15 +92,21 @@ If you want to reset the data you can delete the docker volumes via `docker volu
## Running tests
Not all tests are functional within the docker containers yet, so it is recommended to run tests outside of Docker.
However, you can run tests by executing
Start all linked containers and migrate the test database first:
```
export OPENPROJECT_HOME=`pwd`
docker-compose up backend-test
```
Afterwards, you can start the tests in the running `backend-test` container:
./bin/compose up -d
./bin/compose exec backend-test bundle exec rspec
```
docker-compose run backend-test bundle exec rspec
```
Tests are run within Selenium containers, on a small local Selenium grid. You can connect to the containers via VNC if
you want to see what the browsers are doing. `gvncviewer` on Linux is a good tool for this. Check out the docker-compose
file to see which port each browser container is exposed on. The password is `secret` for all.
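For example, with the port mappings from docker-compose.yml (5900 Chrome, 5901 Firefox, 5902 Opera) something like the following should work; note that `gvncviewer` addresses displays rather than raw ports, and display N maps to port 5900+N:
```
gvncviewer localhost:0   # Chrome node  (host port 5900)
gvncviewer localhost:1   # Firefox node (host port 5901)
gvncviewer localhost:2   # Opera node   (host port 5902)
```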
## Local files

@ -186,7 +186,7 @@ however most developers end up running the tasks in separate shells for better u
gem install foreman
foreman start -f Procfile.dev
```
The application will be available at `http://127.0.0.1:5000`. To customize bind address and port copy the `.env.sample` provided in the root of this
The application will be available at `http://127.0.0.1:5000`. To customize bind address and port copy the `.env.example` provided in the root of this
project as `.env` and [configure values][foreman-env] as required.
By default a worker process will also be started. In development asynchronous execution of long-running background tasks (sending emails, copying projects,

@ -232,7 +232,7 @@ however most developers end up running the tasks in separate shells for better u
gem install foreman
foreman start -f Procfile.dev
```
The application will be available at `http://127.0.0.1:3000`. To customize bind address and port copy the `.env.sample` provided in the root of this
The application will be available at `http://127.0.0.1:3000`. To customize bind address and port copy the `.env.example` provided in the root of this
project as `.env` and [configure values][foreman-env] as required.
By default a worker process will also be started. In development asynchronous execution of long-running background tasks (sending emails, copying projects,

@ -86,7 +86,7 @@ end
require Rails.root.to_s + '/spec/support/downloaded_file'
require Rails.root.to_s + '/spec/support/browsers/chrome'
Capybara.javascript_driver = :chrome_headless_en
Capybara.javascript_driver = :chrome_en
# By default, any exception happening in your Rails application will bubble up
# to Cucumber so that your scenario will fail. This is a different from how

@ -4,7 +4,7 @@ const PROXY_CONFIG = [
{
"context": ['/**'],
"target": `http://${PROXY_HOSTNAME}:3000`,
"secure": false
"secure": false,
// "bypass": function (req, res, proxyOptions) {
// }
}

@ -135,16 +135,13 @@
"analyze": "ng build --prod --stats-json && webpack-bundle-analyzer -p 9999 ../public/assets/frontend/stats.json",
"prebuild": "./scripts/link_plugin_placeholder.js",
"build": "node --max_old_space_size=2048 ./node_modules/@angular/cli/bin/ng build --prod --named-chunks --extract-css --source-map",
"build-watch": "node --max_old_space_size=2048 ./node_modules/@angular/cli/bin/ng build --watch --named-chunks --extract-css",
"preserve": "./scripts/link_plugin_placeholder.js",
"serve": "node --max_old_space_size=8096 ./node_modules/@angular/cli/bin/ng serve --host 0.0.0.0 --public-host http://localhost:4200",
"serve-test": "node --max_old_space_size=8096 ./node_modules/@angular/cli/bin/ng serve --host 0.0.0.0 --disable-host-check --public-host http://frontend-test:4200",
"pretest": "./scripts/link_plugin_placeholder.js",
"test": "ng test --watch=false",
"tslint_typechecks": "./node_modules/.bin/tslint -p . -c tslint_typechecks.json",
"generate-typings": "tsc -d -p src/tsconfig.app.json"
},
"browserslist": [
"last 2 Chrome versions",
"last 2 Safari versions",
"last 2 Firefox versions"
]
}
}

@ -67,6 +67,11 @@ namespace :assets do
end
end
Rake::Task['assets:rebuild_manifest'].invoke
end
desc 'Write angular assets manifest'
task :rebuild_manifest do
puts "Writing angular assets manifest"
OpenProject::Assets.rebuild_manifest!
end

@ -32,7 +32,7 @@ describe 'BIM Revit Add-in navigation spec',
type: :feature,
with_config: { edition: 'bim' },
js: true,
driver: :chrome_headless_revit_add_in do
driver: :chrome_revit_add_in do
let(:project) { FactoryBot.create :project, enabled_module_names: %i[bim work_package_tracking] }
let!(:work_package) { FactoryBot.create(:work_package, project: project) }
let(:role) { FactoryBot.create(:role, permissions: %i[view_ifc_models manage_ifc_models add_work_packages edit_work_packages view_work_packages]) }

@ -40,7 +40,7 @@ run() {
run "bundle exec rake db:migrate webdrivers:chromedriver:update webdrivers:geckodriver:update"
run "for i in {1..3}; do npm install && break || sleep 15; done"
run "for i in {1..3}; do (cd frontend; npm install && break || sleep 15;) done"
run "bundle exec rake assets:precompile assets:clean"

@ -53,7 +53,7 @@ if [ $1 != 'npm' ]; then
fi
if [ $1 = 'npm' ]; then
run "for i in {1..3}; do npm install && break || sleep 15; done"
run "for i in {1..3}; do (cd frontend; npm install && break || sleep 15;) done"
echo "No asset compilation required"
fi

@ -30,7 +30,7 @@ require 'spec_helper'
describe 'Enterprise trial management',
type: :feature,
driver: :headless_firefox_billy do
driver: :firefox_billy do
let(:admin) { FactoryBot.create(:admin) }

@ -29,6 +29,13 @@
require 'spec_helper'
describe 'Omniauth authentication', type: :feature do
# Running the tests inside docker changes the hostname. To accommodate that we changed
# the Capybara app_host; however, this change was not being reflected in the Rails host,
# causing the redirect checks below to fail.
def self.default_url_options
{ host: Capybara.app_host.sub(/https?:\/\//, "") }
end
let(:user) do
FactoryBot.create(:user,
force_password_change: false,

@ -212,7 +212,7 @@ describe 'custom field inplace editor', js: true do
end
context 'with german locale',
driver: :firefox_headless_de do
driver: :firefox_de do
let(:user) { FactoryBot.create :admin, language: 'de' }
it 'displays the float with german locale and allows editing' do

@ -47,7 +47,7 @@ describe 'Work Package table configuration modal columns spec', js: true do
it_behaves_like 'add and remove columns'
context 'with three columns', driver: :firefox_headless_de do
context 'with three columns', driver: :firefox_de do
let!(:query) do
query = FactoryBot.build(:query, user: user, project: project)
query.column_names = %w[id project subject]

@ -30,7 +30,7 @@ require 'spec_helper'
describe 'Wysiwyg code block macro',
type: :feature,
driver: :firefox_headless_en,
driver: :firefox_en,
js: true do
using_shared_fixtures :admin
let(:user) { admin }

@ -1,30 +1,31 @@
# Force the latest version of chromedriver using the webdriver gem
require 'webdrivers/chromedriver'
::Webdrivers.logger.level = :DEBUG
if ENV['CI']
::Webdrivers.logger.level = :DEBUG
::Webdrivers::Chromedriver.update
end
def register_chrome_headless(language, name: :"chrome_headless_#{language}")
def register_chrome(language, name: :"chrome_#{language}")
Capybara.register_driver name do |app|
options = Selenium::WebDriver::Chrome::Options.new
if ActiveRecord::Type::Boolean.new.cast(ENV['OPENPROJECT_TESTING_NO_HEADLESS'])
# Maximize the window however large the available space is
options.add_argument('--start-maximized')
options.add_argument('start-maximized')
# Open dev tools for quick access
options.add_argument('--auto-open-devtools-for-tabs')
options.add_argument('auto-open-devtools-for-tabs')
else
options.add_argument('--window-size=1920,1080')
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('window-size=1920,1080')
options.add_argument('headless')
options.add_argument('disable-gpu')
end
options.add_argument('--no-sandbox')
options.add_argument('--disable-gpu')
options.add_argument('--disable-popup-blocking')
options.add_argument("--lang=#{language}")
options.add_argument('no-sandbox')
options.add_argument('disable-gpu')
options.add_argument('disable-popup-blocking')
options.add_argument("lang=#{language}")
options.add_preference(:download,
directory_upgrade: true,
@ -45,20 +46,23 @@ def register_chrome_headless(language, name: :"chrome_headless_#{language}")
driver = Capybara::Selenium::Driver.new(
app,
browser: :chrome,
browser: ENV['SELENIUM_GRID_URL'] ? :remote : :chrome,
url: ENV['SELENIUM_GRID_URL'],
desired_capabilities: capabilities,
http_client: client,
options: options
)
# Enable file downloads in headless mode
# https://bugs.chromium.org/p/chromium/issues/detail?id=696481
bridge = driver.browser.send :bridge
if !ENV['SELENIUM_GRID_URL']
# Enable file downloads in headless mode
# https://bugs.chromium.org/p/chromium/issues/detail?id=696481
bridge = driver.browser.send :bridge
bridge.http.call :post,
"/session/#{bridge.session_id}/chromium/send_command",
cmd: 'Page.setDownloadBehavior',
params: { behavior: 'allow', downloadPath: DownloadedFile::PATH.to_s }
bridge.http.call :post,
"/session/#{bridge.session_id}/chromium/send_command",
cmd: 'Page.setDownloadBehavior',
params: { behavior: 'allow', downloadPath: DownloadedFile::PATH.to_s }
end
driver
end
@ -68,20 +72,20 @@ def register_chrome_headless(language, name: :"chrome_headless_#{language}")
end
end
register_chrome_headless 'en'
register_chrome 'en'
# Register german locale for custom field decimal test
register_chrome_headless 'de'
register_chrome 'de'
# Register mocking proxy driver
register_chrome_headless 'en', name: :headless_chrome_billy do |options, capabilities|
options.add_argument("--proxy-server=#{Billy.proxy.host}:#{Billy.proxy.port}")
options.add_argument('--proxy-bypass-list=127.0.0.1;localhost')
register_chrome 'en', name: :chrome_billy do |options, capabilities|
options.add_argument("proxy-server=#{Billy.proxy.host}:#{Billy.proxy.port}")
options.add_argument('proxy-bypass-list=127.0.0.1;localhost')
capabilities[:acceptInsecureCerts] = true
end
# Register Revit add in
register_chrome_headless 'en', name: :chrome_headless_revit_add_in do |options, capabilities|
options.add_argument("--user-agent='foo bar Revit'")
register_chrome 'en', name: :chrome_revit_add_in do |options, capabilities|
options.add_argument("user-agent='foo bar Revit'")
end

@ -1,21 +1,22 @@
# Force the latest version of geckodriver using the webdriver gem
require 'webdrivers/geckodriver'
require 'socket'
::Webdrivers.logger.level = :DEBUG
if ENV['CI']
::Webdrivers.logger.level = :DEBUG
::Webdrivers::Geckodriver.update
end
def register_firefox_headless(language, name: :"firefox_headless_#{language}")
def register_firefox(language, name: :"firefox_#{language}")
require 'selenium/webdriver'
Capybara.register_driver name do |app|
Selenium::WebDriver::Firefox::Binary.path = ENV['FIREFOX_BINARY_PATH'] ||
Selenium::WebDriver::Firefox::Binary.path
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 180
if ENV['CI']
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 180
end
profile = Selenium::WebDriver::Firefox::Profile.new
profile['intl.accept_languages'] = language
@ -46,17 +47,23 @@ def register_firefox_headless(language, name: :"firefox_headless_#{language}")
options.args << "--headless"
end
# If you need to trace the webdriver commands, un-comment this line
# Selenium::WebDriver.logger.level = :info
driver = Capybara::Selenium::Driver.new(
app,
browser: :firefox,
options: options,
desired_capabilities: capabilities,
http_client: client,
)
if ENV['SELENIUM_GRID_URL']
driver = Capybara::Selenium::Driver.new(
app,
browser: :remote,
url: ENV['SELENIUM_GRID_URL'],
desired_capabilities: capabilities,
options: options
)
else
driver = Capybara::Selenium::Driver.new(
app,
browser: :firefox,
desired_capabilities: capabilities,
options: options,
http_client: client
)
end
Capybara::Screenshot.register_driver(name) do |driver, path|
driver.browser.save_screenshot(path)
@ -66,24 +73,20 @@ def register_firefox_headless(language, name: :"firefox_headless_#{language}")
end
end
register_firefox_headless 'en'
register_firefox 'en'
# Register german locale for custom field decimal test
register_firefox_headless 'de'
register_firefox 'de'
# Register mocking proxy driver
register_firefox_headless 'en', name: :headless_firefox_billy do |profile, options, capabilities|
register_firefox 'en', name: :firefox_billy do |profile, options, capabilities|
profile.assume_untrusted_certificate_issuer = false
profile.proxy = Selenium::WebDriver::Proxy.new(
http: "#{Billy.proxy.host}:#{Billy.proxy.port}",
ssl: "#{Billy.proxy.host}:#{Billy.proxy.port}")
ip_address = Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address
hostname = ENV['CAPYBARA_DYNAMIC_HOSTNAME'].present? ? ip_address : ENV.fetch('CAPYBARA_APP_HOSTNAME', Billy.proxy.host)
capabilities[:accept_insecure_certs] = true
end
profile.proxy = Selenium::WebDriver::Proxy.new(
http: "#{hostname}:#{Billy.proxy.port}",
ssl: "#{hostname}:#{Billy.proxy.port}")
# Resize window if firefox
RSpec.configure do |config|
config.before(:each, driver: Proc.new { |val| val.to_s.start_with? 'firefox_headless_' }) do
Capybara.page.driver.browser.manage.window.maximize
end
capabilities[:accept_insecure_certs] = true
end

@ -1,3 +1,4 @@
require 'socket'
require 'capybara/rspec'
require 'capybara-screenshot'
require 'capybara-screenshot/rspec'
@ -6,7 +7,18 @@ require 'action_dispatch'
RSpec.configure do |config|
Capybara.default_max_wait_time = 4
Capybara.javascript_driver = :chrome_headless_en
Capybara.javascript_driver = :chrome_en
port = ENV.fetch('CAPYBARA_SERVER_PORT', '0').to_i
if port > 0
Capybara.server_port = port
end
Capybara.always_include_port = true
ip_address = Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address
hostname = ENV['CAPYBARA_DYNAMIC_HOSTNAME'].present? ? ip_address : ENV.fetch('CAPYBARA_APP_HOSTNAME', 'localhost')
Capybara.server_host = '0.0.0.0'
Capybara.app_host = "http://#{hostname}"
end
##

@ -67,7 +67,7 @@ module Pages
end
def selenium_driver?
Capybara.current_driver.to_s.include?('headless')
Capybara.current_session.driver.is_a?(Capybara::Selenium::Driver)
end
def set_items_per_page!(n)

@ -33,7 +33,7 @@
# This allows us to stub requests to external APIs to guarantee responses regardless of
# their availability.
#
# In order to use the proxied server, you need to use `driver: headless_firefox_billy` in your examples
# In order to use the proxied server, you need to use `driver: firefox_billy` in your examples
#
# See https://github.com/oesmith/puffing-billy for more information
require 'billy/capybara/rspec'

@ -63,7 +63,7 @@ class WithDirectUploads
end
def around(example)
example.metadata[:driver] = :headless_firefox_billy
example.metadata[:driver] = :firefox_billy
csp_config = SecureHeaders::Configuration.instance_variable_get("@default_config").csp
csp_config.connect_src = ["'self'", "test-bucket.s3.amazonaws.com"]

@ -158,7 +158,7 @@ describe 'layouts/base', type: :view do
visit 'assets/favicon.ico'
expect(page.status_code).to eq(200)
visit 'apple-touch-icon-120x120.png'
visit 'assets/apple-touch-icon-120x120.png'
expect(page.status_code).to eq(200)
end
end

Binary file not shown.
