Using docker-compose for Redmine

What is docker-compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the official Docker Compose documentation.
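
As a quick illustration of that workflow, the day-to-day Compose commands look like the following. This is a generic sketch, not specific to Redmine.

docker-compose up -d     # create and start all services in the background (detached)
docker-compose ps        # list the services and their current state
docker-compose logs -f   # follow the combined logs of all services
docker-compose down      # stop and remove the services and the default network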

What is Redmine

Redmine is a free and open source, web-based project management and issue tracking tool. It allows users to manage multiple projects and associated subprojects. It features per project wikis and forums, time tracking, and flexible role based access control. It includes a calendar and Gantt charts to aid visual representation of projects and their deadlines. Redmine integrates with various version control systems and includes a repository browser and diff viewer.

Why use docker-compose for Redmine

Redmine has two components: a database service and the Redmine web service. The two are tightly coupled, so it is convenient to define both in a single YAML file and start the entire Redmine service with one command.

Getting started

  • Prepare a YAML file named docker-compose.yaml.

docker-compose.yaml
version: "3.7"
services:
  redmine:
    image: redmine
    restart: always
    volumes:
      - "/home/daniel/redmine-bk/files:/usr/src/redmine/files"
    ports:
      - "3000:3000"
    environment:
      REDMINE_DB_POSTGRES: redmine-db
      REDMINE_DB_USERNAME: redmine
      REDMINE_DB_PASSWORD: mypassword

  redmine-db:
    image: postgres
    restart: always
    volumes:
      - "/home/daniel/redmine-bk/postgresql:/root"
    environment:
      POSTGRES_PASSWORD: mypassword
      POSTGRES_USER: redmine
  • We have two services defined in docker-compose.yaml: one named redmine and the other named redmine-db.

  • restart: always - Always restart the service if it dies.

  • volumes - Mount container directories to the host: the Redmine files directory so that attachments persist, and the database container's /root directory, which is used later to keep database dumps on the host.

  • environment - The environment variables required to start the Redmine and database services. Refer to the redmine page on Docker Hub for more details.

    • Note that passing the password in as a plain-text environment variable is far from ideal. Redmine supports REDMINE_DB_PASSWORD_FILE, but how to pass the secret to Postgres needs further research; plain text is used here only as a demo for this article.

  • Once you have the docker-compose.yaml file ready, you can run docker-compose up -d to bring up the services; -d means detached mode.

docker-compose up -d
Creating network "docker-compose-files_default" with the default driver
Creating docker-compose-files_redmine_1_56451004a67e    ... done
Creating docker-compose-files_redmine-db_1_1741b449c554 ... done
  • Run docker-compose ps to check the status.

docker-compose ps
                     Name                                   Command               State           Ports
----------------------------------------------------------------------------------------------------------------
docker-compose-files_redmine-db_1_ac764554b7a4   docker-entrypoint.sh postgres    Up      5432/tcp
docker-compose-files_redmine_1_dea3427ee7a5      /docker-entrypoint.sh rail ...   Up      0.0.0.0:3000->3000/tcp
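
At this point both containers should be running. As a quick sanity check (an illustrative command, assuming curl is available on the host), you can confirm that the web service answers on the published port:

curl -sI http://localhost:3000 | head -n 1   # expect an HTTP 200 response from Redmine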

Backup and Restoration of Redmine

Backup

Redmine backups should include:

  • Database

  • Attachments (stored in the files directory under the installation directory by default)

For attachments, we have the attachments folder mounted on the host, so we can easily back up the contents of the files directory.
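
For example, a simple host-side backup could be a dated tarball of that directory. This is just a sketch assuming the bind-mount path from the docker-compose.yaml above; adjust the paths to your environment.

# Archive the attachments directory that is bind-mounted from the redmine container.
tar -czf redmine-files-$(date +%F).tar.gz -C /home/daniel/redmine-bk files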

For the database (PostgreSQL in our example), the following command needs to be executed:

/usr/bin/pg_dump -U <username> -h <hostname> -Fc --file=/root/redmine.sqlc <redmine_database>

You can find <username>, <hostname>, and <redmine_database> in the file config/database.yml. <hostname> may not be required depending on your installation of the database. The pg_dump command will prompt you to enter the password when necessary.

The reason for using --file=/root/redmine.sqlc is that /root is mounted on the host, so the dump persists even if the container restarts.
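
In this Compose setup, one convenient way to run the dump is inside the database container with docker-compose exec. This is only a sketch: the database name redmine is an assumption (the postgres image defaults the database name to POSTGRES_USER when POSTGRES_DB is not set), and pg_dump may still prompt for the password.

# Run pg_dump inside the redmine-db service container; the dump lands in /root,
# which is bind-mounted to /home/daniel/redmine-bk/postgresql on the host.
docker-compose exec redmine-db pg_dump -U redmine -Fc --file=/root/redmine.sqlc redmine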

Restoration

For attachments, you do not need to do anything specific since those files are mounted from the host; if the container restarts, they will be re-mounted into the container.

For the database, the following command needs to be executed:

pg_restore -U <username> -h <hostname> -d <redmine_database> -c redmine.sqlc
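
As with the backup, here is a sketch of running the restore inside the database container, again assuming the database is named redmine and the dump file sits in /root.

# Restore the custom-format dump created earlier; -c drops existing objects before recreating them.
docker-compose exec redmine-db pg_restore -U redmine -d redmine -c /root/redmine.sqlc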

Online help

root@4c8329782e15:/# pg_dump --help
pg_dump dumps a database as a text file or to other formats.

Usage:
  pg_dump [OPTION]... [DBNAME]

General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format (custom, directory, tar,
                               plain text (default))
  -j, --jobs=NUM               use this many parallel jobs to dump
  -v, --verbose                verbose mode
  -V, --version                output version information, then exit
  -Z, --compress=0-9           compression level for compressed formats
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock
  --no-sync                    do not wait for changes to be written safely to disk
  -?, --help                   show this help, then exit

Options controlling the output content:
  -a, --data-only              dump only the data, not the schema
  -b, --blobs                  include large objects in dump
  -B, --no-blobs               exclude large objects in dump
  -c, --clean                  clean (drop) database objects before recreating
  -C, --create                 include commands to create database in dump
  -E, --encoding=ENCODING      dump the data in encoding ENCODING
  -n, --schema=SCHEMA          dump the named schema(s) only
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
  -o, --oids                   include OIDs in dump
  -O, --no-owner               skip restoration of object ownership in
                               plain-text format
  -s, --schema-only            dump only the schema, no data
  -S, --superuser=NAME         superuser user name to use in plain-text format
  -t, --table=TABLE            dump the named table(s) only
  -T, --exclude-table=TABLE    do NOT dump the named table(s)
  -x, --no-privileges          do not dump privileges (grant/revoke)
  --binary-upgrade             for use by upgrade utilities only
  --column-inserts             dump data as INSERT commands with column names
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting
  --disable-triggers           disable triggers during data-only restore
  --enable-row-security        enable row security (dump only content user has
                               access to)
  --exclude-table-data=TABLE   do NOT dump data for the named table(s)
  --if-exists                  use IF EXISTS when dropping objects
  --inserts                    dump data as INSERT commands, rather than COPY
  --load-via-partition-root    load partitions via the root table
  --no-comments                do not dump comments
  --no-publications            do not dump publications
  --no-security-labels         do not dump security label assignments
  --no-subscriptions           do not dump subscriptions
  --no-synchronized-snapshots  do not use synchronized snapshots in parallel jobs
  --no-tablespaces             do not dump tablespace assignments
  --no-unlogged-table-data     do not dump unlogged table data
  --quote-all-identifiers      quote all identifiers, even if not key words
  --section=SECTION            dump named section (pre-data, data, or post-data)
  --serializable-deferrable    wait until the dump can run without anomalies
  --snapshot=SNAPSHOT          use given snapshot for the dump
  --strict-names               require table and/or schema include patterns to
                               match at least one entity each
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of
                               ALTER OWNER commands to set ownership

Connection options:
  -d, --dbname=DBNAME      database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before dump

If no database name is supplied, then the PGDATABASE environment
variable value is used.

Report bugs to <pgsql-bugs@postgresql.org>.
root@4c8329782e15:/# pg_restore --help
pg_restore restores a PostgreSQL database from an archive created by pg_dump.

Usage:
  pg_restore [OPTION]... [FILE]

General options:
  -d, --dbname=NAME        connect to database name
  -f, --file=FILENAME      output file name
  -F, --format=c|d|t       backup file format (should be automatic)
  -l, --list               print summarized TOC of the archive
  -v, --verbose            verbose mode
  -V, --version            output version information, then exit
  -?, --help               show this help, then exit

Options controlling the restore:
  -a, --data-only              restore only the data, no schema
  -c, --clean                  clean (drop) database objects before recreating
  -C, --create                 create the target database
  -e, --exit-on-error          exit on error, default is to continue
  -I, --index=NAME             restore named index
  -j, --jobs=NUM               use this many parallel jobs to restore
  -L, --use-list=FILENAME      use table of contents from this file for
                               selecting/ordering output
  -n, --schema=NAME            restore only objects in this schema
  -N, --exclude-schema=NAME    do not restore objects in this schema
  -O, --no-owner               skip restoration of object ownership
  -P, --function=NAME(args)    restore named function
  -s, --schema-only            restore only the schema, no data
  -S, --superuser=NAME         superuser user name to use for disabling triggers
  -t, --table=NAME             restore named relation (table, view, etc.)
  -T, --trigger=NAME           restore named trigger
  -x, --no-privileges          skip restoration of access privileges (grant/revoke)
  -1, --single-transaction     restore as a single transaction
  --disable-triggers           disable triggers during data-only restore
  --enable-row-security        enable row security
  --if-exists                  use IF EXISTS when dropping objects
  --no-comments                do not restore comments
  --no-data-for-failed-tables  do not restore data of tables that could not be
                               created
  --no-publications            do not restore publications
  --no-security-labels         do not restore security labels
  --no-subscriptions           do not restore subscriptions
  --no-tablespaces             do not restore tablespace assignments
  --section=SECTION            restore named section (pre-data, data, or post-data)
  --strict-names               require table and/or schema include patterns to
                               match at least one entity each
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of
                               ALTER OWNER commands to set ownership

Connection options:
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before restore

The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified
multiple times to select multiple objects.

If no input file name is supplied, then standard input is used.

Report bugs to <pgsql-bugs@postgresql.org>.

Refer to the official Compose file reference for more details on each property in docker-compose.yaml.

Now you should be able to access Redmine at localhost:3000. According to the Redmine documentation, the default username and password are admin/admin.

More backup and restoration details can be found in the official Redmine documentation.
