224 changes: 112 additions & 112 deletions docs/compile_guide.md
This guide provides instructions for developers to build and run Harbor from source.

## Step 1: Prepare for a build environment for Harbor

Harbor is deployed as several Docker containers, and most of the code is written in Go. The build environment requires Docker, Docker Compose, and a Go development environment. Please install the following prerequisites:

| Software | Required Version |
| -------------- | ---------------- |
| docker | 17.05 + |
| docker-compose | 1.18.0 + |
| python | 2.7 + |
| git | 1.9.1 + |
| make | 3.81 + |
| golang\* | 1.7.3 + |


\*optional, required only if you use your own Golang environment.
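Before starting a build, it can help to confirm the prerequisites are on the PATH. The sketch below only reports which tools are present; it does not check version numbers, which you should compare against the table above manually.

```shell
# Report which required build tools are installed; golang is checked separately
# because it is optional (only needed when using your own Golang environment).
for tool in docker docker-compose git make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
if command -v go >/dev/null 2>&1; then
  go version
else
  echo "go: not installed (optional)"
fi
```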

## Step 2: Getting the source code

```sh
$ git clone https://github.com/goharbor/harbor
```

## Step 3: Building and installing Harbor

### Configuration

Edit the file **make/harbor.yml** and make necessary configuration changes such as hostname, admin password and mail server. Refer to **[Installation and Configuration Guide](installation_guide.md#configuring-harbor)** for more info.

```sh
$ cd harbor
$ vi make/harbor.yml
```

### Compiling and Running

You can compile the code by one of the following approaches:

#### I. Build with official Golang image

- Get official Golang image from docker hub:

```sh
$ docker pull golang:1.12.5
```

- Build, install and bring up Harbor without Notary:

```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage
```

- Build, install and bring up Harbor with Notary:

```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true
```

- Build, install and bring up Harbor with Clair:

```sh
$ make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage CLAIRFLAG=true
```
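The flags above are not mutually exclusive; for instance, Notary and Clair can be enabled in the same invocation, as the later examples in this guide do. The command is only echoed as a sketch here, since actually running it requires Docker and the Harbor source tree.

```shell
# Sketch: combine NOTARYFLAG and CLAIRFLAG in a single build (echoed, not executed)
echo make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true CLAIRFLAG=true
```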

#### II. Compile code with your own Golang environment, then build Harbor

- Move source code to \$GOPATH

```sh
$ mkdir $GOPATH/src/github.com/goharbor/
$ cd ..
$ mv harbor $GOPATH/src/github.com/goharbor/.
```

- Build, install and run Harbor without Notary and Clair:

```sh
$ cd $GOPATH/src/github.com/goharbor/harbor
$ make install
```

- Build, install and run Harbor with Notary and Clair:

```sh
$ cd $GOPATH/src/github.com/goharbor/harbor
$ make install -e NOTARYFLAG=true CLAIRFLAG=true
```


### Verify your installation

If everything worked properly, you will see the following message:

```sh
...
Start complete. You can visit harbor now.
```

Refer to [Installation and Configuration Guide](installation_guide.md#managing-harbors-lifecycle) for more information about managing your Harbor instance.

## Appendix

- Using the Makefile

The `Makefile` contains these configurable parameters:

| Variable | Description |
| ------------------- | ---------------------------------------------------------------- |
| BASEIMAGE | Container base image, default: photon |
| DEVFLAG | Build model flag, default: dev |
| COMPILETAG | Compile model flag, default: compile_normal (local golang build) |
| NOTARYFLAG | Notary mode flag, default: false |
| CLAIRFLAG | Clair mode flag, default: false |
| HTTPPROXY | NPM http proxy for Clarity UI builder |
| REGISTRYSERVER | Remote registry server IP address |
| REGISTRYUSER | Remote registry server user name |
| REGISTRYPASSWORD | Remote registry server user password |
| REGISTRYPROJECTNAME | Project name on remote registry server |
| VERSIONTAG | Harbor images tag, default: dev |
| PKGVERSIONTAG | Harbor online and offline version tag, default:dev |

- Predefined targets:

| Target | Description |
| ---------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| all | prepare env, compile binaries, build images and install images |
| prepare | prepare env |
| compile | compile ui and jobservice code |
| compile_portal | compile portal code |
| compile_ui | compile ui binary |
| compile_jobservice | compile jobservice binary |
| build | build Harbor docker images (default: using build_photon) |
| build_photon | build Harbor docker images from Photon OS base image |
| install | compile binaries, build images, prepare specific version of compose file and startup Harbor instance |
| start | startup Harbor instance (set NOTARYFLAG=true when with Notary) |
| down | shutdown Harbor instance (set NOTARYFLAG=true when with Notary) |
| package_online | prepare online install package |
| package_offline | prepare offline install package |
| pushimage | push Harbor images to specific registry server |
| clean all | remove binary, Harbor images, specific version docker-compose file, specific version tag and online/offline install package |
| cleanbinary | remove ui and jobservice binary |
| cleanimage | remove Harbor images |
| cleandockercomposefile | remove specific version docker-compose |
| cleanversiontag | remove specific version tag |
| cleanpackage | remove online/offline install package |

#### EXAMPLE:

#### Push Harbor images to a specific registry server

```sh
$ make pushimage -e DEVFLAG=false REGISTRYSERVER=[$SERVERADDRESS] REGISTRYUSER=[$USERNAME] REGISTRYPASSWORD=[$PASSWORD] REGISTRYPROJECTNAME=[$PROJECTNAME]
```

**Note**: you need to append "/" to the end of REGISTRYSERVER. If REGISTRYSERVER is not set, images will be pushed directly to Docker Hub.

```sh
$ make pushimage -e DEVFLAG=false REGISTRYUSER=[$USERNAME] REGISTRYPASSWORD=[$PASSWORD] REGISTRYPROJECTNAME=[$PROJECTNAME]
```

#### Clean up binaries and images of a specific version

```sh
$ make clean -e VERSIONTAG=[TAG]
```
**Note**: If new code has been pushed to GitHub, the git commit TAG will change. It is better to use this command to clean up images and files of the previous TAG.
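As a sketch, the TAG of the current checkout can be derived from git; the `dev` fallback below is an assumption for trees without tags, and the command is echoed rather than executed.

```shell
# Derive a tag to clean up; fall back to "dev" outside a git checkout or without tags
TAG=$(git describe --tags --always 2>/dev/null || echo dev)
echo make clean -e VERSIONTAG="$TAG"
```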

#### By default, the make process creates a development build. To create a release build of Harbor, set the below flag to false.

```sh
$ make XXXX -e DEVFLAG=false
```
29 changes: 18 additions & 11 deletions docs/configure_https.md
Notice that you may need to trust the certificate at OS level. Please refer to the Troubleshooting section below.

**3) Configure Harbor**

Edit the file `harbor.yml`, update the hostname and uncomment the https block, and update the attributes `certificate` and `private_key`:

```yaml
#set hostname
hostname: yourdomain.com

http:
port: 80

https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /data/cert/yourdomain.com.crt
private_key: /data/cert/yourdomain.com.key

```
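If you do not yet have the certificate and key files, a self-signed pair can be generated as in the sketch below, assuming OpenSSL is installed; the output directory and the CN are placeholders that should match your `hostname` and the paths configured above. A production deployment would use a certificate from a trusted CA instead.

```shell
# Generate a self-signed certificate/key pair for yourdomain.com.
# Adjust the output paths to match harbor.yml's certificate/private_key.
mkdir -p /tmp/harbor-cert
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
  -subj "/CN=yourdomain.com" \
  -keyout /tmp/harbor-cert/yourdomain.com.key \
  -out /tmp/harbor-cert/yourdomain.com.crt
```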

Generate configuration files for Harbor:
After setting up HTTPS for Harbor, you can verify it by the following steps:

* Note that some browsers may still show a warning about an unknown Certificate Authority (CA) even though the certificate is signed by our self-signed CA and that CA has been deployed to the location mentioned above. This is because a self-signed CA is essentially not a trusted third-party CA. You can import the CA into the browser yourself to remove the warning.

* On a machine with Docker daemon, make sure the option "-insecure-registry" for https://yourdomain.com is not present.

* If you mapped nginx port 443 to another port, then you should instead create the directory ```/etc/docker/certs.d/yourdomain.com:port``` (or your registry host IP:port). Then run any docker command to verify the setup, e.g.

If you've mapped the nginx 443 port to another port, you need to add the port to the login command, like:
```


## Troubleshooting
1. You may get an intermediate certificate from a certificate issuer. In this case, you should merge the intermediate certificate with your own certificate to create a certificate bundle. You can achieve this by the below command:

```
...
```
Binary file removed docs/img/caicloudLogoWeb.png
Binary file modified docs/img/create_rule.png
Binary file modified docs/img/delete_rule.png
Binary file added docs/img/list_stop_executions.png
Binary file removed docs/img/list_stop_jobs.png
Binary file added docs/img/list_tasks.png
Binary file removed docs/img/manage_endpoint.png
Binary file added docs/img/manage_registry.png
Binary file modified docs/img/manage_replication.png
Binary file added docs/img/oidc_auth_setting.png
Binary file added docs/img/oidc_login.png
Binary file added docs/img/oidc_onboard_dlg.png
Binary file added docs/img/profile_dlg.png
Binary file modified docs/img/start_replicate.png
Binary file added docs/img/user_profile.png
412 changes: 209 additions & 203 deletions docs/installation_guide.md


4 changes: 2 additions & 2 deletions docs/kubernetes_deployment.md
**IMPORTANT** This guide is deprecated and not updated any more. We strongly recommend using [Harbor Helm Chart](https://github.com/goharbor/harbor-helm) to deploy latest Harbor release on Kubernetes.

## Integration with Kubernetes
This Document describes how to deploy Harbor on Kubernetes. It has been verified on **Kubernetes v1.6.5** and **Harbor v1.2.0**

### Prerequisite

* You should have domain knowledge about Kubernetes (Deployment, Service, Persistent Volume, Persistent Volume Claim, Config Map, Ingress).
* **Optional**: Load the docker images onto worker nodes. *If you skip this step, worker node will pull images from Docker Hub when starting the pods.*
* Download the offline installer of Harbor v1.2.0 from the [release](https://github.com/goharbor/harbor/releases) page.
* Uncompress the offline installer and get the images tgz file harbor.*.tgz, transfer it to each of the worker nodes.
* Load the images into docker:
21 changes: 13 additions & 8 deletions docs/manage_role_by_ldap_group.md
This guide provides instructions to manage roles by LDAP/AD group. You can import LDAP/AD groups to Harbor and assign project roles to them.

## Prerequisite

1. Harbor's auth_mode is ldap_auth and **[basic LDAP configure parameters](https://github.com/vmware/harbor/blob/master/docs/installation_guide.md#optional-parameters)** are configured.
1. Memberof overlay

This feature requires that the LDAP/AD server has the **memberof overlay** feature enabled.

Besides the **[basic LDAP configure parameters](https://github.com/vmware/harbor/blob/master/docs/installation_guide.md#optional-parameters)**, the LDAP group related configuration parameters should also be configured. They can be configured before or after installation.

1. Configure LDAP parameters via API, refer to **[Config Harbor user settings by command line](configure_user_settings.md)**

For example:
```
curl -X PUT -u "<username>:<password>" -H "Content-Type: application/json" -ki https://harbor.sample.domain/api/configurations -d'{"ldap_group_basedn":"ou=groups,dc=example,dc=com"}'
```
The following parameters are related to LDAP group configuration.
* ldap_group_basedn -- The base DN from which to lookup a group in LDAP/AD, for example: ou=groups,dc=example,dc=com
* ldap_group_filter -- The filter to search LDAP/AD group, for example: objectclass=groupOfNames
* ldap_group_gid -- The attribute used to name an LDAP/AD group, for example: cn
* ldap_group_scope -- The scope to search for LDAP/AD groups. 0-LDAP_SCOPE_BASE, 1-LDAP_SCOPE_ONELEVEL, 2-LDAP_SCOPE_SUBTREE

2. Or change the configuration parameters in the web console after installation. Go to "Administration" -> "Configuration" -> "Authentication" and change the following settings.
- LDAP Group Base DN -- ldap_group_basedn in the Harbor user settings
- LDAP Group Filter -- ldap_group_filter in the Harbor user settings
- LDAP Group GID -- ldap_group_gid in the Harbor user settings
- LDAP Group Scope -- ldap_group_scope in the Harbor user settings
- LDAP Groups With Admin Privilege -- Specify an LDAP/AD group DN; all LDAP/AD users in this group have Harbor admin privileges.

![Screenshot of LDAP group config](img/group/ldap_group_config.png)
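The API approach from step 1 can also set several of the group parameters in one request. The sketch below only assembles and echoes the call rather than executing it; the hostname, credentials, and parameter values are placeholders.

```shell
# Build the JSON payload for the configurations API and echo the curl invocation;
# replace the host and credentials with your own before actually running it.
payload='{"ldap_group_basedn":"ou=groups,dc=example,dc=com","ldap_group_filter":"objectclass=groupOfNames","ldap_group_gid":"cn","ldap_group_scope":2}'
echo curl -X PUT -u "admin:<password>" -H "Content-Type: application/json" -ki \
  https://harbor.sample.domain/api/configurations -d "$payload"
```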
If a user is in the LDAP groups with admin privilege (ldap_group_admin_dn), the user has Harbor admin privileges.

## User privileges and group privileges

If a user has both user-level role and group-level role, these privileges are merged together.
6 changes: 3 additions & 3 deletions docs/migration_guide.md
# Harbor upgrade and migration guide

This guide only covers upgrade and migration to version >= v1.8.0

When upgrading your existing Harbor instance to a newer version, you may need to migrate the data in your database and the settings in `harbor.cfg`.
Since the migration may alter the database schema and the settings of `harbor.cfg`, you should **always** back up your data before any migration.
you follow the steps below.
```
mv harbor /my_backup_dir/harbor
```
Back up database (by default in directory `/data/database`)
```
cp -r /data/database /my_backup_dir/
```
in that path will be updated with the values from ${harbor_cfg}

```
docker run -it --rm -v ${harbor_cfg}:/harbor-migration/harbor-cfg/harbor.yml -v ${harbor_yml}:/harbor-migration/harbor-cfg-out/harbor.yml goharbor/harbor-migrator:[tag] --cfg up
```
**NOTE:** The schema upgrade and data migration of the database are performed by core when Harbor starts. If the migration fails, check the core log to debug.
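With concrete values filled in, the migrator invocation looks like the sketch below. The paths and the `v1.8.0` tag are illustrative assumptions; substitute your own backup location and the migrator tag matching your target version. The command is echoed rather than executed.

```shell
# Illustrative paths/tag for the cfg migration; substitute real values before running.
harbor_cfg=/my_backup_dir/harbor/harbor.cfg
harbor_yml=/root/harbor/harbor.yml
echo docker run -it --rm \
  -v "${harbor_cfg}:/harbor-migration/harbor-cfg/harbor.yml" \
  -v "${harbor_yml}:/harbor-migration/harbor-cfg-out/harbor.yml" \
  goharbor/harbor-migrator:v1.8.0 --cfg up
```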
748 changes: 730 additions & 18 deletions docs/swagger.yaml


4 changes: 2 additions & 2 deletions docs/use_make.md
version | set harbor version
#### EXAMPLE:

#### Build and run harbor from source code.
make install GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true

### Package offline installer
make package_offline GOBUILDIMAGE=golang:1.12.5 COMPILETAG=compile_golangimage NOTARYFLAG=true

### Start harbor with notary
make -e NOTARYFLAG=true start
12 changes: 8 additions & 4 deletions docs/use_notary.md
### Setup

In harbor.yml, make sure https is enabled, and the attributes `certificate` and `private_key` point to valid certificates. For more information about generating an https certificate, please refer to: [Configuring HTTPS for Harbor](configure_https.md)

### Copy Root Certificate

Suppose the Harbor instance is hosted on a machine `192.168.0.5`
If you are using a self-signed certificate, make sure to copy the CA root cert to `/etc/docker/certs.d/192.168.0.5/` and `~/.docker/tls/192.168.0.5:4443/`

### Enable Docker Content Trust

It can be done via setting environment variables:

```
Expand All @@ -14,7 +17,8 @@ export DOCKER_CONTENT_TRUST_SERVER=https://192.168.0.5:4443
```

### Set alias for notary (optional)

By default, the notary client stores its meta files in a local directory different from the docker client's. If you want to use the notary client to manipulate the keys/meta files generated by Docker Content Trust, please set the alias to reduce the effort:

```
alias notary="notary -s https://192.168.0.5:4443 -d ~/.docker/trust --tlscacert /etc/docker/certs.d/192.168.0.5/ca.crt"
155 changes: 118 additions & 37 deletions docs/user_guide.md


88 changes: 61 additions & 27 deletions make/harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: reg.mydomain.com

# http related config
# Remember to change the admin password from the UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
# The password for the root user of Harbor DB. Change this before any production use.
password: root123
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: 50
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 100 for postgres.
max_open_conns: 100

# The default data volume
data_volume: /data
# disabled: false

# Clair configuration
clair:
# The interval of clair updaters, the unit is hour, set to 0 to disable the updaters.
updaters_interval: 12


jobservice:
# Maximum number of job workers in job service
max_job_workers: 10

notification:
# Maximum retry count for webhook job
webhook_job_max_retry: 10

chart:
# Change the value of absolute_url to enabled can enable absolute url in chart
absolute_url: disabled

# Log configurations
log:
# options are debug, info, warning, error, fatal
level: info
# configs for logs in local storage
local:
# Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
rotate_count: 50
# Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
# If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
# are all valid.
rotate_size: 200M
# The directory on your host that store log
location: /var/log/harbor

# Uncomment following lines to enable external syslog endpoint.
# external_endpoint:
# # protocol used to transmit log to external endpoint, options is tcp or udp
# protocol: tcp
# # The host of external endpoint
# host: localhost
# # Port of external endpoint
# port: 5140

#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 1.9.0

# Uncomment external_database if using external database.
# external_database:
# harbor:
# host: harbor_db_host
Expand All @@ -92,6 +107,8 @@ _version: 1.8.0
# username: harbor_db_username
# password: harbor_db_password
# ssl_mode: disable
# max_idle_conns: 2
# max_open_conns: 0
# clair:
# host: clair_db_host
# port: clair_db_port
Expand Down Expand Up @@ -127,3 +144,20 @@ _version: 1.8.0
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
# ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components don't need to connect to each other via http proxy.
# Remove a component from the `components` array if you want to disable the proxy
# for it. If you want to use the proxy for replication, you MUST enable the proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add a domain to the `no_proxy` field when you want to disable the proxy
# for some special registry.
proxy:
http_proxy:
https_proxy:
no_proxy: 127.0.0.1,localhost,.local,.internal,log,db,redis,nginx,core,portal,postgresql,jobservice,registry,registryctl,clair
components:
- core
- jobservice
- clair
6 changes: 3 additions & 3 deletions make/migrations/postgresql/0001_initial_schema.up.sql
$$;

CREATE TRIGGER harbor_user_update_time_at_modtime BEFORE UPDATE ON harbor_user FOR EACH ROW EXECUTE PROCEDURE update_update_time_at_column();

insert into harbor_user (username, password, realname, comment, deleted, sysadmin_flag, creation_time, update_time) values
('admin', '', 'system admin', 'admin user',false, true, NOW(), NOW()),
('anonymous', '', 'anonymous user', 'anonymous user', true, false, NOW(), NOW());

create table project (
project_id SERIAL PRIMARY KEY NOT NULL,
30 changes: 30 additions & 0 deletions make/migrations/postgresql/0005_1.8.2_schema.up.sql
/*
Rename the duplicate names before adding "UNIQUE" constraint
*/
DO $$
BEGIN
WHILE EXISTS (SELECT count(*) FROM user_group GROUP BY group_name HAVING count(*) > 1) LOOP
UPDATE user_group AS r
SET group_name = (
/*
truncate the name if it is too long after appending the sequence number
*/
CASE WHEN (length(group_name)+length(v.seq::text)+1) > 256
THEN
substring(group_name from 1 for (255-length(v.seq::text))) || '_' || v.seq
ELSE
group_name || '_' || v.seq
END
)
FROM (SELECT id, row_number() OVER (PARTITION BY group_name ORDER BY id) AS seq FROM user_group) AS v
WHERE r.id = v.id AND v.seq > 1;
END LOOP;
END $$;

ALTER TABLE user_group ADD CONSTRAINT unique_group_name UNIQUE (group_name);


/*
Fix issue https://github.com/goharbor/harbor/issues/8526, delete the none scan_all schedule.
*/
UPDATE admin_job SET deleted='true' WHERE cron_str='{"type":"none"}';
188 changes: 188 additions & 0 deletions make/migrations/postgresql/0010_1.9.0_schema.up.sql
@@ -0,0 +1,188 @@
/* add table for CVE whitelist */
CREATE TABLE cve_whitelist
(
id SERIAL PRIMARY KEY NOT NULL,
project_id int,
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
expires_at bigint,
items text NOT NULL,
UNIQUE (project_id)
);

CREATE TABLE blob
(
id SERIAL PRIMARY KEY NOT NULL,
/*
digest of config, layer, manifest
*/
digest varchar(255) NOT NULL,
content_type varchar(1024) NOT NULL,
size bigint NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
UNIQUE (digest)
);

/* add the table for project and blob */
CREATE TABLE project_blob (
id SERIAL PRIMARY KEY NOT NULL,
project_id int NOT NULL,
blob_id int NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
CONSTRAINT unique_project_blob UNIQUE (project_id, blob_id)
);

CREATE TABLE artifact
(
id SERIAL PRIMARY KEY NOT NULL,
project_id int NOT NULL,
repo varchar(255) NOT NULL,
tag varchar(255) NOT NULL,
/*
digest of manifest
*/
digest varchar(255) NOT NULL,
/*
kind of artifact: image, chart, etc.
*/
kind varchar(255) NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
pull_time timestamp,
push_time timestamp,
CONSTRAINT unique_artifact UNIQUE (project_id, repo, tag)
);

/* add the table for relation of artifact and blob */
CREATE TABLE artifact_blob
(
id SERIAL PRIMARY KEY NOT NULL,
digest_af varchar(255) NOT NULL,
digest_blob varchar(255) NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
CONSTRAINT unique_artifact_blob UNIQUE (digest_af, digest_blob)
);

/* add quota table */
CREATE TABLE quota
(
id SERIAL PRIMARY KEY NOT NULL,
reference VARCHAR(255) NOT NULL,
reference_id VARCHAR(255) NOT NULL,
hard JSONB NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
UNIQUE (reference, reference_id)
);

/* add quota usage table */
CREATE TABLE quota_usage
(
id SERIAL PRIMARY KEY NOT NULL,
reference VARCHAR(255) NOT NULL,
reference_id VARCHAR(255) NOT NULL,
used JSONB NOT NULL,
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
UNIQUE (reference, reference_id)
);

/* only set quota and usage for 'library', and let the quota sync handle the others. */
INSERT INTO quota (reference, reference_id, hard, creation_time, update_time)
SELECT 'project',
CAST(project_id AS VARCHAR),
'{"count": -1, "storage": -1}',
NOW(),
NOW()
FROM project
WHERE name = 'library' and deleted = 'f';

INSERT INTO quota_usage (id, reference, reference_id, used, creation_time, update_time)
SELECT id,
reference,
reference_id,
'{"count": 0, "storage": 0}',
creation_time,
update_time
FROM quota;

create table retention_policy
(
id serial PRIMARY KEY NOT NULL,
scope_level varchar(20),
scope_reference integer,
trigger_kind varchar(20),
data text,
create_time time,
update_time time
);

create table retention_execution
(
id serial PRIMARY KEY NOT NULL,
policy_id integer,
dry_run boolean,
trigger varchar(20),
start_time timestamp
);

create table retention_task
(
id SERIAL NOT NULL,
execution_id integer,
repository varchar(255),
job_id varchar(64),
status varchar(32),
status_code integer,
status_revision integer,
start_time timestamp default CURRENT_TIMESTAMP,
end_time timestamp default CURRENT_TIMESTAMP,
total integer,
retained integer,
PRIMARY KEY (id)
);

create table schedule
(
id SERIAL NOT NULL,
job_id varchar(64),
status varchar(64),
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
PRIMARY KEY (id)
);

/*add notification policy table*/
create table notification_policy (
id SERIAL NOT NULL,
name varchar(256),
project_id int NOT NULL,
enabled boolean NOT NULL DEFAULT true,
description text,
targets text,
event_types text,
creator varchar(256),
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
PRIMARY KEY (id),
CONSTRAINT unique_project_id UNIQUE (project_id)
);

/*add notification job table*/
CREATE TABLE notification_job (
id SERIAL NOT NULL,
policy_id int NOT NULL,
status varchar(32),
/* event_type is the type of trigger event, eg. pushImage, pullImage, uploadChart... */
event_type varchar(256),
/* notify_type is the type to notify event to user, eg. HTTP, Email... */
notify_type varchar(256),
job_detail text,
job_uuid varchar(64),
creation_time timestamp default CURRENT_TIMESTAMP,
update_time timestamp default CURRENT_TIMESTAMP,
PRIMARY KEY (id)
);

ALTER TABLE replication_task ADD COLUMN status_revision int DEFAULT 0;
DELETE FROM project_metadata WHERE deleted = TRUE;
ALTER TABLE project_metadata DROP COLUMN deleted;
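The `hard` and `used` columns seeded above are JSONB documents with `count` and `storage` keys; judging from the `'{"count": -1, "storage": -1}'` default for `library`, `-1` appears to mean "no limit" (an assumption — the semantics live in application code, not this schema). A hypothetical check:

```python
import json

def within_quota(hard: str, used: str) -> bool:
    """Hypothetical illustration of the JSONB shapes seeded above.
    -1 in 'hard' is assumed to mean unlimited for that resource."""
    h, u = json.loads(hard), json.loads(used)
    return all(h[k] == -1 or u[k] <= h[k] for k in h)

print(within_quota('{"count": -1, "storage": -1}', '{"count": 5, "storage": 10}'))  # True
print(within_quota('{"count": 3, "storage": -1}', '{"count": 5, "storage": 10}'))   # False
```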
6 changes: 3 additions & 3 deletions make/photon/chartserver/builder
@@ -4,7 +4,7 @@ set +e

usage(){
echo "Usage: builder <golang image:version> <code path> <code release tag> <main.go path> <binary name>"
echo "e.g: builder golang:1.11.2 github.com/helm/chartmuseum v0.8.1 cmd/chartmuseum chartm"
echo "e.g: builder golang:1.11.2 github.com/helm/chartmuseum v0.9.0 cmd/chartmuseum chartm"
exit 1
}

@@ -13,7 +13,7 @@ if [ $# != 5 ]; then
fi

GOLANG_IMAGE="$1"
CODE_PATH="$2"
GIT_PATH="$2"
CODE_VERSION="$3"
MAIN_GO_PATH="$4"
BIN_NAME="$5"
@@ -27,7 +27,7 @@ mkdir -p binary
rm -rf binary/$BIN_NAME || true
cp compile.sh binary/

docker run -it -v $cur/binary:/go/bin --name golang_code_builder $GOLANG_IMAGE /bin/bash /go/bin/compile.sh $CODE_PATH $CODE_VERSION $MAIN_GO_PATH $BIN_NAME
docker run -it --rm -v $cur/binary:/go/bin --name golang_code_builder $GOLANG_IMAGE /bin/bash /go/bin/compile.sh $GIT_PATH $CODE_VERSION $MAIN_GO_PATH $BIN_NAME

#Clear
docker rm -f golang_code_builder
19 changes: 8 additions & 11 deletions make/photon/chartserver/compile.sh
@@ -11,24 +11,21 @@ if [ $# != 4 ]; then
usage
fi

CODE_PATH="$1"
GIT_PATH="$1"
VERSION="$2"
MAIN_GO_PATH="$3"
BIN_NAME="$4"

#Get the source code of chartmusem
go get $CODE_PATH

#Get the source code
git clone $GIT_PATH src_code
ls
SRC_PATH=$(pwd)/src_code
set -e

#Checkout the released tag branch
cd /go/src/$CODE_PATH
git checkout tags/$VERSION -b $VERSION

#Install the go dep tool to restore the package dependencies
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
dep ensure
cd $SRC_PATH
git checkout tags/$VERSION -b $VERSION

#Compile
cd /go/src/$CODE_PATH/$MAIN_GO_PATH && go build -a -o $BIN_NAME
cd $SRC_PATH/$MAIN_GO_PATH && go build -a -o $BIN_NAME
mv $BIN_NAME /go/bin/
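The rewritten script replaces `go get` with an explicit clone plus tag checkout. A minimal dry-run sketch of the new flow (variable names mirror the script's; nothing here is executed against the network, and the URL below is only an example):

```python
# Build the command sequence equivalent to the new compile.sh flow.
def build_commands(git_path, version, main_go_path, bin_name, src="src_code"):
    return [
        ["git", "clone", git_path, src],
        ["git", "-C", src, "checkout", f"tags/{version}", "-b", version],
        ["go", "build", "-a", "-o", bin_name],  # run inside <src>/<main_go_path>
    ]

for cmd in build_commands("https://github.com/helm/chartmuseum", "v0.9.0",
                          "cmd/chartmuseum", "chartm"):
    print(" ".join(cmd))
```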
10 changes: 5 additions & 5 deletions make/photon/core/Dockerfile
@@ -1,16 +1,16 @@
FROM photon:2.0

RUN tdnf install sudo -y >> /dev/null\
RUN tdnf install sudo tzdata -y >> /dev/null \
&& tdnf clean all \
&& groupadd -r -g 10000 harbor && useradd --no-log-init -r -g 10000 -u 10000 harbor \
&& mkdir /harbor/

HEALTHCHECK CMD curl --fail -s http://127.0.0.1:8080/api/ping || exit 1
COPY ./make/photon/core/harbor_core ./make/photon/core/start.sh ./UIVERSION /harbor/
COPY ./make/photon/core/harbor_core ./UIVERSION /harbor/
COPY ./src/core/views /harbor/views
COPY ./make/migrations /harbor/migrations

RUN chmod u+x /harbor/start.sh /harbor/harbor_core
RUN chmod u+x /harbor/harbor_core
WORKDIR /harbor/
ENTRYPOINT ["/harbor/start.sh"]
USER harbor
ENTRYPOINT ["/harbor/harbor_core"]
3 changes: 0 additions & 3 deletions make/photon/core/start.sh

This file was deleted.

15 changes: 8 additions & 7 deletions make/photon/db/Dockerfile
@@ -18,15 +18,16 @@ RUN tdnf erase -y toybox && tdnf install -y util-linux net-tools

VOLUME /var/lib/postgresql/data

ADD ./make/photon/db/docker-entrypoint.sh /entrypoint.sh
ADD ./make/photon/db/docker-healthcheck.sh /docker-healthcheck.sh
RUN chmod u+x /entrypoint.sh /docker-healthcheck.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD ["/docker-healthcheck.sh"]

COPY ./make/photon/db/docker-entrypoint.sh /docker-entrypoint.sh
COPY ./make/photon/db/docker-healthcheck.sh /docker-healthcheck.sh
COPY ./make/photon/db/initial-notaryserver.sql /docker-entrypoint-initdb.d/
COPY ./make/photon/db/initial-notarysigner.sql /docker-entrypoint-initdb.d/
COPY ./make/photon/db/initial-registry.sql /docker-entrypoint-initdb.d/
RUN chown -R postgres:postgres /docker-entrypoint.sh /docker-healthcheck.sh /docker-entrypoint-initdb.d \
&& chmod u+x /docker-entrypoint.sh /docker-healthcheck.sh

ENTRYPOINT ["/docker-entrypoint.sh"]
HEALTHCHECK CMD ["/docker-healthcheck.sh"]

EXPOSE 5432
CMD ["postgres"]
USER postgres
147 changes: 70 additions & 77 deletions make/photon/db/docker-entrypoint.sh
@@ -23,95 +23,88 @@ file_env() {
unset "$fileVar"
}

if [ "${1:0:1}" = '-' ]; then
set -- postgres "$@"
fi

if [ "$1" = 'postgres' ]; then
chown -R postgres:postgres $PGDATA
# look specifically for PG_VERSION, as it is expected in the DB dir
if [ ! -s "$PGDATA/PG_VERSION" ]; then
file_env 'POSTGRES_INITDB_ARGS'
if [ "$POSTGRES_INITDB_XLOGDIR" ]; then
export POSTGRES_INITDB_ARGS="$POSTGRES_INITDB_ARGS --xlogdir $POSTGRES_INITDB_XLOGDIR"
fi
su - $1 -c "initdb -D $PGDATA -U postgres -E UTF-8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8 $POSTGRES_INITDB_ARGS"
# check password first so we can output the warning before postgres
# messes it up
file_env 'POSTGRES_PASSWORD'
if [ "$POSTGRES_PASSWORD" ]; then
pass="PASSWORD '$POSTGRES_PASSWORD'"
authMethod=md5
else
# The - option suppresses leading tabs but *not* spaces. :)
cat >&2 <<-EOF
****************************************************
WARNING: No password has been set for the database.
This will allow anyone with access to the
Postgres port to access your database. In
Docker's default configuration, this is
effectively any other container on the same
system.
Use "-e POSTGRES_PASSWORD=password" to set
it in "docker run".
****************************************************
# look specifically for PG_VERSION, as it is expected in the DB dir
if [ ! -s "$PGDATA/PG_VERSION" ]; then
file_env 'POSTGRES_INITDB_ARGS'
if [ "$POSTGRES_INITDB_XLOGDIR" ]; then
export POSTGRES_INITDB_ARGS="$POSTGRES_INITDB_ARGS --xlogdir $POSTGRES_INITDB_XLOGDIR"
fi
initdb -D $PGDATA -U postgres -E UTF-8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8 $POSTGRES_INITDB_ARGS
# check password first so we can output the warning before postgres
# messes it up
file_env 'POSTGRES_PASSWORD'
if [ "$POSTGRES_PASSWORD" ]; then
pass="PASSWORD '$POSTGRES_PASSWORD'"
authMethod=md5
else
# The - option suppresses leading tabs but *not* spaces. :)
cat >&2 <<-EOF
****************************************************
WARNING: No password has been set for the database.
This will allow anyone with access to the
Postgres port to access your database. In
Docker's default configuration, this is
effectively any other container on the same
system.
Use "-e POSTGRES_PASSWORD=password" to set
it in "docker run".
****************************************************
EOF

pass=
authMethod=trust
fi

{
echo
echo "host all all all $authMethod"
} >> "$PGDATA/pg_hba.conf"
su postgres
echo `whoami`
# internal start of server in order to allow set-up using psql-client
# does not listen on external TCP/IP and waits until start finishes
su - $1 -c "pg_ctl -D \"$PGDATA\" -o \"-c listen_addresses='localhost'\" -w start"
pass=
authMethod=trust
fi

file_env 'POSTGRES_USER' 'postgres'
file_env 'POSTGRES_DB' "$POSTGRES_USER"
{
echo
echo "host all all all $authMethod"
} >> "$PGDATA/pg_hba.conf"
echo `whoami`
# internal start of server in order to allow set-up using psql-client
# does not listen on external TCP/IP and waits until start finishes
pg_ctl -D "$PGDATA" -o "-c listen_addresses=''" -w start

psql=( psql -v ON_ERROR_STOP=1 )
file_env 'POSTGRES_USER' 'postgres'
file_env 'POSTGRES_DB' "$POSTGRES_USER"

if [ "$POSTGRES_DB" != 'postgres' ]; then
"${psql[@]}" --username postgres <<-EOSQL
CREATE DATABASE "$POSTGRES_DB" ;
EOSQL
echo
fi
psql=( psql -v ON_ERROR_STOP=1 )

if [ "$POSTGRES_USER" = 'postgres' ]; then
op='ALTER'
else
op='CREATE'
fi
if [ "$POSTGRES_DB" != 'postgres' ]; then
"${psql[@]}" --username postgres <<-EOSQL
$op USER "$POSTGRES_USER" WITH SUPERUSER $pass ;
CREATE DATABASE "$POSTGRES_DB" ;
EOSQL
echo
fi

psql+=( --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" )
if [ "$POSTGRES_USER" = 'postgres' ]; then
op='ALTER'
else
op='CREATE'
fi
"${psql[@]}" --username postgres <<-EOSQL
$op USER "$POSTGRES_USER" WITH SUPERUSER $pass ;
EOSQL
echo

psql+=( --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" )

echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
done

PGUSER="${PGUSER:-postgres}" \
su - $1 -c "pg_ctl -D \"$PGDATA\" -m fast -w stop"
PGUSER="${PGUSER:-postgres}" \
pg_ctl -D "$PGDATA" -m fast -w stop

echo
echo 'PostgreSQL init process complete; ready for start up.'
echo
fi
echo
echo 'PostgreSQL init process complete; ready for start up.'
echo
fi
exec su - $1 -c "$@ -D $PGDATA"

postgres -D $PGDATA
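The entrypoint's `file_env` helper (defined just above this hunk) lets secrets arrive either directly in `VAR` or via a file named by `VAR_FILE`. A hypothetical Python analog of that pattern, for illustration only:

```python
import os

def file_env(var: str, default: str = "") -> str:
    """Read var from the environment, or from the file named by
    VAR_FILE, but never both (the shell version errors in that case)."""
    val, file_ref = os.environ.get(var), os.environ.get(f"{var}_FILE")
    if val is not None and file_ref is not None:
        raise RuntimeError(f"both {var} and {var}_FILE are set (but are exclusive)")
    if file_ref is not None:
        with open(file_ref) as f:
            return f.read().strip()
    return val if val is not None else default

os.environ["DEMO_POSTGRES_USER"] = "postgres"
print(file_env("DEMO_POSTGRES_USER"))  # postgres
```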
19 changes: 13 additions & 6 deletions make/photon/jobservice/Dockerfile
@@ -1,12 +1,19 @@
FROM photon:2.0

RUN mkdir /harbor/ \
&& tdnf install sudo -y >> /dev/null\
RUN tdnf install sudo tzdata -y >> /dev/null \
&& tdnf clean all \
&& groupadd -r -g 10000 harbor && useradd --no-log-init -r -g 10000 -u 10000 harbor
&& groupadd -r -g 10000 harbor && useradd --no-log-init -r -g 10000 -u 10000 harbor

COPY ./make/photon/jobservice/start.sh ./make/photon/jobservice/harbor_jobservice /harbor/
COPY ./make/photon/jobservice/harbor_jobservice /harbor/

RUN chmod u+x /harbor/harbor_jobservice

RUN chmod u+x /harbor/harbor_jobservice /harbor/start.sh
WORKDIR /harbor/
ENTRYPOINT ["/harbor/start.sh"]

USER harbor

VOLUME ["/var/log/jobs/"]

HEALTHCHECK CMD curl --fail -s http://127.0.0.1:8080/api/v1/stats || exit 1

ENTRYPOINT ["/harbor/harbor_jobservice", "-c", "/etc/jobservice/config.yml"]
6 changes: 0 additions & 6 deletions make/photon/jobservice/start.sh

This file was deleted.

3 changes: 3 additions & 0 deletions make/photon/log/rsyslog.conf
@@ -5,6 +5,9 @@
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf

# The default value is 8k. When a log line exceeds 8k, it is
# truncated, which makes a mess of the log file directory
$MaxMessageSize 32k

#################
#### MODULES ####
11 changes: 4 additions & 7 deletions make/photon/log/rsyslog_docker.conf
@@ -1,8 +1,5 @@
# Rsyslog configuration file for docker.

template(name="DynaFile" type="string"
string="/var/log/docker/%syslogtag:R,ERE,0,DFLT:[^[]*--end:secpath-replace%.log"
)
#if $programname == "docker" then ?DynaFile
if $programname != "rsyslogd" then -?DynaFile

template(name="DynaFile" type="string" string="/var/log/docker/%programname%.log")
if $programname != "rsyslogd" then {
action(type="omfile" dynaFile="DynaFile")
}
15 changes: 10 additions & 5 deletions make/photon/nginx/Dockerfile
@@ -1,14 +1,19 @@
FROM photon:2.0

RUN tdnf install -y nginx >> /dev/null\
RUN tdnf install sudo nginx -y >> /dev/null\
&& tdnf clean all \
&& groupadd -r -g 10000 nginx && useradd --no-log-init -r -g 10000 -u 10000 nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& tdnf clean all
&& ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80
VOLUME /var/cache/nginx /var/log/nginx /run

EXPOSE 8080

STOPSIGNAL SIGQUIT

HEALTHCHECK CMD curl --fail -s http://127.0.0.1 || exit 1
HEALTHCHECK CMD curl --fail -s http://127.0.0.1:8080 || exit 1

USER nginx

CMD ["nginx", "-g", "daemon off;"]
2 changes: 0 additions & 2 deletions make/photon/notary/server-start.sh

This file was deleted.

8 changes: 4 additions & 4 deletions make/photon/notary/server.Dockerfile
@@ -4,12 +4,12 @@ RUN tdnf install -y shadow sudo \
&& tdnf clean all \
&& groupadd -r -g 10000 notary \
&& useradd --no-log-init -r -g 10000 -u 10000 notary

COPY ./make/photon/notary/migrate-patch /bin/migrate-patch
COPY ./make/photon/notary/binary/notary-server /bin/notary-server
COPY ./make/photon/notary/binary/migrate /bin/migrate
COPY ./make/photon/notary/binary/migrations/ /migrations/
COPY ./make/photon/notary/server-start.sh /bin/server-start.sh
RUN chmod +x /bin/notary-server /migrations/migrate.sh /bin/migrate /bin/migrate-patch /bin/server-start.sh

RUN chmod +x /bin/notary-server /migrations/migrate.sh /bin/migrate /bin/migrate-patch
ENV SERVICE_NAME=notary_server
ENTRYPOINT [ "/bin/server-start.sh" ]
USER notary
CMD migrate-patch -database=${DB_URL} && /migrations/migrate.sh && /bin/notary-server -config=/etc/notary/server-config.postgres.json -logf=logfmt
2 changes: 0 additions & 2 deletions make/photon/notary/signer-start.sh

This file was deleted.

6 changes: 3 additions & 3 deletions make/photon/notary/signer.Dockerfile
@@ -8,8 +8,8 @@ COPY ./make/photon/notary/migrate-patch /bin/migrate-patch
COPY ./make/photon/notary/binary/notary-signer /bin/notary-signer
COPY ./make/photon/notary/binary/migrate /bin/migrate
COPY ./make/photon/notary/binary/migrations/ /migrations/
COPY ./make/photon/notary/signer-start.sh /bin/signer-start.sh

RUN chmod +x /bin/notary-signer /migrations/migrate.sh /bin/migrate /bin/migrate-patch /bin/signer-start.sh
RUN chmod +x /bin/notary-signer /migrations/migrate.sh /bin/migrate /bin/migrate-patch
ENV SERVICE_NAME=notary_signer
ENTRYPOINT [ "/bin/signer-start.sh" ]
USER notary
CMD migrate-patch -database=${DB_URL} && /migrations/migrate.sh && /bin/notary-signer -config=/etc/notary/signer-config.postgres.json -logf=logfmt
45 changes: 25 additions & 20 deletions make/photon/portal/Dockerfile
@@ -1,39 +1,44 @@
FROM node:10.15.0 as nodeportal

RUN mkdir -p /portal_src
RUN mkdir -p /build_dir

COPY make/photon/portal/entrypoint.sh /
COPY src/portal /portal_src
COPY ./docs/swagger.yaml /portal_src
COPY ./LICENSE /portal_src

WORKDIR /portal_src
WORKDIR /build_dir

RUN npm install && \
chmod u+x /entrypoint.sh
RUN /entrypoint.sh
VOLUME ["/portal_src"]
RUN cp -r /portal_src/* /build_dir \
&& ls -la \
&& apt-get update \
&& apt-get install -y --no-install-recommends python-yaml=3.12-1 \
&& python -c 'import sys, yaml, json; y=yaml.load(sys.stdin.read()); print json.dumps(y)' < swagger.yaml > swagger.json \
&& npm install \
&& npm run build_lib \
&& npm run link_lib \
&& npm run release


FROM photon:2.0

RUN tdnf install -y nginx >> /dev/null \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& tdnf clean all

EXPOSE 80
VOLUME /var/cache/nginx /var/log/nginx /run


COPY --from=nodeportal /build_dir/dist /usr/share/nginx/html
COPY --from=nodeportal /build_dir/swagger.yaml /usr/share/nginx/html
COPY --from=nodeportal /build_dir/swagger.json /usr/share/nginx/html
COPY --from=nodeportal /build_dir/LICENSE /usr/share/nginx/html

COPY make/photon/portal/nginx.conf /etc/nginx/nginx.conf

STOPSIGNAL SIGQUIT
RUN tdnf install -y nginx sudo >> /dev/null \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& groupadd -r -g 10000 nginx && useradd --no-log-init -r -g 10000 -u 10000 nginx \
&& chown -R nginx:nginx /etc/nginx \
&& tdnf clean all

HEALTHCHECK CMD curl --fail -s http://127.0.0.1 || exit 1
EXPOSE 8080
VOLUME /var/cache/nginx /var/log/nginx /run

STOPSIGNAL SIGQUIT

HEALTHCHECK CMD curl --fail -s http://127.0.0.1:8080 || exit 1
USER nginx
CMD ["nginx", "-g", "daemon off;"]

21 changes: 0 additions & 21 deletions make/photon/portal/entrypoint.sh

This file was deleted.

16 changes: 14 additions & 2 deletions make/photon/portal/nginx.conf
@@ -1,13 +1,21 @@

worker_processes 1;
worker_processes auto;
pid /tmp/nginx.pid;

events {
worker_connections 1024;
}

http {

client_body_temp_path /tmp/client_body_temp;
proxy_temp_path /tmp/proxy_temp;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;

server {
listen 80;
listen 8080;
server_name localhost;

root /usr/share/nginx/html;
@@ -22,5 +30,9 @@ http {
location / {
try_files $uri $uri/ /index.html;
}

location = /index.html {
add_header Cache-Control "no-store, no-cache, must-revalidate";
}
}
}
12 changes: 10 additions & 2 deletions make/photon/prepare/g.py
@@ -5,13 +5,21 @@
DEFAULT_UID = 10000
DEFAULT_GID = 10000

PG_UID = 999
PG_GID = 999

REDIS_UID = 999
REDIS_GID = 999

## Global variable
host_root_dir = '/hostfs'

base_dir = '/harbor_make'
templates_dir = "/usr/src/app/templates"
config_dir = '/config'

data_dir = '/data'
secret_dir = '/secret'
secret_key_dir='/secret/keys'
secret_key_dir = '/secret/keys'

old_private_key_pem_path = Path('/config/core/private_key.pem')
old_crt_path = Path('/config/registry/root.crt')
2 changes: 2 additions & 0 deletions make/photon/prepare/main.py
@@ -16,6 +16,7 @@
from utils.chart import prepare_chartmuseum
from utils.docker_compose import prepare_docker_compose
from utils.nginx import prepare_nginx, nginx_confd_dir
from utils.redis import prepare_redis
from g import (config_dir, input_config_path, private_key_pem_path, root_crt_path, secret_key_dir,
old_private_key_pem_path, old_crt_path)

@@ -38,6 +39,7 @@ def main(conf, with_notary, with_clair, with_chartmuseum):
prepare_registry_ctl(config_dict)
prepare_db(config_dict)
prepare_job_service(config_dict)
prepare_redis(config_dict)

get_secret_key(secret_key_dir)

4 changes: 4 additions & 0 deletions make/photon/prepare/templates/chartserver/env.jinja
@@ -28,7 +28,11 @@ DISABLE_METRICS=false
DISABLE_API=false
DISABLE_STATEFILES=false
ALLOW_OVERWRITE=true
{% if chart_absolute_url %}
CHART_URL={{public_url}}/chartrepo
{% else %}
CHART_URL=
{% endif %}
AUTH_ANONYMOUS_GET=false
TLS_CERT=
TLS_KEY=
6 changes: 3 additions & 3 deletions make/photon/prepare/templates/clair/clair_env.jinja
@@ -1,3 +1,3 @@
http_proxy={{clair_http_proxy}}
https_proxy={{clair_https_proxy}}
no_proxy={{clair_no_proxy}}
HTTP_PROXY={{clair_http_proxy}}
HTTPS_PROXY={{clair_https_proxy}}
NO_PROXY={{clair_no_proxy}}
6 changes: 0 additions & 6 deletions make/photon/prepare/templates/clair/config.yaml.jinja
@@ -17,9 +17,3 @@ clair:
timeout: 300s
updater:
interval: {{clair_updaters_interval}}h

notifier:
attempts: 3
renotifyinterval: 2h
http:
endpoint: http://core:8080/service/notifications/clair
7 changes: 7 additions & 0 deletions make/photon/prepare/templates/core/env.jinja
@@ -15,6 +15,8 @@ POSTGRESQL_USERNAME={{harbor_db_username}}
POSTGRESQL_PASSWORD={{harbor_db_password}}
POSTGRESQL_DATABASE={{harbor_db_name}}
POSTGRESQL_SSLMODE={{harbor_db_sslmode}}
POSTGRESQL_MAX_IDLE_CONNS={{harbor_db_max_idle_conns}}
POSTGRESQL_MAX_OPEN_CONNS={{harbor_db_max_open_conns}}
REGISTRY_URL={{registry_url}}
TOKEN_SERVICE_URL={{token_service_url}}
HARBOR_ADMIN_PASSWORD={{harbor_admin_password}}
@@ -31,6 +33,7 @@ CLAIR_DB_USERNAME={{clair_db_username}}
CLAIR_DB={{clair_db_name}}
CLAIR_DB_SSLMODE={{clair_db_sslmode}}
CORE_URL={{core_url}}
CORE_LOCAL_URL={{core_local_url}}
JOBSERVICE_URL={{jobservice_url}}
CLAIR_URL={{clair_url}}
NOTARY_URL={{notary_url}}
@@ -40,3 +43,7 @@ RELOAD_KEY={{reload_key}}
CHART_REPOSITORY_URL={{chart_repository_url}}
REGISTRY_CONTROLLER_URL={{registry_controller_url}}
WITH_CHARTMUSEUM={{with_chartmuseum}}

HTTP_PROXY={{core_http_proxy}}
HTTPS_PROXY={{core_https_proxy}}
NO_PROXY={{core_no_proxy}}
Original file line number Diff line number Diff line change
@@ -14,7 +14,8 @@ services:
- SETUID
volumes:
- {{log_location}}/:/var/log/docker/:z
- ./common/config/log/:/etc/logrotate.d/:z
- ./common/config/log/logrotate.conf:/etc/logrotate.d/logrotate.conf:z
- ./common/config/log/rsyslog_docker.conf:/etc/rsyslog.d/rsyslog_docker.conf:z
ports:
- 127.0.0.1:1514:10514
networks:
@@ -91,6 +92,7 @@ services:
options:
syslog-address: "tcp://127.0.0.1:1514"
tag: "registryctl"
{% if external_database == False %}
postgresql:
image: goharbor/harbor-db:{{version}}
container_name: harbor-db
@@ -106,16 +108,16 @@
- {{data_volume}}/database:/var/lib/postgresql/data:z
networks:
harbor:
{% if with_notary %}
{% if with_notary %}
harbor-notary:
aliases:
- harbor-db
{% endif %}
{% if with_clair %}
{% endif %}
{% if with_clair %}
harbor-clair:
aliases:
- harbor-db
{% endif %}
{% endif %}
dns_search: .
env_file:
- ./common/config/db/env
@@ -126,6 +128,7 @@
options:
syslog-address: "tcp://127.0.0.1:1514"
tag: "postgresql"
{% endif %}
core:
image: goharbor/harbor-core:{{version}}
container_name: harbor-core
@@ -175,6 +178,12 @@ services:
depends_on:
- log
- registry
{% if external_redis == False %}
- redis
{% endif %}
{% if external_database == False %}
- postgresql
{% endif %}
logging:
driver: "syslog"
options:
@@ -196,7 +205,6 @@
dns_search: .
depends_on:
- log
- core
logging:
driver: "syslog"
options:
@@ -227,13 +235,13 @@
{% endif %}
dns_search: .
depends_on:
- redis
- core
logging:
driver: "syslog"
options:
syslog-address: "tcp://127.0.0.1:1514"
tag: "jobservice"
{% if external_redis == False %}
redis:
image: goharbor/redis-photon:{{redis_version}}
container_name: redis
@@ -248,11 +256,11 @@
- {{data_volume}}/redis:/var/lib/redis
networks:
harbor:
{% if with_chartmuseum %}
{% if with_chartmuseum %}
harbor-chartmuseum:
aliases:
- redis
{% endif %}
{% endif %}
dns_search: .
depends_on:
- log
@@ -261,8 +269,9 @@
options:
syslog-address: "tcp://127.0.0.1:1514"
tag: "redis"
{% endif %}
proxy:
image: goharbor/nginx-photon:{{redis_version}}
image: goharbor/nginx-photon:{{version}}
container_name: nginx
restart: always
cap_drop:
@@ -275,12 +284,7 @@
volumes:
- ./common/config/nginx:/etc/nginx:z
{% if protocol == 'https' %}
- type: bind
source: {{cert_key_path}}
target: /etc/cert/server.key
- type: bind
source: {{cert_path}}
target: /etc/cert/server.crt
- {{data_volume}}/secret/cert:/etc/cert:z
{% endif %}
networks:
- harbor
@@ -289,15 +293,14 @@
{% endif %}
dns_search: .
ports:
- {{http_port}}:80
- {{http_port}}:8080
{% if protocol == 'https' %}
- {{https_port}}:443
- {{https_port}}:8443
{% endif %}
{% if with_notary %}
- 4443:4443
{% endif %}
depends_on:
- postgresql
- registry
- core
- portal
@@ -327,7 +330,9 @@
env_file:
- ./common/config/notary/server_env
depends_on:
{% if external_database == False %}
- postgresql
{% endif %}
- notary-signer
logging:
driver: "syslog"
@@ -355,7 +360,10 @@
env_file:
- ./common/config/notary/signer_env
depends_on:
- log
{% if external_database == False %}
- postgresql
{% endif %}
logging:
driver: "syslog"
options:
@@ -378,16 +386,19 @@
cpu_quota: 50000
dns_search: .
depends_on:
- log
{% if external_database == False %}
- postgresql
{% endif %}
volumes:
- type: bind
source: ./common/config/clair/config.yaml
target: /etc/clair/config.yaml
{%if registry_custom_ca_bundle_path %}
{%if registry_custom_ca_bundle_path %}
- type: bind
source: {{registry_custom_ca_bundle_path}}
target: /harbor_cust_cert/custom-ca-bundle.crt
{% endif %}
{% endif %}
logging:
driver: "syslog"
options:
@@ -412,14 +423,14 @@
- harbor-chartmuseum
dns_search: .
depends_on:
- redis
- log
volumes:
- {{data_volume}}/chart_storage:/chart_storage:z
- ./common/config/chartserver:/etc/chartserver:z
{% if gcs_keyfile %}
- type: bind
source: {{gcs_keyfile}}
target: /etc/registry/gcs.key
target: /etc/chartserver/gcs.key
{% endif %}
{%if registry_custom_ca_bundle_path %}
- type: bind
7 changes: 4 additions & 3 deletions make/photon/prepare/templates/jobservice/config.yml.jinja
@@ -20,12 +20,13 @@ worker_pool:
#redis://[arbitrary_username:password@]ipaddress:port/database_index
redis_url: {{redis_url}}
namespace: "harbor_job_service_namespace"
idle_timeout_second: 3600
#Loggers for the running job
job_loggers:
- name: "STD_OUTPUT" # logger backend name, only support "FILE" and "STD_OUTPUT"
level: "INFO" # INFO/DEBUG/WARNING/ERROR/FATAL
level: "{{level}}" # INFO/DEBUG/WARNING/ERROR/FATAL
- name: "FILE"
level: "INFO"
level: "{{level}}"
settings: # Customized settings of logger
base_dir: "/var/log/jobs"
sweeper:
@@ -36,4 +37,4 @@ job_loggers:
#Loggers for the job service
loggers:
- name: "STD_OUTPUT" # Same with above
level: "INFO"
level: "{{level}}"
5 changes: 5 additions & 0 deletions make/photon/prepare/templates/jobservice/env.jinja
@@ -1,3 +1,8 @@
CORE_SECRET={{core_secret}}
JOBSERVICE_SECRET={{jobservice_secret}}
CORE_URL={{core_url}}
JOBSERVICE_WEBHOOK_JOB_MAX_RETRY={{notification_webhook_job_max_retry}}

HTTP_PROXY={{jobservice_http_proxy}}
HTTPS_PROXY={{jobservice_https_proxy}}
NO_PROXY={{jobservice_no_proxy}}
11 changes: 11 additions & 0 deletions make/photon/prepare/templates/log/rsyslog_docker.conf.jinja
@@ -0,0 +1,11 @@
# Rsyslog configuration file for docker.

template(name="DynaFile" type="string" string="/var/log/docker/%programname%.log")

if $programname != "rsyslogd" then {
{%if log_external %}
action(type="omfwd" Target="{{log_ep_host}}" Port="{{log_ep_port}}" Protocol="{{log_ep_protocol}}" Template="RSYSLOG_SyslogProtocol23Format")
{% else %}
action(type="omfile" dynaFile="DynaFile")
{% endif %}
}
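The new template forwards logs to an external endpoint when `log_external` is set and otherwise keeps writing per-program files. A minimal sketch of that branch logic without Jinja (parameter names assumed to match the template's variables):

```python
def rsyslog_action(log_external, host="", port="514", protocol="tcp"):
    """Return the rsyslog action line the template would emit."""
    if log_external:
        return (f'action(type="omfwd" Target="{host}" Port="{port}" '
                f'Protocol="{protocol}" Template="RSYSLOG_SyslogProtocol23Format")')
    return 'action(type="omfile" dynaFile="DynaFile")'

print(rsyslog_action(False))
print(rsyslog_action(True, host="10.0.0.5", port="5140"))
```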
12 changes: 9 additions & 3 deletions make/photon/prepare/templates/nginx/nginx.http.conf.jinja
@@ -1,4 +1,5 @@
worker_processes auto;
pid /tmp/nginx.pid;

events {
worker_connections 1024;
@@ -7,6 +8,11 @@ }
}

http {
client_body_temp_path /tmp/client_body_temp;
proxy_temp_path /tmp/proxy_temp;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
tcp_nodelay on;

# this is necessary for us to be able to disable request buffering in all cases
@@ -17,7 +23,7 @@ }
}

upstream portal {
server portal:80;
server portal:8080;
}

log_format timed_combined '$remote_addr - '
@@ -28,7 +34,7 @@
access_log /dev/stdout timed_combined;

server {
listen 80;
listen 8080;
server_tokens off;
# disable any limits to avoid HTTP 413 for large image uploads
client_max_body_size 0;
@@ -117,7 +123,7 @@ http {
proxy_request_buffering off;
}

location /service/notifications {
location /service/notifications {
return 404;
}
}
18 changes: 12 additions & 6 deletions make/photon/prepare/templates/nginx/nginx.https.conf.jinja
@@ -1,4 +1,5 @@
worker_processes auto;
pid /tmp/nginx.pid;

events {
worker_connections 1024;
@@ -7,6 +8,11 @@ events {
}

http {
client_body_temp_path /tmp/client_body_temp;
proxy_temp_path /tmp/proxy_temp;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
tcp_nodelay on;
include /etc/nginx/conf.d/*.upstream.conf;

@@ -18,7 +24,7 @@ http {
}

upstream portal {
server portal:80;
server portal:8080;
}

log_format timed_combined '$remote_addr - '
@@ -31,7 +37,7 @@ http {
include /etc/nginx/conf.d/*.server.conf;

server {
listen 443 ssl;
listen 8443 ssl;
# server_name harbordomain.com;
server_tokens off;
# SSL
@@ -136,13 +142,13 @@ http {
proxy_buffering off;
proxy_request_buffering off;
}
location /service/notifications {

location /service/notifications {
return 404;
}
}
server {
listen 80;
server {
listen 8080;
#server_name harbordomain.com;
return 308 https://$host$request_uri;
}
@@ -10,6 +10,6 @@
"storage": {
"backend": "postgres",
"db_url": "postgres://{{notary_signer_db_username}}:{{notary_signer_db_password}}@{{notary_signer_db_host}}:{{notary_signer_db_port}}/{{notary_signer_db_name}}?sslmode={{notary_signer_db_sslmode}}",
"default_alias": "{{alias}}"
"default_alias": "defaultalias"
}
}
2 changes: 1 addition & 1 deletion make/photon/prepare/templates/registry/config.yml.jinja
@@ -14,7 +14,7 @@ storage:
enabled: true
{% if storage_redirect_disabled %}
redirect:
disabled: true
disable: true
{% endif %}
redis:
addr: {{redis_host}}:{{redis_port}}
15 changes: 12 additions & 3 deletions make/photon/prepare/utils/cert.py
@@ -4,8 +4,11 @@
from subprocess import DEVNULL
from functools import wraps

from .misc import mark_file
from .misc import generate_random_string
from g import DEFAULT_GID, DEFAULT_UID
from .misc import (
mark_file,
generate_random_string,
check_permission)

SSL_CERT_PATH = os.path.join("/etc/cert", "server.crt")
SSL_CERT_KEY_PATH = os.path.join("/etc/cert", "server.key")
@@ -101,4 +104,10 @@ def prepare_ca(
mark_file(root_crt_path)
else:
shutil.move(old_crt_path, root_crt_path)
shutil.move(old_private_key_pem_path, private_key_pem_path)
shutil.move(old_private_key_pem_path, private_key_pem_path)

if not check_permission(root_crt_path, uid=DEFAULT_UID, gid=DEFAULT_GID):
os.chown(root_crt_path, DEFAULT_UID, DEFAULT_GID)

if not check_permission(private_key_pem_path, uid=DEFAULT_UID, gid=DEFAULT_GID):
os.chown(private_key_pem_path, DEFAULT_UID, DEFAULT_GID)
39 changes: 23 additions & 16 deletions make/photon/prepare/utils/chart.py
@@ -1,27 +1,28 @@
import os, shutil

from g import templates_dir, config_dir
from g import templates_dir, config_dir, data_dir, DEFAULT_UID, DEFAULT_GID
from .jinja import render_jinja
from .misc import prepare_dir

chartm_temp_dir = os.path.join(templates_dir, "chartserver")
chartm_env_temp = os.path.join(chartm_temp_dir, "env.jinja")
chart_museum_temp_dir = os.path.join(templates_dir, "chartserver")
chart_museum_env_temp = os.path.join(chart_museum_temp_dir, "env.jinja")

chartm_config_dir = os.path.join(config_dir, "chartserver")
chartm_env = os.path.join(config_dir, "chartserver", "env")
chart_museum_config_dir = os.path.join(config_dir, "chartserver")
chart_museum_env = os.path.join(config_dir, "chartserver", "env")

chart_museum_data_dir = os.path.join(data_dir, 'chart_storage')

def prepare_chartmuseum(config_dict):

core_secret = config_dict['core_secret']
redis_host = config_dict['redis_host']
redis_port = config_dict['redis_port']
redis_password = config_dict['redis_password']
redis_db_index_chart = config_dict['redis_db_index_chart']
storage_provider_name = config_dict['storage_provider_name']
storage_provider_config_map = config_dict['storage_provider_config']

if not os.path.isdir(chartm_config_dir):
print ("Create config folder: %s" % chartm_config_dir)
os.makedirs(chartm_config_dir)
prepare_dir(chart_museum_data_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
prepare_dir(chart_museum_config_dir)

# process redis info
cache_store = "redis"
@@ -54,7 +55,7 @@ def prepare_chartmuseum(config_dict):

if storage_provider_config_map.get("keyfile"):
storage_provider_config_options.append('GOOGLE_APPLICATION_CREDENTIALS=%s' % '/etc/chartserver/gcs.key')
elif storage_provider_name == 'gcs':
elif storage_provider_name == 'azure':
# azure storage
storage_driver = "microsoft"
storage_provider_config_options.append("STORAGE_MICROSOFT_CONTAINER=%s" % ( storage_provider_config_map.get("container") or '') )
@@ -77,9 +78,13 @@
elif storage_provider_name == 'oss':
# aliyun OSS
storage_driver = "alibaba"
storage_provider_config_options.append("STORAGE_ALIBABA_BUCKET=%s" % ( storage_provider_config_map.get("bucket") or '') )
bucket = storage_provider_config_map.get("bucket") or ''
endpoint = storage_provider_config_map.get("endpoint") or ''
if endpoint.startswith(bucket + "."):
endpoint = endpoint.replace(bucket + ".", "")
storage_provider_config_options.append("STORAGE_ALIBABA_BUCKET=%s" % bucket )
storage_provider_config_options.append("STORAGE_ALIBABA_ENDPOINT=%s" % endpoint )
storage_provider_config_options.append("STORAGE_ALIBABA_PREFIX=%s" % ( storage_provider_config_map.get("rootdirectory") or '') )
storage_provider_config_options.append("STORAGE_ALIBABA_ENDPOINT=%s" % ( storage_provider_config_map.get("endpoint") or '') )
storage_provider_config_options.append("ALIBABA_CLOUD_ACCESS_KEY_ID=%s" % ( storage_provider_config_map.get("accesskeyid") or '') )
storage_provider_config_options.append("ALIBABA_CLOUD_ACCESS_KEY_SECRET=%s" % ( storage_provider_config_map.get("accesskeysecret") or '') )
else:
@@ -90,12 +95,14 @@ def prepare_chartmuseum(config_dict):
all_storage_provider_configs = ('\n').join(storage_provider_config_options)

render_jinja(
chartm_env_temp,
chartm_env,
chart_museum_env_temp,
chart_museum_env,
cache_store=cache_store,
cache_redis_addr=cache_redis_addr,
cache_redis_password=cache_redis_password,
cache_redis_db_index=cache_redis_db_index,
core_secret=core_secret,
core_secret=config_dict['core_secret'],
storage_driver=storage_driver,
all_storage_driver_configs=all_storage_provider_configs)
all_storage_driver_configs=all_storage_provider_configs,
public_url=config_dict['public_url'],
chart_absolute_url=config_dict['chart_absolute_url'])
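
The aliyun OSS branch above now strips a bucket-qualified endpoint down to the plain region endpoint before exporting `STORAGE_ALIBABA_ENDPOINT`. A minimal standalone sketch of that normalization (hypothetical bucket/endpoint values):

```python
def normalize_oss_endpoint(bucket: str, endpoint: str) -> str:
    """Drop a leading '<bucket>.' prefix so only the region endpoint remains."""
    if bucket and endpoint.startswith(bucket + "."):
        # Remove just the leading occurrence of the bucket prefix.
        return endpoint[len(bucket) + 1:]
    return endpoint

# A bucket-qualified endpoint is reduced to the region endpoint;
# a plain endpoint passes through unchanged.
print(normalize_oss_endpoint("mybucket", "mybucket.oss-cn-beijing.aliyuncs.com"))
print(normalize_oss_endpoint("mybucket", "oss-cn-beijing.aliyuncs.com"))
```

(The diff itself uses `str.replace`, which would also rewrite later occurrences of the prefix; slicing touches only the leading one.)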
4 changes: 2 additions & 2 deletions make/photon/prepare/utils/clair.py
@@ -2,12 +2,12 @@

from g import templates_dir, config_dir, DEFAULT_UID, DEFAULT_GID
from .jinja import render_jinja
from .misc import prepare_config_dir
from .misc import prepare_dir

clair_template_dir = os.path.join(templates_dir, "clair")

def prepare_clair(config_dict):
clair_config_dir = prepare_config_dir(config_dir, "clair")
clair_config_dir = prepare_dir(config_dir, "clair")

if os.path.exists(os.path.join(clair_config_dir, "postgresql-init.d")):
print("Copying offline data file for clair DB")
76 changes: 63 additions & 13 deletions make/photon/prepare/utils/configs.py
@@ -2,6 +2,9 @@
from g import versions_file_path
from .misc import generate_random_string

default_db_max_idle_conns = 2 # NOTE: https://golang.org/pkg/database/sql/#DB.SetMaxIdleConns
default_db_max_open_conns = 0 # NOTE: https://golang.org/pkg/database/sql/#DB.SetMaxOpenConns

def validate(conf, **kwargs):
protocol = conf.get("protocol")
if protocol != "https" and kwargs.get('notary_mode'):
@@ -13,6 +16,14 @@ def validate(conf, **kwargs):
if not conf.get("cert_key_path"):
raise Exception("Error: The protocol is https but attribute ssl_cert_key is not set")

# log endpoint validate
if ('log_ep_host' in conf) and not conf['log_ep_host']:
raise Exception('Error: must set log endpoint host to enable external host')
if ('log_ep_port' in conf) and not conf['log_ep_port']:
raise Exception('Error: must set log endpoint port to enable external host')
if ('log_ep_protocol' in conf) and (conf['log_ep_protocol'] not in ['udp', 'tcp']):
raise Exception("Protocol in external log endpoint must be one of 'udp' or 'tcp' ")

# Storage validate
valid_storage_drivers = ["filesystem", "azure", "gcs", "s3", "swift", "oss"]
storage_provider_name = conf.get("storage_provider_name")
@@ -30,12 +41,12 @@
redis_host = conf.get("redis_host")
if redis_host is None or len(redis_host) < 1:
raise Exception(
"Error: redis_host in harbor.cfg needs to point to an endpoint of Redis server or cluster.")
"Error: redis_host in harbor.yml needs to point to an endpoint of Redis server or cluster.")

redis_port = conf.get("redis_port")
if redis_host is None or (redis_port < 1 or redis_port > 65535):
raise Exception(
"Error: redis_port in harbor.cfg needs to point to the port of Redis server or cluster.")
"Error: redis_port in harbor.yml needs to point to the port of Redis server or cluster.")


def parse_versions():
@@ -59,6 +70,7 @@ def parse_yaml_config(config_file_path):
'registry_url': "http://registry:5000",
'registry_controller_url': "http://registryctl:8080",
'core_url': "http://core:8080",
'core_local_url': "http://127.0.0.1:8080",
'token_service_url': "http://core:8080/service/token",
'jobservice_url': 'http://jobservice:8080',
'clair_url': 'http://clair:6060',
@@ -103,6 +115,8 @@
config_dict['harbor_db_username'] = 'postgres'
config_dict['harbor_db_password'] = db_configs.get("password") or ''
config_dict['harbor_db_sslmode'] = 'disable'
config_dict['harbor_db_max_idle_conns'] = db_configs.get("max_idle_conns") or default_db_max_idle_conns
config_dict['harbor_db_max_open_conns'] = db_configs.get("max_open_conns") or default_db_max_open_conns
# clair db
config_dict['clair_db_host'] = 'postgresql'
config_dict['clair_db_port'] = 5432
@@ -162,39 +176,71 @@
if storage_config.get('redirect'):
config_dict['storage_redirect_disabled'] = storage_config['redirect']['disabled']

# Clair configs
# Global proxy configs
proxy_config = configs.get('proxy') or {}
proxy_components = proxy_config.get('components') or []
for proxy_component in proxy_components:
config_dict[proxy_component + '_http_proxy'] = proxy_config.get('http_proxy') or ''
config_dict[proxy_component + '_https_proxy'] = proxy_config.get('https_proxy') or ''
config_dict[proxy_component + '_no_proxy'] = proxy_config.get('no_proxy') or '127.0.0.1,localhost,core,registry'

# Clair configs, optional
clair_configs = configs.get("clair") or {}
config_dict['clair_db'] = 'postgres'
config_dict['clair_updaters_interval'] = clair_configs.get("updaters_interval") or 12
config_dict['clair_http_proxy'] = clair_configs.get('http_proxy') or ''
config_dict['clair_https_proxy'] = clair_configs.get('https_proxy') or ''
config_dict['clair_no_proxy'] = clair_configs.get('no_proxy') or '127.0.0.1,localhost,core,registry'

# Chart configs
chart_configs = configs.get("chart") or {}
config_dict['chart_absolute_url'] = chart_configs.get('absolute_url') or ''

# jobservice config
js_config = configs.get('jobservice') or {}
config_dict['max_job_workers'] = js_config["max_job_workers"]
config_dict['jobservice_secret'] = generate_random_string(16)

# notification config
notification_config = configs.get('notification') or {}
config_dict['notification_webhook_job_max_retry'] = notification_config["webhook_job_max_retry"]

# Log configs
allowed_levels = ['debug', 'info', 'warning', 'error', 'fatal']
log_configs = configs.get('log') or {}
config_dict['log_location'] = log_configs["location"]
config_dict['log_rotate_count'] = log_configs["rotate_count"]
config_dict['log_rotate_size'] = log_configs["rotate_size"]
config_dict['log_level'] = log_configs['level']

log_level = log_configs['level']
if log_level not in allowed_levels:
raise Exception('log level must be one of debug, info, warning, error, fatal')
config_dict['log_level'] = log_level.lower()

# parse local log related configs
local_logs = log_configs.get('local') or {}
if local_logs:
config_dict['log_location'] = local_logs.get('location') or '/var/log/harbor'
config_dict['log_rotate_count'] = local_logs.get('rotate_count') or 50
config_dict['log_rotate_size'] = local_logs.get('rotate_size') or '200M'

# parse external log endpoint related configs
if log_configs.get('external_endpoint'):
config_dict['log_external'] = True
config_dict['log_ep_protocol'] = log_configs['external_endpoint']['protocol']
config_dict['log_ep_host'] = log_configs['external_endpoint']['host']
config_dict['log_ep_port'] = log_configs['external_endpoint']['port']
else:
config_dict['log_external'] = False

# external DB, if external_db enabled, it will cover the database config
# external DB, optional; if enabled, it overrides the database config
external_db_configs = configs.get('external_database') or {}
if external_db_configs:
config_dict['external_database'] = True
# harbor db
config_dict['harbor_db_host'] = external_db_configs['harbor']['host']
config_dict['harbor_db_port'] = external_db_configs['harbor']['port']
config_dict['harbor_db_name'] = external_db_configs['harbor']['db_name']
config_dict['harbor_db_username'] = external_db_configs['harbor']['username']
config_dict['harbor_db_password'] = external_db_configs['harbor']['password']
config_dict['harbor_db_sslmode'] = external_db_configs['harbor']['ssl_mode']
# clari db
config_dict['harbor_db_max_idle_conns'] = external_db_configs['harbor'].get("max_idle_conns") or default_db_max_idle_conns
config_dict['harbor_db_max_open_conns'] = external_db_configs['harbor'].get("max_open_conns") or default_db_max_open_conns
# clair db
config_dict['clair_db_host'] = external_db_configs['clair']['host']
config_dict['clair_db_port'] = external_db_configs['clair']['port']
config_dict['clair_db_name'] = external_db_configs['clair']['db_name']
@@ -215,11 +261,14 @@ def parse_yaml_config(config_file_path):
config_dict['notary_server_db_username'] = external_db_configs['notary_server']['username']
config_dict['notary_server_db_password'] = external_db_configs['notary_server']['password']
config_dict['notary_server_db_sslmode'] = external_db_configs['notary_server']['ssl_mode']
else:
config_dict['external_database'] = False


# redis config
redis_configs = configs.get("external_redis")
if redis_configs:
config_dict['external_redis'] = True
# using external_redis
config_dict['redis_host'] = redis_configs['host']
config_dict['redis_port'] = redis_configs['port']
@@ -228,6 +277,7 @@ def parse_yaml_config(config_file_path):
config_dict['redis_db_index_js'] = redis_configs.get('jobservice_db_index') or 2
config_dict['redis_db_index_chart'] = redis_configs.get('chartmuseum_db_index') or 3
else:
config_dict['external_redis'] = False
## Using local redis
config_dict['redis_host'] = 'redis'
config_dict['redis_port'] = 6379
@@ -253,4 +303,4 @@ def parse_yaml_config(config_file_path):
# UAA configs
config_dict['uaa'] = configs.get('uaa') or {}

return config_dict
return config_dict
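
The reworked log section above splits into three concerns: level validation, optional `local` settings with defaults, and an optional `external_endpoint` that toggles the `log_external` flag. A standalone sketch of just that parsing (field names and defaults follow the diff):

```python
def parse_log_section(log_configs: dict) -> dict:
    """Mirror of the log-section parsing in parse_yaml_config (sketch only)."""
    allowed_levels = ['debug', 'info', 'warning', 'error', 'fatal']
    cfg = {}

    level = log_configs['level']
    if level not in allowed_levels:
        raise Exception('log level must be one of debug, info, warning, error, fatal')
    cfg['log_level'] = level.lower()

    # Local file logging: fall back to the documented defaults.
    local_logs = log_configs.get('local') or {}
    if local_logs:
        cfg['log_location'] = local_logs.get('location') or '/var/log/harbor'
        cfg['log_rotate_count'] = local_logs.get('rotate_count') or 50
        cfg['log_rotate_size'] = local_logs.get('rotate_size') or '200M'

    # External syslog endpoint toggles the log_external flag.
    if log_configs.get('external_endpoint'):
        cfg['log_external'] = True
        cfg['log_ep_protocol'] = log_configs['external_endpoint']['protocol']
        cfg['log_ep_host'] = log_configs['external_endpoint']['host']
        cfg['log_ep_port'] = log_configs['external_endpoint']['port']
    else:
        cfg['log_external'] = False
    return cfg

print(parse_log_section({'level': 'info', 'local': {'rotate_count': 20}}))
```

Note the order of checks: a missing or mis-cased `level` fails fast, and defaults apply field-by-field rather than only when the whole `local` block is absent.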
14 changes: 9 additions & 5 deletions make/photon/prepare/utils/core.py
@@ -1,7 +1,7 @@
import shutil, os

from g import config_dir, templates_dir
from utils.misc import prepare_config_dir, generate_random_string
from g import config_dir, templates_dir, data_dir, DEFAULT_GID, DEFAULT_UID
from utils.misc import prepare_dir, generate_random_string
from utils.jinja import render_jinja

core_config_dir = os.path.join(config_dir, "core", "certificates")
@@ -10,8 +10,14 @@
core_conf_template_path = os.path.join(templates_dir, "core", "app.conf.jinja")
core_conf = os.path.join(config_dir, "core", "app.conf")

ca_download_dir = os.path.join(data_dir, 'ca_download')
psc_dir = os.path.join(data_dir, 'psc')


def prepare_core(config_dict, with_notary, with_clair, with_chartmuseum):
prepare_core_config_dir()
prepare_dir(psc_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
prepare_dir(ca_download_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
prepare_dir(core_config_dir)
# Render Core
# set cache for chart repo server
# default set 'memory' mode, if redis is configured then set to 'redis'
@@ -32,8 +38,6 @@ def prepare_core(config_dict, with_notary, with_clair, with_chartmuseum):
# Copy Core app.conf
copy_core_config(core_conf_template_path, core_conf)

def prepare_core_config_dir():
prepare_config_dir(core_config_dir)

def copy_core_config(core_templates_path, core_config_path):
shutil.copyfile(core_templates_path, core_config_path)
12 changes: 5 additions & 7 deletions make/photon/prepare/utils/db.py
@@ -1,20 +1,18 @@
import os

from g import config_dir, templates_dir
from utils.misc import prepare_config_dir
from g import config_dir, templates_dir, data_dir, PG_UID, PG_GID
from utils.misc import prepare_dir
from utils.jinja import render_jinja

db_config_dir = os.path.join(config_dir, "db")
db_env_template_path = os.path.join(templates_dir, "db", "env.jinja")
db_conf_env = os.path.join(config_dir, "db", "env")
database_data_path = os.path.join(data_dir, 'database')

def prepare_db(config_dict):
prepare_db_config_dir()

prepare_dir(database_data_path, uid=PG_UID, gid=PG_GID, mode=0o700)
prepare_dir(db_config_dir)
render_jinja(
db_env_template_path,
db_conf_env,
harbor_db_password=config_dict['harbor_db_password'])

def prepare_db_config_dir():
prepare_config_dir(db_config_dir)
16 changes: 13 additions & 3 deletions make/photon/prepare/utils/docker_compose.py
@@ -13,8 +13,8 @@ def prepare_docker_compose(configs, with_clair, with_notary, with_chartmuseum):
VERSION_TAG = versions.get('VERSION_TAG') or 'dev'
REGISTRY_VERSION = versions.get('REGISTRY_VERSION') or 'v2.7.1'
NOTARY_VERSION = versions.get('NOTARY_VERSION') or 'v0.6.1'
CLAIR_VERSION = versions.get('CLAIR_VERSION') or 'v2.0.7'
CHARTMUSEUM_VERSION = versions.get('CHARTMUSEUM_VERSION') or 'v0.8.1'
CLAIR_VERSION = versions.get('CLAIR_VERSION') or 'v2.0.9'
CHARTMUSEUM_VERSION = versions.get('CHARTMUSEUM_VERSION') or 'v0.9.0'

rendering_variables = {
'version': VERSION_TAG,
@@ -28,22 +28,32 @@ def prepare_docker_compose(configs, with_clair, with_notary, with_chartmuseum):
'protocol': configs['protocol'],
'http_port': configs['http_port'],
'registry_custom_ca_bundle_path': configs['registry_custom_ca_bundle_path'],
'external_redis': configs['external_redis'],
'external_database': configs['external_database'],
'with_notary': with_notary,
'with_clair': with_clair,
'with_chartmuseum': with_chartmuseum
}

# for gcs
storage_config = configs.get('storage_provider_config') or {}
if storage_config.get('keyfile') and configs['storage_provider_name'] == 'gcs':
rendering_variables['gcs_keyfile'] = storage_config['keyfile']

# for https
if configs['protocol'] == 'https':
rendering_variables['cert_key_path'] = configs['cert_key_path']
rendering_variables['cert_path'] = configs['cert_path']
rendering_variables['https_port'] = configs['https_port']

# for uaa
uaa_config = configs.get('uaa') or {}
if uaa_config.get('ca_file'):
rendering_variables['uaa_ca_file'] = uaa_config['ca_file']

render_jinja(docker_compose_template_path, docker_compose_yml_path, **rendering_variables)
# for log
log_ep_host = configs.get('log_ep_host')
if log_ep_host:
rendering_variables['external_log_endpoint'] = True

render_jinja(docker_compose_template_path, docker_compose_yml_path, mode=0o644, **rendering_variables)
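
The assembly above follows one pattern: build a base `rendering_variables` dict, then add keys only when the matching feature (HTTPS certs, GCS keyfile, UAA CA, external log endpoint) is configured, so the jinja template can guard on key presence. A minimal sketch of that pattern with hypothetical config values:

```python
def build_rendering_variables(configs: dict) -> dict:
    # Keys that are always rendered into the compose file.
    variables = {
        'protocol': configs['protocol'],
        'external_redis': configs['external_redis'],
        'external_database': configs['external_database'],
    }
    # HTTPS-only keys; the template emits the TLS bits only if present.
    if configs['protocol'] == 'https':
        variables['cert_path'] = configs['cert_path']
        variables['cert_key_path'] = configs['cert_key_path']
    # Flag the external syslog endpoint only when a host is configured.
    if configs.get('log_ep_host'):
        variables['external_log_endpoint'] = True
    return variables

plain = build_rendering_variables(
    {'protocol': 'http', 'external_redis': False, 'external_database': False})
print(sorted(plain))  # → ['external_database', 'external_redis', 'protocol']
```

Absent keys (rather than `None` values) keep the template conditionals simple: `{% if external_log_endpoint %}` is false both for missing and falsy values.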
2 changes: 1 addition & 1 deletion make/photon/prepare/utils/jinja.py
@@ -1,7 +1,7 @@
from jinja2 import Environment, FileSystemLoader
from .misc import mark_file

jinja_env = Environment(loader=FileSystemLoader('/'), trim_blocks=True)
jinja_env = Environment(loader=FileSystemLoader('/'), trim_blocks=True, lstrip_blocks=True)

def render_jinja(src, dest,mode=0o640, uid=0, gid=0, **kw):
t = jinja_env.get_template(src)
13 changes: 7 additions & 6 deletions make/photon/prepare/utils/jobservice.py
@@ -1,7 +1,7 @@
import os

from g import config_dir, DEFAULT_GID, DEFAULT_UID, templates_dir
from utils.misc import prepare_config_dir
from utils.misc import prepare_dir
from utils.jinja import render_jinja

job_config_dir = os.path.join(config_dir, "jobservice")
@@ -10,14 +10,14 @@
job_service_conf_template_path = os.path.join(templates_dir, "jobservice", "config.yml.jinja")
jobservice_conf = os.path.join(config_dir, "jobservice", "config.yml")


def prepare_job_service(config_dict):
prepare_config_dir(job_config_dir)
prepare_dir(job_config_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)

log_level = config_dict['log_level'].upper()

# Job log is stored in data dir
job_log_dir = os.path.join('/data', "job_logs")
prepare_config_dir(job_log_dir)

prepare_dir(job_log_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
# Render Jobservice env
render_jinja(
job_service_env_template_path,
@@ -31,4 +31,5 @@ def prepare_job_service(config_dict):
uid=DEFAULT_UID,
gid=DEFAULT_GID,
max_job_workers=config_dict['max_job_workers'],
redis_url=config_dict['redis_url_js'])
redis_url=config_dict['redis_url_js'],
level=log_level)
21 changes: 18 additions & 3 deletions make/photon/prepare/utils/log.py
@@ -1,20 +1,35 @@
import os

from g import config_dir, templates_dir, DEFAULT_GID, DEFAULT_UID
from utils.misc import prepare_config_dir
from utils.misc import prepare_dir
from utils.jinja import render_jinja

log_config_dir = os.path.join(config_dir, "log")

# logrotate config file
logrotate_template_path = os.path.join(templates_dir, "log", "logrotate.conf.jinja")
log_rotate_config = os.path.join(config_dir, "log", "logrotate.conf")

# syslog docker config file
log_syslog_docker_template_path = os.path.join(templates_dir, 'log', 'rsyslog_docker.conf.jinja')
log_syslog_docker_config = os.path.join(config_dir, 'log', 'rsyslog_docker.conf')

def prepare_log_configs(config_dict):
prepare_config_dir(log_config_dir)
prepare_dir(log_config_dir)

# Render Log config
render_jinja(
logrotate_template_path,
log_rotate_config,
uid=DEFAULT_UID,
gid=DEFAULT_GID,
**config_dict)
**config_dict)

# Render syslog docker config
render_jinja(
log_syslog_docker_template_path,
log_syslog_docker_config,
uid=DEFAULT_UID,
gid=DEFAULT_GID,
**config_dict
)
64 changes: 54 additions & 10 deletions make/photon/prepare/utils/misc.py
@@ -1,6 +1,7 @@
import os
import string
import random
from pathlib import Path

from g import DEFAULT_UID, DEFAULT_GID

@@ -56,12 +57,12 @@ def validate(conf, **kwargs):
redis_host = conf.get("configuration", "redis_host")
if redis_host is None or len(redis_host) < 1:
raise Exception(
"Error: redis_host in harbor.cfg needs to point to an endpoint of Redis server or cluster.")
"Error: redis_host in harbor.yml needs to point to an endpoint of Redis server or cluster.")

redis_port = conf.get("configuration", "redis_port")
if len(redis_port) < 1:
raise Exception(
"Error: redis_port in harbor.cfg needs to point to the port of Redis server or cluster.")
"Error: redis_port in harbor.yml needs to point to the port of Redis server or cluster.")

redis_db_index = conf.get("configuration", "redis_db_index").strip()
if len(redis_db_index.split(",")) != 3:
@@ -78,11 +79,33 @@ def generate_random_string(length):
return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(length))


def prepare_config_dir(root, *name):
absolute_path = os.path.join(root, *name)
if not os.path.exists(absolute_path):
os.makedirs(absolute_path)
return absolute_path
def prepare_dir(root: str, *args, **kwargs) -> str:
gid, uid = kwargs.get('gid'), kwargs.get('uid')
absolute_path = Path(os.path.join(root, *args))
if absolute_path.is_file():
raise Exception('Path exists and the type is regular file')
mode = kwargs.get('mode') or 0o755

# we need to make sure this dir has the right permissions
if not absolute_path.exists():
absolute_path.mkdir(mode=mode, parents=True)
elif not check_permission(absolute_path, mode=mode):
absolute_path.chmod(mode)

# if uid or gid not None, then change the ownership of this dir
if not(gid is None and uid is None):
dir_uid, dir_gid = absolute_path.stat().st_uid, absolute_path.stat().st_gid
if uid is None:
uid = dir_uid
if gid is None:
gid = dir_gid
# We recursively chown only if the dir is not owned by the correct user,
# to save time when the dir is extremely large
if not check_permission(absolute_path, uid, gid):
recursive_chown(absolute_path, uid, gid)

return str(absolute_path)



def delfile(src):
@@ -93,6 +116,27 @@ def delfile(src):
except Exception as e:
print(e)
elif os.path.isdir(src):
for item in os.listdir(src):
itemsrc = os.path.join(src, item)
delfile(itemsrc)
for dir_name in os.listdir(src):
dir_path = os.path.join(src, dir_name)
delfile(dir_path)


def recursive_chown(path, uid, gid):
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)


def check_permission(path: str, uid:int = None, gid:int = None, mode:int = None):
if not isinstance(path, Path):
path = Path(path)
if uid is not None and uid != path.stat().st_uid:
return False
if gid is not None and gid != path.stat().st_gid:
return False
if mode is not None and (path.stat().st_mode - mode) % 0o1000 != 0:
return False
return True
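
`check_permission` compares modes with `(st_mode - mode) % 0o1000`: `st_mode` carries the file-type bits (e.g. `0o40000` for a directory) above the nine permission bits, and since those type bits — and setuid/setgid/sticky — are all multiples of `0o1000`, the subtraction is divisible by `0o1000` exactly when the low permission bits match. A quick standalone check of that arithmetic:

```python
import stat

def mode_matches(st_mode: int, mode: int) -> bool:
    # Same arithmetic as check_permission: equality modulo 0o1000 means the
    # low rwxrwxrwx bits agree, regardless of the file-type bits above them.
    return (st_mode - mode) % 0o1000 == 0

dir_mode = stat.S_IFDIR | 0o700    # what os.stat() reports for a 0o700 directory
print(mode_matches(dir_mode, 0o700))  # True: type bits are ignored
print(mode_matches(dir_mode, 0o755))  # False: permission bits differ
```

This is why `prepare_dir` can pass a plain `0o700` and still match a directory's full `st_mode`.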
82 changes: 58 additions & 24 deletions make/photon/prepare/utils/nginx.py
@@ -2,11 +2,13 @@
from fnmatch import fnmatch
from pathlib import Path

from g import config_dir, templates_dir
from utils.misc import prepare_config_dir, mark_file
from g import config_dir, templates_dir, host_root_dir, DEFAULT_GID, DEFAULT_UID, data_dir
from utils.misc import prepare_dir, mark_file
from utils.jinja import render_jinja
from utils.cert import SSL_CERT_KEY_PATH, SSL_CERT_PATH

host_ngx_real_cert_dir = Path(os.path.join(data_dir, 'secret', 'cert'))

nginx_conf = os.path.join(config_dir, "nginx", "nginx.conf")
nginx_confd_dir = os.path.join(config_dir, "nginx", "conf.d")
nginx_https_conf_template = os.path.join(templates_dir, "nginx", "nginx.https.conf.jinja")
@@ -17,44 +19,76 @@
CUSTOM_NGINX_LOCATION_FILE_PATTERN_HTTP = 'harbor.http.*.conf'

def prepare_nginx(config_dict):
prepare_config_dir(nginx_confd_dir)
prepare_dir(nginx_confd_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
render_nginx_template(config_dict)


def prepare_nginx_certs(cert_key_path, cert_path):
"""
Prepare the cert files with proper ownership
1. Remove nginx cert files in secret dir
2. Copy cert files on host filesystem to secret dir
3. Change the permission to 644 and ownership to 10000:10000
"""
host_ngx_cert_key_path = Path(os.path.join(host_root_dir, cert_key_path.lstrip('/')))
host_ngx_cert_path = Path(os.path.join(host_root_dir, cert_path.lstrip('/')))

if host_ngx_real_cert_dir.exists() and host_ngx_real_cert_dir.is_dir():
shutil.rmtree(host_ngx_real_cert_dir)

os.makedirs(host_ngx_real_cert_dir, mode=0o755)
real_key_path = os.path.join(host_ngx_real_cert_dir, 'server.key')
real_crt_path = os.path.join(host_ngx_real_cert_dir, 'server.crt')
shutil.copy2(host_ngx_cert_key_path, real_key_path)
shutil.copy2(host_ngx_cert_path, real_crt_path)

os.chown(host_ngx_real_cert_dir, uid=DEFAULT_UID, gid=DEFAULT_GID)
mark_file(real_key_path, uid=DEFAULT_UID, gid=DEFAULT_GID)
mark_file(real_crt_path, uid=DEFAULT_UID, gid=DEFAULT_GID)


def render_nginx_template(config_dict):
if config_dict['protocol'] == "https":
render_jinja(nginx_https_conf_template, nginx_conf,
"""
1. render nginx config file through protocol
2. copy additional configs to cert.d dir
"""
if config_dict['protocol'] == 'https':
prepare_nginx_certs(config_dict['cert_key_path'], config_dict['cert_path'])
render_jinja(
nginx_https_conf_template,
nginx_conf,
uid=DEFAULT_UID,
gid=DEFAULT_GID,
ssl_cert=SSL_CERT_PATH,
ssl_cert_key=SSL_CERT_KEY_PATH)
location_file_pattern = CUSTOM_NGINX_LOCATION_FILE_PATTERN_HTTPS
cert_dir = Path(os.path.join(config_dir, 'cert'))
ssl_key_path = Path(os.path.join(cert_dir, 'server.key'))
ssl_crt_path = Path(os.path.join(cert_dir, 'server.crt'))
cert_dir.mkdir(parents=True, exist_ok=True)
ssl_key_path.touch()
ssl_crt_path.touch()

else:
render_jinja(
nginx_http_conf_template,
nginx_conf)
nginx_conf,
uid=DEFAULT_UID,
gid=DEFAULT_GID)
location_file_pattern = CUSTOM_NGINX_LOCATION_FILE_PATTERN_HTTP
copy_nginx_location_configs_if_exist(nginx_template_ext_dir, nginx_confd_dir, location_file_pattern)

def add_additional_location_config(src, dst):
"""
These conf files are used by users who want to add customized locations to the Harbor proxy.
:param src: source of the file
:param dst: destination file path
"""
if not os.path.isfile(src):
return
print("Copying nginx configuration file {src} to {dst}".format(
src=src, dst=dst))
shutil.copy2(src, dst)
mark_file(dst, mode=0o644)

def copy_nginx_location_configs_if_exist(src_config_dir, dst_config_dir, filename_pattern):
if not os.path.exists(src_config_dir):
return

def add_additional_location_config(src, dst):
"""
These conf files are used by users who want to add customized locations to the Harbor proxy.
:param src: source of the file
:param dst: destination file path
"""
if not os.path.isfile(src):
return
print("Copying nginx configuration file {src} to {dst}".format(src=src, dst=dst))
shutil.copy2(src, dst)
mark_file(dst, mode=0o644)

map(lambda filename: add_additional_location_config(
os.path.join(src_config_dir, filename),
os.path.join(dst_config_dir, filename)),
15 changes: 10 additions & 5 deletions make/photon/prepare/utils/notary.py
@@ -2,7 +2,7 @@
from g import templates_dir, config_dir, root_crt_path, secret_key_dir,DEFAULT_UID, DEFAULT_GID
from .cert import openssl_installed, create_cert, create_root_cert, get_alias
from .jinja import render_jinja
from .misc import mark_file, prepare_config_dir
from .misc import mark_file, prepare_dir

notary_template_dir = os.path.join(templates_dir, "notary")
notary_signer_pg_template = os.path.join(notary_template_dir, "signer-config.postgres.json.jinja")
@@ -20,12 +20,12 @@


def prepare_env_notary(nginx_config_dir):
notary_config_dir = prepare_config_dir(config_dir, "notary")
notary_config_dir = prepare_dir(config_dir, "notary")
old_signer_cert_secret_path = pathlib.Path(os.path.join(config_dir, 'notary-signer.crt'))
old_signer_key_secret_path = pathlib.Path(os.path.join(config_dir, 'notary-signer.key'))
old_signer_ca_cert_secret_path = pathlib.Path(os.path.join(config_dir, 'notary-signer-ca.crt'))

notary_secret_dir = prepare_config_dir('/secret/notary')
notary_secret_dir = prepare_dir('/secret/notary')
signer_cert_secret_path = pathlib.Path(os.path.join(notary_secret_dir, 'notary-signer.crt'))
signer_key_secret_path = pathlib.Path(os.path.join(notary_secret_dir, 'notary-signer.key'))
signer_ca_cert_secret_path = pathlib.Path(os.path.join(notary_secret_dir, 'notary-signer-ca.crt'))
@@ -72,9 +72,12 @@ def prepare_env_notary(nginx_config_dir):


print("Copying nginx configuration file for notary")
shutil.copy2(

render_jinja(
os.path.join(templates_dir, "nginx", "notary.upstream.conf.jinja"),
os.path.join(nginx_config_dir, "notary.upstream.conf"))
os.path.join(nginx_config_dir, "notary.upstream.conf"),
gid=DEFAULT_GID,
uid=DEFAULT_UID)

mark_file(os.path.join(notary_secret_dir, "notary-signer.crt"))
mark_file(os.path.join(notary_secret_dir, "notary-signer.key"))
@@ -88,6 +91,8 @@ def prepare_notary(config_dict, nginx_config_dir, ssl_cert_path, ssl_cert_key_pa
render_jinja(
notary_server_nginx_config_template,
os.path.join(nginx_config_dir, "notary.server.conf"),
gid=DEFAULT_GID,
uid=DEFAULT_UID,
ssl_cert=ssl_cert_path,
ssl_cert_key=ssl_cert_key_path)

9 changes: 9 additions & 0 deletions make/photon/prepare/utils/redis.py
@@ -0,0 +1,9 @@
import os

from g import data_dir, REDIS_UID, REDIS_GID
from utils.misc import prepare_dir

redis_data_path = os.path.join(data_dir, 'redis')

def prepare_redis(config_dict):
prepare_dir(redis_data_path, uid=REDIS_UID, gid=REDIS_GID)