YIG

Yet another Index Gateway: an open source object storage server with an Amazon S3 compatible API

Introduction

A newly designed object storage gateway framework, fully compatible with Amazon S3.

At its core, Yig extends Minio's backend storage to let more than one Ceph cluster work together and form a very large storage resource pool. Users can easily grow the pool's capacity to the EB level by adding a new Ceph cluster to the pool. This avoids the data movement and I/O degradation that usually come with adding new hosts or disks to an existing Ceph cluster. To accomplish this, Yig needs a distributed database to store metadata; TiDB and MySQL are currently supported.

arch

Getting Started

Build

How to build?

Requirements:

  • ceph-devel
  • go(>=1.7)

Steps:

mkdir -p $GOPATH/src/github.com/journeymidnight
cd $GOPATH/src/github.com/journeymidnight
git clone git@github.com:journeymidnight/yig.git
cd yig
go get ./...
go build

Build RPM package

yum install ceph-devel
sh package/rpmbuild.sh

Dependency

Before running Yig, the requirements below must be met:

  • Deploy at least one Ceph cluster, and create two pools named 'tiger' and 'rabbit'. For how to deploy Ceph, please refer to https://ceph.com or our [Sample]

  • Deploy TiDB or MySQL, then create the tables. [Sample]

    • TiDB/MySQL:
     MariaDB [(none)]> create database yig;
     MariaDB [(none)]> source ../yig/integrate/yig.sql
    
  • Deploy yig-iam, which handles user management and request authorization. If Yig is running in debug mode, requests are not sent to yig-iam, so this deployment is optional; in a real production environment you still need it.

  • Deploy a standalone Redis instance used as a cache for better performance. This deployment is optional but strongly recommended:

yum install redis
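The Ceph requirement above assumes the pools 'tiger' and 'rabbit' already exist. A minimal sketch of creating them with the ceph CLI; the pg_num of 128 is an illustrative assumption, not a value prescribed by Yig:

```shell
# Sketch: create the two pools yig expects ('tiger' and 'rabbit', per the
# dependency list above). The pg count (128) is illustrative only.
POOLS="tiger rabbit"
for pool in $POOLS; do
  # Guarded so this is a no-op on hosts without the ceph CLI installed
  if command -v ceph >/dev/null 2>&1; then
    ceph osd pool create "$pool" 128
  fi
done
```

Pick pg_num to match your cluster's OSD count; the Ceph documentation covers how to size placement groups.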

Config files

The main config file of Yig is located at /etc/yig/yig.toml by default:

s3domain = ["s3.test.com","s3-internal.test.com"]
region = "cn-bj-1"
log_path = "/var/log/yig/yig.log"
access_log_path = "/var/log/yig/access.log"
access_log_format = "{combined}"
panic_log_path = "/var/log/yig/panic.log"
log_level = 20
pid_file = "/var/run/yig/yig.pid"
api_listener = "0.0.0.0:8080"
admin_listener = "0.0.0.0:9000"
admin_key = "secret"
ssl_key_path = ""
ssl_cert_path = ""

# DebugMode
lcdebug = true
debug_mode = true
reserved_origins = "s3.test.com,s3-internal.test.com"

# Meta Config
meta_cache_type = 2
meta_store = "tidb"
tidb_info = "root:@tcp(10.5.0.17:4000)/yig"
keepalive = true
zk_address = "hbase:2181"
redis_address = "redis:6379"
redis_password = "hehehehe"
redis_connection_number = 10
memory_cache_max_entry_count = 100000
enable_data_cache = true
redis_connect_timeout = 1
redis_read_timeout = 1
redis_write_timeout = 1
redis_keepalive = 60
redis_pool_max_idle = 3
redis_pool_idle_timeout = 30
cache_circuit_check_interval = 3
cache_circuit_close_sleep_window = 1
cache_circuit_close_required_count = 3
cache_circuit_open_threshold = 1


# Ceph Config
ceph_config_pattern = "/etc/ceph/*.conf"

Meanings of the options above:

S3Domain: your S3 service domain
Region: name of the region; the value does not matter
IamEndpoint: address of the iam service
IamKey: choose as you wish, but it must match the iam config files
IamSecret: choose as you wish, but it must match the iam config files
LogPath: location of the yig access log file
PanicLogPath: location of the yig panic log file
BindApiAddress: your S3 service endpoint
BindAdminAddress: endpoint for tools/admin
SSLKeyPath: SSL key location
SSLCertPath: SSL cert location
ZookeeperAddress: zookeeper address if you choose hbase
RedisAddress: Redis access address
DebugMode: if set to true, only requests signed with [AK/SK:hehehehe/hehehehe] are valid
AdminKey: used for tools/admin
MetaCacheType: 
EnableDataCache:
CephConfigPattern: ceph config files for yig
GcThread: controls gc speed when tools/lc is running
LogLevel: [1-20]; the larger the number, the more log output is written to the log file
ReservedOrigins: allowed CORS origins when S3 requests come from a web browser
TidbInfo:

Ceph config files

Concatenate your Ceph cluster config file [/etc/ceph/ceph.conf] with [/etc/ceph/ceph.client.admin.keyring], then put the combined file at the location that 'CephConfigPattern' specifies. A sample is below:

[global]
fsid = 7b3c9d3a-65f3-4024-aaf1-a29b9422665c
mon_initial_members = ceph57
mon_host = 10.180.92.57
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128

[client.admin]
        key = AQCulvxWKAl/MRAA0weYOmmkArUm/CGBHX0eSA==
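A minimal sketch of producing such a combined file. Demo paths and generated inputs are used here so the steps are self-contained; on a real node the inputs are the files under /etc/ceph, and the output goes wherever ceph_config_pattern points:

```shell
# Sketch: concatenate the cluster config with the admin keyring into one
# file matching ceph_config_pattern = "/etc/ceph/*.conf".
# Demo inputs are generated below; on a real node use the files in /etc/ceph.
mkdir -p demo/etc/ceph
printf '[global]\nfsid = 7b3c9d3a-65f3-4024-aaf1-a29b9422665c\n' \
    > demo/etc/ceph/ceph.conf
printf '[client.admin]\n\tkey = AQCulvxWKAl/MRAA0weYOmmkArUm/CGBHX0eSA==\n' \
    > demo/etc/ceph/ceph.client.admin.keyring
# The combined file is what yig actually reads
cat demo/etc/ceph/ceph.conf demo/etc/ceph/ceph.client.admin.keyring \
    > demo/etc/ceph/cluster1.conf
```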

Run

Start server:

cd $YIG_DIR
sudo ./yig

OR

systemctl start yig
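Once the server is up, a hedged smoke test with the AWS CLI, using the debug-mode credentials described in the config section (the endpoint is assumed from api_listener in yig.toml; the guard makes this a no-op when aws is not installed or yig is not reachable):

```shell
# Smoke test for a local yig running with debug_mode = true: only requests
# signed with AK/SK hehehehe/hehehehe are accepted in that mode.
export AWS_ACCESS_KEY_ID=hehehehe
export AWS_SECRET_ACCESS_KEY=hehehehe
# Endpoint assumed from api_listener = "0.0.0.0:8080" in yig.toml
ENDPOINT=http://127.0.0.1:8080
# Run only if the aws CLI exists and something answers on the endpoint
if command -v aws >/dev/null 2>&1 \
    && curl -s -o /dev/null --max-time 2 "$ENDPOINT" 2>/dev/null; then
  aws --endpoint-url "$ENDPOINT" s3 mb s3://smoke-test
  aws --endpoint-url "$ENDPOINT" s3 ls
fi
```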

Documentation

Please refer to our wiki for more information.

License

See the LICENSE file.
