IBM/Grafana-zos

Grafana on z/OS

This is a WIP fork of Grafana version 8.4.4. The binaries for the server and CLI build successfully but will fail when you try to run them. The issue most likely lies with the implementation of Sqlite (see below) included in this port.

The main difference between this port and upstream Grafana is its handling of Sqlite. By default, Grafana uses Sqlite as its database. Sqlite itself has been documented to work on z/OS, but the drivers required to use it from Go programs rely on CGO callbacks, and at the time of writing, Go on z/OS does not support CGO callbacks. This rules out the default Sqlite driver that Grafana uses.

To get around this problem, this port of Grafana uses a pure-Go implementation of Sqlite. However, it is not as simple as adding the package as a dependency: the pure-Go Sqlite package, as well as a few of its dependencies, also needs to be ported to z/OS. These dependencies include modernc.org/libc and modernc.org/memory.

On other platforms, the libc and sqlite packages are generated by transpiling C code into Go, but this process was not possible when porting efforts began. With Go 1.19 it may be possible to do the transpiling as the other platform ports do, but this has not been tested and would likely require porting more packages (see https://modernc.org/ccgo). Instead, these packages were ported manually by examining the implementations for darwin/x86 and linux/s390x. As such, there are problems with the z/OS port: functions, types, and values are best guesses as to what they should be and probably need more work.

This repo contains the porting efforts for libc, sqlite, and memory.

Building Grafana on z/OS

Grafana has been built on z/OS 2.4 and 2.5 using Go 1.19.3.

Be aware that this process is messy and has lots of room for improvement.

  1. To get started, create a new directory and change into it, then clone this repo there.

  2. Change directories so that you are in the directory containing Grafana. Edit /path/to/grafana/go.mod. Comment out lines 329 to 332:

    replace (
        // github.com/edsrzf/mmap-go => /path/to/mmap-go
        // github.com/hashicorp/go-sockaddr => /path/to/go-sockaddr
        // github.com/prometheus/prometheus => /path/to/prometheus
        // go.opentelemetry.io/otel/exporters/jaeger => /path/to/jaeger
        modernc.org/libc => ./zos/libc
        modernc.org/memory => ./zos/memory
        modernc.org/sqlite => ./zos/sqlite
    )
  3. Run go mod download.

  4. Copy the following directories from the Go module cache at $GOPATH/pkg/mod (or $HOME/go/pkg/mod if $GOPATH is unset) into the directory that contains the Grafana, libc, sqlite, memory, and sys/unix repos:

    • cp -r $GOPATH/pkg/mod/github.com/edsrzf/mmap-go /path/containing/repos/mmap-go
    • cp -r $GOPATH/pkg/mod/github.com/hashicorp/go-sockaddr /path/containing/repos/go-sockaddr
    • cp -r $GOPATH/pkg/mod/github.com/prometheus/prometheus /path/containing/repos/prometheus
    • cp -r $GOPATH/pkg/mod/go.opentelemetry.io/otel/exporters/jaeger /path/containing/repos/jaeger
  5. Make the following changes:

    • For mmap-go:
      • Edit mmap_unix.go, adding a build tag for z/OS and removing the reference to unix.MAP_ANON:
        /* line 4*/
        // +build darwin dragonfly freebsd linux openbsd solaris netbsd zos
        
        ...
        
            if inflags&ANON != 0 {
                // unix.MAP_ANON caused the build to fail, so we use the actual value: 0x0
                 flags |= 0x0       /* line 27 */
            }
    • For go-sockaddr:
      • Apply the fix found here and create the file route_info_zos.go
    • For prometheus:
      • Add a replace statement to the go.mod file:
        replace github.com/edsrzf/mmap-go => /path/containing/repos/mmap-go
      • Add build tags to the following files:
        • discovery/file/file.go
          //go:build !zos
          // +build !zos
        • tsdb/fileutil/flock_unix.go
          //go:build darwin || dragonfly || freebsd || linux || netbsd || openbsd || zos
          // +build darwin dragonfly freebsd linux netbsd openbsd zos
      • Create the file discovery/file/file_zos.go
        // Copyright 2015 The Prometheus Authors
        // Licensed under the Apache License, Version 2.0 (the "License");
        // you may not use this file except in compliance with the License.
        // You may obtain a copy of the License at
        //
        // http://www.apache.org/licenses/LICENSE-2.0
        //
        // Unless required by applicable law or agreed to in writing, software
        // distributed under the License is distributed on an "AS IS" BASIS,
        // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        // See the License for the specific language governing permissions and
        // limitations under the License.
        //go:build zos
        // +build zos
        
        package file
        
        import (
                "context"
                "encoding/json"
                "fmt"
                "io/ioutil"
                "os"
                "path/filepath"
                "regexp"
                "strings"
                "sync"
                "time"
        
                "github.com/go-kit/log"
                "github.com/go-kit/log/level"
                "github.com/pkg/errors"
                "github.com/prometheus/client_golang/prometheus"
                "github.com/prometheus/common/config"
                "github.com/prometheus/common/model"
                yaml "gopkg.in/yaml.v2"
        
                "github.com/prometheus/prometheus/discovery"
                "github.com/prometheus/prometheus/discovery/targetgroup"
        )
        
        var (
                fileSDScanDuration = prometheus.NewSummary(
                        prometheus.SummaryOpts{
                                Name:       "prometheus_sd_file_scan_duration_seconds",
                                Help:       "The duration of the File-SD scan in seconds.",
                                Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
                        })
                fileSDReadErrorsCount = prometheus.NewCounter(
                        prometheus.CounterOpts{
                                Name: "prometheus_sd_file_read_errors_total",
                                Help: "The number of File-SD read errors.",
                        })
                fileSDTimeStamp = NewTimestampCollector()
        
                patFileSDName = regexp.MustCompile(`^[^*]*(\*[^/]*)?\.(json|yml|yaml|JSON|YML|YAML)$`)
        
                // DefaultSDConfig is the default file SD configuration.
                DefaultSDConfig = SDConfig{
                        RefreshInterval: model.Duration(5 * time.Minute),
                }
        )
        
        func init() {
                discovery.RegisterConfig(&SDConfig{})
                prometheus.MustRegister(fileSDScanDuration, fileSDReadErrorsCount, fileSDTimeStamp)
        }
        
        // SDConfig is the configuration for file based discovery.
        type SDConfig struct {
                Files           []string       `yaml:"files"`
                RefreshInterval model.Duration `yaml:"refresh_interval,omitempty"`
        }
        
        // Name returns the name of the Config.
        func (*SDConfig) Name() string { return "file" }
        
        // NewDiscoverer returns a Discoverer for the Config.
        func (c *SDConfig) NewDiscoverer(opts discovery.DiscovererOptions) (discovery.Discoverer, error) {
                return NewDiscovery(c, opts.Logger), nil
        }
        
        // SetDirectory joins any relative file paths with dir.
        func (c *SDConfig) SetDirectory(dir string) {
                for i, file := range c.Files {
                        c.Files[i] = config.JoinDir(dir, file)
                }
        }
        
        // UnmarshalYAML implements the yaml.Unmarshaler interface.
        func (c *SDConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
                *c = DefaultSDConfig
                type plain SDConfig
                err := unmarshal((*plain)(c))
                if err != nil {
                        return err
                }
                if len(c.Files) == 0 {
                        return errors.New("file service discovery config must contain at least one path name")
                }
                for _, name := range c.Files {
                        if !patFileSDName.MatchString(name) {
                                return errors.Errorf("path name %q is not valid for file discovery", name)
                        }
                }
                return nil
        }
        
        const fileSDFilepathLabel = model.MetaLabelPrefix + "filepath"
        
        // TimestampCollector is a Custom Collector for Timestamps of the files.
        type TimestampCollector struct {
                Description *prometheus.Desc
                discoverers map[*Discovery]struct{}
                lock        sync.RWMutex
        }
        
        // Describe method sends the description to the channel.
        func (t *TimestampCollector) Describe(ch chan<- *prometheus.Desc) {
                ch <- t.Description
        }
        
        // Collect creates constant metrics for each file with last modified time of the file.
        func (t *TimestampCollector) Collect(ch chan<- prometheus.Metric) {
                // New map to dedup filenames.
                uniqueFiles := make(map[string]float64)
                t.lock.RLock()
                for fileSD := range t.discoverers {
                        fileSD.lock.RLock()
                        for filename, timestamp := range fileSD.timestamps {
                                uniqueFiles[filename] = timestamp
                        }
                        fileSD.lock.RUnlock()
                }
                t.lock.RUnlock()
                for filename, timestamp := range uniqueFiles {
                        ch <- prometheus.MustNewConstMetric(
                                t.Description,
                                prometheus.GaugeValue,
                                timestamp,
                                filename,
                        )
                }
        }
        
        func (t *TimestampCollector) addDiscoverer(disc *Discovery) {
                t.lock.Lock()
                t.discoverers[disc] = struct{}{}
                t.lock.Unlock()
        }
        
        func (t *TimestampCollector) removeDiscoverer(disc *Discovery) {
                t.lock.Lock()
                delete(t.discoverers, disc)
                t.lock.Unlock()
        }
        
        // NewTimestampCollector creates a TimestampCollector.
        func NewTimestampCollector() *TimestampCollector {
                return &TimestampCollector{
                        Description: prometheus.NewDesc(
                                "prometheus_sd_file_mtime_seconds",
                                "Timestamp (mtime) of files read by FileSD. Timestamp is set at read time.",
                                []string{"filename"},
                                nil,
                        ),
                        discoverers: make(map[*Discovery]struct{}),
                }
        }
        
        // Discovery provides service discovery functionality based
        // on files that contain target groups in JSON or YAML format. Refreshing
        // happens using file watches and periodic refreshes.
        type Discovery struct {
                paths      []string
                interval   time.Duration
                timestamps map[string]float64
                lock       sync.RWMutex
        
                // lastRefresh stores which files were found during the last refresh
                // and how many target groups they contained.
                // This is used to detect deleted target groups.
                lastRefresh map[string]int
                logger      log.Logger
        }
        
        // NewDiscovery returns a new file discovery for the given paths.
        func NewDiscovery(conf *SDConfig, logger log.Logger) *Discovery {
                if logger == nil {
                        logger = log.NewNopLogger()
                }
        
                disc := &Discovery{
                        paths:      conf.Files,
                        interval:   time.Duration(conf.RefreshInterval),
                        timestamps: make(map[string]float64),
                        logger:     logger,
                }
                fileSDTimeStamp.addDiscoverer(disc)
                return disc
        }
        
        // listFiles returns a list of all files that match the configured patterns.
        func (d *Discovery) listFiles() []string {
                var paths []string
                for _, p := range d.paths {
                        files, err := filepath.Glob(p)
                        if err != nil {
                                level.Error(d.logger).Log("msg", "Error expanding glob", "glob", p, "err", err)
                                continue
                        }
                        paths = append(paths, files...)
                }
                return paths
        }
        
        // Run implements the Discoverer interface.
        func (d *Discovery) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
                d.refresh(ctx, ch)
        
                ticker := time.NewTicker(d.interval)
                defer ticker.Stop()
        
                for {
                        select {
                        case <-ctx.Done():
                                return
        
                        case <-ticker.C:
                                // Setting a new watch after an update might fail. Make sure we don't lose
                                // those files forever.
                                d.refresh(ctx, ch)
        
                        }
                }
        }
        
        func (d *Discovery) writeTimestamp(filename string, timestamp float64) {
                d.lock.Lock()
                d.timestamps[filename] = timestamp
                d.lock.Unlock()
        }
        
        func (d *Discovery) deleteTimestamp(filename string) {
                d.lock.Lock()
                delete(d.timestamps, filename)
                d.lock.Unlock()
        }
        
        // refresh reads all files matching the discovery's patterns and sends the respective
        // updated target groups through the channel.
        func (d *Discovery) refresh(ctx context.Context, ch chan<- []*targetgroup.Group) {
                t0 := time.Now()
                defer func() {
                        fileSDScanDuration.Observe(time.Since(t0).Seconds())
                }()
                ref := map[string]int{}
                for _, p := range d.listFiles() {
                        tgroups, err := d.readFile(p)
                        if err != nil {
                                fileSDReadErrorsCount.Inc()
        
                                level.Error(d.logger).Log("msg", "Error reading file", "path", p, "err", err)
                                // Prevent deletion down below.
                                ref[p] = d.lastRefresh[p]
                                continue
                        }
                        select {
                        case ch <- tgroups:
                        case <-ctx.Done():
                                return
                        }
        
                        ref[p] = len(tgroups)
                }
                // Send empty updates for sources that disappeared.
                for f, n := range d.lastRefresh {
                        m, ok := ref[f]
                        if !ok || n > m {
                                level.Debug(d.logger).Log("msg", "file_sd refresh found file that should be removed", "file", f)
                                d.deleteTimestamp(f)
                                for i := m; i < n; i++ {
                                        select {
                                        case ch <- []*targetgroup.Group{{Source: fileSource(f, i)}}:
                                        case <-ctx.Done():
                                                return
                                        }
                                }
                        }
                }
                d.lastRefresh = ref
        
        }
        
        // readFile reads a JSON or YAML list of targets groups from the file, depending on its
        // file extension. It returns full configuration target groups.
        func (d *Discovery) readFile(filename string) ([]*targetgroup.Group, error) {
                fd, err := os.Open(filename)
                if err != nil {
                        return nil, err
                }
                defer fd.Close()
        
                content, err := ioutil.ReadAll(fd)
                if err != nil {
                        return nil, err
                }
        
                info, err := fd.Stat()
                if err != nil {
                        return nil, err
                }
        
                var targetGroups []*targetgroup.Group
        
                switch ext := filepath.Ext(filename); strings.ToLower(ext) {
                case ".json":
                        if err := json.Unmarshal(content, &targetGroups); err != nil {
                                return nil, err
                        }
                case ".yml", ".yaml":
                        if err := yaml.UnmarshalStrict(content, &targetGroups); err != nil {
                                return nil, err
                        }
                default:
                        panic(errors.Errorf("discovery.File.readFile: unhandled file extension %q", ext))
                }
        
                for i, tg := range targetGroups {
                        if tg == nil {
                                err = errors.New("nil target group item found")
                                return nil, err
                        }
        
                        tg.Source = fileSource(filename, i)
                        if tg.Labels == nil {
                                tg.Labels = model.LabelSet{}
                        }
                        tg.Labels[fileSDFilepathLabel] = model.LabelValue(filename)
                }
        
                d.writeTimestamp(filename, float64(info.ModTime().Unix()))
        
                return targetGroups, nil
        }
        
        // fileSource returns a source ID for the i-th target group in the file.
        func fileSource(filename string, i int) string {
                return fmt.Sprintf("%s:%d", filename, i)
        }
    • For jaeger:
      • Add a // +build !zos build tag to internal/third_party/thrift/lib/go/thrift/socket_unix_conn.go
      • Copy internal/third_party/thrift/lib/go/thrift/socket_windows_conn.go to internal/third_party/thrift/lib/go/thrift/socket_zos_conn.go and change its build tag to // +build zos
  6. Uncomment and edit the lines from step 2:

    replace (
        github.com/edsrzf/mmap-go => /path/to/mmap-go
        github.com/hashicorp/go-sockaddr => /path/to/go-sockaddr
        github.com/prometheus/prometheus => /path/to/prometheus
        go.opentelemetry.io/otel/exporters/jaeger => /path/to/jaeger
        modernc.org/libc => ./zos/libc
        modernc.org/memory => ./zos/memory
        modernc.org/sqlite => ./zos/sqlite
    )
  7. Run go mod tidy.

  8. Change directories into the Grafana repo and run make build-go.

  9. If the build is successful, two binaries will be available in bin/zos-s390x:

    • grafana-server
    • grafana-cli

Run them at the root of the Grafana directory:

./bin/zos-s390x/grafana-server

Next Steps

Trying to run the grafana-server binary currently results in an "out of memory" error. There are a few potential causes. One is that the memory implementation included here is not sufficient to work with the sqlite port. Another likely cause is that the sqlite implementation itself is flawed: each os/arch implementation contains a variable ts1 holding an extremely long string, and the z/OS version of ts1 is a copy of the linux/s390x version. It is very likely that this contributes to the error, and fixing it would require properly implementing the z/OS version of ts1.