Merged
1 change: 1 addition & 0 deletions docker-compose.mongo.yml
Original file line number Diff line number Diff line change
@@ -21,6 +21,7 @@ services:
GITPROXY_DATABASE_TYPE: mongo
GITPROXY_DATABASE_URL: mongodb://gitproxy:gitproxy@mongo:27017
GITPROXY_DATABASE_NAME: gitproxy
GITPROXY_SERVER_SESSION_STORE: mongo
depends_on:
mongo:
condition: service_healthy
1 change: 1 addition & 0 deletions docker-compose.postgres.yml
@@ -22,6 +22,7 @@ services:
GITPROXY_DATABASE_URL: "jdbc:postgresql://postgres:5432/gitproxy?sslmode=disable"
GITPROXY_DATABASE_USERNAME: gitproxy
GITPROXY_DATABASE_PASSWORD: gitproxy
GITPROXY_SERVER_SESSION_STORE: jdbc
depends_on:
postgres:
condition: service_healthy
30 changes: 29 additions & 1 deletion docs/CONFIGURATION.md
@@ -130,6 +130,7 @@ server:
# Options:
# none — in-memory (default); sessions lost on restart, not shared across pods
# jdbc — persisted to the configured JDBC database; zero new infrastructure required
# mongo — persisted to the configured MongoDB database; zero new infrastructure required
# redis — persisted to a Redis or Valkey instance; configure via server.redis.*
# Use jdbc, mongo, or redis for multi-instance deployments so sessions survive pod restarts
# and remain valid across all replicas.
@@ -181,7 +182,18 @@ server:

A minimal single-replica Redis or Valkey pod is sufficient — sessions are small and low-throughput. No persistence or clustering required for this use case.
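For illustration, a minimal Redis-backed configuration could take this shape — the `host` and `port` keys under `server.redis` are assumed names here, so check the `server.redis.*` reference for the authoritative keys:

```yaml
server:
  session-store: redis

  redis:
    host: redis.internal  # assumed key name — illustrative only
    port: 6379            # assumed key name — illustrative only
```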

> **MongoDB deployments:** MongoDB-backed session storage is planned (#139). In the meantime, use `session-store: jdbc` with a separate PostgreSQL instance, or stand up a Redis pod.
**MongoDB:**

```yaml
server:
session-store: mongo

database:
type: mongo
url: mongodb://gitproxy:secret@mongo.internal:27017/gitproxy
```

Sessions are stored in the `proxy_sessions` collection alongside the other `proxy_*` collections. A TTL index on `expireAt` lets MongoDB expire idle sessions server-side — no background cleanup task runs in the proxy. The session store reuses the same connection pool as the rest of the MongoDB-backed stores, so no extra configuration is needed. Requires `database.type: mongo`.
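The TTL mechanics reduce to simple arithmetic: the store writes `expireAt = lastAccessedAt + maxInactiveInterval`, and MongoDB's TTL monitor deletes the document once the server clock passes that instant. A self-contained sketch of the computation (class and method names here are illustrative, not part of the proxy):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Date;

public class SessionExpiry {
    // expireAt is the TTL anchor: with expireAfter set to 0 seconds on the
    // index, Mongo deletes the document once the server clock passes it.
    static Date expireAt(Instant lastAccessedAt, Duration maxInactive) {
        return Date.from(lastAccessedAt.plus(maxInactive));
    }

    public static void main(String[] args) {
        Instant lastAccess = Instant.parse("2024-01-01T12:00:00Z");
        System.out.println(expireAt(lastAccess, Duration.ofMinutes(30)).toInstant());
        // prints 2024-01-01T12:30:00Z
    }
}
```

Note that MongoDB's TTL monitor only runs periodically (roughly once a minute), so an expired document may linger briefly; the repository therefore re-checks expiry when loading a session before returning it.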

## TLS

@@ -312,6 +324,22 @@ database:
name: gitproxy
```

### MongoDB: coexisting with the upstream Node.js git-proxy

If you are migrating from [finos/git-proxy](https://github.com/finos/git-proxy) (the Node.js implementation) and pointing this proxy at a database that previously held its data, the two applications use incompatible document schemas. The safest path is to **provision a new MongoDB database** (e.g. `gitproxy-java`) and point `database.url` at it. This avoids all collision risk and keeps indexes, backups, and ops tooling cleanly separated.

If provisioning a separate database is not feasible, this proxy now uses collection names that do not collide with the upstream Node.js implementation:

| Collection | Written by | Notes |
| ------------------ | --------------------------------- | ---------------------------------------------------------------------- |
| `proxy_users`      | `MongoUserStore`                  | Renamed from `users` to avoid colliding with upstream's collection.     |
| `proxy_pushes`     | `MongoPushStore`                  | Renamed from `pushes` to avoid colliding with upstream's collection.    |
| `repo_permissions` | `MongoRepoPermissionStore` | No upstream equivalent. |
| `access_rules` | `MongoUrlRuleRegistry` | No upstream equivalent. |
| `fetch_records` | `MongoFetchStore` | No upstream equivalent. |

This means you _can_ point both apps at the same MongoDB database without corrupting each other's data. We still recommend separate databases for operational clarity — shared databases make backups, restores, and index tuning harder to reason about — but it is no longer a correctness hazard. Starting with 1.0.0, these collection names are part of the project's stability contract and will not be renamed without an in-place migration path.

## Authentication

The dashboard supports four authentication providers, selected via `auth.provider`.
@@ -25,6 +25,16 @@ public MongoStoreFactory(String connectionString, String databaseName) {
this.databaseName = databaseName;
}

/** Returns the shared {@link MongoClient} so callers (e.g. session store) can reuse the connection pool. */
public MongoClient getMongoClient() {
return client;
}

/** Returns the configured database name. */
public String getDatabaseName() {
return databaseName;
}

/** Create and initialize a {@link PushStore} backed by this factory's client. */
public PushStore pushStore() {
MongoPushStore store = new MongoPushStore(client, databaseName);
@@ -28,7 +28,7 @@
public class MongoPushStore implements PushStore {

private static final Logger log = LoggerFactory.getLogger(MongoPushStore.class);
private static final String COLLECTION_NAME = "pushes";
private static final String COLLECTION_NAME = "proxy_pushes";
private static final ObjectMapper MAPPER = new ObjectMapper();
private static final TypeReference<Map<String, String>> ANSWERS_TYPE = new TypeReference<>() {};

@@ -37,7 +37,7 @@
public class MongoUserStore implements UserStore {

private static final Logger log = LoggerFactory.getLogger(MongoUserStore.class);
private static final String COLLECTION_NAME = "users";
private static final String COLLECTION_NAME = "proxy_users";

private final MongoDatabase database;

4 changes: 3 additions & 1 deletion git-proxy-java-dashboard/build.gradle
@@ -158,7 +158,8 @@ dependencies {
// Database drivers at runtime (whichever store is configured)
runtimeOnly "com.h2database:h2:${h2Version}"
runtimeOnly "org.postgresql:postgresql:${postgresVersion}"
runtimeOnly "org.mongodb:mongodb-driver-sync:${mongoVersion}"
// MongoDB driver — compile-scoped because MongoSessionRepository uses the sync client directly.
implementation "org.mongodb:mongodb-driver-sync:${mongoVersion}"

// Gestalt config (needed to catch GestaltException from GitProxyConfigLoader)
implementation "com.github.gestalt-config:gestalt-core:${gestaltVersion}"
@@ -180,6 +181,7 @@ dependencies {
// Testcontainers for e2e tests (LDAP + OIDC provider integration)
testImplementation "org.testcontainers:testcontainers:${testContainersVersion}"
testImplementation "org.testcontainers:junit-jupiter:${testContainersVersion}"
testImplementation "org.testcontainers:mongodb:${testContainersVersion}"

// Include core + server coverage in the aggregated JaCoCo report
jacocoAggregation project(':git-proxy-java-core')
@@ -104,6 +104,7 @@ public void lifeCycleStopping(LifeCycle event) {

// Spring MVC DispatcherServlet at /* - git-specific paths take precedence per servlet spec
var jdbcDataSource = configBuilder.getJdbcDataSourceOrNull();
var mongoFactory = configBuilder.getMongoStoreFactoryOrNull();
registerSpringServlet(
context,
ctx,
@@ -112,7 +113,8 @@
configHolder,
liveConfigLoader,
urlRuleRegistry,
jdbcDataSource);
jdbcDataSource,
mongoFactory);

server.setHandler(context);
server.start();
@@ -133,7 +135,8 @@ private static void registerSpringServlet(
ConfigHolder configHolder,
LiveConfigLoader liveConfigLoader,
UrlRuleRegistry urlRuleRegistry,
javax.sql.DataSource jdbcDataSource) {
javax.sql.DataSource jdbcDataSource,
org.finos.gitproxy.db.MongoStoreFactory mongoFactory) {
var appContext = new AnnotationConfigWebApplicationContext();
appContext.register(SpringWebConfig.class, SecurityConfig.class, SessionStoreConfig.class);
appContext.addBeanFactoryPostProcessor(bf -> {
@@ -153,6 +156,11 @@
if (jdbcDataSource != null) {
bf.registerSingleton("dataSource", jdbcDataSource);
}
// Expose the shared MongoClient + database name for session-store=mongo. Null for JDBC deployments.
if (mongoFactory != null) {
bf.registerSingleton("mongoClient", mongoFactory.getMongoClient());
bf.registerSingleton("mongoDatabaseName", mongoFactory.getDatabaseName());
}
});

// Refresh the Spring context inside a ServletContextListener so the ServletContext is set
@@ -1,13 +1,16 @@
package org.finos.gitproxy.dashboard;

import com.mongodb.client.MongoClient;
import jakarta.servlet.Filter;
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import lombok.extern.slf4j.Slf4j;
import org.finos.gitproxy.dashboard.session.MongoSessionRepository;
import org.finos.gitproxy.jetty.config.GitProxyConfig;
import org.finos.gitproxy.jetty.config.ServerConfig.RedisConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
@@ -31,6 +34,7 @@
* <li>{@code none} (default) — in-memory {@link MapSessionRepository}; sessions are lost on restart
* <li>{@code jdbc} — {@link JdbcIndexedSessionRepository}; persisted to the configured JDBC database
* <li>{@code redis} — {@link RedisIndexedSessionRepository}; persisted to Redis/Valkey
* <li>{@code mongo} — {@link MongoSessionRepository}; persisted to the configured MongoDB database
* </ul>
*
* <p>The {@code springSessionRepositoryFilter} bean is always registered so that
@@ -48,6 +52,14 @@ public class SessionStoreConfig {
@Autowired(required = false)
private DataSource dataSource;

/** Injected only for MongoDB deployments — null for JDBC deployments. */
@Autowired(required = false)
private MongoClient mongoClient;

@Autowired(required = false)
@Qualifier("mongoDatabaseName")
private String mongoDatabaseName;

@Bean
@SuppressWarnings("unchecked")
public SessionRepository<?> sessionRepository() {
@@ -56,6 +68,7 @@ public SessionRepository<?> sessionRepository() {
return switch (store) {
case "jdbc" -> buildJdbc(timeout);
case "redis" -> buildRedis(timeout);
case "mongo" -> buildMongo(timeout);
default -> {
log.info("Session store: in-memory (server.session-store=none). Sessions will not survive restarts.");
var repo = new MapSessionRepository(new ConcurrentHashMap<>());
@@ -77,7 +90,7 @@ private JdbcIndexedSessionRepository buildJdbc(Duration timeout) {
if (dataSource == null) {
throw new IllegalStateException(
"server.session-store=jdbc requires a JDBC database (h2-file, h2-mem, or postgres)."
+ " Current database.type is mongo — use session-store: none or provision a JDBC database.");
+ " Current database.type is mongo — use session-store: mongo (or none/redis).");
}
log.info("Session store: JDBC (server.session-store=jdbc)");
var jdbcOps = new JdbcTemplate(dataSource);
@@ -118,4 +131,15 @@ private RedisIndexedSessionRepository buildRedis(Duration timeout) {
repo.setDefaultMaxInactiveInterval(timeout);
return repo;
}

// ── MongoDB ───────────────────────────────────────────────────────────────

private MongoSessionRepository buildMongo(Duration timeout) {
if (mongoClient == null || mongoDatabaseName == null) {
throw new IllegalStateException("server.session-store=mongo requires database.type=mongo."
+ " Current database.type is not mongo — use session-store: jdbc, redis, or none.");
}
log.info("Session store: MongoDB (server.session-store=mongo)");
return new MongoSessionRepository(mongoClient, mongoDatabaseName, timeout);
}
}
@@ -0,0 +1,142 @@
package org.finos.gitproxy.dashboard.session;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import com.mongodb.client.model.ReplaceOptions;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.time.Duration;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.bson.Document;
import org.bson.types.Binary;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.session.MapSession;
import org.springframework.session.SessionRepository;

/**
* Spring Session {@link SessionRepository} backed directly by the MongoDB sync driver — no spring-data-mongodb
 * dependency. Sessions are stored as documents in the {@code proxy_sessions} collection, with attributes stored as a
 * JDK-serialized {@link HashMap} in a BSON {@link Binary} field. A TTL index on {@code expireAt} lets Mongo handle
* idle-session cleanup server-side.
*
* <p>Document shape:
*
* <pre>{@code
* {
* "_id": "<session-id>",
* "createdAt": <Date>,
* "lastAccessedAt": <Date>,
* "maxInactiveSeconds": <long>,
* "expireAt": <Date>, // TTL anchor — Mongo deletes when now() > expireAt
* "attributes": <Binary> // JDK-serialized HashMap<String,Object>
* }
* }</pre>
*
* <p>JDK serialization is used because Spring Security's {@code SecurityContextImpl} and all standard
* {@code Authentication} tokens are {@link java.io.Serializable}. If a future attribute is not serializable,
* {@link #save(MapSession)} will fail fast with an {@link UncheckedIOException}.
*/
public class MongoSessionRepository implements SessionRepository<MapSession> {

private static final Logger log = LoggerFactory.getLogger(MongoSessionRepository.class);
private static final String COLLECTION_NAME = "proxy_sessions";

private final MongoCollection<Document> collection;
private final Duration defaultMaxInactiveInterval;

public MongoSessionRepository(MongoClient mongoClient, String databaseName, Duration defaultMaxInactiveInterval) {
MongoDatabase database = mongoClient.getDatabase(databaseName);
this.collection = database.getCollection(COLLECTION_NAME);
this.defaultMaxInactiveInterval = defaultMaxInactiveInterval;
collection.createIndex(Indexes.ascending("expireAt"), new IndexOptions().expireAfter(0L, TimeUnit.SECONDS));
log.info(
"Mongo session store initialized: db={}, collection={}, default-timeout={}s",
databaseName,
COLLECTION_NAME,
defaultMaxInactiveInterval.getSeconds());
}

@Override
public MapSession createSession() {
MapSession session = new MapSession();
session.setMaxInactiveInterval(defaultMaxInactiveInterval);
return session;
}

@Override
public void save(MapSession session) {
Map<String, Object> attrs = new HashMap<>();
for (String name : session.getAttributeNames()) {
attrs.put(name, session.getAttribute(name));
}
Date expireAt = Date.from(session.getLastAccessedTime().plus(session.getMaxInactiveInterval()));
Document doc = new Document()
.append("_id", session.getId())
.append("createdAt", Date.from(session.getCreationTime()))
.append("lastAccessedAt", Date.from(session.getLastAccessedTime()))
.append("maxInactiveSeconds", session.getMaxInactiveInterval().getSeconds())
.append("expireAt", expireAt)
.append("attributes", new Binary(serialize(attrs)));
collection.replaceOne(Filters.eq("_id", session.getId()), doc, new ReplaceOptions().upsert(true));
}

@Override
public MapSession findById(String id) {
Document doc = collection.find(Filters.eq("_id", id)).first();
if (doc == null) {
return null;
}
MapSession session = new MapSession(id);
session.setCreationTime(doc.getDate("createdAt").toInstant());
session.setLastAccessedTime(doc.getDate("lastAccessedAt").toInstant());
session.setMaxInactiveInterval(Duration.ofSeconds(doc.getLong("maxInactiveSeconds")));
Binary attrBlob = doc.get("attributes", Binary.class);
if (attrBlob != null) {
Map<String, Object> attrs = deserialize(attrBlob.getData());
attrs.forEach(session::setAttribute);
}
if (session.isExpired()) {
deleteById(id);
return null;
}
return session;
}

@Override
public void deleteById(String id) {
collection.deleteOne(Filters.eq("_id", id));
}

private static byte[] serialize(Map<String, Object> attrs) {
try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos)) {
oos.writeObject(new HashMap<>(attrs));
oos.flush();
return baos.toByteArray();
} catch (IOException e) {
throw new UncheckedIOException("Failed to serialize session attributes", e);
}
}

@SuppressWarnings("unchecked")
private static Map<String, Object> deserialize(byte[] data) {
try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
return (Map<String, Object>) ois.readObject();
} catch (IOException | ClassNotFoundException e) {
throw new UncheckedIOException(
"Failed to deserialize session attributes", e instanceof IOException io ? io : new IOException(e));
}
}
}