This repository has been archived by the owner on Feb 25, 2023. It is now read-only.

gravel: remove etcd
Signed-off-by: Tim Serong <tserong@suse.com>
tserong committed Aug 20, 2021
1 parent f487be0 commit 53f4aa3
Showing 25 changed files with 17 additions and 735 deletions.
2 changes: 1 addition & 1 deletion doc/dev/gravel/systemd-boot.md
@@ -17,7 +17,7 @@ These are

* `/var/log`, so logs are persisted between runs.

-* `/var/lib/etcd`, where we keep this node's etcd persistent db.
+* `/var/lib/aquarium`, where we keep this node's local kvstore cache.

* `/var/lib/containers`, where container images are kept.

8 changes: 4 additions & 4 deletions doc/project-plan/roadmap.rst
@@ -73,7 +73,7 @@ M6

* Add events widget (frontend)

-* Basic event stashing on etcd (backend)
+* Basic event stashing (backend)
* node join
* ???

@@ -171,7 +171,7 @@ A list of items brought to you in some unkempt, roughly prioritized order.

* Obtain cluster events (backend)

-* Store them in etcd?
+* Store them somewhere?
* Mutual exclusion access to ceph cluster operations

* Dashboard
@@ -185,7 +185,7 @@ A list of items brought to you in some unkempt, roughly prioritized order.

* Figure out what is an event (backend)
* Figure out how to display Ceph status updates as events (backend)
-* Store events in etcd (backend)
+* Store events (backend)
* Display events (frontend)

* Hosts
@@ -199,7 +199,7 @@ A list of items brought to you in some unkempt, roughly prioritized order.
* Obtain logs for each node (frontend, backend)
* Obtain all logs (frontend, backend)

-* Likely rely on etcd to do keep obtained logs from all nodes (backend)
+* Where to keep obtained logs from all nodes? (backend)

* or on websockets to connect to each node and obtain those logs?

4 changes: 2 additions & 2 deletions doc/project-plan/testing-plan.rst
@@ -10,7 +10,7 @@ ceph tests should either be sufficient in this regard or expanded upon.
Our primary scope is Aquarium. That is, we want to ensure aquarium does the
right things, makes the right/expected decisions, and executes the orchestrated
plan as expected. We also want to test the integration between Aquarium, the
-host OS, Ceph, and any other services (etcd for example) to ensure that the
+host OS, Ceph, and any other services to ensure that the
deployment and operation of such services is functional and works with
Aquarium's directives (ie, opinionated configuration).

@@ -130,4 +130,4 @@ aqrtest

This is not run as part of any CI yet.

-TODO
+TODO
2 changes: 1 addition & 1 deletion images/README.md
@@ -24,7 +24,7 @@ will be then be started on boot.
The latter, `disk.sh`, is run within a chrooted image mount, but before the
image is finalized. During this step we will be obtaining container images
needed for Aquarium's execution, so we have them available upon first run,
-including an image for `Ceph` and an image for `etcd`.
+including an image for `Ceph`.


## Containerized build environment
3 changes: 1 addition & 2 deletions images/aquarium/config.sh
@@ -81,8 +81,7 @@ if [[ "$kiwi_profiles" == *"Vagrant"* ]]; then
fi

 pip install fastapi==0.63.0 uvicorn==0.13.3 websockets==8.1 \
-    bcrypt==3.2.0 pyjwt==2.1.0 python-multipart==0.0.5 \
-    git+https://github.com/aquarist-labs/aetcd3/@edf633045ce61c7bbac4d4a6ca15b14f8acfe9cd
+    bcrypt==3.2.0 pyjwt==2.1.0 python-multipart==0.0.5
baseInsertService aquarium-boot
baseInsertService sshd
baseInsertService aquarium
1 change: 0 additions & 1 deletion images/aquarium/config.xml
@@ -179,7 +179,6 @@
<package name="python38-requests"/>
<package name="python38-PyYAML"/>
<package name="python3-rados"/>
-<package name="etcdctl"/>
<archive name="aquarium.tar.gz"/>
<archive name="root.tar.gz"/>
</packages>
1 change: 0 additions & 1 deletion images/aquarium/disk.sh
@@ -39,7 +39,6 @@ mount -t tmpfs none /run

# setting "--events-backend none" means podman doesn't try
# (and fail) to log a "system refresh" event to the journal
-/usr/bin/podman --events-backend none pull quay.io/coreos/etcd:latest
# we don't get to use cephadm directly because it will
# try running a container inside the chroot, and that
# fails with a bang.
1 change: 0 additions & 1 deletion src/boot/aqrbootsetup.sh
@@ -50,7 +50,6 @@ mount /dev/mapper/aquarium-systemdisk /aquarium
# overlay
overlay /etc etc || exit 1
overlay /var/log logs || exit 1
-overlay /var/lib/etcd etcd || exit 1
overlay /var/lib/aquarium aquarium || exit 1
overlay /var/lib/containers containers || exit 1
overlay /root roothome || exit 1
9 changes: 0 additions & 9 deletions src/gravel/controllers/config.py
@@ -53,14 +53,6 @@ class ServicesOptionsModel(BaseModel):
    probe_interval: float = Field(1.0, title="Services Probe Interval")


-class EtcdOptionsModel(BaseModel):
-    registry: str = Field(
-        "quay.io/coreos/etcd", title="Container Image Registry"
-    )
-    version: str = Field("latest", title="Container Version Label")
-    data_dir: str = Field("/var/lib/etcd", title="Etcd Data Dir")


class AuthOptionsModel(BaseModel):
    jwt_secret: str = Field(
        title="The access token secret",
@@ -81,7 +73,6 @@ class OptionsModel(BaseModel):
    devices: DevicesOptionsModel = Field(DevicesOptionsModel())
    status: StatusOptionsModel = Field(StatusOptionsModel())
    services: ServicesOptionsModel = Field(ServicesOptionsModel())
-    etcd: EtcdOptionsModel = Field(EtcdOptionsModel())
    auth: AuthOptionsModel = Field(AuthOptionsModel())


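With the `EtcdOptionsModel` gone, `OptionsModel` simply drops its `etcd` section. The nested-defaults pattern the config uses can be sketched with stdlib dataclasses (the real code uses pydantic's `BaseModel`/`Field`; names simplified here):

```python
from dataclasses import dataclass, field


@dataclass
class ServicesOptions:
    # mirrors ServicesOptionsModel's "Services Probe Interval" default
    probe_interval: float = 1.0


@dataclass
class Options:
    # each config section gets a fresh default instance;
    # the etcd section is no longer present after this commit
    services: ServicesOptions = field(default_factory=ServicesOptions)


opts = Options()
print(opts.services.probe_interval)  # prints 1.0
```

Constructing `Options()` with no arguments yields a fully populated default configuration, which is why removing a section is just a one-line deletion in the top-level model.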
35 changes: 0 additions & 35 deletions src/gravel/controllers/nodes/deployment.py
@@ -36,7 +36,6 @@
    NodeHasBeenDeployedError,
    NodeHasJoinedError,
)
-from gravel.controllers.nodes.etcd import spawn_etcd
from gravel.controllers.nodes.host import HostnameCtlError, set_hostname
from gravel.controllers.nodes.messages import (
    ErrorMessageModel,
@@ -307,7 +306,6 @@ async def join(
        assert welcome.pubkey
        assert welcome.cephconf
        assert welcome.keyring
-        assert welcome.etcd_peer

        # create system disk after we are certain we are joining.
        # ensure all state writes happen only after the disk has been created.
@@ -321,17 +319,6 @@
        self._state.mark_join()
        await self._set_hostname(hostname)

-        my_url: str = f"{hostname}=http://{address}:2380"
-        initial_cluster: str = f"{welcome.etcd_peer},{my_url}"
-        await spawn_etcd(
-            self._gstate,
-            new=False,
-            token=None,
-            hostname=hostname,
-            address=address,
-            initial_cluster=initial_cluster,
-        )

        authorized_keys: Path = Path("/root/.ssh/authorized_keys")
        if not authorized_keys.parent.exists():
            authorized_keys.parent.mkdir(0o700)
@@ -380,23 +367,6 @@ async def join(
        self._state.mark_ready()
        return True

-    async def _prepare_etcd(
-        self, hostname: str, address: str, token: str
-    ) -> None:
-        assert self._state
-        if self._state.bootstrapping:
-            raise NodeCantDeployError("node being deployed")
-        elif not self._state.nostage:
-            raise NodeCantDeployError("node can't be deployed")
-
-        await spawn_etcd(
-            self._gstate,
-            new=True,
-            token=token,
-            hostname=hostname,
-            address=address,
-        )

    async def deploy(
        self,
        config: DeploymentConfig,
@@ -485,11 +455,6 @@ async def _assimilate_devices() -> None:
        self._progress = ProgressEnum.PREPARING

        await self._set_hostname(hostname)
-        try:
-            await self._prepare_etcd(hostname, address, token)
-        except NodeError as e:
-            logger.error(f"bootstrap prepare error: {e.message}")
-            raise e

        await self._set_ntp_addr(ntp_addr)

163 changes: 0 additions & 163 deletions src/gravel/controllers/nodes/etcd.py

This file was deleted.

1 change: 0 additions & 1 deletion src/gravel/controllers/nodes/messages.py
@@ -41,7 +41,6 @@ class WelcomeMessageModel(BaseModel):
    pubkey: str
    cephconf: str
    keyring: str
-    etcd_peer: str


class ReadyToAddMessageModel(BaseModel):
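After this change a joining node's welcome payload carries only the key material and Ceph configuration; there is no longer an etcd peer URL to thread through the join handshake. The shape of the trimmed message can be sketched with a stdlib dataclass (the real model is a pydantic `BaseModel`; `parse_welcome` is a hypothetical helper):

```python
import json
from dataclasses import dataclass


@dataclass
class WelcomeMessage:
    pubkey: str
    cephconf: str
    keyring: str  # note: no etcd_peer field after this commit


def parse_welcome(raw: str) -> WelcomeMessage:
    # the dataclass __init__ rejects unexpected fields, so a stale
    # payload still containing etcd_peer fails loudly with TypeError
    return WelcomeMessage(**json.loads(raw))
```

This mirrors why the `assert welcome.etcd_peer` in `deployment.py` had to go in the same commit: the field no longer exists on the model.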
