Multi node ceph #2
Is this release for multi-node Ceph?
Comments
We have created an issue in Pivotal Tracker to manage this: https://www.pivotaltracker.com/story/show/150474538. The labels on this GitHub issue will be updated when the story is started.
Kind of... we install 3 nodes, but we put them all on the same VM, so we are definitely not set up for production. This was the first volume service we developed, back in the very early days of CF volume services. Also, we only provision CephFS, not block or object storage, FWIW.
3 nodes means 3 OSDs, right? When I tested, the 3 OSDs were also located on the same volume. Do you have plans for a quota plan and a multi-VM cluster?
As of today, we don't currently plan to make any further enhancements to this release. We are open to pull requests though :)
OK, I am a newbie... I will try without BOSH first.
I updated to Jewel and tested with the quota option. You can refer to https://github.com/marcus-sds/cf-ceph-sds-release.
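For context on the quota option: CephFS (since the Jewel release) exposes directory quotas as virtual extended attributes on a directory. The snippet below is only a minimal sketch of that underlying mechanism, not the actual code in either release; the mount path is a hypothetical example.

```python
import os

# Hypothetical path: wherever the CephFS share is mounted.
CEPHFS_DIR = "/mnt/cephfs/shared-volume"

# CephFS quotas are set through virtual extended attributes on the directory.
# A value of "0" removes the corresponding limit.
os.setxattr(CEPHFS_DIR, "ceph.quota.max_bytes", str(1 * 1024 ** 3).encode())  # cap data at 1 GiB
os.setxattr(CEPHFS_DIR, "ceph.quota.max_files", b"10000")                      # cap file count at 10,000

# Read the attributes back to confirm the limits are in place.
print(os.getxattr(CEPHFS_DIR, "ceph.quota.max_bytes"))
print(os.getxattr(CEPHFS_DIR, "ceph.quota.max_files"))
```

The same attributes can also be set from the command line with setfattr; note that quota enforcement depends on the client, and Jewel-era enforcement was done by ceph-fuse, while older kernel clients may not honor these limits.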
Hi @marcus-sds, we were actually thinking about stopping support for this BOSH release and moving it into the cloudfoundry-attic GitHub org, as we have not seen much usage of it, or even issues like this one posted. But it sounds as though you have a real use case? If so, can you describe what your use case looks like? And are you interested in contributing your code changes back to the Cloud Foundry Foundation?
Hi. We are using AWS and want a scalable NAS that supports restricting per-user quotas on small shared persistent filesystems. We also wanted no dependency on the IaaS and no additional cost. If I can, I would like to contribute.