
Rough Roadmap #1250

Closed
tokengeek opened this Issue Nov 5, 2012 · 27 comments

10 participants

@tokengeek
fog member

Fog's just hit 1.7.0

It's been discussed a few times about bigger changes that we really should schedule.

Immediately I can think of dropping Ruby 1.8 support and splitting Fog up somehow.

I quite like the idea of Fog 1.9.0 dropping 1.8 support and Fog 2.0 being modular. Of course you can argue dropping 1.8 support is big enough to be a 2.0 change as well.

What's the best way for everyone supporting Fog to throw ideas into a hat so us core folks can come up with milestones we can work to?

@ohadlevy
fog member
ohadlevy commented Nov 5, 2012

👎 to dropping 1.8.7 support. Is there a real issue with fog supporting 1.8.7 at the moment?

👍 to making it modular, probably as separate gems

@bradgignac
fog member

I'm with @ohadlevy on 1.8.7 support. I don't have any particular love for 1.8.7, and I don't use it personally. However, I've only ever had one bug related to supporting 1.8.7.

One other issue I'm dealing with for Rackspace (and potentially OpenStack) is determining the best method for handling API versioning. Is this something you guys have dealt with in the Brightbox or VMware providers? It might require some minor breaking changes to Rackspace, so I'd like to wait until 1.9/2.0 for the change.

@freeformz

Re: 1.8.7 .... http://www.ruby-lang.org/en/news/2011/10/06/plans-for-1-8-7/

So maybe not something we should care about right now, but soon.

I am for dropping 1.8.7 support ASAP though.

@freeformz

Related ...

👍 Ruby 2.0 support

@tokengeek
fog member

Re: 1.8.7 - I'm not actually keen on dropping 1.8.7 support. I fixed some tests with hashes that had been written in 1.9's new syntax but not found anything to warrant dropping support.

I mentioned that purely because I've seen it mentioned a few times on tickets and dropping 1.8.7 is something that all providers need to be aware of!

@bradgignac - We've been looking at the best way to do versioning but not had to worry about it too much for the moment. Most of our changes have been additive so far.

Part of me thinks that it could be linked into the modular work. So you can lock a script to an early version of fog-openstack and get 1.0 behaviour or a later version for 2.0.

Obviously, while we just have the single master fog version, it's difficult to tell customers they need to stick with fog 1.7.0 for OpenStack 1.0 support and then can't use a new AWS feature from 1.8.0!

I don't know whether it would ever be a use case that you would want to load and use two different versions of Rackspace/OpenStack at the same time. That might dictate if you create a new "provider" for the new version of the API so you can work with both.

@geemus wanted particular, smaller issues for breaking up fog into modules so I can put a few of them in.

Of course when we plan to do that comes back to setting up milestones.

@ahmeij
ahmeij commented Nov 5, 2012

Is a stricter API on the models not a prerequisite for splitting up and version management? I think that a more contract-based model will help define the fog standard.

Every offered service could have a 'fog standard model', like the server model now has, and those models could define a minimum implementation. Versioning could then happen on two levels: (1) the individual services can implement more of the existing model methods, or (2) new standard model methods can be added. This way fog can version independently from the services; however, following semver, breaking changes should require a major version number change.

So fog v1.9 will work with AWS 1.1 and AWS 1.14, but not with AWS 2.0, and vice versa.

Having this stricter model API would allow for (way) more generic apps/web frontends to control all the different services as if they were the same, just not with all features everywhere.

Just my two cents, feel free to ignore, I'm still a noob on fog development
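The 'fog standard model' contract idea above could be sketched along these lines. This is purely illustrative; the module, method list and `verify!` helper are hypothetical, not fog's actual API.

```ruby
# Hypothetical sketch of a "fog standard model" contract: a module that
# declares the minimum interface every compute Server implementation
# must provide, and a check that fails early when something is missing.
module FogStandardServer
  REQUIRED = [:flavor, :image, :public_ip_address, :save, :destroy, :ready?]

  # Raise if an implementing class forgot part of the contract.
  def self.verify!(klass)
    missing = REQUIRED.reject { |m| klass.method_defined?(m) }
    unless missing.empty?
      raise NotImplementedError, "#{klass} is missing: #{missing.join(', ')}"
    end
  end
end

# A toy implementation that satisfies the contract:
class ExampleServer
  def flavor; "small"; end
  def image; "ubuntu-12.04"; end
  def public_ip_address; "203.0.113.10"; end
  def save; true; end
  def destroy; true; end
  def ready?; true; end
end

FogStandardServer.verify!(ExampleServer) # passes; an incomplete class would raise
```

Providers could then add extra methods freely; only the `REQUIRED` set would be versioned as the fog standard.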

@nirvdrum
nirvdrum commented Nov 5, 2012

I'd love a unification of model APIs. Arguably this has been a goal all along. But, as an example, the DNS models all have different ways of looking up records. Likewise, some are list-based, some are single-record based. I'd like to audit all high-level interfaces for places where we could do better.

@ahmeij
ahmeij commented Nov 5, 2012

If I think about the server model (probably the most used model, and at least the one I have worked with most), I think an extra layer of standardization should be possible.

Almost all providers have flavors, datacenters and images, albeit with different properties. The server model could start by proposing a standard relation between the server, flavors, datacenters and images collections, and the setup and bootstrap logic could be standardized (overridden where need be).

A new server could be instantiated with a method requiring a flavor and an image (actual flavor and image objects), optionally a datacenter, and a hash of options that could be implementation specific. We could then instantiate, bootstrap and set up a new instance, storing the metadata and other properties and giving a unified way of accessing them.

Properties that are not supported by the implementation (server instance 'name', for example) could be stored in those attributes/metadata, and when requesting a server list from implementations not supporting these properties we could retrieve them by ssh'ing to the servers, obviously caching the result.

Some servers already have an attributes/metadata implementation. I have yet to figure out exactly how that works, but it could be standardized; retrieval of properties not available from the API (thus stored on the server in JSON) could be done lazily, on request of that info.

class Server
  attr_reader :flavor              # Flavor model
  attr_reader :image               # Image model
  attr_reader :datacenter          # Datacenter model (AWS region?)
  attr_reader :private_ip_address
  attr_reader :public_ip_address
  attr_reader :ip_address          # default ssh address, pointing to either public or private
  attr_reader :attributes          # hash of predefined fields, stored on the server where needed?
  attr_reader :metadata            # hash of freely definable info?

  def setup                        # do stuff to store authorized keys and other info
  end
end

Going from here I think we can define a good set of default behaviors for the compute related models. Likely all implementations will have 'extras' that are available, however, those should probably be identified as such, as not part of the official fog api.

Thinking about it, it might be easier to introduce a new layer next to the current 'requests' and 'models' since that is all there, and people are depending on the (currently a tad custom) implementations.

The images can probably also be split in two: standard fog-recognized images (Ubuntu 12.04 LTS 64bit, Debian X, Red Hat Y, etc.) and custom images, either specific to the customer or to the implementation. For AWS it would be nice to point these defaults to default AMIs specific to the active region.

Server flavors are probably too different to standardize, but we can standardize the available information (number of cores, clock speed in gigahertz, memory, disks).

Maybe a mockup of a set of classes describing a 'generic' compute implementation should be made first, describing the classes, methods and attributes and their interaction. Then we could hold this against the different implementations available now and see how far the mockup holds up.

I am willing to take a shot at making the mockup; however, I might not be the best suited for the job given my lack of fog development experience.
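A rough mockup of the kind of generic compute classes being proposed might look like this. All class names, fields and the `zone:` option here are hypothetical, chosen only to make the discussion concrete.

```ruby
# Standardized flavor info (cores, clock speed, memory, disk), plus stand-in
# Image and Datacenter models. Everything here is illustrative only.
Flavor     = Struct.new(:id, :cores, :ghz, :ram_mb, :disk_gb)
Image      = Struct.new(:id, :name)   # e.g. "Ubuntu 12.04 LTS 64bit"
Datacenter = Struct.new(:id)          # AWS region, for example

class GenericServer
  attr_reader :flavor, :image, :datacenter, :attributes, :metadata

  # Flavor and image are required; datacenter and any extra options are
  # optional and may be implementation specific.
  def initialize(flavor:, image:, datacenter: nil, **options)
    @flavor, @image, @datacenter = flavor, image, datacenter
    @attributes = {}          # predefined fields, possibly stored on the server
    @metadata   = options     # freely definable, provider-specific info
  end

  # Placeholder for the standardized bootstrap/setup logic; real providers
  # would override this to store authorized keys and other info.
  def setup
    @attributes[:setup_complete] = true
    self
  end
end

server = GenericServer.new(flavor: Flavor.new("small", 1, 2.0, 512, 20),
                           image: Image.new("ubuntu-12.04", "Ubuntu 12.04 LTS 64bit"),
                           zone: "eu-west")
server.setup
```

Holding a sketch like this against the existing provider implementations would show quickly where it breaks down.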

@geemus
fog member
geemus commented Nov 6, 2012

😱 Holy roadmap Batman! But seriously, my thoughts follow.

Dropping 1.8.7 would be nice in some ways, but mostly it doesn't seem to be a burden so I'm not too concerned.

Ruby 2.0 would be great, would be happy to throw it into the build list when travis starts supporting it.

Modularity is definitely one of the features that I want most. It seems like it could certainly help with the versioning issue and I presume it would greatly alleviate the pain of managing releases.

I think standardization is important and would agree we have drifted a bit. The idea, at least historically, was to have things like https://github.com/fog/fog/tree/master/tests/compute provide a basis of tests that ALL compute providers should pass (and that this would therefore define/enforce standardization). I think these tests are both not up to date and not as extensive as perhaps they should be, but it still seems like it may be the easiest way to make headway on this in the short term. I had envisaged these tests living in fog-core (which other things would have as a dependency and could then run these tests also). Anyway, lots of moving parts there, but that is what I had thought.
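The shared-tests idea could work roughly as below: fog-core ships a module of tests that every provider's suite mixes in, so passing them defines compliance. The module, method names and `subject` convention are assumptions for illustration, not the actual fog-core layout (and this uses minitest, per the discussion later in this thread, rather than the current shindo tests).

```ruby
require 'minitest/autorun'

# Hypothetical shared compute tests that would live in fog-core. Any
# provider including this module (and defining `subject`) runs them.
module SharedComputeTests
  def test_servers_responds_to_create
    assert_respond_to subject, :create
  end

  def test_servers_responds_to_all
    assert_respond_to subject, :all
  end
end

# A stand-in provider collection, just enough to pass the shared tests:
class FakeServers
  def create(*); end
  def all; []; end
end

# Each provider's suite would look something like this:
class FakeProviderServersTest < Minitest::Test
  include SharedComputeTests

  def subject
    FakeServers.new
  end
end
```

Keeping the module in one gem means tightening the standard is a single change that every provider's build picks up.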

Certainly happy to continue discussing any/all points. I've thought about modularity a bit and have at least a rough idea, I've just done a poor job of communicating it.

@tokengeek - thanks for starting the discussion, definitely should have happened, probably a while ago, so thanks for the nudge.

@tokengeek
fog member

@geemus Not a problem. We're starting to lean on Fog more internally so it benefits us to get it streamlined and not face any sudden surprises.

Can't argue that standardisation needs to be in place before we modularise things. Part of that will inform how we modularise things.

libcloud has load balancers as a separate service. It's difficult to say we've got that in Fog: Brightbox has it as part of Compute, Rackspace has a new LoadBalancer service, and AWS has it as its own branded ELB service.

If we are moving things around, then how we split these will have an impact on how backward compatible we can be.

So thus far we've got the following:

  • Clearer service definitions and interfaces (and tests)
  • Modularisation (remove Fog bin/core dependencies on providers being present, move providers to gems)
  • Ruby 2.0 support
  • Dropping Ruby 1.8.7 support

I've created issues for the first two so we can focus the discussions there and leave this open so we can add any extra points.

@geemus
fog member
geemus commented Nov 6, 2012

@tokengeek - sounds good, thanks again for pushing this forward.

@tokengeek
fog member

Other possible things we may want to consider.

1) Replacing shindo testing with minitest to reduce barriers to collaborating. Worth considering if we are starting to write new tests for compliance.

2) ActiveModel compliance. It seems like Fog's models work just enough like ActiveRecord to catch people out. Is it worth looking at doing this?

@nirvdrum
nirvdrum commented Nov 9, 2012

I really like the fact that fog has a very small dependency list. I wouldn't want to add ActiveModel as a dependency.

@geemus
fog member
geemus commented Nov 9, 2012

I'm also not keen on ActiveModel. I didn't know it was catching people out though, do you have examples?

@rupakg
fog member
rupakg commented Nov 9, 2012
@tokengeek
fog member

@geemus - We discussed this (not ActiveModel itself) waaaay back when I just started on the Brightbox provider.

Pretty much the second or third thing I did at the fog console was try to update an attribute and "save" it, which (back then) created a second server.

I think everyone I've spoken to has done something similar.

We added the raise-error code on saves to stop that, so it's no longer an issue.

That behaviour is of course not about ActiveModel itself. Generally, though, my discussion point is: are there any other behaviours people expect that we could easily get from ActiveModel?

I'm not sold myself on the idea. I'm just throwing ideas out for discussion 😃

I get the dependency argument by the way. I'm the guy moaning about lugging libxml around most of the time!

@tokengeek
fog member

re: ActiveModel. I hadn't seen this when I mentioned it but on the fog google group someone asked and implemented it separately - https://groups.google.com/forum/#!topic/ruby-fog/S8PS5ZTjV00/discussion

So it can be done (meaning we don't necessarily have to do it ourselves), and it is being done. So there's an example for you @geemus 😉

Really I was thinking more about ActiveModel compliance so fog's models implement the minimum required to pass the lint tests.

Then people can require fog and active model and include useful modules into the models.

So we wouldn't actually need to have it as a dependency.
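For reference, what "pass the lint tests" asks of a model is roughly the interface below. The real checks come from `ActiveModel::Lint::Tests`; this plain-Ruby stand-in just illustrates the contract, so fog could satisfy it without depending on the activemodel gem. The class name and errors stub are hypothetical.

```ruby
# A minimal model satisfying the ActiveModel lint contract: to_model,
# to_key, to_param, persisted?, and an errors object whose [] returns
# an array. Illustrative only -- not fog's actual model code.
class LintFriendlyServer
  # Minimal errors object: lint calls errors[:attr] and expects an array.
  class Errors
    def [](_attribute); []; end
    def full_messages; []; end
  end

  def to_model; self; end        # the object ActiveModel APIs should talk to
  def persisted?; false; end     # a new, unsaved record
  def to_key; nil; end           # nil while not persisted
  def to_param; nil; end         # nil while not persisted
  def errors; Errors.new; end
end
```

With that in place, anyone who wants the extras could require activemodel themselves and include modules like validations into fog's models.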

Anyway - keeping the discussion going.

I've discussed with @geemus switching away from shindo. Is minitest the popular choice? Test form or spec form?

@geemus
fog member
geemus commented Nov 12, 2012

@tokengeek - Good point. I think the save thing actually does the right thing for most models (non-servers). It is just a side effect of servers tending not to be updatable from the API post-boot. Perhaps we can do something to make this clearer (and/or make sure the raise is evident there), maybe just documentation. Compliance sounds interesting; depending on how much extra work it would take, I'd entertain that.

I'm partial toward minitest, test form, as it is very simple and to the point.
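For anyone unfamiliar with the distinction: the same check in minitest's two styles looks like this. The `Server` class is a stand-in for illustration, not fog's actual model.

```ruby
require 'minitest/autorun'

# A trivial stand-in model to test against:
Server = Struct.new(:name) do
  def ready?; true; end
end

# Test form -- classes and assert_* methods: simple and to the point.
class ServerTest < Minitest::Test
  def test_server_is_ready
    assert_predicate Server.new("db1"), :ready?
  end
end

# Spec form, for comparison (describe/it with expectations):
#
#   describe Server do
#     it "is ready" do
#       _(Server.new("db1")).must_be :ready?
#     end
#   end
```

Test form also maps more directly onto shared test modules mixed into provider suites, which may matter for the compliance tests discussed above.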

@rubiojr
fog member
rubiojr commented Nov 13, 2012

Hey guys,

Four LTS distributions (RHEL 5 and 6, Ubuntu 12.04 and 10.04) will probably never ship Ruby 1.9 (since 1.9 breaks compatibility), at least not as a replacement for the system ruby.

Not supporting 1.8.7 any more means that fog users using LTS releases will have to:

  • Use something like rvm/rbenv, third-party ruby packages, or not very well maintained 1.9 native packages (when available, as in Ubuntu 12.04), which is far from ideal and complicates deployments, especially if they mix in things like Puppet and/or Chef.
  • Upgrade/redeploy if they want to move to a newer fog version and they're using 1.8.7.
  • Break upstream support if they replace system ruby

I can definitely understand that it's a good idea to move forward and save some time in testing, but since the ruby 1.9 landscape in stable distributions is pretty ugly, it would be great to keep compatibility and save our ops guys some hair.

I definitely want to put my money where my mouth is, so I'll gladly take on any compat-related issue that you need me to fix.

@tokengeek
fog member

@rubiojr - Excellent point.

I think that dropping 1.8.7 support will end up pretty low down on the roadmap.

If anything this discussion probably means we need to document for contributors that decision and make sure that all patches and perhaps more importantly any third party dependencies remain supported.

Otherwise that might be overlooked when we break fog into smaller modules, and some providers might use 1.9-only stuff since we never really said they couldn't!

@rubiojr
fog member
rubiojr commented Nov 14, 2012

Thank you.

👍 to modular Fog. I did have a look at porting Fog to Ruboto a couple of months ago, and splitting Fog up would definitely help with that!

@jperry
jperry commented Nov 19, 2012

Hey guys,

I've been using activemodel in my project combined with fog, and it has worked out great so far. If this was built in, that would be great, but it's not required. At the least, if you didn't want to add a dependency on activemodel, documenting how someone would use fog with ActiveModel would be a good start.

As far as making Fog more modular, 👍 since I'm only interested in the aws stuff and only having to include the things I want would be great. I'd be happy to help out in any way for active model compliance or modular work.

Thanks!

@geemus
fog member
geemus commented Nov 19, 2012

Yeah, reducing the footprint will be good. FWIW, most of the library is not actually required when used (it loads provider-related stuff when you initialize a connection), but it is still a big download and often brings in more dependencies than you really need.
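The load-on-demand pattern described above can be sketched as follows. The module, registry and `connection` method are invented for illustration; fog's real loading logic is more involved.

```ruby
# Hypothetical sketch: providers register a file path, and that file is
# only required when someone actually initializes a connection to the
# provider. The rest of the library stays unloaded.
module MiniFog
  PROVIDERS = {}  # provider symbol => path to require

  def self.register(name, path)
    PROVIDERS[name] = path
  end

  def self.connection(provider)
    path = PROVIDERS.fetch(provider) do
      raise ArgumentError, "unknown provider #{provider}"
    end
    require path  # provider code loads here, on first use
    # real code would instantiate the provider's service class here
    provider
  end
end
```

Splitting providers into separate gems takes this one step further: the unused code is not merely unloaded but never installed.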

@tokengeek
fog member

Okay. I think it's time to do something about this.

The things we have mentioned all seem to converge around standardising interfaces and going modular.

Both have a pretty good chance of breaking something for someone unless we leave lots of stuff to be backwards compatible.

I mentioned to @geemus on #1418 that we should look at going to version 2.0 and making it a bigger, backwards-compatible version of fog. Then in version 3.0 we remove all the deprecated, backwards-compatibility stuff.

So what I'm proposing is that very soon (after fog 1.9) we declare master as where we work on fog 2.0.

fog 2.0:

  • #1418 A new Connection object is added wrapping Excon/Virtualisers
  • Mocking moved to lower level (connection)
  • #1252 Adds a set of clearer definitions for models, separate from or compatible with the current ones
  • Existing interfaces still work but are deprecated
  • #1266 Bring in minitest
  • #1253 Main files to require are left in place but forward to relocated files in a providers area
  • providers tests are isolated
  • fog's core code is extracted
  • fog-core is required by fog
  • providers code is rewritten to be tested with minitest and to use the newer interfaces/models

fog 3.0:

  • Remove deprecated interfaces for old models
  • Providers are removed from fog repo and referenced directly
  • Remove shindo

Version 2.0 should work exactly the same as 1.9, except with additions, so we can still release versions on a monthly basis and it should still be stable.

By deprecation I don't think we initially want to be outputting "this is deprecated" everywhere, but we document the code as such. We can add the deprecation output at a later stage.
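The staged deprecation described above might look something like this. The helper module and flag are hypothetical, not fog's actual mechanism; the point is that deprecated calls are marked from day one and the warning output is a switch we flip later.

```ruby
# Hypothetical staged-deprecation helper: deprecated methods are recorded
# (and documented) from the start; warnings only appear once `noisy` is
# flipped on at a later stage.
module Deprecations
  class << self
    attr_accessor :noisy   # set true when we're ready to warn users
  end
  self.noisy = false

  def self.record(method_name)
    (@seen ||= []) << method_name
    warn "[fog] #{method_name} is deprecated" if noisy
  end

  def self.seen
    @seen || []
  end
end

class OldModel
  # @deprecated use the new standard interface instead
  def legacy_lookup
    Deprecations.record(:legacy_lookup)
    :result
  end
end
```

Ripping deprecations out for 3.0 then becomes a grep for `Deprecations.record` rather than an archaeology exercise.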

People who need real stability can of course use the 1.x series. We can be a bit more experimental with 2.x.

By the end of 2.0 every provider can be maintaining their own code in their own repo OR a fog organisation repo using a hopefully cleaner, standard and isolated interface (defined in fog-core) with standalone test suites.

Version 3.0 is the end of the version 2.0 series with all the deprecation stuff ripped out.

With so many providers and their new services, it's going to be very difficult to switch stuff around once everyone has broken out into their own gems, OR to maintain a 1.x series branch that includes new regions/providers etc.

So does that sound like a plan?

I've tried to cover the major problems I can see - giving libraries a stable point to link to and giving developers time to work on this.

@geemus
fog member
geemus commented Jan 7, 2013

@tokengeek - seems reasonable to me, but then again I oversee more than I write code these days, so perhaps provider-level people will have more informed views about how impactful this would be for them.

@rubiojr
fog member
rubiojr commented Jan 7, 2013

👍

@tokengeek, I don't have enough experience in fog to weigh in on all the 2.0 issues detailed, but count me in to move forward with the fog/xenserver provider with whatever is required.

I definitely like the evolutionary approach and I'd like to thank you for taking the time to write such a great explanation.

I do have a few projects built around fog, so I'll be able to report breakage if that helps.

@rubiojr referenced this issue Apr 2, 2013
@cainlevy referenced commit e5c438a: "convert lib/fog to simply include all providers; this forces each provider to set up its own requires and share through fog/core."
@tokengeek added the summit label Mar 23, 2014
@tokengeek
fog member

Okay. Going to close this since big chunks have been done and we're going to revisit and discuss during the fog summit.

@tokengeek closed this Apr 22, 2014