
Weekly Meeting 2016 09 29


## Agenda

## Minutes

#ciao-project: Weekly_meeting

Meeting started by kristenc at 16:00:40 UTC. The full logs are available at ciao-project/2016/ciao-project.2016-09-29-16.00.log.html .

Meeting summary

Meeting ended at 16:52:25 UTC.

Action Items

  • revisit ciao-storage testing after sprint 3 is complete.

Action Items, by person

  • UNASSIGNED
    • revisit ciao-storage testing after sprint 3 is complete.

People Present (lines said)

  • kristenc (97)
  • markusry (48)
  • obedmr- (11)
  • mcastelino (5)
  • ciaomtgbot (3)
  • rbradford (2)
  • jvillalo (1)
  • sameo (1)

Generated by [MeetBot](http://wiki.debian.org/MeetBot) 0.1.4

### Full IRC Log

16:00:40 <kristenc> #startmeeting Weekly_meeting
16:00:40 <ciaomtgbot> Meeting started Thu Sep 29 16:00:40 2016 UTC.  The chair is kristenc. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:40 <ciaomtgbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
16:00:40 <ciaomtgbot> The meeting name has been set to 'weekly_meeting'
16:00:47 <kristenc> #topic Roll Call
16:00:53 <kristenc> o/
16:00:55 <jvillalo> o/
16:00:58 <rbradford> o/
16:01:08 <markusry> o/
16:01:47 <kristenc> this might be a short meeting today.
16:01:54 <kristenc> #topic Opens
16:01:57 <kristenc> anyone?
16:02:43 <rbradford> kristenc, none from me, but mark's just dashed out...
16:02:53 <markusry> He's back and has no opens
16:03:01 <kristenc> well - our schedule says opens last until 9:05 :)
16:03:46 <kristenc> ok, silence. let's do the bug triage then.
16:05:00 <kristenc> #topic Bug Triage
16:05:07 <kristenc> #link https://github.com/01org/ciao/issues?utf8=%E2%9C%93
16:05:17 <kristenc> there's all the new issues filed in the last week.
16:05:50 <markusry> All opened by me
16:05:55 <kristenc> #597 has a priority already.
16:06:00 <kristenc> shall we start at the bottom?
16:06:06 <markusry> Okay
16:06:13 <kristenc> https://github.com/01org/ciao/issues/606
16:06:18 <markusry> 606, 607, 608 are all related
16:06:25 * kristenc reads
16:06:30 <markusry> These are all storage issues
16:06:53 <markusry> Or rather enhancements we can make to our current implementation of storage
16:07:03 <mcastelino> o/
16:07:05 <kristenc> I agree with "enhancement" for 606 for sure.
16:07:13 <sameo> o/
16:07:15 <kristenc> markusry, what do you think, p2?
16:07:29 <markusry> Yes, I think so.  P2 for this sprint anyway
16:07:30 <kristenc> I would like to first get a single volume to work :).
16:07:49 <markusry> I was thinking we wouldn't look at them until the next sprint
16:08:01 * kristenc looks at 607
16:08:56 <kristenc> markusry, 607 seems like a p2 as well
16:09:05 <markusry> Yep, agreed.
16:09:44 <kristenc> markusry, 608 is tricky.
16:09:55 <kristenc> you are basically asking for multiattach support.
16:10:02 <markusry> Even for read only volumes?
16:10:30 <markusry> I entered it as the launcher code for handling volumes will race in multi attach right now
16:10:50 <markusry> So I wanted to keep track of this fact.
16:11:24 <markusry> I know controller won't ask launcher to do multi-attach but still
16:11:34 <kristenc> makes sense.
16:11:53 <markusry> Should we make it a P3 then, until we hear otherwise
16:11:59 <kristenc> I was thinking that.
16:12:03 <markusry> Until we get a real request to support this
16:12:27 <kristenc> I think we should consider more carefully how we treat read only volumes though.
16:12:48 <markusry> Yes.  Right now we have no handling for them
16:13:31 <kristenc> yeah - we just treat them like anything else.
16:13:55 <kristenc> i made a note to consider splitting that into 2 one day.
16:14:04 <markusry> Okay.
16:14:27 <markusry> Sounds like a good idea.
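For illustration of the race being discussed (this is not the launcher's actual code): two concurrent attach requests for the same volume can interleave, and the usual fix is to serialize attaches per volume. A hypothetical Go sketch; the type and method names are invented for this example:

```go
package launcher

import "sync"

// attachGuard is a hypothetical sketch (not ciao's launcher code) that
// serializes attach requests so two concurrent attaches of the same
// volume cannot race.
type attachGuard struct {
	mu       sync.Mutex
	attached map[string]bool // volume UUID -> already attached?
}

func newAttachGuard() *attachGuard {
	return &attachGuard{attached: make(map[string]bool)}
}

// attach runs do() at most once per volume, under the lock.
func (g *attachGuard) attach(volume string, do func() error) error {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.attached[volume] {
		return nil // a second attach becomes a no-op instead of a race
	}
	if err := do(); err != nil {
		return err
	}
	g.attached[volume] = true
	return nil
}
```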
16:14:29 <kristenc> ok, 609
16:14:48 <markusry> There seems to be some dispute as to whether this is a bug
16:15:04 <markusry> This isn't really my area but it looked weird
16:15:37 <kristenc> markusry, seems like a bug to me.
16:15:39 <markusry> I got this error on a cluster that had just been set up with ansible.
16:16:13 <kristenc> I think you should be able to list tenants - although I've always disagreed with using controller to list tenants. I think that request should go to keystone.
16:16:22 <kristenc> but it is implemented as a controller endpoint.
16:16:40 <kristenc> the admin user should be able to get the list of tenants.
16:16:54 <kristenc> I wonder if this is something I broke when I moved around all the ciao apis?
16:17:48 <kristenc> I guess I should take a look at this next week.
16:18:05 <markusry> Well, it's not too serious.
16:18:38 <kristenc> markusry, p2?
16:19:17 <kristenc> markusry, just curious, does it work ok on the test clusters that were setup without ansible?
16:19:22 <markusry> Sounds good.  I think leoswaldo is familiar with this code.  Maybe he could have a look?
16:19:42 <kristenc> just wondering if the problem is in the setup of keystone
16:20:01 <kristenc> I'll make a note to check that out and assign it to leoswaldo
16:20:08 <markusry> kristenc: I don't know.
16:20:48 <markusry> Let me add a note about how I set up the cluster
16:22:20 <kristenc> ok - 611
16:22:53 <kristenc> 611 sounds fairly straightforward.
16:23:02 <kristenc> should we call it a p2 and assign it to obedmr- ?
16:23:06 <markusry> Yes I think so.
16:23:09 <markusry> Yep sounds good.
16:23:17 <markusry> I don't see any reason not to do this.
16:23:25 <obedmr-> yep
16:23:48 <kristenc> ok - 613
16:24:13 <markusry> This is needed to run SingleVM in travis
16:24:53 <mcastelino> I think we should try and add this... looks like once we have this travis will be able to run single VM and maybe even BAT.... other issues have been fixed/worked around
16:24:59 <kristenc> markusry, since I know you and mcastelino are working on this now - would you say it's a P1?
16:25:17 <markusry> Sure, it should be easy.
16:25:33 <markusry> Best of all it's assigned to rbradford
16:25:45 <markusry> So P1 all the way :-)
16:25:52 <kristenc> heh.
16:25:59 <kristenc> drop everything!!!
16:26:03 <mcastelino> kristenc: one question... I know we implemented a fake identity service in controller... I think now that we have keystone container for Ciao we may want to use that for single VM
16:26:26 <kristenc> mcastelino, good - thanks for reminding me. I was thinking about this yesterday.
16:26:43 <kristenc> we can move single vm to a keystone container and just use the fake identity for unit tests.
16:26:45 <markusry> mcastelino: That might solve our race condition
16:27:11 <mcastelino> markusry: that was what I was thinking.. instead of faking it... we can use real keystone
16:27:32 <kristenc> mcastelino, and that would be a good test case for whether we can use the ceph demo container in single vm/travis as well.
16:27:35 <kristenc> for adding storage.
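For readers unfamiliar with the fake identity service mentioned above, here is a generic sketch (not ciao's actual testutil code) of how such a fake can be built in Go with httptest: a server that answers the Keystone v3 token request, so unit tests need no real keystone. The endpoint path matches Keystone's v3 API; the token and tenant values are illustrative assumptions:

```go
package fakeidentity

import (
	"net/http"
	"net/http/httptest"
)

// NewFakeIdentity returns a server that issues a canned token for any
// request to the Keystone v3 auth endpoint.
func NewFakeIdentity() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/v3/auth/tokens", func(w http.ResponseWriter, r *http.Request) {
		// Keystone returns the token ID in the X-Subject-Token header.
		w.Header().Set("X-Subject-Token", "fake-token")
		w.WriteHeader(http.StatusCreated)
		w.Write([]byte(`{"token": {"project": {"id": "test-tenant", "name": "test"}}}`))
	})
	return httptest.NewServer(mux)
}
```

The trade-off discussed in the meeting follows from this: a fake like the above only covers the happy path the unit tests need, while a real keystone container in Single VM exercises the actual wire format.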
16:28:34 <kristenc> ok, we are done with triage.
16:28:40 <kristenc> I guess we can scrub our bugs now.
16:28:54 <markusry> Sure we start with the Sprint 3 bugs
16:28:58 <markusry> Should, I mean
16:29:39 <kristenc> markusry, we should just do any P1 bugs, then P2s that are sprint 3 - seem ok?
16:29:48 <kristenc> p2 bugs.
16:29:49 <markusry> OKay.
16:30:01 <kristenc> normally we don't cover non-bugs (i.e. features)
16:30:18 <kristenc> so there are zero p1 bugs. hurray!
16:30:27 <kristenc> oops - forgot to set the topic.
16:30:33 <kristenc> #topic Bug Scrub
16:30:47 <kristenc> #link https://github.com/01org/ciao/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3AP1
16:31:10 <kristenc> I think our query for p2 in the agenda isn't quite right.
16:31:12 <kristenc> let me fix it.
16:31:51 <kristenc> #link https://github.com/01org/ciao/issues?q=is%3Aopen+is%3Aissue+label%3Abug+label%3AP2+milestone%3A
16:32:10 <kristenc> of course, I'm not sure how good we've been about assigning P2 bugs to a milestone.
16:33:23 <kristenc> https://github.com/01org/ciao/issues/566
16:33:30 <kristenc> obedmr-, any update on this one?
16:33:45 * obedmr- taking a look
16:34:14 <kristenc> obedmr-, 543 is also yours.
16:34:31 <kristenc> probably because you are the only person who bothers to add the milestone label :).
16:34:42 <obedmr-> will work on 566 this week
16:35:04 <kristenc> obedmr-, does test-cases test ciao-image now?
16:35:11 <obedmr-> kristenc: that's ok, for the 543, I may make some progress there next week
16:35:20 <obedmr-> kristenc: yes, I updated the pull request
16:35:40 <obedmr-> kristenc: I added it for the deleteImage function
16:36:16 <kristenc> obedmr-, I was talking about this issue here: https://github.com/01org/ciao/issues/571
16:36:32 <kristenc> meaning - we did not have unit testing enabled for ciao-image.
16:36:40 <kristenc> the openstack stuff is enabled.
16:37:07 <obedmr-> kristenc: o yeah, sure, will work on this next week, alongside the persistent data work I'm already doing
16:37:30 <kristenc> I was wondering if we could elevate the priority on that and make it a p2 and part of this sprint?
16:37:46 <kristenc> It makes me uncomfortable that we have so little unit testing on ciao-image.
16:38:10 <obedmr-> kristenc: agree, next week I can focus on testing and persistent datea
16:38:13 <obedmr-> *data
16:38:23 <kristenc> thanks obedmr- I'll update the issue.
16:38:27 <obedmr-> sure
16:39:17 <kristenc> ok - I think that's it for our bug scrub unless people want to go over p2s not assigned to milestone.
16:39:30 <kristenc> I don't really.
16:39:44 <markusry> getting ciao-image into SingleVM will help as well
16:39:54 <obedmr-> sure
16:39:56 <kristenc> ah - let me mark that one too.
16:40:53 <kristenc> assigned that one to sprint 3
16:41:52 <kristenc> ok - it looks like next on the agenda that tim put together was to discuss using the ceph docker image for testing with travis.
16:41:53 <markusry> kristen: Sorry, I just noticed we already had an issue for SingleVM and ciao-image
16:41:54 <markusry> 389
16:42:09 * kristenc checks
16:42:46 <markusry> Sorry, we can close one
16:43:19 <kristenc> markusry, oh yeah - and that answers my question about whether we have an issue filed for using ceph container in single vm.
16:43:39 <kristenc> markusry, let's leave yours open, but change this one to only deal with ceph, not both ceph and image.
16:43:52 <markusry> Okay sounds good.
16:44:50 <kristenc> markusry, I edited 389 to say Modify single VM to use a container for ceph support for the storage stack.
16:45:19 <markusry> Great
16:45:43 <kristenc> markusry, we do have a separate issue for travis support for ciao-storage as well.
16:45:58 <markusry> No. I didn't enter one
16:46:05 <kristenc> however, I'm wondering if since you are merging single vm into travis, just getting it to work in single vm is enough?
16:46:17 <kristenc> markusry, I entered one a couple weeks ago I think.
16:46:44 <kristenc> markusry, https://github.com/01org/ciao/issues/569
16:47:15 * kristenc assigns it to milestone sprint 3
16:47:17 <mcastelino> kristenc: in theory if it works in single VM on your host.. it should work on travis
16:47:31 <markusry> It's not the same as writing unit tests though, is it?
16:47:36 <kristenc> mcastelino, but I won't get unit testing for free if we enable it in single vm though, right?
16:47:52 <kristenc> markusry, writing of unit tests is not the problem - there are some unit tests.
16:48:03 <kristenc> the problem is that they require communication with a ceph cluster.
16:48:13 <kristenc> so I couldn't enable them with test-cases.
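As context for the constraint described above: a common way to keep cluster-dependent tests in-tree while skipping them on machines without ceph is an opt-in guard. A minimal Go sketch; the `CIAO_CEPH_TEST` variable and test name are hypothetical, not ciao's actual code:

```go
package storage

import (
	"os"
	"testing"
)

// TestCreateBlockDevice is a hypothetical ceph-backed test: it compiles
// and runs everywhere, but only talks to the cluster when explicitly
// opted in via an environment variable.
func TestCreateBlockDevice(t *testing.T) {
	if os.Getenv("CIAO_CEPH_TEST") == "" {
		t.Skip("skipping: requires a reachable ceph cluster")
	}
	// ... exercise the ceph-backed storage code here ...
}
```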
16:48:24 <markusry> Okay, but should we be adding things to Sprint 3?
16:48:38 <markusry> We now have 22 open issues
16:49:02 <kristenc> markusry, ah - good point. Thanks for reminding me. I confused "i wish this were done now" with "can we actually get it done". :)
16:49:33 * kristenc clears milestone
16:50:32 <markusry> I need to leave now I'm afraid
16:51:11 <kristenc> markusry, ok. I think we are wrapped up here. I think the conclusion of our discussion on storage/travis/unit testing is that it'll have to wait till after sprint 3.
16:51:31 <kristenc> are there any other topics to discuss? If not, I'll end the meeting.
16:52:08 <kristenc> #action revisit ciao-storage testing after sprint 3 is complete.