
LIFX : fix #2191, #2193 and other improvements #2201

Merged
merged 1 commit into master from lifx-fix on Sep 25, 2016

Conversation

kgoderis
Contributor

This should fix some concurrency issues.

@wborn It would be helpful if you could test this in your LIFX "swarm". I have set the number of polling retries to 4. I prefer not to make that a configurable parameter, as it would clutter the simple and clean configuration interface of the bulbs; instead I propose to set it to the value that works for you, since you have the biggest collection of LIFX bulbs I know of ;-)

Signed-off-by: Karel Goderis karel.goderis@me.com

@wborn
Contributor

wborn commented Sep 20, 2016

@kgoderis thanks for the quick response! I'll put the swarm to work with your updated code and report back on how well it all works. With 4 retries I already know that a bulb only very occasionally goes offline.

#2191 does not occur frequently. I'll try to reproduce it by adding/removing the bulbs many times.

I think when your code works with my setup, it should work with everyone's. I'm still running everything on my fast development PC instead of a less capable Raspberry Pi or similar, so this setup is a concurrency nightmare for your code. ;-)

@wborn
Contributor

wborn commented Sep 20, 2016

@kgoderis I still ran into a deadlock. I'll add some review comments on where I see room for improvement.

This is the thread dump I got of the deadlock:

Name: ESH-discovery-1
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@68f44d4b owned by: ESH-thingHandler-3
Total blocked: 0  Total waited: 14

Stack trace: 
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
org.eclipse.smarthome.binding.lifx.internal.LifxNetworkThrottler.lock(LifxNetworkThrottler.java:66)
org.eclipse.smarthome.binding.lifx.internal.LifxLightDiscovery.broadcastPacket(LifxLightDiscovery.java:229)
org.eclipse.smarthome.binding.lifx.internal.LifxLightDiscovery.doScan(LifxLightDiscovery.java:210)
org.eclipse.smarthome.binding.lifx.internal.LifxLightDiscovery$2.run(LifxLightDiscovery.java:144)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)


Name: ESH-thingHandler-3
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@55ef972f owned by: ESH-discovery-1
Total blocked: 0  Total waited: 9,858

Stack trace: 
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
org.eclipse.smarthome.binding.lifx.internal.LifxNetworkThrottler.lock(LifxNetworkThrottler.java:38)
org.eclipse.smarthome.binding.lifx.handler.LifxLightHandler.sendPacket(LifxLightHandler.java:601)
org.eclipse.smarthome.binding.lifx.handler.LifxLightHandler.access$12(LifxLightHandler.java:589)
org.eclipse.smarthome.binding.lifx.handler.LifxLightHandler$1.run(LifxLightHandler.java:556)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

@@ -25,8 +27,8 @@

     public final static long PACKET_INTERVAL = 50;

-    private static ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<String, ReentrantLock>();
-    private static ConcurrentHashMap<String, Long> timestamps = new ConcurrentHashMap<String, Long>();
+    private static Map<String, ReentrantLock> locks = Collections.synchronizedMap(new HashMap<String, ReentrantLock>());
@wborn wborn Sep 20, 2016
Contributor

This data structure still does not solve the issue that, as the Map grows, it starts returning the locks in a different order when values() is called. All threads should really acquire the locks in the same order to get rid of the deadlock.

You can clearly see it happen when you run a simple test printing Strings as map values. It kicks in even earlier than I thought. Especially when there are a lot of bulbs, the order starts changing a lot each time a new lock is added:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.commons.lang.StringUtils;
    import org.junit.Test;

    Map<String, String> keyValueMap = Collections.synchronizedMap(new HashMap<String, String>());

    @Test
    public void valuesTest()
    {
        String values = "";

        for (int i = 0; i < 40; i++)
        {
            keyValueMap.put("key" + i, "value" + i);
            // join the current iteration order of values() to show how it shifts
            values = StringUtils.join(keyValueMap.values(), ",");

            System.out.println(values);
        }
    }


The output is then:

value0
value1,value0
value1,value2,value0
value1,value2,value0,value3
value1,value2,value0,value3,value4
value1,value2,value0,value5,value3,value4
value1,value2,value0,value5,value6,value3,value4
value1,value2,value0,value5,value6,value3,value4,value7
value1,value2,value0,value5,value6,value3,value4,value7,value8
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value10
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value12
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value13,value12
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value14,value13,value12
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value15,value14,value13,value12
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value15,value14,value13,value12,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value15,value14,value13,value12,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value15,value14,value13,value12,value18,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value15,value14,value13,value12,value19,value18,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value20,value15,value14,value13,value12,value19,value18,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value23
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value24,value23
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value25,value24,value23
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value26,value25,value24,value23
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value26,value25,value24,value23,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value26,value25,value24,value23,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value34,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value35,value34,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value36,value35,value34,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value37,value36,value35,value34,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value37,value36,value35,value34,value38,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27
value1,value2,value0,value5,value6,value3,value4,value9,value7,value8,value37,value36,value35,value34,value39,value38,value11,value10,value22,value21,value20,value15,value14,value13,value12,value19,value18,value17,value16,value33,value32,value31,value30,value26,value25,value24,value23,value29,value28,value27

The Javadoc of synchronizedMap(..) suggests it might be a good idea to synchronize on the Map itself when iterating over it. I might give that a try tomorrow and see what happens.

     * It is imperative that the user manually synchronize on the returned
     * map when iterating over any of its collection views:
     * <pre>
     *  Map m = Collections.synchronizedMap(new HashMap());
     *      ...
     *  Set s = m.keySet();  // Needn't be in synchronized block
     *      ...
     *  synchronized (m) {  // Synchronizing on m, not s!
     *      Iterator i = s.iterator(); // Must be in synchronized block
     *      while (i.hasNext())
     *          foo(i.next());
     *  }
     * </pre>
     * Failure to follow this advice may result in non-deterministic behavior.

-    private static ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<String, ReentrantLock>();
-    private static ConcurrentHashMap<String, Long> timestamps = new ConcurrentHashMap<String, Long>();
+    private static Map<String, ReentrantLock> locks = Collections.synchronizedMap(new HashMap<String, ReentrantLock>());
+    private static Map<String, Long> timestamps = Collections.synchronizedMap(new HashMap<String, Long>());

     public static void lock(String key) {
         if (!locks.containsKey(key)) {
@wborn wborn Sep 20, 2016
Contributor

I don't think it is thread safe to first inspect the map and then add the key in the next statement. Most of the time it will work though. ;-) But in theory two threads may inspect the map, conclude that the key is not there, and then one thread would overwrite the other's value.

OK, but I now see it also gets the key again from the map in the next statement, so it should be less of an issue.
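For what it's worth, a minimal sketch of how this check-then-act race could be closed with an atomic putIfAbsent (an illustrative rework, not the code in this PR):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    public class ThrottlerLockSketch {

        private static final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        public static void lock(String key) {
            // putIfAbsent is atomic: if two threads race, only the first
            // mapping wins and both threads lock the same instance;
            // the loser's ReentrantLock is simply garbage collected
            locks.putIfAbsent(key, new ReentrantLock());
            locks.get(key).lock();
        }

        public static void unlock(String key) {
            ReentrantLock lock = locks.get(key);
            if (lock != null && lock.isHeldByCurrentThread()) {
                lock.unlock();
            }
        }
    }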

@kgoderis
Contributor Author

The Javadoc of synchronizedMap(..) suggests it might be a good idea to synchronize on the Map itself when iterating over it. I might give that a try tomorrow and see what happens.

mmh... my understanding was that synchronizedMap() does lock on both read and write operations, in contrast to the ConcurrentHashMap.

I fail to see why there is a deadlock in the first place. Does the order of .values() really matter? So, the deadlock occurs when a thread calls .lock(somestring), and then another thread does a .lock() on all locks (e.g. broadcast).

Imagine that order:

  1. lock(string) locks a single lock
  2. lock() iterates over all locks, and is stopped when getting to the individual lock that is already locked in 1.
  3. unlock(string) gets called by the first thread
  4. the lock() continues to lock the remaining locks

What are we missing here?

Putting a synchronized() block around the map itself is a solution, but I am wondering what performance impact that could have with a large number of bulbs.
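For reference, the textbook remedy for the acquire-all path is a deterministic global acquisition order, e.g. locking in sorted key order. A minimal sketch with hypothetical lockAll/unlockAll helpers (illustrative only, not the binding's actual code):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    public class OrderedLockingSketch {

        private static final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        // acquire every lock in sorted key order; threads locking
        // overlapping sets then cannot wait on each other in a cycle
        public static List<ReentrantLock> lockAll() {
            List<String> keys = new ArrayList<>(locks.keySet());
            Collections.sort(keys);
            List<ReentrantLock> acquired = new ArrayList<>();
            for (String key : keys) {
                ReentrantLock lock = locks.get(key);
                if (lock != null) {
                    lock.lock();
                    acquired.add(lock);
                }
            }
            return acquired;
        }

        public static void unlockAll(List<ReentrantLock> acquired) {
            // release in reverse acquisition order
            for (int i = acquired.size() - 1; i >= 0; i--) {
                acquired.get(i).unlock();
            }
        }
    }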

@wborn
Contributor

wborn commented Sep 21, 2016

@kgoderis I refactored the LifxNetworkThrottler a bit myself now to see if I can make the deadlock go away. I've reinstated the ConcurrentHashMap because I really noticed degraded performance due to the synchronizedMap.

Also I've introduced an additional CopyOnWriteArrayList for storing/getting the locks that would normally be returned by .values(). That should take care of any influence put(..) operations might have on .values(). So far no deadlocks... I'll keep testing this for a couple of days.

The bulbs do stay online much better again with the 4 retries, and there are only occasional messages about bulbs that are missing in action. 👍
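A minimal sketch of the refactoring described above: a ConcurrentHashMap for key lookups plus a CopyOnWriteArrayList whose snapshot iterators give the lock-everything path a stable view, so concurrent put(..) operations can no longer reshuffle the iteration order (an assumed shape, not necessarily the exact code of the follow-up PR):

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.locks.ReentrantLock;

    public class LifxNetworkThrottlerSketch {

        private static final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
        // every lock is also appended here; iterating this list yields a
        // consistent snapshot even while new locks are being added
        private static final List<ReentrantLock> lockList = new CopyOnWriteArrayList<>();

        public static void lock(String key) {
            // the mapping function runs atomically and at most once per
            // key, so each lock is appended to the list exactly once
            locks.computeIfAbsent(key, k -> {
                ReentrantLock lock = new ReentrantLock();
                lockList.add(lock);
                return lock;
            }).lock();
        }

        public static void lockAll() {
            for (ReentrantLock lock : lockList) {
                lock.lock();
            }
        }

        public static void unlockAll() {
            for (ReentrantLock lock : lockList) {
                if (lock.isHeldByCurrentThread()) {
                    lock.unlock();
                }
            }
        }
    }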

@kgoderis
Contributor Author

Also I've introduced an additional CopyOnWriteArrayList for storing/getting the locks that would normally be returned by .values(). That should take care of any influence put(..) operations might have on .values(). So far no deadlocks... I'll keep testing this for a couple of days.

I too found some documentation on ConcurrentHashMap vs synchronizedMap performance; it should be about a factor 4 difference.

OK, just do a PR on my repo so that I can integrate it all in a clean PR on the main repo.

@wborn
Contributor

wborn commented Sep 21, 2016

I think the root cause of the deadlock is that the ConcurrentHashMap resorts to tricks to keep things highly concurrent: http://www.ibm.com/developerworks/java/library/j-jtp07233/index.html

ConcurrentHashMap achieves higher concurrency by slightly relaxing the promises it makes to callers. A retrieval operation will return the value inserted by the most recent completed insert operation, and may also return a value added by an insertion operation that is concurrently in progress (but in no case will it return a nonsense result). Iterators returned by ConcurrentHashMap.iterator() will return each element once at most and will not ever throw ConcurrentModificationException, but may or may not reflect insertions or removals that occurred since the iterator was constructed. No table-wide locking is needed (or even possible) to provide thread-safety when iterating the collection. ConcurrentHashMap may be used as a replacement for synchronizedMap or Hashtable in any application that does not rely on the ability to lock the entire table to prevent updates.

@wborn
Contributor

wborn commented Sep 21, 2016

I also think your org.openhab.binding.oceanic.internal.SerialPortThrottler has the same issues ;-)

@kgoderis
Contributor Author

I also think your org.openhab.binding.oceanic.internal.SerialPortThrottler has the same issues ;-)

Yeah, I know, but that is on a single serial port, and normally there is no more than 1 thread accessing the throttler.

@kgoderis
Contributor Author

@wborn Any further observations?

@wborn
Contributor

wborn commented Sep 22, 2016

The deadlock no longer occurred after restarting OH dozens of times with my refactored version of the LifxNetworkThrottler over the last few days. So I'll put in a PR on your repo.

There is still a ConcurrentModificationException when bulbs come back ONLINE after they have been OFFLINE.

21:13:31.788 [ERROR] [inding.lifx.handler.LifxLightHandler] - An exception orccurred while communicating with the bulb
java.util.ConcurrentModificationException
    at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)[:1.8.0_101]
    at java.util.HashMap$KeyIterator.next(HashMap.java:1461)[:1.8.0_101]
    at org.eclipse.smarthome.binding.lifx.handler.LifxLightHandler$1.run(LifxLightHandler.java:459)[214:org.eclipse.smarthome.binding.lifx:0.9.0.201609221856]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_101]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745)[:1.8.0_101]

It is easy to reproduce. Just disconnect power from a bulb. Wait for it to go offline. Then restore power to the bulb. When it goes online the exception occurs.

Also I wonder why LifxLightHandler.sendPacket uses macAddress.getAsLabel() and not macAddress.getHex()? The latter would be a less expensive operation.

I still see some "A message with sequence number '54' has already been sent to the bulb. Is it missing in action?" messages, even though my bulbs remain online. Do you also have those? They could of course be valid when there is no response. But I'd rather see them gone when they are false positives due to some software bug. :-)

@kgoderis
Contributor Author

Also I wonder why LifxLightHandler.sendPacket uses macAddress.getAsLabel() and not macAddress.getHex()? The latter would be a less expensive operation.

No particular reason but you are right ;-)

@kgoderis
Contributor Author

There is still a ConcurrentModificationException when bulbs come back ONLINE after they have been OFFLINE.

Wouter, I was unable to reproduce this. Apart from that, given that a lock is acquired in the run(), I am wondering how that CME is possible in the first place. Did you investigate it further?

@kgoderis
Contributor Author

I still see some "A message with sequence number '54' has already been sent to the bulb. Is it missing in action?" messages, even though my bulbs remain online. Do you also have those?

I have fewer of these since we fixed the error you found. The sequence number is between 1 and 255, and when a lot of messages are sent to a bulb, the sequence number wraps past 255 and starts again at 1. If previously sent messages are still stuck in the sent queue (because they got no answer, for whatever reason), then the warning is produced. To overcome this annoying message/situation, we could opt for a mechanism whereby we use the timestamp of the sent packets in the sent queue (a stamp is created when the class instance is instantiated) and retire these packets after a given amount of time, so that the queue is cleared over time. For example, do not keep sent packets older than 60 seconds?
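A minimal sketch of such an expiry mechanism, assuming a hypothetical sent queue keyed by sequence number and a timestamp recorded when the packet instance is created (names are illustrative, not the binding's actual API):

    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SentPacketQueueSketch {

        private static final long MAX_PACKET_AGE_MILLIS = 60000; // retire after 60 seconds

        // hypothetical sent queue: sequence number -> packet awaiting a response
        private final Map<Integer, Packet> sentPackets = new ConcurrentHashMap<>();

        // drop packets that never received a response and are older than
        // the cutoff, so a wrapped sequence number can be reused without
        // triggering the "missing in action" warning
        public void retireStalePackets() {
            long cutoff = System.currentTimeMillis() - MAX_PACKET_AGE_MILLIS;
            Iterator<Map.Entry<Integer, Packet>> it = sentPackets.entrySet().iterator();
            while (it.hasNext()) {
                if (it.next().getValue().getTimeStamp() < cutoff) {
                    it.remove();
                }
            }
        }

        // illustrative stand-in for the binding's packet type
        interface Packet {
            long getTimeStamp();
        }
    }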

@wborn
Contributor

wborn commented Sep 24, 2016

Wouter, I was unable to reproduce this. Apart from that, given that a lock is acquired in the run(), I am wondering how that CME is possible in the first place. Did you investigate it further?

I have not yet investigated this issue but I can give it a try.

I did have a look at:

I still see some "A message with sequence number '54' has already been sent to the bulb. Is it missing in action?" messages, even though my bulbs remain online. Do you also have those? They could of course be valid when there is no response. But I'd rather see them gone when they are false positives due to some software bug. :-)

After tweaking the logging and filtering requests/responses per bulb, my conclusion is that these are not false positives but indeed packets that never got a response.

To overcome this annoying message/situation, we could opt for a mechanism whereby we use the timestamp of the sent packets in the sent queue (a stamp is created when the class instance is instantiated) and retire these packets after a given amount of time, so that the queue is cleared over time. For example, do not keep sent packets older than 60 seconds?

I think we should address the issue of packets that get lost in another (new) issue. For developers working on the binding it is useful to know that packets did indeed get lost. If we were to do something about it in this issue, I think changing the log level to debug would be the simplest short-term solution for now. End users will most of the time just be nagged by the warning.

Also it really depends on how bad it is that a packet got lost. Most of the packets sent to bulbs just continuously query their current state (GetRequest/GetServiceRequest/GetLightPowerRequest). When there is no response, there will be another request several seconds later that will most likely get a response. If not, the bulb will most likely be offline and the end user will see it go from ONLINE to OFFLINE some time later anyway.

When a packet that is sent via handleCommand(..) gets lost, it is an issue. I sometimes see this happen in my own network when a bulb does not go on/off. But then I would rather have the binding recover from it by resending the command instead of logging warnings. ;-)

Even with handleCommand(..) packets it is sometimes undesirable to recover every lost packet. When a creative end user uses the binding to execute cool lighting effects, I think the binding should only make sure that the last command is successfully executed.

So I think we are done with issues #2191 and #2193 when the ConcurrentModificationException is resolved.

@wborn
Contributor

wborn commented Sep 24, 2016

Wouter, I was unable to reproduce this. Apart from that, given that a lock is acquired in the run(), I am wondering how that CME is possible in the first place. Did you investigate it further?

I have not yet investigated this issue but I can give it a try.

OK, I also had some trouble reproducing it consistently. So I reprogrammed the binding to automatically throw a bulb offline each time. This increased the chance that the ConcurrentModificationException occurs. Apparently it occurs due to the registration of the unicastChannel with the selector in handlePacket(..).

    private void handlePacket(Packet packet, InetSocketAddress address) {
...
            if (packet instanceof StateServiceResponse) {
...
                                unicastKey = unicastChannel.register(selector,
                                        SelectionKey.OP_READ | SelectionKey.OP_WRITE);
...

According to the call hierarchy, this method is actually called from within the iteration over selector.selectedKeys() in run(). So it makes sense for a ConcurrentModificationException to occur.

Now I just need to come up with a solution.
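A common pattern for this kind of CME, sketched against a hypothetical selector loop (not necessarily the fix that ended up in the follow-up PR): never register channels from within the key-set iteration, but queue the registration and perform it in the selector thread before the next select().

    import java.nio.channels.SelectableChannel;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.Iterator;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class SelectorLoopSketch {

        private final Selector selector;
        // registrations requested from handlePacket(..) are queued here
        // instead of touching the selector's key set mid-iteration
        private final Queue<SelectableChannel> pendingRegistrations = new ConcurrentLinkedQueue<>();

        public SelectorLoopSketch(Selector selector) {
            this.selector = selector;
        }

        // called from handlePacket(..): defer instead of registering directly;
        // the channel is assumed to be configured non-blocking already
        public void requestRegistration(SelectableChannel channel) {
            pendingRegistrations.add(channel);
            selector.wakeup(); // unblock select() so the registration happens promptly
        }

        public void run() throws Exception {
            while (!Thread.interrupted()) {
                // perform deferred registrations outside any key-set iteration
                SelectableChannel channel;
                while ((channel = pendingRegistrations.poll()) != null) {
                    channel.register(selector, SelectionKey.OP_READ | SelectionKey.OP_WRITE);
                }
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // dispatch to handlePacket(..), which may call requestRegistration(..)
                }
            }
        }
    }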

@wborn
Contributor

wborn commented Sep 25, 2016

Karel, I put in a new PR on your repo that should fix the CME. I also made some minor logging improvements and used getHex when sending packets. That should resolve all remaining issues of #2191 and #2193.

If you want, you can do something about the nagging warnings in case of lost packets. I'm OK with putting it on debug or not doing anything about it for now. I would rather see effort go into a structural solution, something like what I outlined in one of my comments above.

… cheaper getHex instead of more expensive getAsLabel when sending packets

LIFX : Fix eclipse-archived#2191
LIFX : Fix eclipse-archived#2193

Also-by: Wouter Born <eclipse@maindrain.net>
Signed-off-by: Karel Goderis <karel.goderis@me.com>
@kgoderis
Contributor Author

@kaikreuzer This LGTM and contains necessary improvements for the LIFX binding. Thanks to @wborn for providing insights and fixes.

@kgoderis kgoderis changed the title LIFX : fix #2191 and #2193 LIFX : fix #2191, #2193 and other improvements Sep 25, 2016
@kaikreuzer
Contributor

Thank you both!

@kaikreuzer kaikreuzer merged commit b35f749 into eclipse-archived:master Sep 25, 2016
@kgoderis kgoderis deleted the lifx-fix branch October 4, 2016 18:51
@kaikreuzer kaikreuzer added this to the 0.9.0 milestone Nov 30, 2017