Python updates #53
Conversation
This reverts commit f2904a9.
This reverts commit 59b14aa.
…evice, also thought about how multiple adapters might need to be implemented
Gevent, PyCrypto and realpath are now required for the Python library. realpath is used to locate the serial number automatically on Linux. Crypto functions are now handled in a separate greenlet and use the PyCrypto AES implementation. Output functions are also handled in a separate greenlet, and output can be disabled. Output now displays all sensor readings, packets received and packets decrypted. On a Raspberry Pi I can get ~130 packets per sec, decrypted and stored; I'm not sure if this is good or not. Removed "extra" code, let me know if I broke something.
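A minimal sketch of the greenlet layout this commit describes, assuming gevent queues between the reader and the two workers. The queue and function names are illustrative, and the key is a placeholder: emokit derives the real AES key from the headset serial number, which is not reproduced here.

```python
import gevent
from gevent.queue import Queue
from Crypto.Cipher import AES  # PyCrypto

raw_packets = Queue()        # encrypted 32-byte reports from the dongle
decrypted_packets = Queue()  # plaintext packets ready for parsing

def crypto_greenlet(key):
    """Decrypt each 32-byte report as two AES-128-ECB blocks."""
    cipher = AES.new(key, AES.MODE_ECB)
    while True:
        raw = raw_packets.get()
        decrypted_packets.put(cipher.decrypt(raw[:16]) + cipher.decrypt(raw[16:]))

def output_greenlet(display=True):
    """Print decrypted packets; output can be turned off, as in this commit."""
    while True:
        packet = decrypted_packets.get()
        if display:
            print(repr(packet))

key = b'\x00' * 16  # placeholder; the real key comes from the serial number
workers = [gevent.spawn(crypto_greenlet, key), gevent.spawn(output_greenlet)]
gevent.joinall(workers, timeout=1)  # in the library these run for the whole session
```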
This reverts commit 0fbbd83.
…working on it. Also added support for unknown quality masks; we still have to add some bit masks, as several are missing. Will work on this soon as well.
Removed TP7 and TP8; they do not report values.
It also updates at 128 frames per sec, as it should.
…06, Y should be 105. Fixed up the render.py example. It is real-time on my computer. I did disable the quality, as it was broken and I haven't fixed it yet. You could probably write this same application just using the current sensor data rather than storing the packets and popping them.
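For reference, a tiny sketch of the gyro decoding these commits settle on. The rest offsets (106 for X, 105 for Y) come from the commit above; the byte positions 29 and 30 are an assumption based on emokit's reverse-engineered packet layout.

```python
def gyro_xy(data):
    """Return (x, y) gyro deltas from one decrypted 32-byte packet."""
    gyro_x = data[29] - 106  # byte position assumed; 106 is the X rest value
    gyro_y = data[30] - 105  # byte position assumed; 105 is the Y rest value
    return gyro_x, gyro_y
```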
Made the example in a code box.
Added an example.py with the example from the README. Cleaned up emotiv.py and fixed running it as __main__. Turned off output by default.
… slightly modified. read.py is being deleted as a result.
… user has mostly because I don't have access to the dev headset.
Fixed Python Library
Hi, I'm also writing a Python library for accessing Emotiv headsets. It is based on the reverse-engineered protocol emokit provides. It doesn't use hidapi but uses pyusb 1.0 to communicate with the device directly. It can autodetect the dongles based on some heuristics, without needing an explicit VID/PID. It is pretty immature but works. I also tested it on Raspberry Pi ARM and was able to plot gyro X and Y with matplotlib. I just wanted to let you know about it; here is the project: https://github.com/ozancaglayan/EmotivBCI Thanks
Very nice, I may use some of that code in this repo after testing it, if you don't mind.
Cool! I'll try to add a link to the README soon.
Er, link to your project /in/ the emokit README, I mean. :)
Hi, you can always use the code in my repo. Currently there's no multithreading or cooperative multithreading like you implemented with gevent, but I will try every little piece of optimization, as the code will run on the Raspberry Pi, which is quite slow.
Yeah, my code runs pretty well on the RPi; it is able to keep up with 128 frames/sec. I believe I just figured out that the last byte is the FFT data as well. It's pretty much the same thing as the gyros: value = data[31] - 104. I believe the counter represents the frequency, repeating or counting up one way and down the other. The FFT implementation that I have looks like I may have it reversed, or they have some shifting going on. It does look like that's what the byte is used for, though.

I am almost done with the first incarnation of pattern recognition, although I'm not sure if I should release it if I get it to actually work. I read the thread about hacking headsets on their forums today. I bought this product with the intent to see its capabilities/quality. Then, if it could accomplish what I wanted it to do, I would buy the dev license, but from what I read on the forums that doesn't give you raw access to the data either. So what's the point? My headset is falling apart after only a few weeks, and I have been gentle. If I make an application that works as well as or better than their control panel apps, should I release it? Do you think they would go after me for doing so?
Well, as far as I know the FFT is not computed on hardware; it's part of their SDK and control panel, but I am not totally sure. The Education and Research SDKs give you raw data; I don't remember about the Dev one. But on the dongle side they are all the same, since you can read raw EEG from your consumer kit, right? I am really not sure about the legal stuff you've asked about, but apparently they didn't go after the emokit people. And if they decide to do that, releasing it as a GUI application or a Python library won't make any difference; they will only care about the reverse-engineered protocol spec, which is the only thing needed to implement what we are doing.
I only thought of that because of this response to a question in the forums: "At 128 Hz you are absolutely limited to a maximum of 64 Hz as the maximum frequency you can measure. Higher frequencies will also be measurable in a BAD way - by wrapping around and aliasing back into the 0-64 Hz range, so for example if you have a lot of electrical mains interference in a 60 Hz country, you will see a spike at 60 Hz, another at 8 Hz (128 Hz - 120 Hz, first harmonic), another at 52 Hz (180 Hz - 128 Hz, second harmonic) and so on. Luckily our filtering fixes most of this before we convert to 128 Hz, but the cost is the effective bandwidth of EPOC runs up to about 43-45 Hz because of the 50 + 60 Hz notch filtering we apply." Which leads me to believe it is being transmitted from the headset. If the value was calculated from the other sensor values after the fact, it probably wouldn't be limited by 128 Hz.
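The arithmetic in that quote folds every input frequency back into the 0-64 Hz band. A small sketch that reproduces the 60/8/52 Hz figures (the function name is mine, not from the forum post):

```python
def aliased_frequency(f, fs=128.0):
    """Where a tone at f Hz appears after sampling at fs Hz."""
    f = f % fs                          # spectra repeat every fs
    return fs - f if f > fs / 2 else f  # fold back into 0..fs/2

# 60 Hz mains stays at 60 Hz; its 120 Hz harmonic folds to 8 Hz and the
# 180 Hz harmonic wraps to 52 Hz, matching the forum answer above.
for f in (60, 120, 180):
    print(f, "Hz ->", aliased_frequency(f), "Hz")
```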
Maybe you are right. I also see this in the same post: "I also note you aren't using a sample window function. The FFT stitches your sample together end to end, and if there is a numerical mismatch between the last sample and the first one, the FFT sees a step function at the join, and I don't need to tell you about the effects of a step function in a Fourier transform... You should apply a tapered window function which scales the ends of the sample to zero and does not rescale the middle of the sample. Try Hanning or Hamming windows - these are designed to taper smoothly and with minimal effect on higher frequencies."
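A minimal sketch of that advice using numpy: taper one 128-sample frame with a Hann window before taking the FFT. The function and frame length are illustrative, not emokit code.

```python
import numpy as np

def windowed_spectrum(samples, fs=128.0):
    """Magnitude spectrum of one frame, with a Hann taper applied first."""
    frame = np.asarray(samples, dtype=float)
    frame = frame - frame.mean()             # remove the DC offset
    frame = frame * np.hanning(len(frame))   # scale the frame's ends to zero
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(frame))
```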
@bschumacher are you able to get contact quality readings with your code? I always get 0 with mine.
@bschumacher ah, I see. If you don't put an electrode in the P3 position (not annotated as a data channel anywhere), you do not get any data/quality data. It seems that it's where the signals are referenced.