Setting up ELK to work with Cowrie #450
@bontchev Hey. Since I've spent some time on setting up Kibana, I'll try to help you with it. First of all, could you please specify which ELK stack version you use (5.x or below)? Did you validate your logstash config? (for logstash 5.x:
Nope, Kibana can visualize all data that is indexed by Elasticsearch.
It's not a problem for Logstash and Filebeat; they can detect log rotation.
@fe7ch Thanks for being willing to help me. I use version 5.2.0, the latest I can get. Trying to validate the setup complains about the Java or Ruby version, I think:
There is nothing in
A quick googling of this message says that you need to update Java to version 8 (elastic/logstash#6397).
OK, I installed Java version 1.8.0_121 and made it the default. Now trying to validate the config file produces a different error message - about a missing Also, errors about
Not the ideal way, but it will do the trick:
I think the config file validates, although I still get warnings:
Still nothing created in the
So the config is okay. Are there any log files in /var/log/logstash/? Any errors listed?
Yes, there is a constantly updated log file there. It's full of errors like this:
The elasticsearch service isn't running, I guess? Yep, that's it:
2 Gb of memory is not enough?! Bummer... Won't work on this virtual machine, then. I'll try it on a physical one on Friday.
The amount of RAM that elasticsearch will use is controlled by the Xms/Xmx heap parameters. For testing purposes you may set it to 1 Gb or less, but the value in both parameters should be the same. How much RAM does your VM have?
The VM has 2 Gb memory. I changed that parameter to "1g" on both lines and tried to start the services. I have no experience with Java - is it so memory-hungry? I thought it was supposed to be able to run on embedded devices and stuff...
I don't know about Java in general, but elasticsearch is. When you start elasticsearch it will try to lock the amount of memory you specified. Furthermore, swapping of elasticsearch's memory dramatically reduces its performance, which is why you should also configure the OS so that it won't swap elasticsearch's memory. Also, the amount of memory you give to elasticsearch should not exceed half of the VM's RAM. All steps necessary to start elasticsearch are described in the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html . I recommend going through every step carefully; then it will be just a matter of configs to make it all (elasticsearch, logstash, kibana) work together.
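For reference, the no-swap setting mentioned above looks roughly like this (a sketch; the path and the 5.x option name are as I recall them from the Elastic docs, so verify against your version):

```yaml
# /etc/elasticsearch/elasticsearch.yml (path assumed)
# Ask Elasticsearch to lock its heap in RAM so the OS cannot swap it out
bootstrap.memory_lock: true
```

On systemd-based distributions the service unit also needs LimitMEMLOCK=infinity in an override, otherwise the lock request fails at startup.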
Goddammit... The VM is completely borked now. :( Is there a way to disable a service from starting at boot time in Linux - by holding a key or something? I can't open a terminal to do it the "normal" way. :( There is no point trying any more with ELK on this VM. Clearly, 2 Gb RAM is not enough. I'll try it on a physical one. But I'd like to be able to regain control of this one, if possible - at least for running Cowrie on it...
@bontchev Boot the VM from a CD and edit what you need?
@katkad, I found a better way. I remembered that you can modify a VM's "hardware" while it is turned off. So, I gave it 4 Gb RAM. That's near the limit of what my 8 Gb host can handle, but it was enough to boot it successfully, open a terminal and disable the starting of these services at boot time.
OK, I spent most of Friday trying to get this crap to work. Failed. Ready to give up - I suspect the problem is that elasticsearch simply won't run on my hardware. I thought it was an 8 Gb RAM machine but, apparently, it only has 4 Gb RAM. Here is the output of
At this point (Cowrie and Logstash started but Elasticsearch not started) shouldn't Logstash begin to produce output in the output file? After validating the Elasticsearch config file (it validates OK but with warnings) and trying to start Elasticsearch:
but the machine slows down to a crawl and the memory now looks like this:
And still no logfile created. Running
I give up. I can't get an 8 Gb RAM Linux machine for this task right now. I guess ELK is just not for me.
@bontchev Have you looked at the java memory settings at all for that machine you are trying to get ELK running on? I had to increase the memory given to java to get the services to run. According to the docs you want to give java half of the machine's total memory. You said earlier that you have an 8 gig RAM machine, so you should look at setting the memory allocated to java to 4 gigs. If your machine really only has 4 gigs, then set java memory to 2 gigs. You can make that change in the file below:
find the section with the heap size settings and change them to the new values.
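The section being referred to is presumably the heap-size flags in jvm.options; a sketch of the change (the file path and the exact values are assumptions based on the discussion above, not the literal file):

```
# /etc/elasticsearch/jvm.options (path assumed; logstash has a similar file)
# Initial and maximum heap - keep both equal, and at most half the machine's RAM
-Xms4g
-Xmx4g
```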
I didn't read the whole conversation; it seems this has already been covered. But I'm going to leave my previous comment there in case it helps out somebody else.
@funtimes-ninja, I thought my physical Linux machine had 8 Gb RAM - but, as noted above, it only has 4.
You probably want to limit Xmx rather than Xms :)
I set both to the same value.
Something keeps bothering me. OK, so I can't set up Elasticsearch due to insufficient memory. And Kibana needs it to work, so it wouldn't work, either (although I haven't checked). But what about Logstash? Why isn't it creating the output file? After digging around, I finally located the Logstash log file. Who creates this directory? Should I change its owner to the logstash user?
In fact, ES requires that both values be equal.
If you want it to work without ES, you should comment out the elasticsearch block in the output section of logstash's config.
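Assuming the output section of the sample logstash-cowrie.conf looks something like the sketch below, disabling ES means commenting out the elasticsearch block while keeping the file output (the hosts value and exact structure are assumptions, not the literal shipped config):

```
output {
    if [type] == "cowrie" {
        # Commented out so logstash can run without a reachable ES instance:
        # elasticsearch {
        #     hosts => ["localhost:9200"]
        # }
        file {
            path => "/tmp/cowrie-logstash.log"
        }
    }
}
```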
I believe it must be owned by logstash, but I won't be able to check before Monday. I'm not sure about cowrie, but I had a VM with a single CPU core and 4 Gb RAM that successfully hosted elasticsearch + logstash + kibana. So your ELK stack probably needs to be tuned for better performance.
OK, I manually changed the ownership of that directory to the logstash user. However, the other log,
Isn't this some kind of problem in the format of Cowrie's JSON log?
When everything is configured properly from the start, you don't need to do it.
It's a known issue when the json becomes corrupted due to many concurrent attacker sessions (#230).
Managed to get Elasticsearch started on another physical machine (also 4 Gb RAM but a bit more free) by setting both kinds of memory to 1g in the config file. Well, it starts:
Doesn't mean that it works, alas. :-( After a while, I keep getting stuff like this in
I guess that means that Elasticsearch has died in creative ways, probably because of insufficient memory. How did you get this crap to work on a 4 Gb machine?! I tried the procedure in the documentation, I tried everything people were suggesting - and it simply won't work. :-( Somebody on Twitter is advising me to use ELK in a docker image; I'll try that too, but my hopes aren't high...
If you post elasticsearch's logs I would have more clues about what is going on.
I was following the official ELK docs on setting up every component of the stack. Again, cowrie's documentation section on ELK includes only basic steps and focuses on presenting sample configs related to parsing cowrie's logs. It does not include the steps required by the ELK docs.
Hello
I use Cowrie and it works well.
I have the following message in Kibana: "Unable to fetch mapping. Do you have indices matching the pattern?" Below are the log files:
Log file path : /opt/Elastic_stack/logstash-5.2.1/logs/logstash-plain.log
Log file path : /var/log/kibana/kibana.log
@fe7ch, sorry for the delay in my reply; I was way too busy with more pressing matters (like setting up an OpenVPN server on a Windows 10 box, sigh) for which I had a deadline.
There are no signs of a problem in the Elasticsearch log (
The Kibana log (
It's just the Logstash log (
Have you considered making a Docker container with the necessary setup, so that anyone could run it? Just expose Cowrie's
That isn't present anywhere in the Logstash logs, either.
Nope, it isn't there. And, despite setting this setting to false yesterday and restarting Cowrie, I still have 10 Mb of JSON parsing errors in my Logstash log for today. Well, at least it's not the usual 250 Mb - but the day is just beginning... But, if the setting isn't present in the original config file, it probably means that my copy of Cowrie isn't implementing it. Time for an update. Nope, that didn't work. :-(
This is odd... I just compared my copy of this file to the copy on GitHub and they are exactly the same! Furthermore, what is not the same is my copy of
Your cowrie is outdated!
That didn't work:
Just
And I am really afraid to screw up a setup that currently mostly works and which I spent considerable time getting to work... Why is it complaining? I don't want it to "add" to the repository the hundreds of new local files (logs, downloads, etc.) that I currently have in my Cowrie setup. Isn't there a way to just get the latest versions of the files from GitHub? I had the impression that
Yep, it is supposed to work that way. First of all, you should run the commands that git asked you to run (git config). In the case of cowrie, you should set it to the credentials you are using on github.
Because you can't pull if you have changes that aren't committed. Git doesn't know about files at remote repositories at the time when this check is performed. So if you have changes and you want to pull from the remote repo, you have two options (simplified):
After you do one of the above commands, you'll be able to pull.
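The two options can be sketched like this - a demo of the standard git stash workflow in a scratch repository (not cowrie-specific; in real life you would run the three stash/pull/pop commands inside your cowrie checkout):

```shell
# Demo in a throwaway repo of reconciling local edits before a pull
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo upstream > file.txt
git add file.txt
git commit -qm "initial"

echo "local edit" >> file.txt       # an uncommitted local change

# Option 1: shelve the local edit, update, then re-apply it
git stash                           # working tree is clean again
# git pull                          # would now fast-forward from the remote
git stash pop                       # the local edit is back
grep "local edit" file.txt

# Option 2 (instead): discard local edits entirely with
# git checkout -- . && git pull
```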
I understand this, but I was expecting it to complain about file(s) that are actually different from their copies in the remote repository (i.e., that have changes). OK, never mind. I did all this and restarted Cowrie. However, as far as I can see, Logstash keeps logging JSON parsing errors. :-(
Well, it's possible, because turning off cowrie's direct-tcpip log entries doesn't solve the problem, but it reduces the frequency of the errors.
I don't think that, even before I did this, the errors had anything to do with SSH. They seemed to be from Telnet sessions. I also don't understand where the errors are. The log seems perfectly valid JSON to me. I read somewhere that sometimes the session commands can overwrite the JSON lines in the log, but I did a quick check and all the lines in the log start with '{' and end with '}'. I also wrote a quick Python script that does essentially a json.loads() on every line.
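Such a checker is only a few lines; a sketch (the function name and the idea of collecting offending lines are mine, not the actual script from the thread):

```python
import json


def find_bad_json_lines(path):
    """Run json.loads() over every line of the file and collect
    (line number, error message) pairs for lines that fail to parse."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                json.loads(line)
            except ValueError as exc:
                bad.append((lineno, str(exc)))
    return bad
```

Pointed at the live log, e.g. find_bad_json_lines("log/cowrie.json") (path assumed), an empty result means every line parsed cleanly.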
Well, you could pick a json record from logstash's error message and look it up in cowrie's logs, then investigate what is wrong with that particular record.
I am not quite sure how to connect the warning in the Logstash log with the line causing it in Cowrie's log... I tried grepping for partial information and I think here is one example. Logstash log:

[2017-03-05T15:44:33,828][WARN ][logstash.filters.json ] Error parsing json {:source=>"message", :raw=>"New connection: 93.118.253.195:1699 (192.168.0.103:23) [session: TT700]", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'New': was expecting 'null', 'true', 'false' or NaN at [Source: ***@***.***; line: 1, column: 5]>}

Line in Cowrie's log that I think it is referring to:

{"eventid": "cowrie.session.connect", "timestamp": "2017-03-05T13:44:33.223289Z", "session": "b60dc56d", "message": "New connection: 93.118.253.195:1699 (192.168.0.103:23) [session: TT700]", "src_port": 1699, "system": "cowrie.telnet.transport.HoneyPotTelnetFactory", "isError": 0, "src_ip": "93.118.253.195", "dst_port": 23, "dst_ip": "192.168.0.103", "sensor": "bontchev-notebook"}

I don't see anything wrong with the JSON format on that line...
The SSH and telnet honeypots run as multiple independent "factories" inside
Twisted. I think maybe they don't do resource sharing very well and try to
write to the JSON file at the same time. Try disabling telnet and see if
you still get the issue.
This has become quite the ticket :) Thank you @fe7ch for the effort.
@micheloosterhof, I tried disabling the SSH and the Telnet honeypots alternately, but the warnings kept appearing in the log.
Okay. that clears that up. It's going wrong somewhere else then! |
@micheloosterhof The reason is in #230, isn't it?
@fe7ch, I don't think so. Look at my example above - the JSON is perfectly valid, not overwritten by another line like in issue #230. But, yes, that issue is what I vaguely remembered having read, thanks for the pointer. Another question about the ELK integration: for Kibana to work properly, do I need to keep the
If logstash has finished working with a file, you can safely rename/remove/archive it.
Yes, it does.
OK, I finally got the map background to display. I'll describe the solution here because it isn't too obvious and it took me quite some time to figure it out. To summarize, when adding a "tile map" visualization, the dots of various sizes (corresponding to e.g., the number of logins from that geoip coordinate) were present but on a blank background (i.e. the geographical map of the world was missing) and the following error occurred:
The way to fix it turned out to be this: edit the Kibana configuration file and add the tile map service URL.
This is the map service provided by Elasticsearch. Restart the Kibana service and the map should work now. OK, I think I've pretty much managed to get everything I wanted working with Cowrie and ELK. Unless somebody else still has anything to add to this subject, I'll close the issue tomorrow.
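For the record, the setting in question was presumably the tile map URL in kibana.yml; roughly (the exact URL is from memory of the Kibana 5.x docs, so double-check it before use):

```yaml
# /etc/kibana/kibana.yml (path assumed)
# Point Kibana at Elastic's hosted tile map service for map backgrounds
tilemap.url: "https://tiles.elastic.co/v2/default/{z}/{x}/{y}.png?elastic_tile_service_tos=agree&my_app_name=kibana"
```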
Sorry for not closing the issue yet, but I was asked to move the honeypot to a different machine and wanted to see how it went. Here are some remarks on the ELK integration; most of the problems seem to be in ELK, not in Cowrie.
Yep, only the first point has something to do with cowrie, everything else is not related to cowrie in any way.
I'm sorry, but I can't figure out what to do. I've cloned the repository and have changed the appropriate README.md file locally. But I can't sync it with GitHub because (of course) I don't have modification rights. Do I need to create a branch or something? Sorry if it's a stupid question, but my experience with source code control systems has been mostly with VSS and CVS, so git is horribly confusing.
Mask? I don't use a mask. I use just a file name:
just like on my previous machine, where Logstash was smart enough to find the other logs (with the dates appended to their names).
It couldn't. If you specify an exact name, it will monitor only the specified file.
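To pick up the rotated, date-stamped logs as well, the file input needs a glob; a sketch (the directory is hypothetical, adjust it to your setup):

```
input {
    file {
        # "cowrie.json*" matches the live log and the rotated,
        # date-stamped files alike
        path => ["/home/cowrie/cowrie/log/cowrie.json*"]
        codec => json
        type => "cowrie"
    }
}
```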
I did this, but it creates a pull request only for my fork, not for the main project. I have approved the request, but it seems rather pointless to me - I certainly can modify my own copy of the project, what's the point of making pull requests to it for myself? Gosh, this is so confusing. I give up. Here is my modified copy of the file, do with it whatever you want: https://github.com/bontchev/cowrie/blob/master/doc/elk/README.md
Well, on my previous machine it did. I put my old logs in a separate directory and added the path to the
It doesn't work. :-( I added a mask and restarted the
It just doesn't work. I tried everything I could find:
and any combination of the above, restarting the services each time. The only thing that seems to work is to uncomment the section
in
God, this sucks!
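If the section being uncommented is a TCP input, as in the sketch below (the port number and the exact structure are assumptions, not the literal sample config), old logs can then be pushed into it by hand:

```
input {
    tcp {
        # Listen on a local port and parse every received line as JSON
        port => 3333
        type => "cowrie"
        codec => json
    }
}
```

After restarting Logstash, an old log could then be fed with something like "nc localhost 3333 < cowrie.json.2017-02-01" (file name hypothetical).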
You are definitely doing something wrong, especially considering that it was working with your setup some time ago. Where are your current logs stored?
There is one substantial difference from that previous setup, but I'm not sure how and if it matters. With the previous setup, I started empty and then added a new directory with some old logs. (Which, incidentally, weren't processed immediately. I thought that maybe Logstash just needed some time, like 24 hours, to process them, but it could have been something else that I did to force it to process them. Also, they were relatively few.) With the current setup, I put all the old log files in
In
In the same place.
This (comments removed for brevity):
Anyway, I don't think we can resolve this mystery - I ended up feeding the old logs manually via netcat, as explained above, so they are already in the database. Another Kibana-related question. Currently I access the dashboard by doing SSH to the machine on which it is running, tunneling port 5601, and pointing my browser to localhost:5601. How can I make the dashboard publicly visible? I'd like to have it in read-only mode (so the person viewing it can't screw it up, or at least can't save the screwed-up version). I don't really need user/password authentication for that. From what I've seen by googling around, it is done by using nginx as a reverse proxy. Could you please point me to some guide about how to set up public access to the dashboard that is written for people with no experience with nginx?
With all these questions about ELK (which none of us is an expert in), you may be better off asking for help in Elastic's dedicated places...
nginx couldn't guarantee that a visitor wouldn't change anything. As long as a visitor has access to kibana, they can do anything there. The only solution I'm aware of for implementing permissions for accessing ELK is X-Pack (30-day trial, after which it must be licensed based on the size of your cluster). I think a more suitable way would be giving access only to people who won't try to destroy your data, plus configuring backups of your ES data (https://www.elastic.co/guide/en/elasticsearch/guide/current/backing-up-your-cluster.html). All questions regarding the basic setup of cowrie + ELK are answered. I agree with @micheloosterhof that questions regarding advanced usage of ELK should be asked at a more suitable place (https://discuss.elastic.co).
@bontchev The error
Hello folks,
I'd like to use Kibana to visualize the events in the Cowrie log and have been failing so far. Yes, I have read this article and also issue #402, as well as the documentation.
To begin with, my setup is fairly simple - just one Cowrie honeypot and ELK installed on the same machine (and supposed to be used on the same machine). So, I don't need Filebeat to ship logs to another machine, correct? Also, I gather from issue #402 that I no longer need an nginx server like the first article mentioned above says, yes?
So far I've done the following:

1. Installed elasticsearch, logstash and kibana on the same virtual machine where Cowrie is.
2. Created /var/log/kibana and modified /etc/kibana/kibana.yml according to the documentation.
3. Put the GeoIP database in /var/opt/logstash/vendor/geoip/.
4. Copied cowrie/doc/elk/logstash-cowrie.conf to /etc/logstash/conf.d/ after modifying some paths to make sure they reflect my environment.
5. Restarted logstash.

However, the file /tmp/cowrie-logstash.log is not created. What am I missing? Do I need to change some ownerships - e.g., stuff in /etc/logstash/ is currently owned by root.

Also, does this mean that with this setup Kibana can visualize only one day's worth of data? The file cowrie.json gets renamed after midnight and a new one is created.