  1. #21
    Untanglit (Sheffield UK, joined Apr 2015, 26 posts)

    So after I made this post, I was talking to my colleague and he made a suggestion. This isn't an ideal way to handle this by any means, but it should be functional (I hope).

    I'm currently writing the code to do this, so any feedback will be welcome. I'll also happily share what I have with the forum, once I am happy it works.

    We talked about the daily email reports you can configure Untangle to generate, how you can email those CSV files out to an address, and how Logstash has a CSV filter. Lightbulb moment....

    So far I've written a Python script, which I've added to cron on my server, to check the IMAP account of the user we created to receive these emails. It looks for unread messages, fetches them, and checks each one for a zip file; if it finds one, it unzips it to a folder under /var/log. Further to this, I'll probably add an extra function to the script to check whether any of the previously downloaded folders are older than a certain age, or haven't been read into LS in a while (not sure which will work yet, I need to test both situations), and then delete the old unzipped folders.
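
    To give an idea of the approach, the core of it looks something like this. This is just a rough sketch of the idea, not the actual script: the host, account details and paths are placeholders you would need to change.

    Code:
    #!/usr/bin/env python
    # Rough sketch only - not the full email-grabber.py. Host, account
    # details and paths below are placeholders.
    import email
    import imaplib
    import os
    import zipfile

    IMAP_HOST = "imap.example.com"    # placeholder
    USERNAME = "reports@example.com"  # placeholder
    PASSWORD = "changeme"             # placeholder
    DEST = "/var/log/untangle-reports"

    conn = imaplib.IMAP4_SSL(IMAP_HOST)
    conn.login(USERNAME, PASSWORD)
    conn.select("INBOX")

    # Find unread messages; fetching them marks them as read, so each
    # report only gets processed once.
    status, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_string(msg_data[0][1])
        for part in msg.walk():
            filename = part.get_filename()
            if filename and filename.endswith(".zip"):
                zip_path = os.path.join("/tmp", filename)
                with open(zip_path, "wb") as f:
                    f.write(part.get_payload(decode=True))
                # Needs Python 2.7 - ZipFile as a context manager.
                with zipfile.ZipFile(zip_path) as z:
                    z.extractall(os.path.join(DEST, os.path.splitext(filename)[0]))

    conn.logout()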

    I have written a file input for LS which watches the folder I am saving files to and reads them into LS. I am currently writing the filters to take each CSV and load it into Elasticsearch, dropping the top row (the column names) and matching the values against those column names. Reading in CSVs will get easier eventually: the LS devs plan to improve CSV handling so the column-name row no longer needs dropping and can instead be read from the file and used to label the data. Much easier, but not implemented yet.
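
    For the curious, one way to do the header-dropping bit looks roughly like this. The column names are placeholders (you'd take them from the actual report CSV's header row), and the "drop" tag gets picked up by the output, which I'll share further down the thread.

    Code:
    filter {
      csv {
        # Placeholder column names - take these from the real CSV header row.
        columns => ["time_stamp", "hostname", "client_addr", "server_addr"]
        separator => ","
      }
      # The header row parses with every field equal to its own column
      # name, so tag it "drop" and let the output stage skip it.
      if [time_stamp] == "time_stamp" {
        mutate { add_tag => ["drop"] }
      }
    }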

    I would then need to create some Kibana dashboards to make use of this data.

  2. #22
    Untanglit (Sheffield UK, joined Apr 2015, 26 posts)

    I'm just going through the report files, and I've found that my untangle-node-policy and untangle-node-reporting folders contain nothing but a file called no-reports. Other folders were generated with CSVs that have no data in them, so I'm guessing the no-reports marker isn't simply down to a lack of data. Maybe I haven't configured something?

    If anyone does get these report files, can you tell me what the filenames and column headers are? For now I can work without them, but my LS filter won't do anything with them if they start appearing in the future (unless I notice and create the relevant filters, or set up an alert for myself).

  3. #23
    Untanglit (Sheffield UK, joined Apr 2015, 26 posts)

    Okay, well it's getting toward the end of the day, and I have this in a working state saved in my git repo, so I guess I can share what I have so far. I have attached a zip file with:

    email-grabber.py - This is configured as a cron job on my Logstash server (you could probably get it to work in Windows with Task Scheduler, as long as you edit the shebang line to point at Python; the folder locations will also need editing). It requires Python 2.7, as the unzip function won't work without it. Assuming you configure the email account details correctly, the report emails go to the right mailbox, and they are unread, it should download the zip files and extract them to /var/log/untangle-reports.

    In the future I want to implement a way to remove old files automatically, but right now it's not a priority. I could possibly configure logrotate to do this; not sure yet.

    05-untangle-report-files.conf - This is a Logstash config file that reads from the local disk. It picks up the log file the script is configured to write, and it also watches for the CSV files in the subfolders that appear when the script runs; that part is set to check for new files in the folder only once an hour.
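
    Roughly, that input looks like the sketch below. Paths and type names are examples, and discover_interval is the bit that limits the scan for new files to once an hour.

    Code:
    input {
      # Log file written by the email-grabber script (example path).
      file {
        path => "/var/log/untangle-reports/email-grabber.log"
        type => "untangle-report-grabber"
      }
      # Report CSVs extracted into per-report subfolders; only scan
      # for newly appeared files once an hour (3600 seconds).
      file {
        path => "/var/log/untangle-reports/*/*.csv"
        type => "untangle-report"
        discover_interval => 3600
        start_position => "beginning"
      }
    }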

    21-untangle-report-grabber-logs.conf - Very simple parsing of the log file generated by the script.
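
    Something along these lines, assuming the script logs a timestamp, a level and a message; adjust the grok pattern to whatever format the log actually uses.

    Code:
    filter {
      if [type] == "untangle-report-grabber" {
        grok {
          # Assumes lines like "2015-04-20 16:30:01 INFO downloaded report.zip".
          match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:level} %{GREEDYDATA:logmessage}"]
        }
      }
    }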

    22-untangle-reports.conf - This looks at each line from the CSVs found by the input (05...) file, checks which filename it came from, adds a tag to the line so we know which file it belongs to and which filter to apply, and then runs that filter against it.
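
    The shape of that is roughly as follows; the "web-filter" filename match and the columns are invented for illustration, and the type matches the example input above.

    Code:
    filter {
      if [type] == "untangle-report" {
        # Pick the filter to run based on which file the line came from.
        if [path] =~ "web-filter" {
          mutate { add_tag => ["untangle-web-filter"] }
          csv {
            columns => ["time_stamp", "hostname", "uri", "blocked"]
          }
        } else {
          # No matching filter yet - tag the line so the output skips it.
          mutate { add_tag => ["drop"] }
        }
      }
    }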

    I've just realised I didn't include the output I use, so I'll paste it in below:

    Code:
    output {
      # Skip anything tagged "drop" (rows that didn't match a filter).
      if "drop" not in [tags] {
        elasticsearch {
          protocol => "http"
          host => "localhost"
          workers => 4
        }
        # Uncomment for testing only - prints every event to the console.
        #stdout { codec => rubydebug }
      }
    }
    This checks that an event has no tag called drop (we use that tag on data that hasn't passed through a filter) before saving it to Elasticsearch. The hashed-out stdout line is used for testing: remove the hash and reload Logstash to see what it's doing. Not good for production.

    Anyway, I know a lot of this won't make much sense to some out there, so if you have any questions please let me know. I'd also recommend checking out #logstash on freenode; I'm in there most days Monday-Friday, 9am-6pm GMT.
    Attached Files
