2016-04-18

Configuring rsyslog (Docker -> TCP -> rsyslog -> Elasticsearch)

I'm new to rsyslog, remote logging, and Elasticsearch.

I configured a Python script (run by Docker containers) to send its logging to $HOST:$PORT over TCP.
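The script itself is not shown in the question; a minimal sketch of such a sender, assuming a colon-separated record layout like the one used in the answer below, might look like this:

```python
import socket
import time

def format_record(hostname, level, module, message):
    # Produce a line like "[Apr 25 12:00]hostname:INFO:module:message"
    # (layout assumed; adjust to whatever the rsyslog side expects).
    stamp = time.strftime("[%b %d %H:%M]", time.localtime())
    return f"{stamp}{hostname}:{level}:{module}:{message}\n"

def send_log(host, port, line):
    # Plain TCP: open a connection to $HOST:$PORT and ship the line.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(line.encode("utf-8"))

line = format_record("my-container", "INFO",
                     "Package.Module.Sub-Module", "Hello World")
# send_log("127.0.0.1", 514, line)  # requires a listening rsyslog
```

In a real setup this would typically be wired into a `logging.Handler` subclass rather than called directly.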

I have already installed rsyslog, the mmnormalize module, and the omelasticsearch module.

Now I would like to understand what my rsyslog.conf (on the host) should look like in order to collect the logs (coming from 172.17.0.0/16) into Elasticsearch.

Thanks a lot!

Answer


Here is how I solved the problem:

# /etc/rsyslog.d/docker.rb 
version=2 
# My sample record 
# [Apr 25 12:00]$CONTAINER_HOSTNAME:INFO:Package.Module.Sub-Module:Hello World 
# 
# Here there is the rule to parse the log records into trees 
rule=:[%date:char-to:]%]%hostname:char-to::%:%level:char-to::%:%file:char-to::%:%message:rest% 
# 
# alternative to set date field in rfc3339 format 
# rule=:[%date:date-rfc3339%]%hostname:char-to::%:%level:char-to::%:%file:char-to::%:%message:rest% 
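Outside rsyslog, the effect of this rule can be illustrated with a plain regex in Python (an illustration only; mmnormalize/liblognorm does the real parsing, and the group names below mirror the rule's field names):

```python
import re

# Regex equivalent of the char-to/rest rule above.
RULE = re.compile(
    r"^\[(?P<date>[^\]]+)\]"      # %date:char-to:]%
    r"(?P<hostname>[^:]+):"       # %hostname:char-to::%
    r"(?P<level>[^:]+):"          # %level:char-to::%
    r"(?P<file>[^:]+):"           # %file:char-to::%
    r"(?P<message>.*)$"           # %message:rest%
)

sample = "[Apr 25 12:00]my-container:INFO:Package.Module.Sub-Module:Hello World"
tree = RULE.match(sample).groupdict()
# tree["date"] == "Apr 25 12:00", tree["message"] == "Hello World"
```

Each named group corresponds to one node of the parsed tree that mmnormalize attaches under `$!`.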

# /etc/rsyslog.conf 
module(load="mmnormalize") 
module(load="omelasticsearch") 
module(load="imtcp") 

# apply the specified ruleset to log records arriving on address:port 
input(type="imtcp" 
     address="127.0.0.1" # $HOST 
     port="514"   # $PORT 
     ruleset="docker-rule") 

# define the ruleset with two actions: parse the log record into a tree with 
# root $! ($!son-node!grandson-node...), then add the parsed tree, rendered as 
# JSON by a template, to the elasticsearch index 'docker-logs' 
ruleset(name="docker-rule"){ 
    action(type="mmnormalize" 
      rulebase="/etc/rsyslog.d/docker.rb" 
      useRawMsg="on" 
      path="$!") 
    action(type="omelasticsearch" 
      template="docker-template" 
      searchIndex="docker-logs" 
      bulkmode="on" 
      action.resumeretrycount="-1") 
} 

# define the template: 
# 'constant' statements insert JSON delimiters such as '{' or ',' 
# 'property' statements insert values from the parsed tree into the fields 
# named by the preceding 'constant' statements 
# the result is a JSON record like: 
# { "@timestamp":"foo", 
# "hostname":"bar", 
# "level":"foo", 
# "file":"bar", 
# "message":"foo" 
# } 
template(name="docker-template" type="list"){ 
    constant(value="{") 
     constant(value="\"@timestamp\":") 
      constant(value="\"") 
       # use 'timereported' instead of '$!date', because Kibana would treat 
       # '$!date' as a string rather than a date; this is the only field 
       # not taken from the parsed tree 
       property(name="timereported" dateFormat="rfc3339") 
      constant(value="\"") 
     constant(value=",") 
     constant(value="\"hostname\":") 
      constant(value="\"") 
       property(name="$!hostname") 
      constant(value="\"") 
     constant(value=",") 
     constant(value="\"level\":") 
      constant(value="\"") 
       property(name="$!level") 
      constant(value="\"") 
     constant(value=",") 
     constant(value="\"file\":") 
      constant(value="\"") 
       property(name="$!file") 
      constant(value="\"") 
     constant(value=",") 
     constant(value="\"message\":") 
      constant(value="\"") 
       property(name="$!message") 
      constant(value="\"") 
    constant(value="}") 
} 
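For comparison, the JSON document that the template assembles constant by constant has the same shape as this Python sketch (the field values are the hypothetical ones from the sample record):

```python
import json

def to_es_doc(timereported, tree):
    # Same field layout the 'docker-template' produces:
    # @timestamp from timereported, the rest from the parsed tree.
    return json.dumps({
        "@timestamp": timereported,
        "hostname": tree["hostname"],
        "level": tree["level"],
        "file": tree["file"],
        "message": tree["message"],
    })

doc = to_es_doc(
    "2016-04-25T12:00:00+02:00",  # hypothetical RFC 3339 timestamp
    {"hostname": "my-container", "level": "INFO",
     "file": "Package.Module.Sub-Module", "message": "Hello World"},
)
```

One caveat: unlike `json.dumps`, the rsyslog template inserts values verbatim, so a message containing a double quote would produce invalid JSON.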

Next, after installing Kibana, it is possible to "configure an index pattern" by simply setting "Index name or pattern" to "docker-logs" and "Time field name" to "@timestamp".

Note that there is no control over the source of the logs (172.17.0.0/16); every log record sent to $HOST:$PORT will, if parsed correctly, be inserted into the Elasticsearch index.