05 May 2014

Configure log4net to send logs to Elasticsearch with Fluentd and Kibana - realtime and centralized - part 2

<< Back to part 1 <<

1. Configure log4net to push logs in syslog format:

On the log4net machine, configure the appender like this:

<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
  <remoteAddress value="10.90.7.195" />
  <remotePort value="5140" />
  <layout type="log4net.Layout.PatternLayout, log4net">
   <conversionPattern value="&lt;190&gt;%date{MMM dd HH:mm:ss} %P{log4net:HostName} %logger: %thread %level %logger Inside-Log %P{log4net:HostName} [[%message" />
  </layout>
  <filter type="log4net.Filter.LevelRangeFilter">
   <param name="LevelMin" value="INFO" />
   <param name="LevelMax" value="ERROR" />
  </filter>
</appender>
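The `&lt;190&gt;` prefix in the conversion pattern is the syslog PRI value: PRI = facility × 8 + severity, so 190 decodes to facility 23 (local7) and severity 6 (info). That is why the td-agent match sections below use the tag `syslog.local7.info`. A quick sketch of the arithmetic:

```python
# Decode a syslog PRI value (RFC 5424: PRI = facility * 8 + severity).
def decode_pri(pri):
    facility, severity = divmod(pri, 8)
    return facility, severity

facility, severity = decode_pri(190)
print(facility, severity)  # 23 6 -> facility local7, severity info
```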

2. Configure td-agent to receive the log4net output stream (on the logging machine):

# vim /etc/td-agent/td-agent.conf :

### Listen on port 5140, module in_syslog ###
<source>
 type syslog
 port 5140
 bind 0.0.0.0
 tag syslog
</source>

### Parsing the events ###
<match syslog.local7.info>
 type parser
 remove_prefix syslog
 format /^(?<thread>[^ ]*) (?<level>[^ ]*) (?<logger>[^ ]*) (?<username>[^ ]*) (?<hostname>[^ ]*) \[\[(?<message>[^*]*)/
 key_name message
</match>
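To sanity-check the parser regexp before deploying it, you can replay a sample message body through it. A minimal Python sketch (Python spells named groups `(?P<name>...)` where the Fluentd/Ruby config uses `(?<name>...)`; the sample line is made up to mirror the layout `%thread %level %logger ... [[%message` above):

```python
import re

# Python port of the td-agent parser format above.
FORMAT = re.compile(
    r'^(?P<thread>[^ ]*) (?P<level>[^ ]*) (?P<logger>[^ ]*)'
    r' (?P<username>[^ ]*) (?P<hostname>[^ ]*) \[\[(?P<message>[^*]*)'
)

# Hypothetical sample matching the appender's conversion pattern.
sample = '29 INFO Inside.SellerHandler.SellerHandler Bus-Inside Inside-BusL [[OrderNumber: 100593335'
m = FORMAT.match(sample)
print(m.group('level'))    # INFO
print(m.group('message'))  # OrderNumber: 100593335
```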

### Write parsed events to ElasticSearch ###
<match local7.info>
 buffer_type file
 buffer_path /mnt/ramdisk/log4net.buff
 buffer_chunk_limit 4m
 buffer_queue_limit 50
 flush_interval 3s
 type elasticsearch
 logstash_format true
 logstash_prefix log4net
 host localhost
 port 9200
</match> 
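With `logstash_format true`, the fluent-plugin-elasticsearch output derives a daily index name from each event's timestamp, `<logstash_prefix>-%Y.%m.%d`, which is where the `log4net-2014.05.04` index queried later comes from. A small sketch of the naming scheme:

```python
from datetime import date

# With logstash_format, documents land in an index named "<prefix>-%Y.%m.%d"
# based on the event date.
def index_name(prefix, d):
    return '{0}-{1:%Y.%m.%d}'.format(prefix, d)

print(index_name('log4net', date(2014, 5, 4)))  # log4net-2014.05.04
```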

By default, the in_syslog module does not support multi-line messages; you need to modify its regexp to enable multi-line matching (the only change is the trailing /m modifier, which lets the message group span newlines):

# cd /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluentd-0.10.39/lib/fluent/plugin
# vim in_syslog.rb

SYSLOG_ALL_REGEXP = /^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?[^\:]*\: *(?<message>.*)/m
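Ruby's `/m` modifier corresponds to `re.DOTALL` in Python: it makes `.` match newlines, so `(?<message>.*)` can swallow a multi-line payload like the stack-trace-style dumps log4net emits. A Python port of the patched regexp, run against a hypothetical multi-line event in the shape the appender above produces:

```python
import re

# Python port of the patched SYSLOG_ALL_REGEXP; re.DOTALL plays the role of Ruby's /m.
SYSLOG_ALL = re.compile(
    r'^\<(?P<pri>[0-9]+)\>(?P<time>[^ ]* {1,2}[^ ]* [^ ]*) (?P<host>[^ ]*)'
    r' (?P<ident>[a-zA-Z0-9_/\.\-]*)(?:\[(?P<pid>[0-9]+)\])?[^\:]*\: *(?P<message>.*)',
    re.DOTALL
)

# Hypothetical event: header fields, then a message that continues on a second line.
event = '<190>May 04 00:02:34 Inside-BusL Inside.SellerHandler.SellerHandler: 29 INFO first\r\nsecond'
m = SYSLOG_ALL.match(event)
print(m.group('pri'))      # 190
print(m.group('message'))  # both lines, thanks to DOTALL
```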

Don't forget to restart the td-agent service: # /etc/init.d/td-agent restart

If all goes well, we will see log events like this in Elasticsearch:

# curl "localhost:9200/log4net-2014.05.04/_search?pretty&size=1"

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 4,
    "successful" : 4,
    "failed" : 0
  },
  "hits" : {
    "total" : 98226,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "log4net-2014.05.04",
      "_type" : "fluentd",
      "_id" : "890SPnmEThWX7oelxX8yGQ",
      "_score" : 1.0, "_source" : {"thread":"29","level":"INFO","logger":"Inside.SellerHandler.SellerHandler","username":"Bus-Inside","hostname":"Inside-BusL","message":"SalesOrderStatusDto\r\nOrderNumber: 100593335\r\nPaymentStatus: \r\nPaymentStatusInside: 0\r\nOrderStatusInside: 3\r\nDeliveryStatusInside: 0\r\nOrderStatusMagento: \r\nProcessDate: 5/4/2014 12:02:48 AM\r\nCompleteDate: 1/1/0001 12:00:00 AM\r\nPODDate: 1/1/0001 12:00:00 AM\r\nDelayDate: 1/1/0001 12:00:00 AM\r\nDelayToDate: 1/1/0001 12:00:00 AM\r\nApplyClaimDate: 1/1/0001 12:00:00 AM\r\nCancelDate: 1/1/0001 12:00:00 AM\r\nShippingDate: 1/1/0001 12:00:00 AM\r\nCancelReason: \r\nCancelCode: \r\nDelayReason: \r\nDisputingFlag: 0\r\nIsCODConfirm: False\r\nIsApproveDelayed: False\r\nTrackingNumber: \r\nCarrierCode: \r\nComment: \r\nTransferToSellerAmount: 0\r\nOrderStatusAction: ProcessOrder\r\nCorrelationId: 00000000-0000-0000-0000-000000000000\r\nOriginalAddress: \r\nAction: UpdateStatus\r\nVersionNo: \r\nIsRetry: True\r\nRetryCount: 0\r\nBusCreatedDate: 5/4/2014 12:02:48 AM\r\nBusUpdatedDate: 5/4/2014 12:02:48 AM\r\nRouter: \r\nExceptionType: Unknown\r\nIsRetryRefresh: False\r\n","@timestamp":"2014-05-04T00:02:34+07:00"}
    } ]
  }
}


When displayed in Kibana, this record should look like this:

Also, don't forget to update the Elasticsearch mapping so the Kibana Terms panel works smoothly:

# vim /etc/elasticsearch/templates/log4net.json :

{
    "log4net" : {
        "template" : "log4net*",
        "mappings" : {
            "fluentd" : {
                "_ttl" : { "enabled" : true, "default" : "62d" },
                "properties" : {
                        "hostname":{"type":"string","index":"not_analyzed"},
                        "level":{"type":"string","index":"not_analyzed"},
                        "logger":{"type":"string","index":"not_analyzed"},
                        "thread":{"type":"string","index":"not_analyzed"},
                        "username":{"type":"string","index":"not_analyzed"},
                        "message":{"type":"string","index":"not_analyzed"}
                }
            }
        }
    }
}
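The point of `"index": "not_analyzed"` is that Elasticsearch's standard analyzer would otherwise lowercase and split values like the logger name into fragments, so a Terms panel would count `inside` and `sellerhandler` instead of whole logger names. A rough, simplified simulation of what the analyzer does to such a value:

```python
import re

# Rough simulation of the standard analyzer: lowercase, split on non-word characters.
# (Simplified; the real analyzer does more, but the fragmentation effect is the same.)
def analyzed_terms(value):
    return [t for t in re.split(r'\W+', value.lower()) if t]

logger = 'Inside.SellerHandler.SellerHandler'
print(analyzed_terms(logger))  # ['inside', 'sellerhandler', 'sellerhandler']
# With "index": "not_analyzed" the field is kept as the single term
# 'Inside.SellerHandler.SellerHandler', which is what a Terms panel should aggregate on.
```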

Note that an ES template only takes effect on newly created indices, which means you need to delete the existing index so it gets recreated with the new mapping:

# curl -X DELETE localhost:9200/log4net-2014.05.04 

That's it.