Troubleshooting rsyslog integration with the ELK stack

I'm trying to configure rsyslog to send logs to Logstash on CentOS, following a tutorial. However, after setting everything up, nothing happens. Everything starts fine and no errors occur, but no logs show up in Elasticsearch.

Here is my /etc/rsyslog.conf:

    #### MODULES ####

    $ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
    $ModLoad imjournal # provides access to the systemd journal

    #### GLOBAL DIRECTIVES ####

    # Where to place auxiliary files
    $WorkDirectory /var/lib/rsyslog

    # Use default timestamp format
    $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

    # Include all config files in /etc/rsyslog.d/
    $IncludeConfig /etc/rsyslog.d/*.conf

    # Turn off message reception via local log socket;
    # local messages are retrieved through imjournal now.
    $OmitLocalLogging on

    # File to store the position in the journal
    $IMJournalStateFile imjournal.state

    #### RULES ####

    *.info;mail.none;authpriv.none;cron.none   /var/log/messages

    # The authpriv file has restricted access.
    authpriv.*                                 /var/log/secure

    # Log all the mail messages in one place.
    mail.*                                     -/var/log/maillog

    # Log cron stuff
    cron.*                                     /var/log/cron

    # Everybody gets emergency messages
    *.emerg                                    :omusrmsg:*

    # Save news errors of level crit and higher in a special file.
    uucp,news.crit                             /var/log/spooler

    # Save boot messages also to boot.log
    local7.*                                   /var/log/boot.log

    *.*;\
        local3.none                            -/var/log/syslog
    *.*;\
        local3.none                            -/var/log/messages

    *.* @@10.0.15.25:10514

And here is /etc/rsyslog.d/loghost.conf:

    $ModLoad imfile
    $InputFileName /var/log/devops_training.log
    $InputFileTag devops
    $InputFileStateFile stat-devops
    $InputFileSeverity debug
    $InputFileFacility local3
    $InputRunFileMonitor

And here is my Logstash configuration:

    input {
      syslog {
        type => rsyslog
        port => 10514
      }
    }

    filter { }

    output {
      if [type] == "rsyslog" {
        elasticsearch {
          hosts => [ "localhost:9200" ]
          index => 'rsyslog-%{+YYYY.MM.dd}'
          document_type => "rsyslog"
        }
      }
    }

The rsyslog configuration check reports no errors:

    rsyslogd: version 7.4.7, config validation run (level 1), master config /etc/rsyslog.conf
    rsyslogd: End of config validation run. Bye.

And the Logstash log shows no errors either:

    [2017-06-07T20:11:48,004][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
    [2017-06-07T20:11:48,188][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"adf934f1-caf5-48be-b65c-b2907c0d6336", :path=>"/var/lib/logstash/uuid"}
    [2017-06-07T20:11:49,438][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
    [2017-06-07T20:11:49,439][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
    [2017-06-07T20:11:49,604][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x3fdb353a URL:http://localhost:9200/>}
    [2017-06-07T20:11:49,623][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
    [2017-06-07T20:11:49,744][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    [2017-06-07T20:11:49,758][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
    [2017-06-07T20:11:49,880][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x8dcaeed URL://localhost:9200>]}
    [2017-06-07T20:11:49,883][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
    [2017-06-07T20:11:50,623][INFO ][logstash.pipeline        ] Pipeline main started
    [2017-06-07T20:11:50,644][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
    [2017-06-07T20:11:50,660][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
    [2017-06-07T20:11:50,827][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

The problem is not only that I don't know how to fix this; I also don't understand what the problem actually is, or how to go about troubleshooting it.

I suggest splitting the problem into two parts:

1.) Test whether remote rsyslog forwarding works. Stop Logstash and open a TCP listener with the following command:

 nc -l 10514 

On your client, use logger to write something to syslog and check whether it arrives at your Logstash server. You can also restart the rsyslog daemon to generate some log traffic.

2.) Test whether the connection between Logstash and Elasticsearch works. To do this, define a simple file input in the Logstash configuration and write some log lines to that file.

 input { file { path => "/tmp/test_log" type => "rsyslog" } } 
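With that file input in place, generating test events is just a matter of appending lines to the monitored file (the `/tmp/test_log` path matches the input block above), for example:

```shell
# Append a few test lines to the file watched by the Logstash file input.
for i in 1 2 3; do
    echo "test event $i" >> /tmp/test_log
done

# These are the lines Logstash should pick up and ship to Elasticsearch:
cat /tmp/test_log
```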

Then check whether your rsyslog index is created correctly in Elasticsearch.

I have set up something similar, so I'll write down the steps I followed to troubleshoot. First, check whether the index has been created:

 curl -XGET 'http://localhost:9200/rsyslog-*/_search?q=*&pretty' 

Don't start Logstash via systemctl; launch it from the CLI instead so you can see what is going on. The usual advice is to use a stdin input and a stdout output in the Logstash config. However, I just append the following line to the output block:

 stdout { codec => rubydebug } 
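For completeness, the full stdin/stdout debug pipeline that advice refers to looks roughly like this (a sketch; the stdin input replaces the real inputs only while testing interactively):

```
input {
  stdin { }
}

output {
  stdout { codec => rubydebug }
}
```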

and start Logstash from the command line:

 /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf 

You will then be able to see events as they are received and processed by Logstash.