logstash-output-webhdfs failed to flush outgoing items

I want to send data to HDFS via logstash-output-webhdfs. My Logstash configuration is:

input{ 
    file{ 
      path => "/root/20160315.txt" 
    } 
} 

output{ 
    webhdfs{ 
      host => "x.x.x.x" 
      path => "/user/logstash/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log" 
      user => "logstash" 
    } 
    stdout{ 
      codec => rubydebug 
    } 
} 

but I get the following error message:

Failed to flush outgoing items {:outgoing_count=>1, :exception=>"WebHDFS::ServerError", :backtrace=>["/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:351:in `request'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:270:in `operate_requests'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:73:in `create'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.4/lib/logstash/outputs/webhdfs.rb:211:in `write_data'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.4/lib/logstash/outputs/webhdfs.rb:195:in `flush'", "org/jruby/RubyHash.java:1342:in `each'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.4/lib/logstash/outputs/webhdfs.rb:183:in `flush'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-webhdfs-2.0.4/lib/logstash/outputs/webhdfs.rb:166:in `receive'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:305:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:305:in `output_batch'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:236:in `worker_loop'", "/root/logstash-2.3.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.0-java/lib/logstash/pipeline.rb:205:in `start_workers'"], :level=>:warn} 

Has anyone run into this problem before?

Answer


It seems you should set the user option of logstash-output-webhdfs to the user that started HDFS (a member of the HDFS supergroup). For example, if you run start-dfs.sh as root, the user option should be root.
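A minimal sketch of the adjusted output section, assuming HDFS was started as root (host and path are taken from the question; the port is the WebHDFS default and is an assumption about your cluster):

output{ 
    webhdfs{ 
      host => "x.x.x.x" 
      port => 50070   # default WebHDFS port; change if your cluster uses a different one 
      path => "/user/logstash/dt=%{+YYYY-MM-dd}/logstash-%{+HH}.log" 
      user => "root"  # must match the user that started HDFS 
    } 
} 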

In addition, you should edit /etc/hosts on the Logstash host and add entries for all HDFS cluster nodes, because WebHDFS redirects write requests to the datanodes by hostname, so each node must be resolvable from the machine running Logstash.
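For example, with hypothetical addresses and hostnames (replace them with your cluster's actual nodes):

10.0.0.1    namenode.example.com     namenode 
10.0.0.2    datanode1.example.com    datanode1 
10.0.0.3    datanode2.example.com    datanode2 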


It works for me, thanks. – John