Flume - error log while using FileChannel

I am using Flume flume-ng-1.5.0 (with CDH 5.4) to collect logs from many servers and sink them to HDFS. Here is my configuration:

    # Define Source, Sinks, Channel
    collector.sources = avro
    collector.sinks = HadoopOut
    collector.channels = fileChannel

    # Define Scribe Interface
    collector.sources.avro.type = avro
    collector.sources.avro.bind = 0.0.0.0
    collector.sources.avro.port = 1463
    collector.sources.avro.threads = 5
    collector.sources.avro.channels = fileChannel

    collector.channels.fileChannel.type = file
    collector.channels.fileChannel.checkpointDir = /channel/flume/collector/checkpoint
    collector.channels.fileChannel.dataDirs = /channel/flume/collector/data
    #collector.channels.fileChannel.transactionCapacity = 100000
    #collector.channels.fileChannel.capacity = 1000000000

    # Describe Hadoop Out
    collector.sinks.HadoopOut.type = hdfs
    collector.sinks.HadoopOut.channel = fileChannel
    collector.sinks.HadoopOut.hdfs.path = /logfarm/%{game_studio}/%{product_code}/%Y-%m-%d/%{category}
    collector.sinks.HadoopOut.hdfs.filePrefix = %{category}-%Y-%m-%d
    collector.sinks.HadoopOut.hdfs.inUseSuffix = _current
    collector.sinks.HadoopOut.hdfs.fileType = DataStream
    collector.sinks.HadoopOut.hdfs.writeFormat = Text
    # Max file size = 10 MB
    collector.sinks.HadoopOut.hdfs.round = true
    collector.sinks.HadoopOut.hdfs.roundValue = 10
    collector.sinks.HadoopOut.hdfs.roundUnit = minute
    collector.sinks.HadoopOut.hdfs.rollSize = 10000000
    collector.sinks.HadoopOut.hdfs.rollCount = 0
    collector.sinks.HadoopOut.hdfs.rollInterval = 600
    collector.sinks.HadoopOut.hdfs.maxOpenFiles = 4096
    collector.sinks.HadoopOut.hdfs.timeZone = Asia/Saigon
    collector.sinks.HadoopOut.hdfs.useLocalTimeStamp = true
    collector.sinks.HadoopOut.hdfs.threadsPoolSize = 50
    collector.sinks.HadoopOut.hdfs.batchSize = 10000

The directories /channel/flume/collector/checkpoint and /channel/flume/collector/data are empty and owned by the flume user.

But I am getting a strange exception:

    2015-05-08 18:31:34,290 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
    java.lang.IllegalStateException: Channel closed [channel=fileChannel]. Due to java.io.EOFException: null
            at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:340)
            at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
            at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
            at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
            at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.EOFException
            at java.io.RandomAccessFile.readInt(RandomAccessFile.java:827)
            at java.io.RandomAccessFile.readLong(RandomAccessFile.java:860)
            at org.apache.flume.channel.file.EventQueueBackingStoreFactory.get(EventQueueBackingStoreFactory.java:80)
            at org.apache.flume.channel.file.Log.replay(Log.java:426)
            at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:290)
            at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
            at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            ... 1 more

I would appreciate any expert help in solving this. Thank you very much.

I ran into a similar error with the Flume file channel. Note that in your trace the EOFException is thrown while the channel replays its checkpoint on startup (FileChannel.start → Log.replay → EventQueueBackingStoreFactory.get), which typically means the checkpoint file was truncated or corrupted, for example by an unclean shutdown. It got fixed for me when I deleted/moved the data and checkpoint directories.

In your case:

/channel/flume/collector/checkpoint, /channel/flume/collector/data

Make sure the directory /channel/flume/collector/ is clean and empty. Re-running the Flume job should recreate the "checkpoint" and "data" directories.
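
If you want to script that cleanup, here is a minimal sketch in Java (Flume's own language) that backs up both directories before the agent is restarted. The backup path /channel/flume/collector-backup-<timestamp> and the class name are my own assumptions, not anything from Flume itself; a plain mv from the shell does the same job.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class BackupFlumeChannelDirs {
        public static void main(String[] args) throws IOException {
            // Assumed backup location -- any path on the same filesystem
            // as /channel works, so the move is a cheap rename.
            Path backupRoot = Paths.get("/channel/flume/collector-backup-" + System.currentTimeMillis());
            Files.createDirectories(backupRoot);

            // Stop the Flume agent first, then move the checkpoint and
            // data directories out of the way.
            for (String name : new String[] {"checkpoint", "data"}) {
                Path src = Paths.get("/channel/flume/collector", name);
                if (Files.exists(src)) {
                    Files.move(src, backupRoot.resolve(name));
                }
            }
            System.out.println("Channel directories moved to " + backupRoot);
        }
    }

After the move, restarting the agent lets the file channel create fresh checkpoint and data directories, and the replay error should disappear.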

It is always safe to move the directories and keep them somewhere you like for future reference of the logs. I have done this successfully on both CDH 5.4 (Flume 1.5) and CDH 5.5 (Flume 1.6). Most of the exceptions about the channel being closed should be resolved by this.

For reference: http://mail-archives.apache.org/mod_mbox/flume-user/201309.mbox/%3CCAC4PaS8LzX7QbDZBMV=Nw94xeeocd=m+vbNrL6DhXOe+t-gQ5Q@mail.gmail.com%3E

I believe this issue is still being tracked at apache.org: https://issues.apache.org/jira/browse/FLUME-2282

Hope this helps!!