HDFS performance on Apache Spark

I have several issues related to HDFS, possibly with different root causes. I am posting as much information as I can, in the hope that someone can comment on at least some of it. Basically the symptoms are:

  • HDFS classes not found
  • Connections to some datanodes seem slow / are closed unexpectedly
  • Executors are lost (and cannot be relaunched due to out-of-memory errors)

What I am looking for:

– Suggestions on HDFS misconfiguration / tuning

– Flaws in the overall setup (e.g., the impact of a VM/NUMA mismatch)

– For the last class of problems, I would like to know why the JVM's memory is not released when an executor dies, which prevents a new executor from being launched.

My setup is as follows:

One hypervisor with 32 cores and 50 GB of RAM, running 5 VMs. Each VM has 5 cores and 7 GB. On each node there is one worker set up with 4 cores and 6 GB available (the remaining resources are intended for HDFS / the OS).
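For completeness, the per-node split above corresponds to standalone-mode worker settings along these lines (a sketch of my conf/spark-env.sh; the variable names are the standard Spark standalone ones, and the values simply mirror the numbers above):

```shell
# conf/spark-env.sh on each worker node (sketch)
SPARK_WORKER_CORES=4      # 4 of the VM's 5 cores go to Spark
SPARK_WORKER_MEMORY=6g    # 6 of the VM's 7 GB go to executors
```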

I run a WordCount workload with a 4 GB dataset on a Spark 1.4.0 / HDFS 2.5.2 setup. The binaries are the ones I got from the official websites (no local compilation).

Please let me know if I can provide any other relevant information.

(Issues 1) and 2) are logged on the workers, in the work/app-id/exec-id/stderr files.)

1) Hadoop class related issues

 15:34:32: DEBUG HadoopRDD: SplitLocationInfo and other new Hadoop classes are unavailable. Using the older Hadoop location info code.
 java.lang.ClassNotFoundException: org.apache.hadoop.mapred.InputSplitWithLocationInfo
 15:40:46: DEBUG SparkHadoopUtil: Couldn't find method for retrieving thread-level FileSystem input data
 java.lang.NoSuchMethodException: org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
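As far as I can tell these DEBUG messages are themselves harmless: Spark probes via reflection for classes and methods that only exist in newer Hadoop releases, and falls back to an older code path when the probe fails. A minimal sketch of that probe (the class name is taken from the log above; the surrounding code is my illustration, not Spark's actual source):

```java
// Sketch of the reflection probe behind the DEBUG message above: Spark looks
// for a Hadoop class that only exists in newer releases and, when it is
// absent, falls back to the older location-info code path.
public class ProbeHadoopClasses {
    public static boolean hasLocationInfoClass() {
        try {
            Class.forName("org.apache.hadoop.mapred.InputSplitWithLocationInfo");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasLocationInfoClass()
            ? "new Hadoop location-info classes available"
            : "older Hadoop location info code will be used");
    }
}
```

Since the fallback still works, I suspect this only means the prebuilt Spark binary may have been compiled against an older Hadoop API than the HDFS 2.5.2 it talks to; I mention it in case it matters for the other symptoms.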

2) HDFS performance related issues

The following errors appear:

 15:43:16: ERROR TransportRequestHandler: Error sending result ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=284992323013, chunkIndex=2}, buffer=FileSegmentManagedBuffer{file=/tmp/spark-b17f3299-99f3-4147-929f-1f236c812d0e/executor-d4ceae23-b9d9-4562-91c2-2855baeb8664/blockmgr-10da9c53-c20a-45f7-a430-2e36d799c7e1/2f/shuffle_0_14_0.data, offset=15464702, length=998530}} to /192.168.122.168:59299; closing connection
 java.io.IOException: Broken pipe
 15:43:16 ERROR TransportRequestHandler: Error sending result ChunkFetchSuccess{streamChunkId=StreamChunkId{streamId=284992323013, chunkIndex=0}, buffer=FileSegmentManagedBuffer{file=/tmp/spark-b17f3299-99f3-4147-929f-1f236c812d0e/executor-d4ceae23-b9d9-4562-91c2-2855baeb8664/blockmgr-10da9c53-c20a-45f7-a430-2e36d799c7e1/31/shuffle_0_12_0.data, offset=15238441, length=980944}} to /192.168.122.168:59299; closing connection
 java.io.IOException: Broken pipe
 15:44:28: WARN TransportChannelHandler: Exception in connection from /192.168.122.15:50995
 java.io.IOException: Connection reset by peer

(Note that the last exception is on another executor.)

Some time later:

 15:44:52 DEBUG DFSClient: DFSClient seqno: -2 status: SUCCESS status: ERROR downstreamAckTimeNanos: 0
 15:44:52 WARN DFSClient: DFSOutputStream ResponseProcessor exception for block BP-845049430-155.99.144.31-1435598542277:blk_1073742427_1758
 java.io.IOException: Bad response ERROR for block BP-845049430-155.99.144.31-1435598542277:blk_1073742427_1758 from datanode xxxx:50010
 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:819)

The following two errors appear several times:

 15:51:05 ERROR Executor: Exception in task 19.0 in stage 1.0 (TID 51)
 java.nio.channels.ClosedChannelException
 	at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1528)
 	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98)
 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
 	at java.io.DataOutputStream.write(DataOutputStream.java:107)
 	at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:81)
 	at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:102)
 	at org.apache.spark.SparkHadoopWriter.write(SparkHadoopWriter.scala:95)
 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1110)
 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1285)
 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
 	at org.apache.spark.scheduler.Task.run(Task.scala:70)
 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 	at java.lang.Thread.run(Thread.java:745)
 15:51:19 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AssociationError [akka.tcp://[email protected]:38277] -> [akka.tcp://sparkDriver@xxxx:34732]: Error [Invalid address: akka.tcp://sparkDriver@xxxx:34732] [
 akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkDriver@xxxx:34732
 Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: /xxxx:34732
 ] from Actor[akka://sparkExecutor/deadLetters]

In the datanode's logs:

 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: localhost.localdomain:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.122.15:56468 dst: /192.168.122.229:50010
 java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.122.229:50010 remote=/192.168.122.15:56468]
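That 60000 ms value matches HDFS's default socket read timeout, so the datanode gave up after a full minute of waiting for data from its peer. As an experiment (it papers over, rather than fixes, whatever is slow), the timeouts can be raised in hdfs-site.xml; the property names below are the standard Hadoop 2.x keys, and the values are just examples:

```xml
<!-- hdfs-site.xml (sketch): raise the socket timeouts as an experiment -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>120000</value> <!-- milliseconds; default is 60000 -->
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value> <!-- milliseconds; default is 480000 -->
</property>
```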

I can also find the following warnings:

 2015-07-13 15:46:57,927 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:718ms (threshold=300ms)
 2015-07-13 15:46:59,933 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 1298ms (threshold=300ms)

3) Executor loss

Early in the job, the master's logs show the following messages:

 15/07/13 13:46:50 INFO Master: Removing executor app-20150713133347-0000/5 because it is EXITED
 15/07/13 13:46:50 INFO Master: Launching executor app-20150713133347-0000/9 on worker worker-20150713153302-192.168.122.229-59013
 15/07/13 13:46:50 DEBUG Master: [actor] handled message (2.247517 ms) ExecutorStateChanged(app-20150713133347-0000,5,EXITED,Some(Command exited with code 1),Some(1)) from Actor[akka.tcp://[email protected]:59013/user/Worker#-83763597]

This does not stop until the job either completes or eventually fails (depending on how many executors actually fail).

Below is the Java log produced for each executor launch attempt (in work/app-id/exec-id on the worker): http://pastebin.com/B4FbXvHR