OpenTSDB service hangs under load

I am trying to put data into OpenTSDB through the RESTful API it provides, but the opentsdb process always hangs after a few minutes. It writes some data successfully, then gets slower and slower, and finally throws some exceptions in the log.
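For context, each request is a JSON array of data points POSTed to `/api/put?details`. Below is a minimal sketch of how such a payload is built and split into smaller batches; the metric name, tag, and batch size are made up for illustration and are not my real workload:

```python
import json

# Hypothetical data points; /api/put accepts a JSON array of objects
# shaped like these (metric, timestamp, value, tags).
points = [
    {"metric": "sys.cpu.user", "timestamp": 1441057000 + i,
     "value": i % 100, "tags": {"host": "web01"}}
    for i in range(120)
]

def chunk(seq, size):
    """Split the point list into batches of at most `size` points,
    so no single POST body grows without bound."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Each element of `batches` would be the body of one POST to /api/put?details.
batches = [json.dumps(b) for b in chunk(points, 50)]
print(len(batches))                      # number of POST bodies
print(max(len(b) for b in batches))      # largest body, in bytes
```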

Please see the log below:

2015-08-31 21:56:48,360 INFO [New I/O worker #48] HttpQuery: [id: 0x38971101, /10.75.44.33:34549 :> /10.75.44.33:4242] HTTP /api/put?details done in 388999ms 

2015-08-31 21:56:48,360 INFO [New I/O server boss #65] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 => /10.75.44.33:4242] OPEN
2015-08-31 21:56:48,361 INFO [New I/O worker #40] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 => /10.75.44.33:4242] BOUND: /10.75.44.33:4242
2015-08-31 21:56:48,361 INFO [New I/O server boss #65] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 => /0:0:0:0:0:0:0:1:4242] OPEN
2015-08-31 21:56:48,361 INFO [New I/O worker #40] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 => /10.75.44.33:4242] CONNECTED: /10.75.44.33:34618
2015-08-31 21:56:48,361 INFO [New I/O worker #41] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 => /0:0:0:0:0:0:0:1:4242] BOUND: /0:0:0:0:0:0:0:1:4242
2015-08-31 21:56:48,361 INFO [New I/O worker #41] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 => /0:0:0:0:0:0:0:1:4242] CONNECTED: /0:0:0:0:0:0:0:1:35607
2015-08-31 21:56:48,361 INFO [New I/O worker #41] HttpQuery: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 => /0:0:0:0:0:0:0:1:4242] HTTP /api/version done in 0ms
2015-08-31 21:56:48,361 INFO [New I/O worker #41] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 :> /0:0:0:0:0:0:0:1:4242] DISCONNECTED
2015-08-31 21:56:48,361 INFO [New I/O worker #41] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 :> /0:0:0:0:0:0:0:1:4242] UNBOUND
2015-08-31 21:56:48,362 INFO [New I/O worker #41] ConnectionManager: [id: 0xdf86eb92, /0:0:0:0:0:0:0:1:35607 :> /0:0:0:0:0:0:0:1:4242] CLOSED

2015-08-31 21:58:23,436 ERROR [New I/O worker #40] ConnectionManager: Unexpected exception from downstream for [id: 0x835153bf, /10.75.44.33:34618 => /10.75.44.33:4242]
java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:57) ~[na:1.7.0_85]
    at java.nio.CharBuffer.allocate(CharBuffer.java:331) ~[na:1.7.0_85]
    at org.jboss.netty.buffer.ChannelBuffers.decodeString(ChannelBuffers.java:1193) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.buffer.AbstractChannelBuffer.toString(AbstractChannelBuffer.java:551) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.buffer.AbstractChannelBuffer.toString(AbstractChannelBuffer.java:543) ~[netty-3.9.4.Final.jar:na]
    at net.opentsdb.tsd.HttpJsonSerializer.parsePutV1(HttpJsonSerializer.java:133) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.HttpQuery.getContent(HttpQuery.java:459) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.PutDataPointRpc.execute(PutDataPointRpc.java:102) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.RpcHandler.handleHttpQuery(RpcHandler.java:273) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.RpcHandler.messageReceived(RpcHandler.java:180) ~[tsdb-2.1.0.jar:c775b5f]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:82) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:194) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.4.Final.jar:na]

2015-08-31 21:58:57,956 INFO [New I/O worker #40] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 :> /10.75.44.33:4242] DISCONNECTED
2015-08-31 21:58:57,956 INFO [New I/O worker #40] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 :> /10.75.44.33:4242] UNBOUND
2015-08-31 21:58:57,956 INFO [New I/O worker #40] ConnectionManager: [id: 0x835153bf, /10.75.44.33:34618 :> /10.75.44.33:4242] CLOSED
2015-08-31 21:59:05,424 INFO [New I/O server boss #65] ConnectionManager: [id: 0x9a40399c, /10.75.44.33:34625 => /10.75.44.33:4242] OPEN
2015-08-31 21:59:05,424 INFO [New I/O worker #42] ConnectionManager: [id: 0x9a40399c, /10.75.44.33:34625 => /10.75.44.33:4242] BOUND: /10.75.44.33:4242
2015-08-31 21:59:05,424 INFO [New I/O worker #42] ConnectionManager: [id: 0x9a40399c, /10.75.44.33:34625 => /10.75.44.33:4242] CONNECTED: /10.75.44.33:34625
2015-08-31 22:02:21,615 ERROR [New I/O worker #31] ConnectionManager: Unexpected exception from downstream for [id: 0xec533f17, /10.75.44.33:34599 => /10.75.44.33:4242]
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at com.google.protobuf.ZeroCopyLiteralByteString.wrap(ZeroCopyLiteralByteString.java:52) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.Bytes.wrap(Bytes.java:287) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.PutRequest.toMutationProto(PutRequest.java:529) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.MultiAction.serialize(MultiAction.java:229) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.encode(RegionClient.java:1146) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.sendRpc(RegionClient.java:894) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.bufferEdit(RegionClient.java:757) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.sendRpc(RegionClient.java:881) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1698) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.put(HBaseClient.java:1343) ~[asynchbase-1.6.0.jar:na]
    at net.opentsdb.core.TSDB.addPointInternal(TSDB.java:681) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.addPoint(TSDB.java:573) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.PutDataPointRpc.execute(PutDataPointRpc.java:146) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.RpcHandler.handleHttpQuery(RpcHandler.java:273) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.tsd.RpcHandler.messageReceived(RpcHandler.java:180) ~[tsdb-2.1.0.jar:c775b5f]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:82) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.4.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.4.Final.jar:na]

2015-08-31 22:02:21,616 INFO [New I/O worker #31] ConnectionManager: [id: 0xec533f17, /10.75.44.33:34599 :> /10.75.44.33:4242] DISCONNECTED
2015-08-31 22:02:21,616 INFO [New I/O worker #31] ConnectionManager: [id: 0xec533f17, /10.75.44.33:34599 :> /10.75.44.33:4242] UNBOUND
2015-08-31 22:02:21,616 INFO [New I/O worker #31] ConnectionManager: [id: 0xec533f17, /10.75.44.33:34599 :> /10.75.44.33:4242] CLOSED
2015-08-31 22:02:21,616 ERROR [New I/O worker #66] RegionClient: Unexpected exception from downstream
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at com.google.protobuf.ZeroCopyLiteralByteString.wrap(ZeroCopyLiteralByteString.java:52) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.Bytes.wrap(Bytes.java:287) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.GetRequest.serialize(GetRequest.java:385) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.encode(RegionClient.java:1146) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.RegionClient.sendRpc(RegionClient.java:894) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1698) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.get(HBaseClient.java:995) ~[asynchbase-1.6.0.jar:na]
    at net.opentsdb.core.TSDB.get(TSDB.java:1090) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:185) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]

2015-08-31 22:02:21,616 INFO [New I/O worker #66] HBaseClient: Lost connection with the .META. region
2015-08-31 22:02:21,617 ERROR [New I/O worker #67] RegionClient: Unexpected exception from downstream for [id: 0x97cc2014, /10.75.44.33:44182 => /10.75.44.35:18020]
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.hbase.async.HBaseClient.createRegionSearchKey(HBaseClient.java:1954) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.getRegion(HBaseClient.java:1984) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1659) ~[asynchbase-1.6.0.jar:na]
    at org.hbase.async.HBaseClient.get(HBaseClient.java:995) ~[asynchbase-1.6.0.jar:na]
    at net.opentsdb.core.TSDB.get(TSDB.java:1090) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:185) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:191) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.CompactionQueue.flush(CompactionQueue.java:116) ~[tsdb-2.1.0.jar:c775b5f]
    at net.opentsdb.core.TSDB.flush(TSDB.java:730) ~[tsdb-2.1.0.jar:c775b5f]

2015-08-31 22:03:09,322 INFO [New I/O server boss #65] ConnectionManager: [id: 0x7c5ac3e7, /10.75.44.33:34633 => /10.75.44.33:4242] OPEN
2015-08-31 22:03:09,322 INFO [New I/O worker #43] ConnectionManager: [id: 0x7c5ac3e7, /10.75.44.33:34633 => /10.75.44.33:4242] BOUND: /10.75.44.33:4242
2015-08-31 22:03:09,322 INFO [New I/O worker #43] ConnectionManager: [id: 0x7c5ac3e7, /10.75.44.33:34633 => /10.75.44.33:4242] CONNECTED: /10.75.44.33:34633
2015-08-31 22:03:09,322 ERROR [New I/O worker #39] ConnectionManager: Unexpected exception from downstream for [id: 0xe5cfc336, /10.75.44.33:34614 => /10.75.44.33:4242]
java.lang.OutOfMemoryError: GC overhead limit exceeded
2015-08-31 22:03:09,322 INFO [New I/O worker #39] ConnectionManager: [id: 0xe5cfc336, /10.75.44.33:34614 :> /10.75.44.33:4242] DISCONNECTED
2015-08-31 22:03:09,322 INFO [New I/O worker #39] ConnectionManager: [id: 0xe5cfc336, /10.75.44.33:34614 :> /10.75.44.33:4242] UNBOUND
2015-08-31 22:03:09,322 INFO [New I/O worker #39] ConnectionManager: [id: 0xe5cfc336, /10.75.44.33:34614 :> /10.75.44.33:4242] CLOSED

My questions are below.

  1. While I am writing data, the memory usage of the OpenTSDB process keeps growing. Is this normal? I would expect writing a second file to cost about the same as the first, given that the files are the same size, yet OpenTSDB's memory usage keeps climbing.

  2. The log says the exceptions come from "downstream". My understanding is that this means HBase, but I do not see any errors on the HBase side. Why is that? What exactly is the "downstream" here?
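One thing I plan to try, assuming the stock `tsdb` wrapper script still honours the `JVMARGS` environment variable (paths and heap size below are just an example, not my actual setup), is starting the TSD with a larger fixed heap and GC logging so the next `OutOfMemoryError` comes with some GC history attached:

```shell
# Assumption: the ./tsdb wrapper script passes $JVMARGS to the JVM.
# Fix the heap size and log GC activity before the next load test.
export JVMARGS="-Xms4g -Xmx4g -Xloggc:/tmp/tsd-gc.log -XX:+PrintGCDetails"
./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir=/tmp/tsd
```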