Running Kafka on Kubernetes on AWS

I have the following setup:

3 node instances spread across 3 availability zones

  • 6 brokers running in StatefulSets, with the Kafka data on external AWS volumes
  • Instance size: m4.2xlarge
  • EBS volumes: st1 – 500 GiB
  • No request/limit resource settings at the Kubernetes level (not suitable for production – resource limits must be set)
  • 1 topic, 6 partitions, no replication
  • librdkafka used against Kafka version 0.11.0
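Since the setup above notes that resource limits must be set for production, here is a minimal sketch of what that stanza could look like on the broker container in the StatefulSet spec. The values are illustrative placeholders, not tuned recommendations for this workload:

```yaml
# Illustrative resources stanza for the Kafka broker container.
# The CPU/memory values below are placeholders and must be sized
# for the actual workload and instance type.
resources:
  requests:
    cpu: "2"
    memory: 8Gi
  limits:
    cpu: "4"
    memory: 8Gi
```

Setting requests equal to limits for memory avoids the broker being OOM-killed under node memory pressure; CPU limits can be left looser if throttling is acceptable.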

The producer sends messages of 100 bytes each.

Here is the command:

kubectl exec -it kafka-kafka-librdkafka -- examples/rdkafka_performance -P -t test -s 100 -b kafka-kafka-headless:9092 -X request.timeout.ms=900000 -X batch.num.messages=10000 -X queue.buffering.max.ms=1000 

Results:

% Sending messages of size 100 bytes
% 500000 messages produced (50000000 bytes), 0 delivered (offset 0, 0 failed) in 1000ms: 0 msgs/s and 0.00 MB/s, 41 produce failures, 500000 in queue, no compression
% 1000000 messages produced (100000000 bytes), 500000 delivered (offset 0, 0 failed) in 2000ms: 249957 msgs/s and 25.00 MB/s, 65 produce failures, 500000 in queue, no compression
% 1525491 messages produced (152549100 bytes), 1025491 delivered (offset 0, 0 failed) in 3000ms: 341774 msgs/s and 34.18 MB/s, 90 produce failures, 500000 in queue, no compression
% 1958991 messages produced (195899100 bytes), 1525500 delivered (offset 0, 0 failed) in 4000ms: 381328 msgs/s and 38.13 MB/s, 120 produce failures, 433491 in queue, no compression
% 2232174 messages produced (223217400 bytes), 2028173 delivered (offset 0, 0 failed) in 5000ms: 405594 msgs/s and 40.56 MB/s, 150 produce failures, 204001 in queue, no compression
% 2622943 messages produced (262294300 bytes), 2528180 delivered (offset 0, 0 failed) in 6000ms: 421328 msgs/s and 42.13 MB/s, 161 produce failures, 94763 in queue, no compression
% 3145529 messages produced (314552900 bytes), 3035578 delivered (offset 0, 0 failed) in 7000ms: 433623 msgs/s and 43.36 MB/s, 176 produce failures, 109951 in queue, no compression
% 3675274 messages produced (367527400 bytes), 3498817 delivered (offset 0, 0 failed) in 8039ms: 435186 msgs/s and 43.52 MB/s, 196 produce failures, 176458 in queue, no compression
% 4181717 messages produced (418171700 bytes), 3961228 delivered (offset 0, 0 failed) in 9042ms: 438068 msgs/s and 43.81 MB/s, 213 produce failures, 220489 in queue, no compression
% 4669614 messages produced (466961400 bytes), 4499671 delivered (offset 0, 0 failed) in 10085ms: 446156 msgs/s and 44.62 MB/s, 230 produce failures, 169946 in queue, no compression
% 5071907 messages produced (507190700 bytes), 4964422 delivered (offset 0, 0 failed) in 11132ms: 445930 msgs/s and 44.59 MB/s, 230 produce failures, 107490 in queue, no compression
% 5638247 messages produced (563824700 bytes), 5392203 delivered (offset 0, 0 failed) in 12141ms: 444125 msgs/s and 44.41 MB/s, 231 produce failures, 246046 in queue, no compression

Given our resources, I think we are being capped somewhere. Any ideas what the bottleneck might be?
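For context, one thing worth checking against the numbers above is the documented throughput caps of the st1 volume and the instance's EBS bandwidth. A rough back-of-envelope, using AWS's published st1 figures (40 MB/s per TiB baseline, 250 MB/s per TiB burst) and the m4.2xlarge's dedicated EBS bandwidth of 1,000 Mbps (verify both against the current AWS docs):

```python
# Back-of-envelope check of likely throughput caps for a 500 GiB st1 volume
# on an m4.2xlarge. Figures are AWS's published specs, not measurements.

ST1_BASELINE_MBS_PER_TIB = 40.0   # st1 baseline throughput per TiB
ST1_BURST_MBS_PER_TIB = 250.0     # st1 burst throughput per TiB
volume_tib = 500 / 1024.0         # 500 GiB volume

baseline_mbs = ST1_BASELINE_MBS_PER_TIB * volume_tib  # ~19.5 MB/s
burst_mbs = ST1_BURST_MBS_PER_TIB * volume_tib        # ~122 MB/s

# m4.2xlarge dedicated EBS-optimized bandwidth: 1,000 Mbps
ebs_bandwidth_mbs = 1000 / 8.0                        # 125 MB/s

print(f"st1 baseline: {baseline_mbs:.1f} MB/s, burst: {burst_mbs:.1f} MB/s")
print(f"instance EBS bandwidth: {ebs_bandwidth_mbs:.0f} MB/s")
```

The observed ~44 MB/s sits well above the ~20 MB/s st1 baseline, so if disk is the cap the benchmark would be running on burst credits, and sustained throughput would eventually fall back toward the baseline; growing the volume (or switching volume type) raises both figures.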

At my job I manage 3 Kubernetes clusters and 4 Kafka clusters (each on its own nodes).

I would never put Kafka inside Kubernetes. It simply makes no sense.

Just use dedicated EC2 machines for the Kafka cluster and peer the k8s VPC with the Kafka VPC.

Kafka is a database: it benefits from direct access to the operating system, and it can be tuned far more freely than a k8s deployment allows.

K8s is not a universal hammer. Use it for services and cron jobs, and keep databases external to it.