
[Kafka Tutorial Series #42] Kafka Manager


As a distributed publish-subscribe messaging system, Apache Kafka is used by many teams inside Yahoo; the media analytics team, for example, uses it in a real-time analytics pipeline, and Yahoo's Kafka clusters as a whole handle peak bandwidth of more than 20 Gbps (compressed data). To make it easier for developers and service engineers to maintain Kafka clusters, Yahoo built a web-based management tool called Kafka Manager, which has recently been open sourced on GitHub.

With Kafka Manager, users can more easily spot topics or partitions that are unevenly distributed across the cluster. It can manage multiple clusters, inspect cluster state, create topics, run preferred replica election, generate partition assignments based on the current state of the cluster, and execute partition reassignment based on the generated assignment. Beyond that, Kafka Manager is simply a convenient tool for quickly checking the state of a cluster.

Kafka Manager is written in Scala, and its web console is built on the Play Framework. In addition, Yahoo ported some of Apache Kafka's helper code so that it works with the Apache Curator framework.

Kafka at Yahoo

Many teams inside Yahoo use Kafka; the media team runs a real-time analytics pipeline on it that handles peak bandwidth of up to 20 Gbps (compressed data).

To simplify the work of maintaining Kafka clusters for developers and service engineers, Yahoo built the web-based Kafka Manager tool. It makes it easy to discover topics, or partitions within a topic, that are unevenly distributed across the cluster, and it supports managing multiple clusters, preferred replica election, partition reassignment, and topic creation. It is also a handy tool for quickly browsing a cluster.

The software is written in Scala. Yahoo open sourced Kafka Manager on February 3, 2015. The tool's main features are:

  1. Manage multiple clusters;
  2. Easily inspect cluster state (topics, brokers, replica distribution, partition distribution);
  3. Run preferred replica election;
  4. Generate partition assignments based on the current state of the cluster;
  5. Reassign partitions.

Screenshots of the management tool cover the following views (images omitted here): Cluster Management, Topic List, Topic View, Consumer List View, Consumed Topic View, Broker List, and Broker View.

Requirements

  1. Kafka 0.8.x, 0.9.x, 0.10.x, or 0.11.x
  2. Java 8+
  3. sbt 0.13.x

Configuration

At a minimum, you must configure the address of the ZooKeeper ensemble, in the application.conf file under the conf directory of the kafka-manager installation. For example:

kafka-manager.zkhosts="my.zookeeper.host.com:2181"

You can specify multiple ZooKeeper hosts, separated by commas:

kafka-manager.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

Alternatively, if you don't want to hard-code the hosts, you can use the ZK_HOSTS environment variable:

ZK_HOSTS="my.zookeeper.host.com:2181"
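In the stock application.conf this is typically wired up with HOCON's optional substitution, so the environment variable overrides the baked-in default; a minimal sketch, assuming that layout:

kafka-manager.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}

With ${?ZK_HOSTS}, the second line only takes effect when ZK_HOSTS is set, so the hard-coded hosts act as a fallback.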

You can enable or disable the following features by editing application.conf (a trimmed example follows the feature list below):

application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]
  • KMClusterManagerFeature - allows adding, updating, and deleting clusters from Kafka Manager
  • KMTopicManagerFeature - allows adding, updating, and deleting topics in a Kafka cluster
  • KMPreferredReplicaElectionFeature - allows running preferred replica election for a Kafka cluster
  • KMReassignPartitionsFeature - allows generating partition assignments and reassigning partitions
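For example, a monitoring-only deployment might keep just cluster management and preferred replica election and drop the mutating features; which features to keep is up to you (a sketch):

application.features=["KMClusterManagerFeature","KMPreferredReplicaElectionFeature"]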

Consider setting these parameters for large clusters with JMX enabled:

  • kafka-manager.broker-view-thread-pool-size=< 3 * number_of_brokers>
  • kafka-manager.broker-view-max-queue-size=< 3 * total # of partitions across all topics>
  • kafka-manager.broker-view-update-seconds=< kafka-manager.broker-view-max-queue-size / (10 * number_of_brokers) >

Here is an example for a Kafka cluster with 10 brokers and 100 topics, each topic with 10 partitions (1000 partitions in total), and JMX enabled:

  • kafka-manager.broker-view-thread-pool-size=30
  • kafka-manager.broker-view-max-queue-size=3000
  • kafka-manager.broker-view-update-seconds=30

The thread pools and queues that control the consumer offset cache:

  • kafka-manager.offset-cache-thread-pool-size=< default is # of processors>
  • kafka-manager.offset-cache-max-queue-size=< default is 1000>
  • kafka-manager.kafka-admin-client-thread-pool-size=< default is # of processors>
  • kafka-manager.kafka-admin-client-max-queue-size=< default is 1000>

You should increase the settings above when consumer polling is enabled and you have a large number of consumers, although this mainly affects ZooKeeper-based consumer polling.
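As a minimal sketch for an 8-core host tracking many ZooKeeper-based consumers (the numbers here are illustrative assumptions, not recommendations):

kafka-manager.offset-cache-thread-pool-size=8
kafka-manager.offset-cache-max-queue-size=5000
kafka-manager.kafka-admin-client-thread-pool-size=8
kafka-manager.kafka-admin-client-max-queue-size=5000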

Kafka-managed consumer offsets are now consumed by KafkaManagedOffsetCache from the "__consumer_offsets" topic. Note that this has not been tested with a large number of offsets being tracked. A single thread per cluster consumes this topic, so it may not be able to keep up with a high volume of offsets being pushed to it.

Deployment

The following command creates a zip file of a deployable application:

sbt clean dist
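End to end, the build might look like this (assuming Yahoo's GitHub repository; sbt's dist task places the zip under target/universal/):

$ git clone https://github.com/yahoo/kafka-manager.git
$ cd kafka-manager
$ sbt clean dist
$ ls target/universal/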

If you don't want to pull the source and compile it yourself, I have already built it and uploaded it to Baidu Cloud:

https://pan.baidu.com/s/1geEB1rt

Starting the Service

Unzip the file produced above, then start the service:

$ bin/kafka-manager

By default it listens on port 9000. Both the port and the config file location can be overridden, for example:

$ bin/kafka-manager -Dconfig.file=/path/to/application.conf -Dhttp.port=8080

Also, if java is not on your PATH, or you need to run against a different JVM version, add the -java-home option:

$ bin/kafka-manager -java-home /usr/local/oracle-java-8
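To keep the service running after you log out, one option is to daemonize it with nohup (a sketch; the log file name is illustrative):

$ nohup bin/kafka-manager -Dconfig.file=/path/to/application.conf -Dhttp.port=8080 > kafka-manager.log 2>&1 &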

Starting the Service with Security

To add a JAAS configuration for SASL, pass the location of the config file:

$ bin/kafka-manager -Djava.security.auth.login.config=/path/to/my-jaas.conf

NOTE: make sure the user running Kafka Manager has permission to read the JAAS config file.
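For reference, a my-jaas.conf for SASL/PLAIN might look like the following; the login module, username, and password here are illustrative assumptions and must match what your brokers expect:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka-manager"
  password="kafka-manager-secret";
};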

Packaging

If you'd like to create a Debian or RPM package instead, you can run one of the following:

sbt debian:packageBin
sbt rpm:packageBin

 

Stable Version

At LinkedIn, we are running ZooKeeper 3.3.*. Version 3.3.3 has known serious issues regarding ephemeral node deletion and session expirations. After running into those issues in production, we upgraded to 3.3.4 and have been running that smoothly for over a year now.

 

Operationalizing ZooKeeper

Operationally, we do the following for a healthy ZooKeeper installation:

  • Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, decent (but don't go nuts) hardware, try to keep redundant power and network paths, etc.
  • I/O segregation: if you do a lot of write type traffic you'll almost definitely want the transaction logs on a different disk group than application logs and snapshots (writes to the ZooKeeper service are synchronous writes to disk, which can be slow).
  • Application segregation: unless you really understand the application patterns of other apps that you want to install on the same box, it can be a good idea to run ZooKeeper in isolation (though this can be a balancing act with the capabilities of the hardware).
  • Use care with virtualization: it can work, depending on your cluster layout, read/write patterns, and SLAs, but the tiny overheads introduced by the virtualization layer can add up and throw off ZooKeeper, as it can be very time sensitive.
  • ZooKeeper configuration and monitoring: it's Java, so make sure you give it 'enough' heap space (we usually run them with 3-5G, but that's mostly due to the data set size we have here). Unfortunately we don't have a good formula for it. As far as monitoring goes, both JMX and the four-letter commands are very useful; they do overlap in some cases (and in those cases we prefer the four-letter commands, which seem more predictable, or at the very least work better with the LI monitoring infrastructure). See the example after this list.
  • Don't overbuild the cluster: large clusters, especially in a write-heavy usage pattern, mean a lot of intra-cluster communication (quorums on the writes and subsequent cluster-member updates), but don't underbuild it either (and risk swamping the cluster).
  • Try to run on a 3-5 node cluster: ZooKeeper writes use quorums, and inherently that means having an odd number of machines in a cluster. Remember that a 5-node cluster will cause writes to be slower than on a 3-node cluster, but will allow more fault tolerance.
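As referenced in the monitoring point above, the four-letter commands can be driven from any shell; a quick health check might look like this (the host name is illustrative, and 2181 is ZooKeeper's default client port):

$ echo ruok | nc zk1.example.com 2181   # replies "imok" when the server is up
$ echo stat | nc zk1.example.com 2181   # prints latency stats, client connections, and mode (leader/follower)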

Overall, we try to keep the ZooKeeper system as small as will handle the load (plus standard growth capacity planning) and as simple as possible. We try not to do anything fancy with the configuration or application layout as compared to the official release, as well as keep it as self contained as possible. For these reasons, we tend to skip the OS packaged versions, since they have a tendency to try to put things in the OS standard hierarchy, which can be 'messy', for want of a better way to word it.

Important Client Configurations

 

The most important producer configurations control:

  • compression
  • sync vs async production
  • batch size (for async producers)

The most important consumer configuration is the fetch size.
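For illustration, these settings map onto client properties along these lines (the names follow the 0.8-era Scala clients; the values are examples, not recommendations):

# producer
compression.codec=snappy
producer.type=async
batch.num.messages=200

# consumer
fetch.message.max.bytes=1048576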

 

All of the configuration documentation can be found in the [Kafka Broker Configuration] section.


Copyright notice: this article is sourced from CSDN; thanks to the original author. It is distributed under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/dcm19920115/article/details/93389614