This article takes an in-depth look at how to read from and write to Kafka with Python, covering both the producer and the consumer. The example code is detailed enough to serve as a reference for study or work.
The examples below use the kafka-python client.
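If the client is not installed yet, it can normally be installed from PyPI with pip install kafka-python.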
Producer
A crawler usually acts as the message producer. After a message is sent, it is best to record which partition it went to and at what offset; in many cases these records help locate problems quickly. So we attach callbacks to the send() call, one for success and one for failure.
# -*- coding: utf-8 -*-
'''
Callbacks also preserve per-partition ordering: if message a is sent before
message b to the same partition, a's callback fires before b's.
'''
import json

from kafka import KafkaProducer

topic = 'demo'


def on_send_success(record_metadata):
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)


def on_send_error(excp):
    print('I am an errback: {}'.format(excp))


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092'
    )
    # The failure handler must be registered with add_errback, not add_callback
    producer.send(topic, value=b'{"test_msg":"hello world"}') \
        .add_callback(on_send_success) \
        .add_errback(on_send_error)
    # close() blocks until all outstanding send requests complete, then closes the KafkaProducer
    producer.close()


def main2():
    '''
    Send JSON-formatted messages
    '''
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    producer.send(topic, value={"test_msg": "hello world"}) \
        .add_callback(on_send_success) \
        .add_errback(on_send_error)
    # close() blocks until all outstanding send requests complete, then closes the KafkaProducer
    producer.close()


if __name__ == '__main__':
    # main()
    main2()
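If you need the partition and offset synchronously, for example to log them on the same line as the request that produced the message, the future returned by send() can also be waited on directly. A minimal sketch, reusing the topic above; future.get() and KafkaError are standard kafka-python APIs:

from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(bootstrap_servers='localhost:9092')
future = producer.send('demo', value=b'{"test_msg":"hello world"}')
try:
    # get() blocks until the broker acknowledges the send or the timeout expires
    record_metadata = future.get(timeout=10)
    print(record_metadata.topic, record_metadata.partition, record_metadata.offset)
except KafkaError as e:
    print('send failed: {}'.format(e))
producer.close()

Note that blocking on every send trades throughput for simpler error handling.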
Consumer
Kafka's consumption model is comparatively complex, so I will go through it case by case.
1. Without a consumer group (group_id=None)
Without a consumer group you can start as many consumers as you like; you are no longer limited by the partition count. Even when the number of consumers exceeds the number of partitions, every consumer still receives the messages.
# -*- coding: utf-8 -*-
'''
Consumer: group_id=None
'''
from kafka import KafkaConsumer

topic = 'demo'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        # auto_offset_reset='earliest',
    )
    for msg in consumer:
        print(msg)
        print(msg.value)

    consumer.close()


if __name__ == '__main__':
    main()
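To see this for yourself, run the script above in two terminals at the same time and send a test message with the producer; because group_id=None, both consumers should print the same message.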
2. With a consumer group (group_id specified)
The example below uses the poll() method to fetch messages.
Each poll() call only returns messages from a single partition; with, say, 2 partitions and 1 consumer, it takes 2 polls to cover both.
poll() fetches immediately when messages are available; if nothing new arrives within timeout_ms it returns an empty dict, so one call may return a single message while another returns max_records messages.
# -*- coding: utf-8 -*-
'''
Consumer: with group_id specified
'''
from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id,
    )
    while True:
        try:
            # poll() returns a dict of {TopicPartition: [ConsumerRecord, ...]}
            batch_msgs = consumer.poll(timeout_ms=1000, max_records=2)
            if not batch_msgs:
                continue
            '''
            {TopicPartition(topic='demo', partition=0): [ConsumerRecord(topic='demo', partition=0, offset=42, timestamp=1576425111411, timestamp_type=0, key=None, value=b'74', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=2, serialized_header_size=-1)]}
            '''
            for tp, msgs in batch_msgs.items():
                print('topic: {}, partition: {}, received: {}'.format(tp.topic, tp.partition, len(msgs)))
                for msg in msgs:
                    print(msg.value)
        except KeyboardInterrupt:
            break

    consumer.close()


if __name__ == '__main__':
    main()
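One related detail: kafka-python commits offsets automatically by default (enable_auto_commit=True). If the committed offset should only advance after a batch has actually been processed, auto-commit can be disabled and commits made by hand. A minimal sketch under the same broker/topic assumptions; enable_auto_commit and commit() are standard kafka-python APIs:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'demo',
    bootstrap_servers='localhost:9092',
    group_id='test_id',
    enable_auto_commit=False,  # commit manually after processing
)
batch_msgs = consumer.poll(timeout_ms=1000, max_records=100)
for tp, msgs in batch_msgs.items():
    for msg in msgs:
        pass  # process msg here
# synchronously commit the offsets of everything returned by poll()
consumer.commit()
consumer.close()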
About consumer groups
Based on the configuration parameters, the behavior breaks down into the following cases (a small sketch for checking a group's committed offsets follows the list):
- group_id=None
  - auto_offset_reset='latest': every start consumes from the latest offset onwards, so data produced while the consumer was down is lost after a restart
  - auto_offset_reset='earliest': every start consumes the full log from the beginning, regardless of what has already been consumed
- group_id specified
  - brand-new group_id
    - auto_offset_reset='latest': consumes only data received after startup; after a restart it resumes from the last committed offset
    - auto_offset_reset='earliest': consumes the full data from the beginning
  - existing group_id (the Kafka cluster still holds committed offsets for this group)
    - auto_offset_reset='latest': resumes from the last committed offset
    - auto_offset_reset='earliest': resumes from the last committed offset
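Which of the cases above applies depends on whether the group already has committed offsets. A minimal sketch for checking this, assuming the same local broker and demo topic; committed() and partitions_for_topic() are standard kafka-python APIs:

from kafka import KafkaConsumer, TopicPartition

topic = 'demo'
group_id = 'test_id'  # compare a brand-new id against an existing one

consumer = KafkaConsumer(
    bootstrap_servers='localhost:9092',
    group_id=group_id,
)
for p in consumer.partitions_for_topic(topic) or []:
    tp = TopicPartition(topic, p)
    # committed() returns None when the group has no committed offset for
    # this partition, i.e. the brand-new group_id case
    print('partition {}: committed offset = {}'.format(p, consumer.committed(tp)))
consumer.close()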
Performance testing
The following tests were run locally. If you plan to use Kafka in production, it is advisable to run performance tests of your own first.
producer
# -*- coding: utf-8 -*-
'''
producer performance
environment:
    mac
    python3.7
    broker 1
    partition 2
'''
import json
import time

from kafka import KafkaProducer

topic = 'demo'
nums = 1000000


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    st = time.time()
    cnt = 0
    for _ in range(nums):
        producer.send(topic, value=_)
        cnt += 1
        if cnt % 10000 == 0:
            print(cnt)
    producer.flush()
    et = time.time()
    cost_time = et - st
    print('send nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))


if __name__ == '__main__':
    main()

'''
send nums: 1000000, cost time: 61.89236712455749, rate: 16157.0/s
send nums: 1000000, cost time: 61.29534196853638, rate: 16314.0/s
'''
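send() is asynchronous and the producer batches messages internally, so throughput is sensitive to the batching parameters. A hedged tuning sketch with purely illustrative values (not benchmarked here); linger_ms and batch_size are standard KafkaProducer options:

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    linger_ms=5,           # wait up to 5 ms to fill a batch before sending
    batch_size=32 * 1024,  # up to 32 KB per partition batch
)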
consumer
# -*- coding: utf-8 -*-
'''
consumer performance
'''
import time

from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main1():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    for msg in consumer:
        nums += 1
        if nums >= 500000:
            break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('one_by_one: consume nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))


def main2():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    running = True
    batch_pool_nums = 1
    while running:
        batch_msgs = consumer.poll(timeout_ms=1000, max_records=batch_pool_nums)
        if not batch_msgs:
            continue
        for tp, msgs in batch_msgs.items():
            nums += len(msgs)
            if nums >= 500000:
                running = False
                break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('batch_pool: max_records: {} consume nums: {}, cost time: {}, rate: {}/s'.format(
        batch_pool_nums, nums, cost_time, nums // cost_time))


if __name__ == '__main__':
    # main1()
    main2()

'''
one_by_one: consume nums: 500000, cost time: 8.018627166748047, rate: 62354.0/s
one_by_one: consume nums: 500000, cost time: 7.698841094970703, rate: 64944.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 17.975456953048706, rate: 27815.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 16.711708784103394, rate: 29919.0/s
batch_pool: max_records: 500 consume nums: 500369, cost time: 6.654940843582153, rate: 75187.0/s
batch_pool: max_records: 500 consume nums: 500183, cost time: 6.854053258895874, rate: 72976.0/s
batch_pool: max_records: 1000 consume nums: 500485, cost time: 6.504687070846558, rate: 76942.0/s
batch_pool: max_records: 1000 consume nums: 500775, cost time: 7.047331809997559, rate: 71058.0/s
'''
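Reading these local numbers: polling with max_records=1 runs at less than half the speed of the plain iterator, while batched polls with max_records of 500 to 1000 are the fastest of the three variants at roughly 71k-77k messages per second.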
That's all for this article. I hope it is helpful for your study or work.