kafka instrumentation native support for apm-python agent. #964
Conversation
@Rohit-dot-KumarAM thanks for your contribution. We'll have a look shortly. In the meantime, can I ask you to stop opening, closing and opening more pull requests? This is causing a lot of notifications on our end :) |
❕ Build Aborted
|
My sincere apologies for the multiple pull requests; this happened because of my confusion with GitHub. Thanks for your time. |
|
Sorry for the delay here, @Rohit-dot-KumarAM. This is a big change and we're working to make sure we align with other agents on messaging libraries -- in fact, we have an open spec PR here: elastic/apm#380 That, coupled with the holidays and our upcoming 6.0 release has meant this has taken a back seat. But we haven't forgotten about it! |
@Rohit-dot-KumarAM The spec is now merged: https://github.com/elastic/apm/blob/master/specs/agents/tracing-instrumentation-messaging.md
Can you please take a look at it and revise your PR based on the naming conventions and context information laid out in that doc?
Additionally, it looks like you're not using the correct header names as laid out in this spec: https://github.com/elastic/apm/blob/master/specs/agents/tracing-distributed-tracing.md
For example, you are using `trace` instead of `traceparent` for the header name.
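For reference, the `traceparent` header that the distributed-tracing spec calls for follows the W3C Trace Context format: a two-digit version, a 128-bit trace id, a 64-bit parent span id, and flags, all lowercase hex and dash-separated. A minimal sketch of building and validating such a value (the ids below are the illustrative ones from the W3C spec, not real trace data):

```python
import re

def format_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    """Build a W3C-style traceparent value: version 00, 32-hex trace id,
    16-hex parent span id, 2-hex flags."""
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

TRACEPARENT_RE = re.compile(
    r"^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$"
)

def parse_traceparent(value: str):
    """Split a traceparent value into its parts; return None if malformed."""
    match = TRACEPARENT_RE.match(value)
    if not match:
        return None
    version, trace_id, span_id, flags = match.groups()
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        "sampled": flags == "01",
    }

header = format_traceparent("0af7651916cd43dd8448eb211c80319c", "b7ad6b7169203331")
print(header)  # 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```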
As an aside, I don't particularly like the situation with get_client() -- I'm working on a more generalized solution for making the client more available so that you don't have to check each framework in turn. I'll post here when it's ready and you can use that instead.
Note: if you don't have time to do the above changes, then when I find some time I can go through and modify your PR myself.
Hello @Rohit-dot-KumarAM, thanks a million for your contribution. Is there anything that I can do to help to get this merged? I would love to have automatic instrumentation for Kafka. |
Hi, I did this a while back and since then some spec changes have been added. Can you help me align with the spec? I may ruin the entire work due to my limited understanding of instrumentation. |
@Rohit-dot-KumarAM No worries. I have this on my list to push over the finish line in the coming months. We definitely want Kafka support, and we appreciate your effort here. I may end up making a new PR with @Rohit-dot-KumarAM's work plus some spec fixes and other work. For those with interest in Kafka support, make sure you're watching #853 as well. |
Changes include:
* automatically create transactions when looping over results
* instrument KafkaConsumer.poll
* add binary TraceParent handling
* added support for span links
* add tests
Closing in favor of #1555, which is based on this PR. |
* kafka instrumentation for apm-python agent.
* Adapt #964 for Kafka/messaging support. Changes include:
  * automatically create transactions when looping over results
  * instrument KafkaConsumer.poll
  * add binary TraceParent handling
  * added support for span links
  * add tests
  * create/delete topics explicitly in a pytest fixture
  * add changelog entry

Co-authored-by: Rohit-dot-KumarAM <rohit.kumaram@maplelabs.com>
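One of those changes, automatically creating transactions when looping over results, can be sketched as wrapping the consumer's iterator so that the transaction opened for one record is closed when the next record is requested. The `start_transaction`/`end_transaction` hooks below are hypothetical stand-ins, not the agent's actual API:

```python
class InstrumentedIterator:
    """Wrap a record iterator so each record is handled inside a transaction.

    The transaction opened for record N is closed when record N+1 is
    requested, or when iteration stops -- mirroring the close-on-next-call
    behaviour described in this PR.
    """

    def __init__(self, records, start_transaction, end_transaction):
        self._records = iter(records)
        self._start = start_transaction  # hypothetical hook, not the agent API
        self._end = end_transaction      # hypothetical hook, not the agent API
        self._open = False

    def __iter__(self):
        return self

    def __next__(self):
        # Close the transaction belonging to the previously yielded record.
        if self._open:
            self._end()
            self._open = False
        record = next(self._records)  # StopIteration propagates after cleanup
        self._start(record)
        self._open = True
        return record
```

Because the close happens lazily on the next `__next__` call, the work the application does with each record between loop iterations is attributed to that record's transaction.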
Add instrumentation for the KafkaProducer.send and KafkaConsumer.next methods while running inside a supported framework.
How it works:
-> When send is called from within a transaction (say, a Django transaction), a span is captured for it.
-> Then, in elasticapm/instrumentation/packages/kafka.py, one extra header called "trace" is added to the message being sent.
-> This "trace" header contains the traceparent string with the span id.
-> On the receiver's side, while instrumenting "next", the "trace" header is fetched.
-> If there is a running transaction that is not of type "messaging", a span is created for "next".
-> Else, if there is a running transaction of type "messaging", it is closed, on the assumption that it was started by the previous "next" call.
-> Else, if there is no current transaction running, a new one is created as a child of the send span, using the "trace" header that contains our traceparent string.
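The three consumer-side branches above can be sketched in plain Python. The `Transaction` and `Span` classes here are minimal hypothetical stand-ins for illustration, not the elastic-apm agent's actual types:

```python
# Hypothetical stand-ins for illustration; not the actual elastic-apm API.
class Transaction:
    def __init__(self, transaction_type, parent_header=None):
        self.transaction_type = transaction_type
        self.parent_header = parent_header  # e.g. the "trace" header value
        self.closed = False

    def end(self):
        self.closed = True

class Span:
    def __init__(self, name):
        self.name = name

def handle_consumer_next(current_transaction, trace_header):
    """Decide what to do when the instrumented KafkaConsumer.next fires.

    Returns a (transaction, span) tuple describing the resulting state.
    """
    if current_transaction is not None and current_transaction.transaction_type != "messaging":
        # Branch 1: an unrelated transaction (e.g. a Django request) is
        # running -> record "next" as a span inside it.
        return current_transaction, Span("KafkaConsumer.next")
    if current_transaction is not None:
        # Branch 2: the running "messaging" transaction was started by the
        # previous "next" call -> close it now.
        current_transaction.end()
        return None, None
    # Branch 3: no transaction at all -> start one as a child of the
    # producer's send span, using the trace header carried in the message.
    return Transaction("messaging", parent_header=trace_header), None
```

Note that branches 2 and 3 together produce the looping behaviour: each call closes the transaction opened by the previous call, and the next call opens a fresh one for the next message.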
Supported base Technologies:
================
-> Django 1.8.19
-> Flask 1.1.2
-> Celery 3.1.26.post2 (only Kafka send inside a Celery task is covered, not consume.)
Related issues
closes #853