Best practices
0 votes
0 replies
42 views

QuickBooks Online's API documentation recommends that users rely on webhooks rather than change data capture to synchronize their entity records with QuickBooks's. Why are webhooks ...
In Hoc Signo
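A minimal sketch of the webhook side, assuming a Flask app and the documented intuit-signature header (QuickBooks signs the raw request body with HMAC-SHA256 keyed by the webhook verifier token); the route path and payload handling are illustrative, not Intuit's reference implementation:

import base64, hashlib, hmac, os
from flask import Flask, abort, request

app = Flask(__name__)
VERIFIER_TOKEN = os.environ["QBO_WEBHOOK_VERIFIER"]  # from the Intuit developer portal

@app.route("/qbo/webhook", methods=["POST"])
def qbo_webhook():
    # QuickBooks sends the base64 HMAC-SHA256 of the raw body in intuit-signature.
    sig = request.headers.get("intuit-signature", "")
    digest = hmac.new(VERIFIER_TOKEN.encode(), request.get_data(), hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64encode(digest).decode(), sig):
        abort(401)
    # Notifications carry only entity name/id/operation; fetch full records on demand.
    for note in request.get_json().get("eventNotifications", []):
        for entity in note["dataChangeEvent"]["entities"]:
            print(entity["name"], entity["id"], entity["operation"])
    return "", 200

The push model means you only call the API for records that actually changed, which is the usual argument for webhooks over polling-style change queries.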
0 votes
0 answers
54 views

I have 3 pods running in a Kubernetes environment: (1) a web application pod, (2) an Oracle database, and (3) Debezium. The web application is connected to the Oracle database, ...
Chamath Jeevan
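If the Debezium pod runs Kafka Connect, the Oracle capture is typically wired up by POSTing a connector definition to Connect's REST API. A sketch with hypothetical Kubernetes Service names, credentials, and table names:

import json
import requests

connector = {
    "name": "oracle-cdc",
    "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "database.hostname": "oracle-svc",   # the Oracle pod's Service name
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "dbz",
        "database.dbname": "ORCLCDB",
        "topic.prefix": "appdb",
        "table.include.list": "APPUSER.ORDERS",
        "schema.history.internal.kafka.bootstrap.servers": "kafka-svc:9092",
        "schema.history.internal.kafka.topic": "schema-changes.appdb",
    },
}
resp = requests.post("http://connect-svc:8083/connectors",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(connector))
resp.raise_for_status()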
0 votes
1 answer
97 views

I’m using PostgreSQL as the source database with Change Data Capture (CDC) enabled via publications. I have two related tables:
-- Table 1: orders
CREATE TABLE orders (
    id UUID PRIMARY KEY, ...
jasraj bedi
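A minimal sketch of the publication setup for two related tables, run here through psycopg2; the DSN and the order_items table name are hypothetical (the second table is truncated in the question), and the server must already have wal_level = logical:

import psycopg2

conn = psycopg2.connect("dbname=appdb user=postgres")  # hypothetical DSN
conn.autocommit = True  # replication-slot creation cannot run in a transaction
cur = conn.cursor()

# One publication covering both tables keeps their changes in one stream.
cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders, order_items;")

# FULL replica identity makes UPDATE/DELETE events carry all old column values.
cur.execute("ALTER TABLE orders REPLICA IDENTITY FULL;")

# A pgoutput slot from which the CDC consumer streams the publication.
cur.execute("SELECT pg_create_logical_replication_slot('orders_slot', 'pgoutput');")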
0 votes
1 answer
68 views

I'm trying to write to BigQuery using Apache Beam, in Python, and I want to use the newest CDC features for writing to BigQuery. However, I can't get the correct format of the objects in the ...
José Fonseca
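For orientation: BigQuery's Storage Write API CDC keys each mutation off the _CHANGE_TYPE pseudo-column ("UPSERT" or "DELETE"), optionally ordered by _CHANGE_SEQUENCE_NUMBER. How the Python SDK surfaces these varies by Beam version (newer releases expose dedicated CDC options on WriteToBigQuery), so treat this as a sketch of the element shape rather than the definitive wiring; the table, schema, and input dicts are hypothetical:

import apache_beam as beam

def to_cdc_row(change):
    # Pseudo-columns consumed by the Storage Write API, not stored as table columns.
    return {
        "id": change["id"],
        "amount": change["amount"],
        "_CHANGE_TYPE": "DELETE" if change["deleted"] else "UPSERT",
        "_CHANGE_SEQUENCE_NUMBER": change["seq"],
    }

with beam.Pipeline() as p:
    (p
     | beam.Create([{"id": 1, "amount": 9.5, "deleted": False, "seq": "1/A0"}])
     | beam.Map(to_cdc_row)
     | beam.io.WriteToBigQuery(
         "my-project:my_dataset.orders",
         schema="id:INTEGER,amount:FLOAT,_CHANGE_TYPE:STRING,_CHANGE_SEQUENCE_NUMBER:STRING",
         method=beam.io.WriteToBigQuery.Method.STORAGE_WRITE_API))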
1 vote
1 answer
75 views

We need to capture changes made in the PostgreSQL DB tables, and are planning on using CDC for the same. We have a requirement to ignore/filter events which were written in the DB by a specific user/...
RaRa • 306
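Logical decoding does not record the session user by default, so one common workaround is an audit column maintained by the application or a trigger, then filtered with Debezium's scripting Filter SMT. In this sketch the modified_by column and etl_user name are hypothetical, and the SMT needs the debezium-scripting module plus a Groovy engine on the Connect classpath:

# Merge these keys into the existing Debezium connector configuration.
filter_smt = {
    "transforms": "dropEtlUser",
    "transforms.dropEtlUser.type": "io.debezium.transforms.Filter",
    "transforms.dropEtlUser.language": "jsr223.groovy",
    # Keep an event only when the audit column names some other user;
    # delete events (after == null) are passed through here.
    "transforms.dropEtlUser.condition":
        "value.after == null || value.after.modified_by != 'etl_user'",
}
print(filter_smt)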
0 votes
2 answers
130 views

I am encountering an issue when trying to CDC data from a MongoDB source to Kafka using the Debezium connector with the Confluent schema registry. I only want the CDC data to have some of the fields which are ...
RAJAT BANSAL
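The Debezium MongoDB connector can trim documents at the source with field.exclude.list, so the schema registry only ever sees the reduced shape. A sketch with hypothetical database, collection, and field names:

# Merge into the Debezium MongoDB connector config.
mongo_config = {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "mongodb.connection.string": "mongodb://mongo:27017/?replicaSet=rs0",
    "topic.prefix": "shop",
    "collection.include.list": "shop.customers",
    # Entries are fully qualified as <database>.<collection>.<field>.
    "field.exclude.list": "shop.customers.ssn,shop.customers.internal_notes",
}
print(mongo_config)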
0 votes
0 answers
25 views

I am looking for a way to design a stream that ONLY emits columns that have changed in a record. If I use the stream, it emits the entire record even if only one of the column values has changed in ...
Saqib Ali • 4,551
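Most CDC streams ship full before/after row images, so emitting only the changed columns is usually a post-processing diff rather than a stream option. A minimal sketch over a hypothetical update event:

def changed_columns(before: dict, after: dict) -> dict:
    """Return only the columns whose values differ between the row images."""
    keys = before.keys() | after.keys()
    return {k: after.get(k) for k in keys if before.get(k) != after.get(k)}

old = {"id": 7, "status": "pending", "total": 42.0}
new = {"id": 7, "status": "shipped", "total": 42.0}
print(changed_columns(old, new))  # {'status': 'shipped'}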
0 votes
1 answer
102 views

I'm trying to use Flink CDC to capture data changes from MySQL and update the Hudi table in S3. My PyFlink job looks like:
env = StreamExecutionEnvironment.get_execution_environment(config)
env....
Rinze • 834
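The Table API route is often simpler than the DataStream one for this: a mysql-cdc source table and a Hudi sink table, joined by an INSERT. Hostnames, credentials, and the S3 path here are hypothetical, and the flink-sql-connector-mysql-cdc and hudi-flink bundle jars must be on the classpath:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: MySQL binlog via the mysql-cdc connector.
t_env.execute_sql("""
    CREATE TABLE orders_src (
        id BIGINT,
        amount DECIMAL(10, 2),
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'mysql-cdc',
        'hostname' = 'mysql-host',
        'port' = '3306',
        'username' = 'cdc_user',
        'password' = 'cdc_pass',
        'database-name' = 'shop',
        'table-name' = 'orders'
    )
""")

# Sink: a Hudi MERGE_ON_READ table on S3.
t_env.execute_sql("""
    CREATE TABLE orders_hudi (
        id BIGINT,
        amount DECIMAL(10, 2),
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'hudi',
        'path' = 's3a://my-bucket/hudi/orders',
        'table.type' = 'MERGE_ON_READ'
    )
""")

t_env.execute_sql("INSERT INTO orders_hudi SELECT * FROM orders_src")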
0 votes
1 answer
96 views

We have a Kafka cluster that we're trying to connect Debezium to. We are able to successfully deploy a Producer or Consumer using the following producer.config/consumer.config (these are temporary ...
user1913559
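Kafka Connect does not read producer.config/consumer.config files; the worker takes the same client settings with producer./consumer. prefixes in its own properties, or per connector via producer.override.* when the worker sets connector.client.config.override.policy=All. A sketch with hypothetical credentials:

# Merge into the Debezium connector config before POSTing it to the
# Connect REST API; worker-level equivalents drop the "override." part.
security_overrides = {
    "producer.override.security.protocol": "SASL_SSL",
    "producer.override.sasl.mechanism": "PLAIN",
    "producer.override.sasl.jaas.config": (
        "org.apache.kafka.common.security.plain.PlainLoginModule "
        "required username='dbz' password='secret';"
    ),
}
print(security_overrides)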
0 votes
0 answers
120 views

I'm using Debezium Server to capture changes from a PostgreSQL database and publish them to Google Cloud Pub/Sub. I want to set a table-specific ordering key so that messages related to the same ...
pramod • 193
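Whatever sets the key on the Debezium side, ordering only holds if the publisher and the subscription have message ordering enabled and all events for one row share a key. A sketch of that downstream contract with the google-cloud-pubsub client (project, topic, and key format hypothetical):

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(
        enable_message_ordering=True))
topic = publisher.topic_path("my-project", "cdc-events")

# Messages sharing an ordering key (e.g. "<table>:<primary key>") are
# delivered to subscribers in publish order relative to each other.
for i in range(3):
    publisher.publish(topic, f"change-{i}".encode(),
                      ordering_key="public.orders:42").result()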
0 votes
0 answers
91 views

I'm running a Flink cluster in Docker in my local env, and I've copied these jar files to /opt/flink/lib/ in the image:
flink-cdc-dist-3.3.0.jar
flink-cdc-pipeline-connector-mysql-3.3.0.jar
flink-...
Rinze • 3
1 vote
1 answer
79 views

I have a setup where I’m using a Neo4j source connector to propagate change events to a Kafka topic. My Spring Boot application consumes these messages. However, when a transaction in Neo4j involves ...
Sarthak Sharma
1 vote
1 answer
68 views

I want to have an ADF job that triggers on a SQL (Azure SQL DB) table re-load. I know that there is a Change Data Capture (CDC) trigger that can be used in ADF, but that seems to be something I would ...
M Rothwell
0 votes
0 answers
110 views

I'm trying to create a trigger that will log the user who made modifications (INSERT, UPDATE, DELETE) to a table into a Change Data Capture (CDC) table in SQL Server. Specifically, I want to insert ...
eusebio72
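CDC's own change tables cannot be written to directly, so the usual pattern is a companion audit table populated by a DML trigger. A sketch with hypothetical table and column names, executed through pyodbc; SUSER_SNAME() captures the login that fired the trigger:

import pyodbc

conn = pyodbc.connect("DSN=mydb", autocommit=True)  # hypothetical DSN
conn.execute("""
CREATE TRIGGER trg_orders_audit
ON dbo.orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- dbo.orders_audit is a companion table, not a CDC system table.
    INSERT INTO dbo.orders_audit (order_id, action, modified_by, modified_at)
    SELECT COALESCE(i.id, d.id),
           CASE WHEN i.id IS NOT NULL AND d.id IS NOT NULL THEN 'UPDATE'
                WHEN i.id IS NOT NULL THEN 'INSERT'
                ELSE 'DELETE' END,
           SUSER_SNAME(),
           SYSUTCDATETIME()
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.id = d.id;
END
""")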
0 votes
1 answer
343 views

I am fairly new to Azure Data Factory (ADF) but have been learning and experimenting with some of its advanced features. I'm currently working on a use case involving Change Data Capture (CDC) and ...
Nahid Talukdar
