Hi Sreekanth, is it possible to generate a timestamp field containing the timestamp of the replication, i.e. the date and time at which a record is replicated in a delta scenario?
Hi Sreekanth - Is there any way to send the updates to the target system as CSV or Parquet files, similar to the SAP Data Intelligence RMS process?
Currently, Replication Flows do not support this feature. For now, you can use the native SAP HANA Cloud DB functionality, the EXPORT SQL command, to generate a CSV file.
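For anyone who wants to script that workaround, here is a minimal sketch (not an official SAP sample) that runs such an EXPORT statement from Python via the hdbcli driver. The host, credentials, file path, schema, and table names are all placeholders, and the exact EXPORT INTO syntax and supported targets depend on your HANA Cloud landscape, so check the SQL reference for your instance first.

```python
# Minimal sketch: export a replicated table to CSV from SAP HANA Cloud.
# All connection details, paths, and object names below are placeholders.
from hdbcli import dbapi

# Connect to SAP HANA Cloud (TLS is required for cloud instances).
conn = dbapi.connect(
    address="YOUR_HANA_HOST",   # placeholder hostname
    port=443,                   # typical HANA Cloud SQL port
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    encrypt="true",
)
cursor = conn.cursor()

# EXPORT INTO writes the table (or query result) as a CSV file to a
# supported target, e.g. SAP HANA Cloud, Data Lake Files. The hdlfs://
# path, schema, and table here are hypothetical examples.
cursor.execute(
    "EXPORT INTO CSV FILE 'hdlfs://MY_CONTAINER/replicated_data.csv' "
    'FROM "MY_SCHEMA"."REPLICATED_TABLE"'
)

cursor.close()
conn.close()
```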
Hello, thanks for helping! What's the difference between creating a Replication Flow with load type = Initial and Delta (delta interval = 1 hour) and another Replication Flow with load type = Initial Only, scheduled to run every hour?
Initial and Delta option: once the initial load is completed, the system keeps checking the source for data changes and continuously loads the delta/changed data into the target table. Initial Only: it loads the data once; subsequent runs do not look for delta changes in the source.
@sreekanthsurampally Ok, so the output is exactly the same; the difference is only in how the data are loaded?
@AndreaTerrone The transferred data volume is bigger if you keep running initial loads: you send the whole dataset every time instead of only the small changed parts.
And the mappings?
@AndreaTerrone It is not the same: the initialization job generates the delta pointer, and the delta job then runs based on that pointer.
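To make the difference concrete, here is a small illustrative sketch (plain Python, not Datasphere code) of the two patterns: a scheduled Initial Only flow re-reads the whole table on every run, while a delta flow keeps a pointer (here modeled as a last-replicated timestamp) and fetches only the rows changed since then. The table, column, and pointer names are invented for the example.

```python
from datetime import datetime, timezone

# Toy "source table": each row carries a change timestamp.
source_rows = [
    {"id": 1, "changed_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "changed_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 3, "changed_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]

def initial_only_run():
    """Scheduled 'Initial Only' pattern: every run transfers ALL rows."""
    return list(source_rows)

# Delta pattern: the initialization run sets a pointer; each delta run
# transfers only rows changed after the pointer, then advances it.
delta_pointer = None

def delta_run():
    global delta_pointer
    if delta_pointer is None:
        transferred = list(source_rows)  # initial load
    else:
        transferred = [r for r in source_rows
                       if r["changed_at"] > delta_pointer]
    if transferred:
        delta_pointer = max(r["changed_at"] for r in transferred)
    return transferred

print(len(initial_only_run()))  # 3 rows, and 3 again on every later run
print(len(delta_run()))         # 3 rows (initial load sets the pointer)
print(len(delta_run()))         # 0 rows until something changes
```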