How can we return this OutputtoADF value to a calling notebook?
Suppose I am running a for loop in a different notebook to call the processing notebook — how will the error be returned back to it?
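One common pattern for this: in Databricks, a driver notebook calls the processing notebook with `dbutils.notebook.run(...)`, which returns whatever string the child notebook passed to `dbutils.notebook.exit(...)`, so the loop can inspect each run's status. Below is a minimal sketch of that driver loop; since `dbutils` only exists inside Databricks, `run_notebook` here is a hypothetical stand-in for `dbutils.notebook.run`, and the notebook path, parameter names, and JSON shape are all assumptions for illustration.

```python
import json

def run_notebook(path, timeout_seconds, params):
    # Hypothetical stand-in for dbutils.notebook.run(path, timeout, params).
    # The child notebook is assumed to finish with something like:
    #   dbutils.notebook.exit(json.dumps({"status": ..., "error": ...}))
    if params["file"] == "bad.csv":
        return json.dumps({"status": "Failed", "error": "schema mismatch"})
    return json.dumps({"status": "Success", "error": ""})

errors = []
for file in ["good.csv", "bad.csv"]:
    # Parse the child notebook's exit value and collect any failures
    result = json.loads(run_notebook("/Shared/ProcessNotebook", 600, {"file": file}))
    if result["status"] != "Success":
        errors.append({"file": file, "error": result["error"]})

print(errors)
```

The driver loop keeps running even when one file fails, and `errors` ends up holding every failed run, which the driver can then surface to ADF in its own exit value.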
Thanks... that was an awesome explanation!
That's a really good idea... some time back I was struggling with this. I used a Web activity with the REST API to check the status of the job that calls the notebook.
How can I send an error/exception from a Synapse notebook to ADF?
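In Synapse the same idea works with `mssparkutils.notebook.exit(...)` instead of `dbutils.notebook.exit(...)`: the string you pass becomes the Notebook activity's exit value in the pipeline. A minimal sketch; the `ImportError` branch is only there so the pattern can be shown outside Synapse, and the payload fields are assumptions.

```python
import json

# The JSON the ADF/Synapse Notebook activity would receive as its exit value
payload = json.dumps({"status": "Failed", "error": "sample message"})

try:
    # Inside a Synapse notebook this hands the string back to the pipeline
    from notebookutils import mssparkutils  # only available inside Synapse
    mssparkutils.notebook.exit(payload)
except ImportError:
    # Outside Synapse, just show what the activity would see
    print(payload)
```

In the pipeline you can then read the value from the activity output and branch on it, the same way as with a Databricks notebook activity.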
Very helpful. Thank you.
Hi, if I have multiple command cells in a notebook and each one is inside a try-except block, how can I collate all the errors from the cells, pass them to ADF, and then send them together in a mail?
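One way to do this: keep a single shared list, append to it from every cell's `except` block, and only in the final cell serialize the whole list into one JSON string for ADF. A minimal sketch, with deliberately failing statements standing in for real cell logic and invented cell names:

```python
import json

errors = []  # one shared list, appended to by every cell's except block

# Cell 1 (example failure: division by zero)
try:
    result = 10 / 0
except Exception as e:
    errors.append({"cell": "load_source", "error": str(e)})

# Cell 2 (example failure: bad cast)
try:
    value = int("not-a-number")
except Exception as e:
    errors.append({"cell": "transform", "error": str(e)})

# Final cell: hand everything back to ADF as a single JSON string.
# Inside Databricks this line would be: dbutils.notebook.exit(output)
output = json.dumps({"errorCount": len(errors), "errors": errors})
print(output)
```

In ADF you can then put the whole JSON into one variable and feed it to a single Logic App / Web activity that sends the mail, rather than mailing per cell.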
What if there are no error messages? Will Set Variable not fail?
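One way to avoid that problem is to make the notebook always exit with the same JSON shape, even on a clean run, so the Set Variable expression in ADF always has a value to read. A minimal sketch, assuming the error list from earlier cells:

```python
import json

errors = []  # possibly empty when every cell succeeded

# Always emit the same JSON shape; on a clean run "errors" is just [].
# Inside Databricks: dbutils.notebook.exit(output)
output = json.dumps({"status": "Success" if not errors else "Failed",
                     "errors": errors})
print(output)  # {"status": "Success", "errors": []}
```

Since the exit value is never missing, the Set Variable activity gets a well-formed string either way and can branch on `status` instead of on the presence of output.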
Superb content
Hi, your videos are quite helpful for my implementation. I have just come across one issue: I am trying to load multiple files with different schemas into a Snowflake table using Data Factory, with my source being a blob container holding multiple files and my sink being Snowflake. While running the copy, the 'Import Schema' section does not work because my pipeline is dynamic. Can you please help me with a solution for that? It's been a roadblock. Happy to hear back from you, or we could connect to discuss this scenario in more detail.
Fantastic video... thanks mam
Nice way of identifying output from the notebook. I have a similar requirement, but I can't modify the notebook. Do you have any other way of doing it?
Useful video 👍
Hi Mam... how can we parse an XML file using this, and if the XML contains any invalid characters/hexadecimal character references, how can we capture that error? How can we achieve this with this flow?
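The same try-except pattern fits here: the standard-library XML parser raises `xml.etree.ElementTree.ParseError` when it hits an invalid character reference, and that message can be captured and passed to ADF like any other error. A minimal sketch, using an inline sample document with the illegal reference `&#x0;`:

```python
import json
import xml.etree.ElementTree as ET

# Sample XML containing an invalid character reference, which parsers reject
bad_xml = '<root><value>&#x0;</value></root>'

error_message = ""
try:
    ET.fromstring(bad_xml)
except ET.ParseError as e:
    # Capture the parser's description of the bad character/position
    error_message = f"Invalid XML: {e}"

# Serialize for ADF; inside Databricks: dbutils.notebook.exit(output)
output = json.dumps({"error": error_message})
print(output)
```

For files in storage you would read the content first and parse it the same way; the captured message then flows back through the notebook exit value into the mail/alert step of the pipeline.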
Can you please make a video on how to implement RDD concepts in PySpark code?
👏👏