Hi Dustin, thank you, I was in need of a demo and your video fit like a glove.
Hi Dustin, nice video!
Any plans to do the same but for Microsoft Fabric?
@@derkachm No, I am not doing enough with Fabric yet to add anything new there.
Thanks Dustin!
Hi Dustin, many thanks for this demo. I noticed that from time to time the Synapse pool switches back to the default configuration (without logging). Have you seen this behaviour?
Once it is set at the Spark pool level I have not had an issue.
Hi Dustin, thanks for this tip. I have two questions:
1) I want to load some tables from one container to another; how do I provide the destination info?
2) I'm getting this kind of error and I don't know what to do:
Exception in thread Thread-19:
Traceback (most recent call last):
  File "/home/trusted-service-user/cluster-env/env/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/trusted-service-user/cluster-env/env/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "", line 11, in run_tasks
    function(value)
  File "", line 14, in load_table
    .option("password", db_password)
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 172, in load
    return self._df(self._jreader.load())
  File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
    return f(*a, **kw)
For the error, have you tried running the load_table function on its own, without the threading? I expect you will get a clearer error message if you do. It could be something as simple as a bad password or bad database info, but it isn't clear from this traceback.
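To illustrate why running it outside the thread helps: an exception raised inside a worker thread is printed to stderr but never re-raised in the main thread, so the real cause gets buried in a traceback dump like the one above. A minimal sketch (the names `load_table` and `value` are stand-ins from the poster's snippet, and the ValueError here simulates a JDBC auth failure):

```python
# Sketch: exceptions inside threads don't propagate to the caller,
# so we capture them explicitly to see the underlying cause.
import threading

errors = []

def worker(task):
    try:
        task()                      # e.g. load_table(value)
    except Exception as exc:
        errors.append(exc)          # capture instead of losing it

def bad_task():
    # Stand-in for whatever load_table actually raises,
    # e.g. a JDBC "login failed for user" error.
    raise ValueError("login failed for user")

t = threading.Thread(target=worker, args=(bad_task,))
t.start()
t.join()

print(errors[0])  # the underlying error, now visible
```

Running the function directly (no thread) surfaces the same exception with a normal traceback, which is usually enough to spot a bad password or connection string.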
Do we have to specifically write logger.info or logger.warn to get messages into Log Analytics in Synapse? Or does any error message in Synapse go to Log Analytics automatically?
Anything that goes to log4j will go to Log Analytics. I believe print statements and the Python logger will not be sent to Log Analytics.
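A minimal sketch of writing through log4j from a Synapse notebook, assuming the usual py4j route to the JVM; `spark` is the SparkSession the notebook provides, and "MyNotebook" is a hypothetical logger name. This is untested outside a Synapse Spark pool:

```python
# Sketch (assumes a Synapse notebook where `spark` already exists):
# messages written via the JVM's log4j are what the diagnostic
# settings forward to Log Analytics.
log4j = spark._jvm.org.apache.log4j
logger = log4j.LogManager.getLogger("MyNotebook")  # hypothetical name

logger.info("goes to log4j, and on to Log Analytics")
logger.warn("so does this")

# By contrast, these stay in the notebook/driver output only:
print("not forwarded")
import logging
logging.getLogger("MyNotebook").warning("not forwarded either")
```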
You can't upload a .txt file anymore; could you help explain how to set this up now?
See the new video instead which covers uploading a few different ways: ua-cam.com/video/CVzGWWSGWGg/v-deo.html
Great sir