97. Databricks | Pyspark | Data Security: Enforcing Column Level Encryption

  • Published 31 Jan 2025

COMMENTS • 26

  • @sravankumar1767
    @sravankumar1767 1 year ago +1

    Nice explanation 👌 👍 👏

  • @tanushreenagar3116
    @tanushreenagar3116 1 year ago +1

    GREAT EXPLANATION SIR

  • @phanisrikrishna
    @phanisrikrishna 1 year ago

    Hi Raja,
    This particular video is great.
    I have one question: will the size of the DataFrame increase when we encrypt some of the columns? How do we account for memory when designing this?
    Thanks in advance.

  • @SravanthiP-v3o
    @SravanthiP-v3o 1 year ago

    Sir, after creating the encrypted and decrypted columns, how do we hide the original ssn column? And when we run the command select ssn from dimemployee, how do we ensure that we get the encrypted format?

  • @sanjayr3597
    @sanjayr3597 11 months ago

    Good video. I have a question regarding the KEY: how is that value stored? Can we use the same function in another notebook with the same cluster?
    Are there any drawbacks to using this method?

  • @nagamanickam6604
    @nagamanickam6604 9 months ago +1

    Thank you

  • @rmrz2225
    @rmrz2225 1 year ago +1

    Good job, but I have a question: when we encrypt information, aren't we supposed to be unable to decrypt it?

  • @ramreddy1138
    @ramreddy1138 1 year ago +1

    Good one. But how do we filter the data and apply comparisons?

    • @rajasdataengineering7585
      @rajasdataengineering7585 1 year ago

      In order to filter and compare, we need to decrypt the data on the fly using the decrypt method (see the sketch after this thread).

    • @ramreddy1138
      @ramreddy1138 1 year ago

      @rajasdataengineering7585 It will impact performance too much.

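    A minimal sketch of the decrypt-on-the-fly approach mentioned in the reply above, assuming a Fernet-based encrypt/decrypt UDF pair of the kind commonly used for this pattern; the key handling, UDF names, and column names are illustrative, not taken from the video:

        from cryptography.fernet import Fernet
        from pyspark.sql.functions import udf, col
        from pyspark.sql.types import StringType

        # Illustrative key; in practice reuse the exact key that encrypted the data
        # (for example, loaded from a secret scope), otherwise decryption will fail.
        key = Fernet.generate_key()

        encrypt_udf = udf(lambda v: Fernet(key).encrypt(v.encode()).decode() if v else None, StringType())
        decrypt_udf = udf(lambda t: Fernet(key).decrypt(t.encode()).decode() if t else None, StringType())

        df = spark.createDataFrame([("123-45-6789",), ("987-65-4321",)], ["ssn"])
        encrypted_df = df.withColumn("ssn_encrypted", encrypt_udf(col("ssn"))).drop("ssn")

        # Decrypt only for the comparison; the stored column stays encrypted.
        filtered_df = (encrypted_df
                       .withColumn("ssn_plain", decrypt_udf(col("ssn_encrypted")))
                       .filter(col("ssn_plain") == "123-45-6789")
                       .drop("ssn_plain"))

    As the follow-up reply notes, running a Python UDF row by row like this is costly, so decrypt-on-the-fly filtering should be used sparingly on large tables.
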
  • @azureadi-q3y
    @azureadi-q3y 1 year ago +1

    Good explanation. Could you please explain Delta Live Tables? (Today I loaded a table with 20 columns, and the next day the same table arrives with 2 extra columns. How do we handle that in delta loads in ADB, and how do we manage the delta merge command in production?)

  • @NagarjunaSunguluru
    @NagarjunaSunguluru 1 year ago +1

    Can we apply those functions on nested JSON data also?

    • @rajasdataengineering7585
      @rajasdataengineering7585 1 year ago

      Yes, we can. At the DataFrame level, we can apply encryption and then write in JSON format, which will keep the encrypted data within the JSON file; a sketch follows below.
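
    A minimal sketch of that idea, again assuming a Fernet-based encrypt UDF; the key handling, column names, and output path are illustrative, not from the video:

        from cryptography.fernet import Fernet
        from pyspark.sql.functions import udf, col, struct
        from pyspark.sql.types import StringType

        key = Fernet.generate_key()   # illustrative; manage the real key securely
        encrypt_udf = udf(lambda v: Fernet(key).encrypt(v.encode()).decode() if v else None, StringType())

        df = spark.createDataFrame([("Alice", "123-45-6789")], ["name", "ssn"])

        # Encrypt the sensitive field, nest it inside a struct, and write as JSON;
        # the JSON files on disk then contain only the encrypted value.
        (df
         .withColumn("ssn", encrypt_udf(col("ssn")))
         .select(col("name"), struct(col("ssn")).alias("pii"))
         .write.mode("overwrite")
         .json("/tmp/dim_employee_encrypted_json"))

    For data that already arrives nested, the same UDF can be applied to the inner field (for example col("pii.ssn")) and the struct rebuilt around the encrypted value.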

  • @revjr1284
    @revjr1284 1 year ago

    I am getting the following error while encrypting the data:
    'TypeError: encoding without a string argument'
    Kindly help.
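
    That TypeError is raised by Python when bytes(value, 'utf-8') (or a similar encode step) receives something other than a str, which with an encrypt UDF usually means the column contains nulls or non-string values. A hedged sketch of one guard, assuming the same Fernet-style UDF; the names are illustrative:

        from cryptography.fernet import Fernet
        from pyspark.sql.functions import udf, col
        from pyspark.sql.types import StringType

        key = Fernet.generate_key()   # illustrative

        # Cast to string first and skip nulls inside the UDF, so the encode step
        # only ever sees a real str and the TypeError cannot occur.
        safe_encrypt_udf = udf(
            lambda v: Fernet(key).encrypt(v.encode()).decode() if isinstance(v, str) else None,
            StringType())

        df = spark.createDataFrame([("123-45-6789",), (None,)], "ssn string")
        df = df.withColumn("ssn", safe_encrypt_udf(col("ssn").cast("string")))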

  • @lavanijavidalikhan3844
    @lavanijavidalikhan3844 1 year ago +1

    Can we apply the same thing to encrypt a CSV file?

  • @sravankumar1767
    @sravankumar1767 1 year ago +1

    What is md5? Could you please explain this one.
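
    For context: MD5 is a one-way hash function rather than reversible encryption, so you can compare hashed values but cannot recover the original from the hash. Spark exposes it as a built-in function; a minimal sketch (the column name is illustrative):

        from pyspark.sql.functions import md5, col

        df = spark.createDataFrame([("123-45-6789",)], ["ssn"])

        # md5() returns the 128-bit MD5 checksum of the value as a 32-character hex string.
        hashed_df = df.withColumn("ssn_md5", md5(col("ssn").cast("string")))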

  • @gattureddy3796
    @gattureddy3796 11 months ago

    Hi Raja, could you please share a notebook link or DBC file?

  • @sabesanj5509
    @sabesanj5509 1 year ago +1

    Raja sir, will they ask these kinds of questions in Spark interviews?