Fastest way to delete data?

  • Published 3 Jan 2025

COMMENTS • 20

  • @sumitayadav6692 • 1 year ago +1

    Hi Connor... please make a detailed video on how to effectively read AWR and ADDM reports.

  • @ecoworrier • 1 year ago +1

    Nice summary. Thanks Connor.

  • @steveandreassend8620 • 1 year ago

    Your last method at the end with MOVE ONLINE is elegant. If that's not possible, a fast solution for an in-place DELETE on large non-partitioned tables is to sort the rows for deletion by DB block ID using the ROWID (perhaps via a temp table) and then batch-delete them. This reduces db file sequential reads by clustering the work by physical location: Linux SD devices, and often SAN devices, will do read-ahead and pre-cache adjacent data blocks. (A sketch of this follows below this thread.)

    • @DatabaseDude • 1 year ago

      Indeed - an excellent option. I'd love to see the database do this natively
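
A minimal sketch of the ROWID-ordered batch delete described in the comment above. BIG_TABLE, the STATUS = 'EXPIRED' predicate and the batch size are hypothetical placeholders, not details from the video; the key idea is that fetching the ROWIDs in sorted order makes consecutive deletes land on adjacent blocks.

    declare
      type t_rids is table of rowid index by pls_integer;
      l_rids t_rids;
      -- Fetch the doomed rows in ROWID (i.e. physical block) order so that
      -- each batch of deletes touches clustered blocks and benefits from
      -- storage read-ahead, as described in the comment above.
      cursor c is
        select rowid
        from   big_table
        where  status = 'EXPIRED'
        order  by rowid;
    begin
      open c;
      loop
        fetch c bulk collect into l_rids limit 10000;
        exit when l_rids.count = 0;
        forall i in 1 .. l_rids.count
          delete from big_table where rowid = l_rids(i);
        commit;   -- commit per batch to keep undo per transaction bounded
      end loop;
      close c;
    end;
    /

Committing inside the fetch loop can expose the cursor to ORA-01555 on a busy system, which is one reason the commenter suggests staging the sorted ROWIDs in a temporary table first.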

  • @sagsag73 • 1 year ago

    As always, there is a tradeoff: in order to move the table and keep the needed rows (assuming we need to delete 20% of the table), we still need almost double the space. However, it is a great option!!!

    • @DatabaseDude • 1 year ago

      Indeed. Although space is so cheap and plentiful nowadays (I know this is a generalisation :-))
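
For reference, a sketch of the kind of filtered move being discussed in this thread. The table name and predicate are hypothetical, and the INCLUDING ROWS clause assumes Oracle Database 23ai or later; on earlier releases a CTAS or DBMS_REDEFINITION-based copy achieves a similar result.

    -- Rebuild the table online, keeping only the rows matching the predicate.
    -- Everything else is "deleted" as a side effect of the move, but the old
    -- and new copies coexist until the move completes (the space point above).
    alter table orders
      move online
      including rows where order_date >= date '2024-01-01';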

  • @rajaneeshrao4188 • 1 year ago

    Can we use multi-threading, i.e. dividing the rows to be deleted with some logic and distributing them to multiple sessions? Like dbms_parallel_execute, where the rows are split into chunks and each chunk is allocated to a session. (A sketch follows below this thread.)

    • @DatabaseDude • 1 year ago

      Yes you can. dbms_parallel_execute is a common option here. That improves response time, but you'll be doing the same amount of work in total.
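
A minimal sketch of the dbms_parallel_execute approach discussed above. The task name, schema, table, predicate, chunk size and parallel level are all placeholder assumptions; the package splits the table into ROWID ranges and runs the delete against each chunk in its own job slave.

    begin
      dbms_parallel_execute.create_task('purge_task');

      -- Carve the table into ROWID-range chunks of ~10,000 rows each.
      dbms_parallel_execute.create_chunks_by_rowid(
        task_name   => 'purge_task',
        table_owner => 'APP',
        table_name  => 'BIG_TABLE',
        by_row      => true,
        chunk_size  => 10000);

      -- Each chunk is handed to one of 8 slaves; :start_id and :end_id are
      -- bound to the chunk's ROWID range by the package.
      dbms_parallel_execute.run_task(
        task_name      => 'purge_task',
        sql_stmt       => 'delete from app.big_table
                            where rowid between :start_id and :end_id
                              and status = ''EXPIRED''',
        language_flag  => dbms_sql.native,
        parallel_level => 8);

      dbms_parallel_execute.drop_task('purge_task');
    end;
    /

As the reply notes, this shortens elapsed time, but the total undo, redo and block visits are essentially unchanged.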

  • @vikram4755 • 1 year ago

    If we try this out on a 5 TB table, since it's online, would the database need an additional 5 TB to create an MV in the background to support the online operation?

    • @DatabaseDude • 1 year ago +1

      Correct. A "move" always needs (temporarily) the old and new data.

  • @Sagardua297 • 1 year ago +1

    Absolutely agree 💯

  • @siddharthapenchala8146 • 1 year ago

    Sir, is it true that 'bulk collect' is the best approach to delete a large number of rows efficiently from a table that contains a huge amount of data?

    • @DatabaseDude • 1 year ago +1

      Bulk collect is definitely better than row-at-a-time processing, but native SQL (delete from ...) is implicitly doing the same thing or better than bulk collect.

    • @rajaneeshrao4188 • 1 year ago

      But for a non-partitioned table, when deleting millions of rows, bulk collect with some limit and a periodic commit is good compared to a simple delete, right?
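
To illustrate the reply above: the array processing that bulk collect gives you in PL/SQL is already implicit in a plain SQL DELETE, so batching (to limit undo and allow periodic commits) can be done without any collections. The table, predicate and batch size below are hypothetical.

    begin
      loop
        delete from big_table
        where  status = 'EXPIRED'
        and    rownum <= 100000;      -- cap each batch
        exit when sql%rowcount = 0;
        commit;                       -- commit between batches to bound undo
      end loop;
    end;
    /

A single DELETE of all the rows in one transaction is usually at least as fast overall; the batched loop mainly trades total efficiency for smaller transactions and restartability.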

  • @sumitayadav6692 • 1 year ago

    Hi.. can you make a video on varchar2(32k)? I used it in my table and found the row size increased, and the table is now the largest in the DB. Is varchar2(32k) taking a lot of space in ASM? Should I move to a CLOB?

    • @DatabaseDude • 1 year ago

      A varchar2(32k) is just a CLOB under the covers (once the data is larger than ~4k), so you incur all the same performance overheads.
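
A quick way to see the behaviour described in the reply above. The table and column names are hypothetical, and declaring a varchar2(32767) column assumes MAX_STRING_SIZE=EXTENDED is enabled; values beyond roughly 4000 bytes are stored out of line as a LOB segment, which is visible in the data dictionary.

    create table demo_v32k ( id number, big_text varchar2(32767) );

    -- The extended column is backed by a (SecureFiles) LOB segment:
    select column_name, segment_name, securefile
    from   user_lobs
    where  table_name = 'DEMO_V32K';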

  • @eddiehapson1730 • 1 year ago

    Nice

  • @ryanwwest • 1 year ago

    Seems like a nuke would be faster.

    • @DatabaseDude • 1 year ago +1

      "truncate" has always been very "thorough" :-)