Why You Should Use Azure Storage Tables
- Published Aug 1, 2024
- Table Storage is often one of the most overlooked options on Azure for data storage, but it’s also one of the most versatile, scalable, and least expensive options available. It provides a key-value storage mechanism that can scale to terabytes of data and a queryable API based on OData. In this video, we’ll look at how it works and the tools and APIs for working with Table Storage.
00:00 -- Table Storage on Azure
08:46 -- Azure Portal and Storage Explorer
13:51 -- Using OData to Query Data
OData: www.odata.org/
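As the video shows, Table Storage is queried over a REST endpoint using OData system parameters such as `$filter`, `$select`, and `$top`. A minimal sketch of building such a query URL in Python follows; the account name, table name, and filter values are hypothetical, and a real request would also need an Authorization header or SAS token:

```python
from urllib.parse import quote

def odata_query_url(account, table, filter_expr, select=None, top=None):
    """Build an OData query URL for the Table Storage REST endpoint (sketch)."""
    # $filter values must be percent-encoded (spaces -> %20, quotes -> %27).
    params = ["$filter=" + quote(filter_expr)]
    if select:
        params.append("$select=" + ",".join(select))
    if top:
        params.append(f"$top={top}")
    return f"https://{account}.table.core.windows.net/{table}()?" + "&".join(params)

# Hypothetical account, table, and property names for illustration.
url = odata_query_url(
    "mystorageacct", "Customers",
    "PartitionKey eq 'AK' and RowKey ge '100'",
    select=["Name", "Email"], top=50,
)
```

Tools like Storage Explorer build exactly these kinds of `$filter` expressions for you behind the scenes.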
------
Wintellect:
WintellectNOW: www.wintellectnow.com/
Blaize's Website: www.blaize.net/
Twitter:
Blaize: / theonemule
Wintellect: / wintellect
WintellectNOW: / wintellectnow
Your videos are great, they deserve more views. I've been trying to find good cosmos db content, and have been working through your videos on that and the other ones (like this one). Thanks for the good info
This was a great find, as I was a little wary of content that tells you how to set these up but not how to use them. Thank you!
I particularly like the 'cheap data storage' mentioned at 07:57, which is primarily why I am looking at this. But does anyone know of any learning material/videos that compare this solution with a small RDBMS (e.g. Azure SQL DB)?
Very good explanation, congratulations. I have a question: if I want to use this type of table, how can I delete all the content before inserting new rows?
You can do replace upserts with this. A replace upsert removes and then replaces all the data for a row.
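The replace-vs-merge distinction is easy to model. Below is a minimal in-memory sketch of the semantics, not the Azure SDK itself: entities are keyed by (PartitionKey, RowKey), a replace upsert discards any existing properties, and a merge upsert keeps them. All names and values are hypothetical:

```python
# Minimal in-memory model of Table Storage upsert semantics (not the Azure
# SDK): entities are keyed by (PartitionKey, RowKey).
table = {}  # (pk, rk) -> dict of properties

def upsert_replace(pk, rk, props):
    table[(pk, rk)] = dict(props)  # old properties are discarded entirely

def upsert_merge(pk, rk, props):
    table.setdefault((pk, rk), {}).update(props)  # old properties survive

upsert_replace("AK", "456", {"Name": "Ada", "City": "Juneau"})
upsert_merge("AK", "456", {"Email": "ada@example.com"})  # keeps Name and City
upsert_replace("AK", "456", {"Name": "Ada"})             # City and Email are gone
```

So to wipe a row's content before inserting new values, a replace upsert is enough; you don't need a separate delete first.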
If you query by PartitionKey (e.g. eq 'AK'), that may run on a single thread. Querying by RowKey (e.g. eq '456') might involve many threads (perhaps one per partition) that reduce all the results together, which could be quicker if there are few but big partitions. IMHO the usual mantra "test, test, test" applies!
Querying within a partition can be faster depending on the query. Going across partitions is generally something to avoid if possible. But yes: test, test, test. And even when you are sure, test again.
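The exchange above maps onto the three common Table Storage query shapes. A sketch of the corresponding OData filter strings (property names and values are hypothetical):

```python
# Three common Table Storage query patterns, as OData filter strings.

# 1. Point query: PartitionKey + RowKey, the fastest possible lookup.
point_query = "PartitionKey eq 'AK' and RowKey eq '456'"

# 2. Partition scan: stays inside one partition, then filters properties.
partition_scan = "PartitionKey eq 'AK' and Balance gt 100"

# 3. Table scan: no PartitionKey, so every partition must be touched.
table_scan = "RowKey eq '456'"
```

The design lesson from the thread: choose PartitionKey and RowKey so your hottest queries fall into patterns 1 or 2, and measure before assuming.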
Is there a way we can run stored functions or procedures on them?
There's no trigger for tables. If you want to use tables and create a trigger, use a storage queue to trigger a function that can read the table.
How do I copy this table data from one data lake to another?
You'd need to write a script to do that.
How do you sort the data in these tables based on an arbitrary column?
You really can't sort with this server side; Table Storage only returns entities ordered by PartitionKey and RowKey. Sorting by any other column has to happen client side.
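That client-side sorting is straightforward once the entities are in memory. A minimal Python sketch, where the entity dicts stand in for hypothetical query results:

```python
# Hypothetical query results; the service returned them ordered only by
# PartitionKey + RowKey, so any other ordering is applied client side.
entities = [
    {"PartitionKey": "AK", "RowKey": "2", "Name": "Bea", "Age": 41},
    {"PartitionKey": "AK", "RowKey": "1", "Name": "Ada", "Age": 36},
    {"PartitionKey": "WA", "RowKey": "3", "Name": "Cal", "Age": 29},
]

by_age = sorted(entities, key=lambda e: e["Age"])                    # ascending
by_name_desc = sorted(entities, key=lambda e: e["Name"], reverse=True)
```

For large result sets this means paging everything down before sorting, which is one reason to encode your most important ordering into the RowKey itself.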