AI Expedition
Speed up your TensorFlow code using TFRecords and dataset pipelines
In this video you'll learn how to use TFRecords and dataset pipelines to speed up your TensorFlow training code.
A link to my website:
aiexpedition.com/
A link to the github repo:
github.com/jlaihong/Speed-up-TensorFlow-code-using-TFRecords
Documentation:
tf.data.TFRecordDataset
www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset
tf.train.Example
www.tensorflow.org/api_docs/python/tf/train/Example
tf.train.Features
www.tensorflow.org/api_docs/python/tf/train/Features
tf.train.Feature
www.tensorflow.org/api_docs/python/tf/train/Feature
TFRecord and tf.train.Example
www.tensorflow.org/tutorials/load_data/tfrecord
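
A minimal sketch of the idea, based on the APIs linked above (tf.train.Example/Features/Feature to serialize records, tf.data.TFRecordDataset plus a mapped pipeline to read them back). The feature names and image handling here are illustrative assumptions, not the exact code from the video:

import tensorflow as tf

# --- Writing: serialize each example as a tf.train.Example protobuf ---
def serialize_example(image_bytes, label):
    feature = {
        "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return example.SerializeToString()

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    for path, label in [("img0.jpg", 0), ("img1.jpg", 1)]:  # hypothetical file list
        writer.write(serialize_example(tf.io.read_file(path).numpy(), label))

# --- Reading: build a dataset pipeline straight from the TFRecord file ---
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    return image, parsed["label"]

dataset = (tf.data.TFRecordDataset("train.tfrecord")
           .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(16)
           .prefetch(tf.data.AUTOTUNE))
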
Views: 5,171

Videos

Remove people and pimples from images
Views: 265 · 3 years ago
See how I convert AI models into working tools and try out the tool for yourself. Link to the tool: aiexpedition.com/tools/image_inpainting/ Link to the research paper: openaccess.thecvf.com/content_cvpr_2018/papers/Yu_Generative_Image_Inpainting_CVPR_2018_paper.pdf Link to the model code: github.com/JiahuiYu/generative_inpainting Link to my website: aiexpedition.com/ Link to my Udemy course: w...
Image Super Resolution: SRResNet and SRGAN TensorFlow 2 implementation and model intuition
Views: 9K · 3 years ago
In this video, I talk through a TensorFlow 2 implementation of the Image Super Resolution SRResNet and SRGAN models, outlined in the paper: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arxiv.org/pdf/1609.04802.pdf Link to previous demo video: ua-cam.com/video/n7ji9g8hE0Q/v-deo.html Link to TensorFlow 2 code: github.com/jlaihong/image-super-resolution Lin...
Improve Image Quality using AI
Views: 9K · 3 years ago
In this video, I show you how to use 2 of the most well known AI Image Super Resolution models to enhance your image quality (SRResNet and SRGAN). It's completely free and only requires that you have a Google account (you have a Google account if you use gmail or if you use the Google Play Store). Link to the github code repository: github.com/jlaihong

COMMENTS

  • @mohammadalshrahi6827
    @mohammadalshrahi6827 5 months ago

    Thanks for the video. I tried to train the SRResNet model locally; after around 70 epochs it was struggling to decrease the loss, and the loss I reached locally is around 205. Is that the case for you as well?

  • @ElectroVisionAI
    @ElectroVisionAI 8 months ago

    Thank you very much for explaining the SRGAN method in detail and comprehensibly in the video. I have read the SRGAN paper by Christian Ledig and colleagues published in 2017, which laid the foundation for dozens of subsequent works. However, I couldn't quite grasp it in my mind. The paper seemed intentionally edited not to be fully understood by everyone, and some crucial points were not explained in detail. Thanks to this video, I was able to understand many of the missing points. However, I couldn't run the code provided in the video due to a Tensorflow version issue. I tried another Github code, and it worked. The contents of the codes are largely the same. First, it trains the Generator with SRResNet (has Residual Blocks). Then it trains the SRGAN. In SRGAN training, it uses the initial weights of the pre-trained generator. It calculates the content loss, adversarial loss, and the perceptual loss function, which is the sum of these losses. The training and test codes are provided in a single notebook. The underlying codes that are imported and run in the background are much simpler. I'm sharing it for everyone's benefit. I also thank Martin Krasser for providing a clean code and making it publicly available on his Github account. github.com/krasserm/super-resolution
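
To make the loss structure in this comment concrete, here is a rough sketch of the SRGAN perceptual loss as described in the Ledig et al. paper: a VGG19-based content loss plus a small adversarial term. The block5_conv4 layer, the 1/12.75 rescaling, and the 1e-3 weight follow the paper; the exact layers and weights used in the video's code or Krasser's repo may differ:

import tensorflow as tf

# Frozen VGG19 feature extractor used for the content (feature-space MSE) loss
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block5_conv4").output)
feature_extractor.trainable = False

mse = tf.keras.losses.MeanSquaredError()
bce = tf.keras.losses.BinaryCrossentropy()

def content_loss(hr, sr):
    # MSE between VGG feature maps of the real (HR) and generated (SR) images,
    # rescaled by 1/12.75 as in the paper; inputs are expected in [0, 255]
    hr_features = feature_extractor(tf.keras.applications.vgg19.preprocess_input(hr)) / 12.75
    sr_features = feature_extractor(tf.keras.applications.vgg19.preprocess_input(sr)) / 12.75
    return mse(hr_features, sr_features)

def adversarial_loss(sr_discriminator_output):
    # The generator wants the discriminator to classify its outputs as real (label 1)
    return bce(tf.ones_like(sr_discriminator_output), sr_discriminator_output)

def perceptual_loss(hr, sr, sr_discriminator_output):
    # Perceptual loss = content loss + 1e-3 * adversarial loss (weighting from the paper)
    return content_loss(hr, sr) + 1e-3 * adversarial_loss(sr_discriminator_output)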

  • @PhiloMath1412
    @PhiloMath1412 8 months ago

    I don't usually comment, but more people need to subscribe and share this video or channel. I haven't looked at the channel yet, but I subscribed 👍

  • @fununterhaltung6556
    @fununterhaltung6556 10 months ago

    Best video! Thank you so much <3 It helped so much with my work.

    • @aiexpedition5314
      @aiexpedition5314 8 months ago

      Thanks so much for the kind words. Glad it helped :)

  • @MuhmmadZahid-d1s
    @MuhmmadZahid-d1s 1 year ago

    Hi there, I am trying to run your code by "connect to a local runtime". The error message is:

    ModuleNotFoundError                       Traceback (most recent call last)
    ~\AppData\Local\Temp\ipykernel_14164\4188715264.py in <module>
          5 import tensorflow as tf
          6
    ----> 7 from datasets.div2k.parameters import Div2kParameters
          8 from models.srresnet import build_srresnet
          9 from models.pretrained import pretrained_models
    ModuleNotFoundError: No module named 'datasets.div2k'

  • @MuhmmadZahid-d1s
    @MuhmmadZahid-d1s 1 year ago

    Hi dear Jeremy, thank you for the helpful video. I am also trying to run your code but am getting errors (I am running it using Colab). With reference to a post below, in which you ask to run import sys, import tensorflow, print("Python version"), etc. and send the output in reply, please see:

    Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> import tensorflow
    2023-10-31 10:43:03.666009: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
    2023-10-31 10:43:03.666452: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    >>> print("Python version")
    Python version
    >>> print(sys.version)
    3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)]
    >>> print("Version info.")
    Version info.
    >>> print(sys.version__info)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: module 'sys' has no attribute 'version__info'
    >>> print(sys.version_info)
    sys.version_info(major=3, minor=8, micro=0, releaselevel='final', serial=0)
    >>> print("Tensorflow info.")
    Tensorflow info.
    >>> print(tensorflow.__version__)
    2.4.3
    >>> print("Keras info.")
    Keras info.
    >>> print(tensor.keras.__version__)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'tensor' is not defined
    >>> print(tensorflow.keras.__version__)
    2.4.0
    >>>

  • @21past
    @21past 1 year ago

    Very helpful video, thanks!

  • @AndresLeonRangel
    @AndresLeonRangel 1 year ago

    Really good. I want to remove my ex from some important pics

  • @ahmadsirajhashmi1550
    @ahmadsirajhashmi1550 1 year ago

    It takes a lot of time to train. Is there any way to reduce the time?

  • @reader.in.a.time.of.ignorance
    @reader.in.a.time.of.ignorance 2 years ago

    THANK YOU ❤ شكرا لك (thank you)

  • @Ilyas-NRV
    @Ilyas-NRV 2 years ago

    It didn't change anything

  • @י.ינ
    @י.ינ 2 years ago

    I hate when I download an image and… yeah this

  • @DarnIDidntKnowThat
    @DarnIDidntKnowThat 2 years ago

    How would I use this on Kali Linux, please?

  • @nat.serrano
    @nat.serrano 2 years ago

    What are the steps to host this in Firebase, or somewhere else? I'd like to create an API to call from an iOS app.

  • @iOhan_flp
    @iOhan_flp 2 years ago

    This is amazing bro. Thank you!

  • @sravankumarchilaka
    @sravankumarchilaka 2 years ago

    Can you please help fix this ValueError: Layer count mismatch when loading weights from file. Model expected 70 layers, found 89 saved layers.

    • @tosoliniluca00
      @tosoliniluca00 2 years ago

      Hi, have you found a solution? I have the same error.

  • @helloworld9478
    @helloworld9478 2 years ago

    Exactly what I am looking for

  • @שגיאאקשיקר-מ4ע
    @שגיאאקשיקר-מ4ע 2 years ago

    How can I use bicubic_x2? It doesn't work when I replace the dataset_key; it only works with the bicubic_x4 key.

  • @eduardojreis
    @eduardojreis 2 years ago

    Such a helpful video! Thanks a lot for taking time and effort doing it!!!

  • @SS-cz2de
    @SS-cz2de 2 years ago

    Very well explained.

    • @aiexpedition5314
      @aiexpedition5314 2 years ago

      Thank you Samit :) I appreciate the kind words.

  • @ginanottenkaemper7960
    @ginanottenkaemper7960 2 years ago

    thank you very much. How long did it take to create the TFRecord? I am loading 600,000 images of size 2500x3000 pixels and am looking for a way to reduce the image size before saving as TFRecord.

    • @aiexpedition5314
      @aiexpedition5314 2 years ago

      Hi Gina, sorry for the late reply. I can't remember exactly how long it took to create the TFRecord. It was a few minutes. Your dataset seems large so I would recommend creating the TFRecord on a fast machine and then uploading the TFRecord somewhere that colab can download it from. (The upload file functionality for colab was quite slow last time I checked).
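
A rough sketch of one way to shrink images as they are serialized, per the question above about reducing image size before saving as TFRecord. The target size, file names, and feature name are placeholder assumptions, not a recommendation from this thread:

import tensorflow as tf

def resized_jpeg_bytes(path, target_size=(750, 625)):  # placeholder (height, width)
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    image = tf.image.resize(image, target_size)
    return tf.io.encode_jpeg(tf.cast(image, tf.uint8)).numpy()

def write_tfrecord(paths, tfrecord_path):
    with tf.io.TFRecordWriter(tfrecord_path) as writer:
        for path in paths:
            feature = {"image": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[resized_jpeg_bytes(path)]))}
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())

# Calling this on slices of the file list also yields several smaller TFRecord
# shards that can be built on different machines in parallel, for example:
# write_tfrecord(image_paths[:10000], "images-000.tfrecord")  # hypothetical paths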

    • @ginanottenkaemper7960
      @ginanottenkaemper7960 2 years ago

      @@aiexpedition5314 Thank you for your answer. My current work-around is to create multiple TFRecord files so that I can run it in parallel on multiple machines. I will do the upload for Colab with the link to Google Drive, which I have only had good experiences with so far. You seem to have trained CNNs quite often. Can you estimate how long the training will take? (600.000 Images from 2 Classes, Transfer learning with a ResNet-50) Also, I'm going to start your Udemy course next week and I'm very excited about it!

    • @aiexpedition5314
      @aiexpedition5314 2 years ago

      @@ginanottenkaemper7960 Downloading from Google Drive works well. 600,000 images of size 2500x3000. That's quite a lot and will probably take days or even weeks to train if you perform a couple of training iterations and use all of the data. However, with transfer learning and only using 2 classes you should start to get good training accuracy even after your network has only seen a couple hundred of examples, so you could stop the training early even after a few minutes. Thanks for considering my course! I hope you learn a lot and have a good learning experience. Feel free to ask if you have any questions. Good luck

    • @ginanottenkaemper7960
      @ginanottenkaemper7960 2 years ago

      ​@@aiexpedition5314 Hey Jeremy, Thanks for your quick reply. Hope you are well. Training the network worked perfectly thanks to you. now i'm having trouble evaluating my model. I would like to call the function model.predict() or model.evaluate(). I tried to create a batch of the test dataset (dataset.batch(128)) and pass it to the methods. The code has been running for over an hour with no result and I don't know if I should have done something differently or if I need to wait for longer.

    • @aiexpedition5314
      @aiexpedition5314 2 years ago

      @@ginanottenkaemper7960 glad your training is working. It could be that you just have many images in your testing dataset and it will take a while. Is there any indication of progress while predicting or evaluating? How many images are there in your test set? Something you could try is calling predict() or evaluate() on dataset.take(256).batch(128). This will just run prediction on the first 256 images over 2 batches and should be a lot quicker than running over the entire dataset. Then you can use that to give you an indication of how long the evaluation will take over the entire dataset.
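
A small sketch of the take/batch suggestion from the reply above, assuming a compiled Keras model and an unbatched tf.data test dataset (the variable names are illustrative):

# Evaluate on just the first 256 test images (2 batches of 128) to gauge speed
subset = test_dataset.take(256).batch(128)
quick_metrics = model.evaluate(subset, verbose=1)

# Run predictions on the same small subset
quick_predictions = model.predict(subset, verbose=1)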

  • @sanjaynt7434
    @sanjaynt7434 2 years ago

    Thank you ❤🙏, no one has explained TFRecord this clearly.

  • @coccosapiens
    @coccosapiens 3 years ago

    Fantastic Video! Can you show how to train on a custom personal dataset?

    • @aiexpedition5314
      @aiexpedition5314 2 years ago

      Thank you :). I don't have a video on that just yet. If you want a quick solution you can run the code in the training script which will download and extract the Div2k dataset. Then you can replace the images in the 4 folders: DIV2K_train_HR, DIV2K_train_LR_bicubic, DIV2K_valid_HR and DIV2K_valid_LR_bicubic with the appropriate images and delete the "cache" folder. When you rerun it, it should train on your custom dataset. Hope this helps.

    • @coccosapiens
      @coccosapiens 2 years ago

      @@aiexpedition5314 thank you so much for your help 🙏🏻 I'm currently doing it 💪🏼

  • @MuhammadHussain-ub8nd
    @MuhammadHussain-ub8nd 3 years ago

    I got this error: ValueError: Layer count mismatch when loading weights from file. Model expected 70 layers, found 89 saved layers.

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi Muhammad, which script are you running? And did you run this line: !pip install tensorflow==2.4.3

    • @MuhammadHussain-ub8nd
      @MuhammadHussain-ub8nd 3 years ago

      I'm running the SRGAN script and it gave me this error. Since I am already running on Google Colab, I haven't run pip install tensorflow. Why is it important to install this when Colab already has everything preinstalled? Thanks for responding; your video is very helpful for me.

    • @MuhammadHussain-ub8nd
      @MuhammadHussain-ub8nd 3 years ago

      This is the only video on UA-cam that has done the implementation along with the logic behind it. Looking for more in-depth videos on this topic.

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi @@MuhammadHussain-ub8nd, the problem is that the error is caused by the latest version of TensorFlow which is installed on colab. That line will install a previous version which is compatible with the code. I hope this helps :)

    • @MuhammadHussain-ub8nd
      @MuhammadHussain-ub8nd 3 years ago

      Hm, thanks. Do you have any social media platform where we can contact you?

  • @erlendllandgundersen5446
    @erlendllandgundersen5446 3 years ago

    This video really helped me on my master's thesis! Thanks a lot!!

  • @ccuuttww
    @ccuuttww 3 years ago

    Liked, very helpful.

    • @ccuuttww
      @ccuuttww 3 years ago

      And I think not even TFRecord can really speed up the process. There's also the memory issue: you may not fit that large a dataset into 8 GB of RAM.

  • @chloehe3889
    @chloehe3889 3 years ago

    I am happy to have finally found your video while it still has a low view count. It is so helpful. Thank you!

  • @virajjadhav3601
    @virajjadhav3601 3 years ago

    The video is really helpful, but please work on the audio quality. Thanks for the tutorial

  • @DangNguyen-yh5mm
    @DangNguyen-yh5mm 3 years ago

    Can I ask you a question? Every time you train the model, do you reload the dataset for the data mapping? I know we have to take random crops of a specific size across the whole dataset, for example before training we take random 128x128 crops for HR and 32x32 for LR. During training, do we need to take them again every epoch? Thank you, man. (I'm doing the ESRGAN model)

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi Dang, an epoch in this code is defined as 1000 steps and the batch size is 16 images, so each epoch the model sees 16,000 images. We don't necessarily see all of the images during an epoch - it's possible we could even see the same image multiple times an epoch - I can't remember how many images the training dataset has.

    • @DangNguyen-yh5mm
      @DangNguyen-yh5mm 3 years ago

      I am confused because DIV2K has only 800 images for training and 100 for validating. When you map the augmentation, the dataset stays the same size (800/100) after mapping, yet you take 16,000 images, which means you map over the whole dataset about 20 times. I don't see any "length" set up when you load the dataset in your code. Thank you. If you are free, can I have your contact to discuss my project?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      @@DangNguyen-yh5mm The augmentation gets a random crop each time, so the model sees a different part of each of the 800 images.
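
A rough sketch of what this thread describes: the dataset repeats indefinitely and a mapped augmentation takes a fresh random crop from each image every time it is drawn, so 800 images can cover 1000 steps of 16 images per epoch. The 128/32 crop sizes come from the question above; the function and dataset names are assumptions rather than the video's exact code:

import tensorflow as tf

HR_CROP, SCALE = 128, 4  # crop size from the question above; treat both as placeholders

def random_crop_pair(lr_image, hr_image):
    # Pick a random patch in the LR image and crop the matching region from the HR image
    lr_crop = HR_CROP // SCALE
    lr_shape = tf.shape(lr_image)
    x = tf.random.uniform((), 0, lr_shape[1] - lr_crop + 1, dtype=tf.int32)
    y = tf.random.uniform((), 0, lr_shape[0] - lr_crop + 1, dtype=tf.int32)
    lr_patch = lr_image[y:y + lr_crop, x:x + lr_crop]
    hr_patch = hr_image[y * SCALE:(y + lr_crop) * SCALE, x * SCALE:(x + lr_crop) * SCALE]
    return lr_patch, hr_patch

# The 800 training pairs repeat forever; every draw of an image gets a fresh crop,
# so 1000 steps x 16 images per epoch never runs out of data.
# train_pairs = tf.data.Dataset.zip((lr_dataset, hr_dataset))  # hypothetical datasets
# train_ds = (train_pairs.map(random_crop_pair, num_parallel_calls=tf.data.AUTOTUNE)
#             .repeat()
#             .batch(16)
#             .prefetch(tf.data.AUTOTUNE))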

    • @DangNguyen-yh5mm
      @DangNguyen-yh5mm 3 years ago

      @@aiexpedition5314 Oh thank you, now I understand that. But one more question: do you know how they balanced the hyperparameters for each loss term, for example (w1*content_loss + w2*adversarial_loss + w3*MSE_loss)? In the content_loss they rescaled the feature maps by multiplying by 1/12.75, which means w1 == 0.006 (in the paper), w2 == ...., w3 = .... How did they do that? In my case there are four terms like that.

  • @Jennifer-yz6uq
    @Jennifer-yz6uq 3 years ago

    Very nice

  • @muhammadawon8164
    @muhammadawon8164 3 years ago

    Great content. Just wondering if the same method can be used for satellite imagery, where each pixel represents an image?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi Muhammad, thanks for your comment. I'm not sure I understand the question, could you perhaps rephrase it or ask it differently :)

    • @muhammadawon8164
      @muhammadawon8164 3 years ago

      Can I use these algorithms for satellite imagery to get better results?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      @@muhammadawon8164 I believe it should improve your results but remember that this will make your images larger, so if you feed your images into another network then it will use more memory and will probably result in slower training

    • @muhammadawon8164
      @muhammadawon8164 3 years ago

      @@aiexpedition5314 This is helpful. Thank you.

  • @muhammadawon8164
    @muhammadawon8164 3 years ago

    Great content. Just wondering if the same method can be used for satellite imagery, where each pixel represents an image?

    • @cemrealdogan7163
      @cemrealdogan7163 1 year ago

      Late reply, but I think if you want to apply it to satellite imagery, you should integrate other network architectures into SRResNet, like RCAN (Residual Channel Attention Network). You can check this article for further info: www.mdpi.com/2072-4292/14/12/2890. Hope this helps.

  • @bobakq
    @bobakq 3 years ago

    Thank you for the great video. When I ran the code, it throws an error on cell 8. ValueError: You are trying to load a weight file containing 89 layers into a model with 70 layers.

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Thanks for letting me know. I've found that the problem seems to be that colab uses the latest version of Tensorflow (currently 2.7) which seems to break the code. I've fixed this by adding this line to the top: pip install tensorflow==2.4.3

    • @akgenius5078
      @akgenius5078 2 years ago

      Same problem; even now it's not fixed.

    • @RafaelSanabium
      @RafaelSanabium 2 years ago

      @@aiexpedition5314 It seems that the problem persists. Could it be a fault on our computers?

    • @RafaelSanabium
      @RafaelSanabium 2 years ago

      I think the problem is that we have Python 3.8 instead of 3.7, hence the problem with that line.

    • @RafaelSanabium
      @RafaelSanabium 2 years ago

      @@aiexpedition5314 I can't figure it out. It seems that I don't have that path (/usr/local/lib/python3.7/dist-packages/keras/saving/hdf5_format.py) on my machine, and even if I create one, it just doesn't work...

  • @rusulamer2892
    @rusulamer2892 3 years ago

    It's a great video, but I have a question about the loss value during training: can we visualize it?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Absolutely, you can save the loss values in the train_step method and then visualize them using tensorboard or just by plotting them.
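
A minimal sketch of the plotting option mentioned here, assuming train_step returns the loss for each batch (the dataset and function names are illustrative):

import matplotlib.pyplot as plt

loss_history = []

for step, (lr_batch, hr_batch) in enumerate(train_dataset.take(1000)):
    loss = train_step(lr_batch, hr_batch)  # assumed to return the scalar loss for this step
    loss_history.append(float(loss))

plt.plot(loss_history)
plt.xlabel("training step")
plt.ylabel("loss")
plt.show()

For TensorBoard instead of matplotlib, each value could be written with tf.summary.scalar inside a tf.summary.create_file_writer context.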

    • @rusulamer2892
      @rusulamer2892 3 years ago

      @@aiexpedition5314 Does it decrease over time, or just fluctuate up and down? I've tried to train the model for 100 steps and I noticed the loss value changing up and down from one step to another. Is that right?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      @@rusulamer2892 It should decrease over time. 100 steps is way too small considering there are already supposed to be 1000 steps per epoch. I've just opened the code in Colab, changed my runtime type to GPU, ran everything with the defaults, and I can see the loss decreasing during training. The loss starts off around 5000 at the beginning of training; at epoch 1, step 36 it was around 2500, at epoch 1, step 65 it was around 1960, and at epoch 1, step 100 it was around 1612. Yours probably isn't training because of this line: training_epochs = training_steps / steps_per_epoch. If you're only using 100 training steps and the steps per epoch is 1000 then you get 0 training epochs...

    • @rusulamer2892
      @rusulamer2892 3 years ago

      @@aiexpedition5314 No, I've changed the steps per epoch to 10, and I see this fluctuating. Could it be because you are generating a new image that will have different pixels from the original one? So I think it's possible to have this fluctuation.

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      @@rusulamer2892 hmm I'm not sure to be honest. Does it decrease if you leave it with default settings?

  • @blackburn9085
    @blackburn9085 3 years ago

    Great Video! Very Informative! Keep it UP

  • @JamesBond-ux1uo
    @JamesBond-ux1uo 3 years ago

    The video quality is very poor.

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi James, thanks for letting me know. The video is playing on HD on my side but the audio seems a bit poor near the end of the video. Is that what you meant?

  • @kameranjacobsohn7561
    @kameranjacobsohn7561 3 years ago

    👏👏👏

  • @OgoNkado
    @OgoNkado 3 years ago

    Very cool JW :D - I enjoyed the memes :')

  • @jc3252
    @jc3252 3 years ago

    No comment, you really helped me. Hopefully that can work for satellite imagery?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Glad it was helpful. It should work fine depending on your use case. You won't be able to zoom in infinitely and see images on the ground from satellite images but it should enhance the quality at least.

  • @stephenbester6323
    @stephenbester6323 3 years ago

    Excellent video! Very informative and entertaining to follow!

  • @austinolomogar6797
    @austinolomogar6797 3 years ago

    Is there a way to use LSGAN to reduce noise in an image?

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Hi Olom, thanks for the question! I've done a search for this and found the following paper: www.mdpi.com/1424-8220/21/9/2998/pdf where they use LSGAN for image denoising. Unfortunately I didn't see a link to their code. My understanding of LSGAN is that it uses the least squared function for the loss instead of binary cross entropy. If you'd like to use the code in this video, you could switch out the generative part of the loss function for SRGAN. I do add jpeg noise to the images when training but to make it better you should also add other types of noise (maybe Gaussian noise). If you only want to remove noise without changing the scale of the image, then you can just send in the same image for both the low res and high res image, add noise to the high res image and use a scaling factor of 1. Although this will require some code changes.
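
A rough sketch of the least-squares swap suggested in this reply: replace the binary cross-entropy adversarial terms with squared errors against the real/fake targets. This is the generic LSGAN formulation, not code from the video or the linked paper:

import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()

def lsgan_discriminator_loss(real_output, fake_output):
    # Push real predictions toward 1 and fake predictions toward 0 in the least-squares sense
    return mse(tf.ones_like(real_output), real_output) + mse(tf.zeros_like(fake_output), fake_output)

def lsgan_generator_loss(fake_output):
    # The generator wants the discriminator to score its outputs as real (1)
    return mse(tf.ones_like(fake_output), fake_output)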

  • @harryscordis840
    @harryscordis840 3 years ago

    Fantastic Video especially for people just starting out in AI - Thanks for this Jeremy!

  • @kameranjacobsohn7561
    @kameranjacobsohn7561 3 years ago

    Excellent

  • @ZenvilleErasmus
    @ZenvilleErasmus 3 years ago

    Great stuff!

  • @OgoNkado
    @OgoNkado 3 years ago

    Very cool format JW! Punchy and easy to digest 💪🏾

  • @AshwineePandey94
    @AshwineePandey94 3 years ago

    Excellent video Jeremy!

  • @gopolanglekoto
    @gopolanglekoto 3 years ago

    This is cool and super useful! Please do an explainer video on the intuition behind these models

    • @aiexpedition5314
      @aiexpedition5314 3 years ago

      Thanks Gopolang! I'll definitely do a follow up video explaining how these models work!