The EASIEST way to finetune LLAMA-v2 on local machine!

  • Published 19 Jul 2023
  • In this video, I'll show you the easiest, simplest and fastest way to fine-tune Llama 2 on your local machine on a custom dataset! You can also use the tutorial to train/fine-tune any other Large Language Model (LLM). In this tutorial, we will be using autotrain-advanced.
    AutoTrain Advanced GitHub repo: github.com/huggingface/autotr...
    Steps:
    Install autotrain-advanced using pip:
    - pip install autotrain-advanced
    Setup (optional, required on Google Colab):
    - autotrain setup --update-torch
    Train:
    autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft
    If you are on the free version of Colab, use this model instead: huggingface.co/abhishek/llama.... It is a smaller-sharded version of Llama-2-7b-hf by Meta.
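    For reference, AutoTrain reads the training data from --data_path as a CSV with a single text column by default, and the video formats each row in an Alpaca-style layout. A minimal sketch of building such a file, assuming a train.csv file name and the default text column (check the AutoTrain README for your version):
    import pandas as pd
    # one hypothetical training example; the "text" column is the AutoTrain default
    example = (
        "### Instruction:\nSummarize the input text.\n\n"
        "### Input:\nLlama 2 is a family of open large language models released by Meta.\n\n"
        "### Response:\nLlama 2 is Meta's family of open LLMs."
    )
    pd.DataFrame({"text": [example]}).to_csv("train.csv", index=False)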
    Please subscribe and like the video to help me keep motivated to make awesome videos like this one. :)
    My book, Approaching (Almost) Any Machine Learning Problem, is available for free here: bit.ly/approachingml
    Follow me on:
    Twitter: / abhi1thakur
    LinkedIn: / abhi1thakur
    Kaggle: kaggle.com/abhishek

COMMENTS • 291

  • @linuxmanju
    @linuxmanju 4 months ago +29

    For anyone who comes across this in 2024 (Jan): the command switches with the new autotrain version are autotrain llm --train --project-name josh-ops --model mistralai/Mistral-7B-Instruct-v0.2 --data-path . --use-peft --quantization int4 --lr 2e-4 --train-batch-size 12 --epochs 3 --trainer sft. Great video, thanks Abhishek

  • @tarungupta83
    @tarungupta83 10 months ago +4

    That's awesome, nothing better than this way of training a large language model. Super easy ❤

  • @andyjax100
    @andyjax100 2 months ago

    Keeping it this simple is something very few people are able to do. Very well explained.
    This can be understood even by a beginner. At least the execution, if not the intuition behind it. Kudos

  • @syedshahab8471
    @syedshahab8471 10 months ago +2

    Thank you for the on-point tutorial.

  • @abhishekkrthakur
    @abhishekkrthakur  10 months ago +25

    Please subscribe and like the video to help me keep motivated to make awesome videos like this one. :)

    • @arpitghatiya7214
      @arpitghatiya7214 9 months ago

      Please make a video on Llama2 + RAG (instead of finetuning)

  • @tarungupta83
    @tarungupta83 10 months ago +5

    Appreciate it, and request to continue making such videos🎉

  • @WeDuMedia
    @WeDuMedia 1 month ago

    Incredibly helpful video, I appreciate that you took the time to create this! Great stuff

  • @charleskarpati1129
    @charleskarpati1129 6 months ago

    Thank you Abhishek! This is phenomenal.

  • @AICoffeeBreak
    @AICoffeeBreak 10 months ago +11

    Amazing, tutorials at light speed! Llama 2 was just released! 😮

  • @MasterBrain182
    @MasterBrain182 10 months ago +1

    Astonishing content Man 🔥🔥🔥 🚀

  • @bryanvann
    @bryanvann 10 months ago +18

    Thanks for the tutorial! A couple of questions for you. Is there an approach you're using to test quality and verify that the training data has influenced the weights in the model sufficiently to learn the new task? And second, can you use the same approach for unstructured training data, such as using a large corpus of private data to do domain adaptation?

  • @nirsarkar
    @nirsarkar 10 months ago

    Excellent, thank you so much. I will try.

  • @jdoejdoe6161
    @jdoejdoe6161 10 months ago +1

    Hi Abh
    Your method is inspiring and commendable. How do we read the CSV or JSON training dataset we prepared instead of the Hugging Face dataset you used?
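    A rough sketch (not the author's code) of one way to do that: if your own CSV or JSON has instruction/input/output fields, convert it into the single text column AutoTrain reads. The file and column names here are hypothetical:
    import pandas as pd
    df = pd.read_json("my_data.json")  # or pd.read_csv("my_data.csv")
    df["text"] = (
        "### Instruction:\n" + df["instruction"].astype(str)
        + "\n\n### Input:\n" + df["input"].fillna("").astype(str)
        + "\n\n### Response:\n" + df["output"].astype(str)
    )
    df[["text"]].to_csv("train.csv", index=False)  # point --data_path at this folder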

  • @xthefoetusx
    @xthefoetusx 10 months ago +3

    Great video! Would be great if in some future vid you could go into depth on the training hyperparameters and perhaps also talk about what size your custom datasets should be.

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago +4

      sometimes I do that. however, this model would have taken wayy too long to train. im training a model as i type here and if i get good results ill share both model and params 🙂

    • @emrahe468
      @emrahe468 10 months ago +1

      @@abhishekkrthakur guess no good luck with the training :(

  • @prachijadhav9098
    @prachijadhav9098 10 months ago +2

    Nice video Abhishek!
    I am curious to know about custom data for LLMs. What is the ideal (good-quality) data size (e.g., number of rows) to fine-tune these models for good performance? It doesn't necessarily have to be big data, of course.
    Thanks!

  • @ajaytaneja111
    @ajaytaneja111 10 months ago +4

    Hi Abhishek, is AutoTrain using LoRA or prompt tuning as the PEFT technique?

  • @user-nj7ry9dl3y
    @user-nj7ry9dl3y 9 months ago +1

    For fine-tuning of the large language models (llama-2-13b-chat), what should be the format(.text/.json/.csv) and structure (like should be an excel or docs file or prompt and response or instruction and output) of the training dataset? And also how to prepare or organise the tabular dataset for training purpose?

  • @aaronliruns
    @aaronliruns 9 months ago +7

    Great tutorial! Can you also put up one video teaching on how to merge the fine tuned weights to the base model and do inference? Would like to see an end-to-end course. Thank you!

    • @adamocheri3513
      @adamocheri3513 9 months ago +2

      +1 on this question !!!!

    • @devyanshrastogi
      @devyanshrastogi 7 months ago

      any updates guys?? I really want to know how to merge the fine-tuned model with the base model and do the inference. Do let me know if you have any resources or insights about the same

    • @kopamed5024
      @kopamed5024 4 months ago

      @@devyanshrastogi also need this answered. have you guys had any success?
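      A rough sketch for the merging question above (not from the video), assuming the LoRA adapter was written to the my-llm project folder and peft is installed:
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import PeftModel
      base = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
      )
      model = PeftModel.from_pretrained(base, "my-llm")  # hypothetical adapter path
      merged = model.merge_and_unload()  # folds the LoRA weights into the base model
      merged.save_pretrained("my-llm-merged")
      AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf").save_pretrained("my-llm-merged")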

  • @stevenshaw124
    @stevenshaw124 10 months ago +3

    what kind of GPUs do you have? how big was your dataset and how long did it take to train? what is the smallest fine-tuning data set size that would be reasonable?

  • @sohailhosseini2266
    @sohailhosseini2266 8 months ago

    Thanks for sharing!

  • @YuniYoshi
    @YuniYoshi 6 months ago +1

    There is only one thing I want to see. I want to see you using the final result and prove it actually works. Thank you.

  • @jessem2176
    @jessem2176 10 months ago

    Great Video. i love it and can't wait to try it. Now that Llama2 is out... is it better to FineTune a model or try to create your own Model?

  • @r34ct4
    @r34ct4 10 months ago

    Thanks for the comprehensive tutorial. Can this be done using chat logs to build a clone of your friend? I have done this with GPT3.5 finetuning using prompt->response. The prompts are questions generated by ChatGPT based on the chat log message. Can the same thing be done with Instruction->Input->Response? Thank you very much man.

  • @deltagamma1442
    @deltagamma1442 10 months ago +1

    How do you set the training data? I see different people using different formats. Does it matter, or is the only requirement that it has to be structured meaningfully?

  • @boujlidamohamed
    @boujlidamohamed 10 months ago +1

    First thank you for the great tutorial , I have one question : I am trying to finetune the model on Japanese , do you have any advice for that ? I have tried the same script as you did but it didn't work; it produced some gibberish after the training finished , I am guessing it is a tokenizer problem, what do you think ?

  • @jeremyarancio1683
    @jeremyarancio1683 10 months ago

    Nice vid
    Should we set the labels of the input tokens to -100 to focus the training on the prediction?
    I see no one doing it

  • @mariusirgens5555
    @mariusirgens5555 9 months ago

    Superb video! Does autotrain allow to export finetuned model as GGML file? Or can it be used with GGML file?

  • @JagadishSongapagounder
    @JagadishSongapagounder 10 months ago +1

    Great Job :)

  • @nehabidkar7377
    @nehabidkar7377 9 months ago

    Thanks for this great explanation. Can you provide the link to your training data?

  • @abramswee
    @abramswee 10 months ago

    thanks for sharing!

  • @safaelaqrichi9096
    @safaelaqrichi9096 10 months ago

    Thank you for this interesting video. How could we change the encoding to 'latin-1' in order to train on the French language? Thank you.

  • @manojreddy7618
    @manojreddy7618 10 months ago

    Thank you for the video. I am new to this, so I am trying to set it up on my Windows PC. When I try to install the latest version of autotrain-advanced==0.6.2, I get an error saying: triton==2.0.0.post1 cannot be found. Which I believe is only available on Linux. So is it possible to use autotrain-advanced on Windows?

  • @mautkajuari
    @mautkajuari 10 months ago

    Informative video, hopefully one day I will get a task that requires me to finetune a LLM

  • @elmuchoconrado
    @elmuchoconrado 9 months ago +7

    As always very useful and short without wasting anyone's time. Thank you. Just I'm a bit confused about the prompt formatting you have used here - "### Instruction:
    ### Input:... etc" while Llama official is "[INST] {{ system_prompt }}{{ user_message }} [/INST]" and on TheBloke's page it says "SYSTEM: {system_prompt}
    USER: {prompt}
    ASSISTANT:"

    • @ahmetekizx
      @ahmetekizx 7 months ago

      I think this isn't mandatory, it is a suggestion.

  • @jaivalani4609
    @jaivalani4609 10 months ago

    Thank you. What is the difference between instruction and input?

  • @utoubp
    @utoubp 4 months ago

    Hi Abhishek,
    Much appreciated. How would things change if we were to use simple fine tuning? That is, just a large single code file to learn from, to tune code-llama, phi2, etc..

  • @dr.mikeybee
    @dr.mikeybee 6 months ago

    Nice job!

  • @spookyrays2816
    @spookyrays2816 10 months ago

    Thank you brother

  • @cloudsystem3740
    @cloudsystem3740 10 months ago

    thank you very much

  • @oliversilverstein1221
    @oliversilverstein1221 9 months ago

    hello, thank you. i really need to know: does this pad appropriately? also, how does it internally split it into prompt completion? Can i make up roles like ### System? does it complete only the last message?

  • @returncode0000
    @returncode0000 10 months ago

    I just bought an RTX 4090 Founders Edition. Could you give a particular example of where I could run into limits with this card when training LLMs locally? I personally think that I'm safe for the next few years and will not run into any problems.

  • @rohitdaddekar2900
    @rohitdaddekar2900 10 months ago

    Hey, could you guide us how to train custom dataset on llama2? How to prepare our dataset for training?

  • @tal7atal7a66
    @tal7atal7a66 2 months ago

    thanks bro ❤

  • @DevanshiSukhija
    @DevanshiSukhija 10 months ago

    How is your ipython giving suggestions? I want the same set up. Please make a video on these types of set up that assists in coding and other processes.

  • @sd_1989
    @sd_1989 10 months ago

    Thanks!

  • @anantkabra6825
    @anantkabra6825 7 months ago +1

    Hello, I am getting this error, can someone please help me out with it: ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.

  • @vasuchandra
    @vasuchandra 10 months ago

    Thanks for the tutorial.
    On a Linux 5.15.0-71-generic #78-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux machine, I get the following error when training the llm with the small dataset. File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2819, in from_pretrained
    raise ValueError(
    ValueError:
    Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
    the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
    these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom
    `device_map` to `from_pretrained`.
    What could be the problem? Is it possible to share the data.csv that you have with single row that I can take as reference to test my own data?

  • @crimsonalchemist856
    @crimsonalchemist856 10 months ago +1

    Hey Abhishek, Thanks for sharing this amazing tutorial. Can I do this on my RTX 3070Ti 8GB GPU? If yes, what batch size would be preferable?

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago +2

      8GB sounds a bit low for this. maybe try bs=1 or 2? but tbh, im not sure if it will work. Might work fine for a smaller model!

  • @sandeelg_lite
    @sandeelg_lite 10 months ago

    I trained a model using autotrain in the same way as you suggested and the model file is stored.
    Now I need to use this model for prediction. Can you shed some light on this as well?

  • @EduardoRodriguez-fu4ry
    @EduardoRodriguez-fu4ry 10 months ago

    Great tutorial! Thank you! Maybe I missed it but, at which point do you enter your HF token?

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago +1

      You dont. You login using "huggingface-cli login" command. There's also a similar command for notebooks and colab. :)
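      The notebook/Colab equivalent is presumably the huggingface_hub login helper:
      from huggingface_hub import notebook_login
      notebook_login()  # paste a Hugging Face token when prompted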

  • @ConsultingjoeOnline
    @ConsultingjoeOnline 3 months ago

    How do you convert it to work with Ollama? I set up the model file and it doesn't seem to know anything from my training.

  • @unclecode
    @unclecode 10 months ago +1

    Beautiful content. I have a side question: what tool are you using to get "copilot"-like suggestions in your terminal? Thx again for the video

    • @jessem2176
      @jessem2176 10 months ago

      I use Hugging Face's copilot - it works pretty well and is super easy to set up, and free..

    • @ahmetekizx
      @ahmetekizx 7 months ago

      @@jessem2176 Thanks for the recommendation, but did you mean HuggingFace Personal-copilot Blog?

  • @agostonhuszka8237
    @agostonhuszka8237 10 months ago

    Thank for the tutorial!
    How can I fine-tune the language model with a domain-specific unlabeled dataset to improve performance on that specific domain? Is it effective to leave the instruction and input empty and only use domain-specific text for the output?

    • @sanjaykotabagi4407
      @sanjaykotabagi4407 10 months ago

      Hey, can we connect? I need help on a similar topic too. We can discuss more ...

  • @FlyXing16
    @FlyXing16 9 months ago

    Thanks Kaggle grand master :) you've got a channel.

  • @chichen8425
    @chichen8425 2 months ago

    I know it could be too much, but could you also make a video on how to prepare the data? I have 'question' and 'answer' pairs but I am struggling to turn them into a trainable dataset in that kind of CSV so I could use it!

  • @user-we6vc9co1b
    @user-we6vc9co1b 10 months ago +1

    Do you have to use [INST]...[/INST] for indicating the instructions? I think the original Llama 2 model was trained with these tags, so I am a bit puzzled if you have to use the tags in the csv or they are added internally ?!

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago

      in this video, im finetuning the base model. you can finetune it anyway you want. you can even take the chat model and finetune it this way. if you are using a different format for finetuning, you must use the same format while inference in order to get the best results.

  • @_Zefyr_
    @_Zefyr_ 8 months ago +1

    Hi, I have a question: is it possible to use "autotrain" without CUDA, with ROCm support on an AMD GPU?

  • @0xeb-
    @0xeb- 10 months ago

    How do you deal with response in the dataset that has newline characters?

  • @Truizify
    @Truizify 10 months ago

    Thanks for the tutorial! How would you modify the code to train on a dataset containing a single column of text? i.e. trying to perform domain-specific additional pretraining?
    I would remove the peft portion to do full finetuning, anything else?

    • @sanjaykotabagi4407
      @sanjaykotabagi4407 10 months ago

      Hey, can we connect? I need help on a similar topic too. We can discuss more ...

    • @user-bq2vt4zz2e
      @user-bq2vt4zz2e 9 months ago

      Hi, I'm looking into something similar. Did you find a good way to do this?

  • @kunalpatil7705
    @kunalpatil7705 9 months ago

    Thanks for the video. I have a doubt: how can I make a package of it so others can also use it offline by just installing the application?

  • @srinivasanm48
    @srinivasanm48 1 month ago

    When will I be able to see the model that I have trained? Once all the training is complete?

  • @deepakkrishna837
    @deepakkrishna837 7 months ago

    Hi, when we tried fine-tuning an MPT LLM using autotrain, we got the error ValueError: MPTForCausalLM does not support gradient checkpointing. Any help you can offer on this, please?

  • @kishalmandal5676
    @kishalmandal5676 10 months ago

    How can I load the model for inference if I stop training after 1 epoch out of 3 epochs?

  • @ajaypranav1390
    @ajaypranav1390 5 months ago

    Thanks for this great video, but how do I fine-tune or train on a question-answer dataset?

  • @dhruvilshah7770
    @dhruvilshah7770 2 months ago +1

    Can you make a video on fine-tuning on Apple silicon Macs?

  • @mallorywestwood
    @mallorywestwood 10 months ago

    Can we do this on a CPU? I am using a GGML model.. please share your thoughts

  • @am0x01
    @am0x01 4 months ago +1

    In my experiment, it did not create the config.json. What am I doing wrong?

  • @marioricoibanez144
    @marioricoibanez144 10 months ago

    Hey! Fantastic video, but I do not understand at all the division of the model into smaller chunks in order to work in the free version of Colab, can you explain it? Thank you!

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago

      chunks are loaded into ram first. since larger chunks didnt fit in ram with all the other stuff, i created a version with smaller shards :)
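      A rough sketch of how such a smaller-sharded copy can be produced (transformers' save_pretrained accepts a max_shard_size argument; the output folder name here is arbitrary):
      import torch
      from transformers import AutoModelForCausalLM
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
      )
      model.save_pretrained("llama-2-7b-sharded", max_shard_size="2GB")  # writes smaller checkpoint files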

  • @0xeb-
    @0xeb- 10 months ago

    How to shard as you mentioned towards the end?

  • @protectorate2823
    @protectorate2823 9 months ago

    Hello @abhishekkrthakur can I train summarization models with autotrain advanced?

  • @oxydol3456
    @oxydol3456 1 month ago

    Which machine is recommended for fine-tuning Llama? Windows?

  • @ShotterManable
    @ShotterManable 10 months ago

    Is there a way to run it on CPU? Thanks sir, I love your work

  • @eltoro2339
    @eltoro2339 10 months ago

    I added the push_to_hub command but it didn't push.. how do I use it to test the output?

  • @aakritisrivastava4789
    @aakritisrivastava4789 10 months ago

    I am trying to use the model generated by autotrain with from_pretrained, but it's giving me the error "does not appear to have a file named config.json". Does anyone have the code for predicting, or can anyone help me with this issue?

  • @nirsarkar
    @nirsarkar 9 months ago

    Can this be done on Apple Silicon, I have M2 with 24G memory?

  • @jas5945
    @jas5945 10 months ago +1

    Very good tutorial. On what machine are you running this? I am trying to run it on a Macbook pro M1 but I keep getting "ValueError: No GPU found. Please install CUDA and try again." I have tried to do this directly on Huggingface and got "error 400: bad request"...so I cloned autotrain and ran it locally...still getting error 400. Do you have any pointers?

  • @Sehyo
    @Sehyo 10 months ago

    How can I turn this into a gptq version after finetuning?

  • @muhammadasadullah4452
    @muhammadasadullah4452 8 months ago

    Great work Abhishek Thakur, it would be great if you made a video on how to run the fine-tuned model

    • @abhishekkrthakur
      @abhishekkrthakur  8 months ago

      already done. check out other videos on my channel

    • @AnandMoorthyJ
      @AnandMoorthyJ 7 months ago

      @@abhishekkrthakur can you please post the video link? there are many videos in your channel, it's hard to find which one you are talking about.

    • @devyanshrastogi
      @devyanshrastogi 7 months ago

      ​@@abhishekkrthakur I did fine tuning on the model, but I don't think I can run it on google colab with T4 since its show out of memory error!! Any suggestion?

    • @ozzzer
      @ozzzer 1 month ago

      @@AnandMoorthyJ did you find the video? I'm looking for the link as well :)

  • @jdoejdoe6161
    @jdoejdoe6161 10 months ago +3

    Please show how you used the trained model for inference

  • @StEvUgnIn
    @StEvUgnIn 4 months ago

    I did the same with Llama-2, but --push_to_hub doesn't push at all.

  • @simonv3548
    @simonv3548 10 months ago

    Thanks for the nice tutorial. Could you show how to perform inference with the fine-tuned model?
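    A minimal inference sketch (not the author's code), assuming the adapter sits in the my-llm folder and the same ### Instruction / ### Input / ### Response format was used for training:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    base = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, "my-llm")  # hypothetical adapter path
    prompt = "### Instruction:\nWrite one sentence about Llama 2.\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))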

  • @manishsharma2211
    @manishsharma2211 10 months ago

    The way Abhishek side-eyes before stopping the video and resuming is soo crazy 🤣🤣😅

  • @rajhammeersinghhada72
    @rajhammeersinghhada72 5 months ago

    Why do we need both --mixed-precision and --quantization? Aren't they both doing the same thing?

  • @govindarao4348
    @govindarao4348 10 months ago

    When I use the command pip install autotrain-advanced I get errors:
    error: subprocess-exited-with-error
    note: This error originates from a subprocess, and is likely not a problem with pip.
    error: subprocess-exited-with-error

  • @abdellaziztekaya8596
    @abdellaziztekaya8596 4 months ago

    Where can I find the code you wrote and your dataset? I would like to use it as an example for testing

  • @sebastianandrescajasordone8501
    @sebastianandrescajasordone8501 10 months ago

    I am running out of memory when testing it on the free version of Google Colab, did you use the exact same tuning parameters as described in the video?

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago

      yes. you can reduce batch size. note, you need to use different model path if you are on colab or it will run out of memory. see description for more details

  • @ashishtater3363
    @ashishtater3363 1 month ago

    I have the LLM downloaded; can I fine-tune it without downloading from Hugging Face?

  • @yashvardhanjain1968
    @yashvardhanjain1968 10 months ago

    Thanks! Is there a way to push the trained model to the hub after it's trained and not using --push_to_hub while training? Also, when I try to use push to hub, I get a "you don't have rights to create a model under this namespace". I am using a read token to access the llama model. Do I need to change it to a write token? Is it possible to use two separate tokens? (sorry, I'm super new to Huggingface) Any help is much appreciated. Thanks!

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago +1

      yes. you need to use a write token. you can remove push to hub and then push the model manually using git commands if you wish
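      A sketch of the non-git route via the huggingface_hub Python API, assuming the trained files sit in the my-llm folder, you are logged in with a write token, and the repo name is hypothetical:
      from huggingface_hub import HfApi
      api = HfApi()
      api.create_repo("your-username/my-llm", exist_ok=True)
      api.upload_folder(folder_path="my-llm", repo_id="your-username/my-llm")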

  • @user-oh6ve3df7l
    @user-oh6ve3df7l 10 months ago +1

    Amazing content. One Q left: how can I run the model locally in inference mode after training? Anyone have a command for that?

  • @cesarsantosvisballambis5469
    @cesarsantosvisballambis5469 9 months ago

    Hi, nice tutorial. Could you please help me with this error? When I try to train the model I get: raise ValueError("No GPU found. Please install CUDA and try again."). Do you know how to solve this?

  • @tachyon7777
    @tachyon7777 8 months ago

    Great one! Two things - you didn't show how to configure the cli to enable access to the model. Secondly, it would be useful to know how to use aws for training. Thanks!

  • @saitej4808
    @saitej4808 10 months ago

    How do I fine-tune with text corpus data? E.g., if I pass the latest news, how can the model understand/memorise it all and be able to answer context-based questions on the facts?

  • @aurkom
    @aurkom 10 months ago

    How to change this for tasks like classification?

  • @shaileshtiwari8483
    @shaileshtiwari8483 9 months ago

    Is a GPU machine necessary to train Llama 7B?

  • @manabchetia8382
    @manabchetia8382 10 months ago

    Thank you. Can you please also show us how to train on GPU #3 or GPU #1, or both GPU #1 & #3, but not on GPU #0 in a multi-GPU machine?

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago +4

      CUDA_VISIBLE_DEVICES=0 autotrain llm --train ..... will run it on gpu 0
      CUDA_VISIBLE_DEVICES=1,3 autotrain llm --train ..... will run it on gpu 1 and 3

  • @sachinsoni5044
    @sachinsoni5044 10 months ago

    hey Abhishek, I am a full stack developer and interested in AI. I love to code. I tried learning DS but found no interest in juggling with data. How should I learn?

  • @abdalgaderabubaker6078
    @abdalgaderabubaker6078 10 months ago +2

    Any idea how to fine-tune it on an Apple M1/M2 chip? I just have installation issues with autotrain-advanced 😢

    • @allentran3357
      @allentran3357 10 months ago

      Would love to know how to do this as well!

    • @jas5945
      @jas5945 10 months ago +1

      Bumping because I'm running into so many issues with M1. Cannot believe how few resources are available for M1 right now given that macOS is so widely used in data science

  • @anjalichoudhary2093
    @anjalichoudhary2093 10 months ago

    Great tutorial, how can I run the fine-tuned model on inference data?

    • @abhishekkrthakur
      @abhishekkrthakur  10 months ago

      there are a couple of videos on my channel for that

  • @eunoia7151
    @eunoia7151 10 months ago

    How do I use a dataset in the huggingface hub?

  • @bhaveshbadjatya2914
    @bhaveshbadjatya2914 9 months ago

    When trying to use the inference API for the fine-tuned model I am getting 'error': "Could not load model XXXX/XXXX with any of the following classes: (,)". How do I resolve this?