{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Bo5_91rSgTg3" }, "source": [ "**Importing Libraries**" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "id": "1FOULt5NeSny" }, "outputs": [], "source": [ "import shutil\n", "import os, sys, random\n", "from glob import glob\n", "import pandas as pd\n", "from shutil import copyfile\n", "import pandas as pd\n", "from sklearn import preprocessing, model_selection\n", "import matplotlib.pyplot as plt\n", "from matplotlib import patches\n", "import numpy as np\n", "import os\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": { "id": "mDgSjs4FMS82" }, "source": [ "Refer [this blog](https://towardsai.net/p/computer-vision/yolo-v5-object-detection-on-a-custom-dataset) for more information. Its an excellent resource. \n", "\n", "**Cloning Official Repo** \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3dyetdeOpUIR" }, "outputs": [], "source": [ "!git clone 'https://github.com/ultralytics/yolov5.git'\n", "!sed -i 's/PyYAML>=5.3.1/PyYAML==5.4.1/g' ./yolov5/requirements.txt\n", "!pip install -qr 'yolov5/requirements.txt'" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "rOTf_JHIMV9T", "outputId": "514c1cfa-c1e0-4c87-a3a1-672eb744e2cd" }, "outputs": [ { "data": { "text/plain": [ "'yolov5/tobacco_data.yaml'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Setting the model parameters\n", "# copying the custom_dataset.yaml file to the project repo\n", "# setting number of classes to two (since the tobacco 800 dataset contains 2 classes, Logo & Signature)\n", "shutil.copyfile('Training/tobacco_data.yaml', 'yolov5/tobacco_data.yaml') " ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/Users/vivekgupta/DS/Signature/1. Detection\n" ] } ], "source": [ "cd .." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "rOTf_JHIMV9T", "outputId": "514c1cfa-c1e0-4c87-a3a1-672eb744e2cd" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "train: tobacco_yolo_format/images/train\r\n", "val: tobacco_yolo_format/images/valid\r\n", "\r\n", "nc: 2\r\n", "names: ['DLLogo', 'DLSignature']\r\n" ] } ], "source": [ "!cat yolov5/tobacco_data.yaml" ] }, { "cell_type": "markdown", "metadata": { "id": "Lfn8HpbaO3tD" }, "source": [ "**Setting some augmentations**" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "wvg-PdCOO26C" }, "outputs": [], "source": [ "# !sed -i 's/perspective: 0.0/perspective: 0.1/g' ./yolov5/data/hyp.finetune.yaml\n", "# !sed -i 's/shear: 0.0/shear: 0.1/g' ./yolov5/data/hyp.finetune.yaml\n", "# !sed -i 's/flipud: 0.0/flipud: 0.5/g' ./yolov5/data/hyp.finetune.yaml\n", "# !sed -i 's/degrees: 0.0/degrees: 0.2/g' ./yolov5/data/hyp.finetune.yaml" ] }, { "cell_type": "markdown", "metadata": { "id": "xCtQwpQHPXnD" }, "source": [ "**Training**" ] }, { "cell_type": "markdown", "metadata": { "id": "jgUydkkSp3gz" }, "source": [ "--img 640 is the width of the images. \n", "`Dataset.yaml` file should be present in the directory pointed by --data. \n", "--cfg models/model.yaml is used to set the model we want to train on. I have used yolov5x.yaml, more information could be found [here.](https://github.com/ultralytics/yolov5#pretrained-checkpoints) \n", " \n", "\n", "Many useful tips and information regarding training and testing could be foung [here](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results). **This will helps to clear a good amount of doubts and errors. Also I recommend you to go through the issues section of the repo if you faces any errors or doubts. Legand has it that you will find the solution for your miseries there.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NKvLEPfHOSQw", "scrolled": true }, "outputs": [], "source": [ "!python yolov5/train.py --img 640 --batch 16 --epochs 100 --weights \"yolov5s.pt\" \\\n", "--data yolov5/tobacco_data.yaml --cfg yolov5/models/yolov5s.yaml --name Tobacco-run --device 'mps'" ] }, { "cell_type": "markdown", "metadata": { "id": "xB2bMpZyrWS3" }, "source": [ "**Testing**" ] }, { "cell_type": "markdown", "metadata": { "id": "ly5RlSNsrZqr" }, "source": [ "**To predict images in a folder** \n", "--hide-labels is used to hide the labels in the detected images. \n", "--hide-conf is used to hide the confidence scores in the detected images. --classes [0, 1, etc] used to detect only the classes mentioned here. For our use case we need only signature class, so use --classes 1.\n", "--line-thickness integer used to set the thickness of bounging box. \n", "--save-crop and --save-txt used to save the crops and labels. \n", "--project could be used to specify the results path " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uBOJ9IPvPZ8h" }, "outputs": [], "source": [ "!python yolov5/detect.py --source TobaccoData_Raw/tobacco_yolo_format/images/valid/ --weights 'yolo_model/best.pt' \\\n", " --hide-labels --hide-conf --classes 1 --line-thickness 2 --device 'mps'" ] }, { "cell_type": "markdown", "metadata": { "id": "n9PS6yTCs3td" }, "source": [ "**To predict a single image**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kdRwxu7oY53A" }, "outputs": [], "source": [ "!python yolov5/detect.py --source tobacco_yolo_format/images/valid/imagename --weights 'runs/train/Tobacco-run/weights/best.pt' \\\n", " --hide-labels --hide-conf --classes 1 --line-thickness 2 " ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "YOLOv5_Tobacco.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 4 }