
How to Get Profits in Grape Farming Using YoloV3

Last Updated on July 20, 2023 by Editorial Team

Author(s): Akula Hemanth Kumar

Originally published on Towards AI.

Making computer vision easy with Monk, a low-code deep learning tool and unified wrapper for computer vision.

Grapes detection

The YoloV3 Training Secrets

OK, so you’ve decided on the crop (the custom object, in our case grapes), and now you need to cultivate (train) it using YoloV3. With Monk this is a simple, ‘instant money’ task: you run some variation of fit() and then eval(), much as you would with other popular Python machine learning libraries (e.g., scikit-learn, Keras).

We’ll take a look at the big picture first and then zoom in on each step. A typical ML workflow goes something like this (a condensed sketch follows the list):

  1. Load and Preprocess the Data
  2. Define Model, Optimizer
  3. Train the Model
  4. Infer the Model
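
With Monk’s wrapper, those four steps condense to roughly the sequence below. The paths and variable names are placeholders, and every call is walked through in its own step later in the article.

# Condensed view of the four steps using Monk's YoloV3 wrapper
gtf = Detector()

# 1. Load and preprocess the data
gtf.set_train_dataset(img_dir, label_dir, class_list_file, batch_size=2)
gtf.set_val_dataset(img_dir, label_dir)

# 2. Define model and optimizer
gtf.set_model(model_name="yolov3")
gtf.set_hyperparams(optimizer="sgd", lr=0.00579)

# 3. Train the model
gtf.Train(num_epochs=10)

# 4. Infer with the trained model
gtf = Infer()
gtf.Model(model_name, class_list, weights, use_gpu=True, input_size=416)
gtf.Predict(img_path, conf_thres=0.2, iou_thres=0.5)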

1. Loading and Preprocessing the Data


Here we are using the WGISD grapes dataset (Wine Grape Instance Segmentation Dataset).

# Download the WGISD dataset and create the folders YOLO expects
!git clone https://github.com/thsant/wgisd
!mkdir wgisd/data/labels
!mkdir wgisd/data/images

# Write the class list file: one class name per line
f = open("wgisd/data/classes.txt", 'w')
f.write("Grapes\n")
f.close()
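
The cloned repository keeps the images and their YOLO-format annotation files together under wgisd/data, so they still need to be sorted into the images and labels folders created above. A minimal sketch (assuming the .jpg and .txt files sit directly under wgisd/data) could look like this:

import glob
import shutil

# Move every image into images/ and every annotation into labels/,
# keeping classes.txt at the root of wgisd/data.
for img_path in glob.glob("wgisd/data/*.jpg"):
    shutil.move(img_path, "wgisd/data/images/")

for txt_path in glob.glob("wgisd/data/*.txt"):
    if not txt_path.endswith("classes.txt"):
        shutil.move(txt_path, "wgisd/data/labels/")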

Required YOLO format

wgisd/data (root)
|
|------images (img_dir)
|        |------img1.jpg
|        |------img2.jpg
|        |------......... (and so on)
|
|------labels (label_dir)
|        |------img1.txt
|        |------img2.txt
|        |------......... (and so on)
|
|------classes.txt

Classes file

List the classes, one per line. The order determines the class IDs used in the annotation files.

E.g.

class1 (→ will be 0)
class2 (→ will be 1)
class3 (→ will be 2)
class4 (→ will be 3)

Annotation file format

  • All coordinates should be normalized to the range [0, 1].
  • X coordinates and box widths are divided by the image width; Y coordinates and box heights are divided by the image height.

Ex. (one line per bounding box of an object detected in the image)

CLASS_ID BOX_X_CENTER BOX_Y_CENTER BOX_WIDTH BOX_HEIGHT

class_id x1 y1 w1 h1
class_id x2 y2 w2 h2 ... (and so on)
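
As a quick illustration, here is a small sketch that converts a pixel-space box to one YOLO-format line. The image size and box coordinates below are made-up values, not taken from the dataset.

# Hypothetical example: a 2048 x 1365 image with a box from (x_min, y_min) = (100, 200)
# to (x_max, y_max) = (300, 500), belonging to class ID 0 ("Grapes").
img_w, img_h = 2048, 1365
x_min, y_min, x_max, y_max = 100, 200, 300, 500

x_center = ((x_min + x_max) / 2) / img_w   # normalized box center X
y_center = ((y_min + y_max) / 2) / img_h   # normalized box center Y
box_w = (x_max - x_min) / img_w            # normalized box width
box_h = (y_max - y_min) / img_h            # normalized box height

print(f"0 {x_center:.6f} {y_center:.6f} {box_w:.6f} {box_h:.6f}")
# -> 0 0.097656 0.256410 0.097656 0.219780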

Load dataset

gtf = Detector()  # Detector comes from Monk's YoloV3 pipeline
gtf.set_train_dataset(img_dir, label_dir, class_list_file, batch_size=2)  # paths set up in step 1
gtf.set_val_dataset(img_dir, label_dir)

2. Define Model, Optimizer


Great, you’re now ready to start defining your model. You can choose one from the following.

  • “yolov3”
  • “yolov3s”
  • “yolov3-spp”
  • “yolov3-spp3”
  • “yolov3-tiny”
  • “yolov3-spp-matrix”
  • “csresnext50-panet-spp”

But for now, we’ll just pick “yolov3”.

gtf.set_model(model_name="yolov3")

Next, define the optimizer. You can choose one of the following:

  • “sgd”
  • “adam”

Each optimizer has different parameters, but most require at least a learning rate, referred to as lr.

Here we choose “sgd”; after a few experiments, we found lr = 0.00579 to be a good starting point.

# multi_scale varies the input resolution during training; evolve runs a hyperparameter search for num_generations rounds
gtf.set_hyperparams(optimizer="sgd", lr=0.00579, multi_scale=True, evolve=True, num_generations=10)

3. Train the Model


OK, now you’re getting into the real farming. The training step is similar to scikit-learn or Keras.

gtf.Train(num_epochs=10)

4. Infer the Model


After you’ve trained your model, the final step is to predict.

gtf = Infer()  # inference pipeline of Monk's YoloV3 wrapper
gtf.Model(model_name, class_list, weights, use_gpu=True, input_size=416)  # weights trained in step 3
gtf.Predict(img_path, conf_thres=0.2, iou_thres=0.5)
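
To eyeball a result inside the notebook, you can display the annotated image that the inference step writes out. The output path below, output.jpg, is an assumption for illustration; check where your pipeline actually saves its predictions.

from IPython.display import Image, display

# "output.jpg" is an assumed output path for the annotated prediction;
# substitute whatever file your inference run actually writes.
display(Image(filename="output.jpg"))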

Here are some sample inferences:

Inference 1
Inference 2

This article is inspired by:

How to Cook Neural Nets with PyTorch: A recipe for training neural networks using PyTorch (towardsdatascience.com)

You can find the complete Jupyter notebook on GitHub.

If you have any questions, feel free to reach out to Abhishek and Akash.

I am extremely passionate about computer vision and deep learning in general. I am an open-source contributor to Monk Libraries.

You can also see my other writings at:

Akula Hemanth Kumar – Medium (medium.com)


Published via Towards AI
