Unboxing Loss Functions in YOLOv8
Author(s): Vincent Liu
Originally published on Towards AI.
YOLO has long been one of the go-to models for object detection. It's fast and accurate, and the API is concise and easy to work with: a training or inference job takes only a few lines of code. After pose estimation was added in the second half of 2023, YOLOv8 now supports four tasks: classification, object detection, instance segmentation, and pose estimation.
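As a rough illustration of how little code a typical run takes, here is a minimal sketch using the ultralytics package; the checkpoint name, dataset YAML, and image path are placeholders, not values from this article.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # load a pretrained detection checkpoint
model.train(data="coco128.yaml", epochs=3)  # fine-tune the model in a single call
results = model("path/to/image.jpg")        # run inference in a single call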
Figure 1. Tasks and loss functions. Source: Image by author.
In this article, we will walk through the five loss functions used in YOLOv8. Note that we only discuss the default loss functions configured in the YOLOv8 repository, and for readability we focus on the representative parameters, skipping some of the scalars and constants used for normalization or scaling.
The tasks and their corresponding loss functions in YOLOv8 are shown in Figure 1. Let's dive into each of them in the following sections.
IoU Loss
Figure 2. CIoU loss. Source: Image by author.
cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1)  # convex (smallest enclosing box) width
ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1)  # convex height
if CIoU or DIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
    c2 = cw ** 2 …
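For intuition, the complete-IoU (CIoU) score combines three terms: the plain IoU, a normalized center-distance penalty rho^2 / c^2 built from the enclosing box computed above, and an aspect-ratio consistency term v weighted by alpha; the box loss is then 1 minus this score. Below is a minimal, self-contained sketch of that computation for a single pair of xyxy boxes. It mirrors the variable names of the snippet above, but it is not the exact Ultralytics implementation; the helper name ciou and the example boxes are made up for illustration.

import math
import torch

def ciou(box1: torch.Tensor, box2: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Boxes are given as (x1, y1, x2, y2) corner coordinates.
    b1_x1, b1_y1, b1_x2, b1_y2 = box1
    b2_x1, b2_y1, b2_x2, b2_y2 = box2

    # Intersection and union -> plain IoU.
    inter_w = (b1_x2.minimum(b2_x2) - b1_x1.maximum(b2_x1)).clamp(0)
    inter_h = (b1_y2.minimum(b2_y2) - b1_y1.maximum(b2_y1)).clamp(0)
    inter = inter_w * inter_h
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing (convex) box -> center-distance penalty rho^2 / c^2.
    cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1)
    ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1)
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
            (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4

    # Aspect-ratio consistency term v and its trade-off weight alpha.
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (v - iou + (1 + eps))

    return iou - (rho2 / c2 + v * alpha)  # CIoU score; the box loss is 1 - CIoU

# Example with two made-up overlapping boxes.
print(ciou(torch.tensor([0., 0., 4., 4.]), torch.tensor([1., 1., 5., 5.])))

Perfectly overlapping boxes give a score of 1 (zero loss), while distant or differently shaped boxes drive the score down, so minimizing 1 - CIoU pulls predicted boxes toward the target in position, size, and aspect ratio.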