Machine Learning in a non-Euclidean Space
Author(s): Mastafa Foufa
Originally published on Towards AI.
Chapter III. What examples of non-Euclidean ML should you remember?
Photo by Greg Rosenke on Unsplash
This post was co-authored with Aniss Medbouhi and is based on his research under the supervision of Prof. Danica Kragic at the Robotics, Perception and Learning lab at KTH.
What you will learn in this article.
A landscape overview of state-of-the-art hyperbolic machine learning models for dimensionality reduction, and a way to classify them.
An insight into a Poincaré contrastive embedding method, and how to extend the well-known Stochastic Gradient Descent to a Riemannian manifold.
The fundamentals needed to understand hyperbolic VAEs, and how to extend the Gaussian distribution to a Riemannian manifold.
An easy introduction to the concept of delta-hyperbolicity from Gromov's theory of hyperbolic groups.
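Before diving into the conversation, here is a minimal, self-contained sketch of the last point on the list: delta-hyperbolicity. A metric space is delta-hyperbolic if its Gromov products satisfy the four-point condition up to a slack of delta; trees are exactly 0-hyperbolic, and a small delta suggests the data is "tree-like" and may embed well in hyperbolic space. The brute-force computation below is only an illustration (real libraries use faster approximations on sampled point sets), and the toy distance matrix is our own example, not one from the research discussed here.

```python
import itertools

def gromov_product(d, x, y, w):
    """Gromov product (x|y)_w = 1/2 * (d(w,x) + d(w,y) - d(x,y))."""
    return 0.5 * (d[w][x] + d[w][y] - d[x][y])

def delta_hyperbolicity(d):
    """Brute-force delta via the four-point condition:
    delta = max over all x, y, z, w of
            min((x|y)_w, (y|z)_w) - (x|z)_w.
    O(n^4) -- fine for a toy example, too slow for large datasets."""
    n = len(d)
    delta = 0.0
    for x, y, z, w in itertools.product(range(n), repeat=4):
        slack = (min(gromov_product(d, x, y, w),
                     gromov_product(d, y, z, w))
                 - gromov_product(d, x, z, w))
        delta = max(delta, slack)
    return delta

# Toy example: shortest-path metric of a path graph a-b-c-d
# with unit edges. A path is a tree, so delta should be 0.
d = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
print(delta_hyperbolicity(d))  # -> 0.0, since tree metrics are 0-hyperbolic
```

By contrast, running the same function on the shortest-path metric of a 4-cycle returns a strictly positive delta, reflecting that cycles are less tree-like than paths.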
M: Hi Aniss, welcome back to our chat. I'm eager to hear more about the hyperbolic ML models you are working on. As a PhD student in this field, you must have a lot of insights to share. What can you tell us about the current advances in this area?
A: Hi Mastafa, thanks for having me again. I want to give you an overview of the main concepts and applications of machine learning in hyperbolic spaces, without getting too technical. I hope this will help the general audience to understand the benefits and…
Published via Towards AI