Fine-Tuning vs Distillation vs Transfer Learning: What's The Difference?
Author(s): Artem Shelamanov Originally published on Towards AI. What are the main ideas behind fine-tuning, distillation, and transfer learning? A simple explanation with a focus on LLMs. Fine-tuning vs distillation …
I Switched From Windows To Linux For 1 Month: Here Is What Happened
Author(s): Artem Shelamanov Originally published on Towards AI. Long story short: my PC broke, and while it was being repaired, I had to use my laptop with nothing but Linux …
The Stargate Project: New Age for Data Scientists?
Author(s): Artem Shelamanov Originally published on Towards AI. The recently announced Stargate Project, with its massive investment and ambitious goals, is likely to have a huge impact on the data science field. In this article, we cover the main implications and possible consequences. …
Why OpenAI's o1 Model Is A Scam
Author(s): Artem Shelamanov Originally published on Towards AI. As a data scientist who has worked with LLMs since they were first introduced, when I heard about o1, I thought it was a joke. When it turned out to be an actual, …