Top 10 AI Articles for April 2022
Artificial intelligence (AI) newsletter by Towards AI #18
Hey everyone. I hope you are all well. In this issue, we dive into our 10 favorite AI articles published by our exceptional contributors in April, including some solid techniques on cross-validation, trends in AI, the infrastructure of enterprise-level social media, and much more.
If you haven’t heard, we recently announced an exciting investment in Towards AI to expand the platform for the AI community. We have lots of exciting new projects in the pipeline and are looking forward to making Towards AI an essential platform for the AI community. Our goal is to make AI more accessible and to play a role in ensuring that AI benefits everyone. We aim to build a community that democratizes access to AI by making it easier to learn AI, build AI tools, and benefit from AI as a non-professional. Stay tuned for exciting news on the community front very soon!
Automating inventory system management: We took an inside look at how Gather AI is building the world’s first large-scale, truly autonomous inventory management system. Gather AI’s system uses a fleet of drones powered by its proprietary Autonomy and Machine Learning platform.
Top 10 AI articles for April 2022
Trends in AI — April 2022: This includes an exciting monthly roundup of machine learning research papers and news, including Nvidia’s new H100 GPU, Google’s 540-billion-parameter PaLM, Pathways, Kubric, Tensor Programs, bootstrapping reasoning with reasoning, the sparse all-MLP architecture, animating faces with deep learning, and much more.
Cross-validation types and when to use them: This article explains the most common cross-validation methods, when to apply them, and why they’re essential — with code included. Don’t overlook the author’s concluding summary of which strategy to use in each situation.
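To make the idea concrete, here is a minimal sketch (not the article’s own code) contrasting plain k-fold with stratified k-fold using scikit-learn on a toy dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

# Toy classification dataset: 100 samples, 5 features
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# Plain k-fold: split into 5 folds, train on 4, validate on 1, rotate.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
kf_scores = cross_val_score(LogisticRegression(), X, y, cv=kf)
print(f"k-fold mean accuracy: {kf_scores.mean():.3f}")

# Stratified k-fold preserves the class proportions in every fold,
# which matters for imbalanced classification problems.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
skf_scores = cross_val_score(LogisticRegression(), X, y, cv=skf)
print(f"stratified k-fold mean accuracy: {skf_scores.mean():.3f}")
```

The choice between these (and leave-one-out, group, or time-series variants) is exactly the kind of decision the article’s summary table covers.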
The combinatorial purged cross-validation method: We have all heard of various cross-validation techniques, but purged cross-validation remains one of the most underappreciated despite its importance. It is a robust backtesting method for time series. This article explains why standard cross-validation methods fail on time series data and how to apply purged cross-validation, with only basic math required.
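The core intuition, keeping validation data strictly after the training data and purging observations near the boundary so leakage can’t occur, can be approximated (this is a simplified stand-in, not the combinatorial method from the article) with scikit-learn’s `TimeSeriesSplit` and its `gap` parameter:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# 24 time-ordered observations (e.g. monthly returns)
X = np.arange(24).reshape(-1, 1)

# TimeSeriesSplit keeps each test fold strictly after its training
# data; `gap` drops observations between train and test, a simple
# stand-in for "purging" samples whose labels overlap the test window.
tscv = TimeSeriesSplit(n_splits=4, gap=2)
for train_idx, test_idx in tscv.split(X):
    # With gap=2, no training index is within 2 steps of the test fold.
    assert train_idx.max() < test_idx.min() - 2
    print(f"train up to t={train_idx.max()}, "
          f"test t={test_idx.min()}..{test_idx.max()}")
```

The full combinatorial purged method goes further, forming many train/test combinations rather than a single forward walk, but the purge-and-embargo idea is the same.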
How does Google generate summaries?: Google announced auto-generated summaries in Google Docs in March 2022. How cool is that! Did you know the feature combines the PEGASUS model with RNN and Transformer architectures? Learn all about the machine learning model behind the new feature.
Transformers: What are they, and how can I make one?: Did the material on Google’s auto-generated summaries overwhelm you? No worries, we’ve got you covered. This article provides a basic overview of transformers for NLP tasks along with instructions for building a transformer for text generation in PyTorch.
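As a taste of what building one involves (a minimal sketch under our own assumptions, not the article’s implementation), here is a tiny decoder-style language model in PyTorch: embed tokens, apply self-attention under a causal mask so each position only sees its past, and project back to vocabulary logits:

```python
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    """Minimal decoder-only language model: embed tokens, run causal
    self-attention, project hidden states back to the vocabulary."""
    def __init__(self, vocab_size=100, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        # Causal mask: position i may only attend to positions <= i.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)  # next-token logits at every position

model = TinyTransformerLM()
tokens = torch.randint(0, 100, (1, 8))  # batch of 1, sequence of 8
logits = model(tokens)
print(logits.shape)  # → torch.Size([1, 8, 100])
```

Sampling from the logits at the last position, appending the chosen token, and repeating gives autoregressive text generation.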
Inside LinkedIn’s machine learning infrastructure: In this era of social networking, who doesn’t use LinkedIn? This case study reveals fascinating details about LinkedIn’s machine learning infrastructure and how it was built to handle large-scale scenarios.
Beginner tips for getting started with Azure machine learning: This article provides a well-considered collection of tips that pair well with the DP-100 exam syllabus. Microsoft’s Azure machine learning is a cloud service that assists developers in their data science endeavors. It includes a variety of tools for tracking the progress of your model, versioning your data, securely deploying your model, and more. This is a must-read if you have developed or are planning to develop your AI/ML models in the cloud.
Text generation with Markov decision processes: Markov decision processes are among the most efficient methods for dealing with sequences, whether taming them or using them as building blocks to generate text automatically. This article introduces the topic with beautifully designed educational visuals as well as implementation code to make learning easier.
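The Markov property, that the next word depends only on the current state, can be illustrated (this toy sketch uses a simple first-order Markov chain, not the article’s full decision-process treatment) in a few lines of Python:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed after it
    (a first-order Markov chain over the corpus)."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: repeatedly sample a follower of the last word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Sampling followers proportionally to how often they appear is what makes the output read like the corpus without copying it verbatim.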
All about ensemble techniques: Another graphic chef-d’oeuvre for learning various ensemble techniques such as voting, stacking, bagging, and boosting. It also covers commonly overlooked variants such as hard and soft voting, blending and k-folding, bootstrapping, pasting, and random subspaces.
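For a quick feel of the difference between these (a minimal sketch with scikit-learn, not drawn from the article), hard voting takes the majority class label across models, soft voting averages their predicted probabilities, and bagging trains many copies of one model on bootstrap samples:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

estimators = [("lr", LogisticRegression()),
              ("dt", DecisionTreeClassifier(random_state=0)),
              ("nb", GaussianNB())]

# Hard voting: each model casts one vote; the majority label wins.
hard = VotingClassifier(estimators, voting="hard").fit(X, y)
# Soft voting: average predicted class probabilities, then argmax.
soft = VotingClassifier(estimators, voting="soft").fit(X, y)
# Bagging: many trees, each fit on a bootstrap sample of the data.
bag = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                        n_estimators=20, random_state=0).fit(X, y)

for name, clf in [("hard voting", hard), ("soft voting", soft),
                  ("bagging", bag)]:
    print(f"{name}: train accuracy {clf.score(X, y):.2f}")
```

Soft voting requires every base model to expose `predict_proba`, which is why it is often paired with probabilistic classifiers.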
Introduction to Intel distribution of OpenVINO toolkit: This article discusses OpenVINO, a deep learning optimization toolkit. OpenVINO offers tools and libraries that optimize neural networks through techniques such as pruning and quantization, and speed up inference across Intel architectures in a hardware-agnostic way. Intel recently released the toolkit’s most significant update since its launch, adding more deep learning models, broader device portability, and higher inference performance with fewer code changes.
We are grateful for your time and hope you enjoyed reading this issue of the AI newsletter. If you did, please consider subscribing if you haven’t yet, or share it with your friends and colleagues — it is genuinely appreciated.
Thank you for joining us! Until next time,
For previous issues, check out our AI newsletter archive.