Scaling Training of HuggingFace Transformers With Determined

Last Updated on June 17, 2021 by Editorial Team

Author(s): Towards AI Team

Join Determined AI’s third lunch-and-learn session and learn how to scale the training of HuggingFace Transformers with Determined.

Join an exciting lunch-and-learn event by our friends at Determined AI

Training complex state-of-the-art natural language processing (NLP) models is now a breeze thanks to HuggingFace, whose open-source Transformers library has become the go-to for data scientists and machine learning engineers: with a few straightforward library calls, they can build and configure state-of-the-art NLP models. As a result, the library has become crucial for training NLP models at companies such as Baidu and Alibaba, and it has contributed to state-of-the-art results on several NLP tasks.
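As an illustration (not part of the event materials), here is a minimal sketch of those library calls, assuming the transformers and PyTorch packages are installed:

# Minimal sketch: load a pretrained checkpoint and run it on one sentence.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any Hugging Face Hub checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Determined makes distributed training easy.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])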

Our friends at Determined AI are hosting an exciting lunch-and-learn on training HuggingFace Transformers at scale using Determined! Learn to train Transformers with distributed training, hyperparameter searches, and cheap spot instances, all without modifying code.
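To give a sense of what "without modifying code" looks like in practice, here is a rough, hypothetical sketch of the kind of experiment configuration Determined uses to wrap distributed training and a hyperparameter search around an unchanged training script; the field names only approximate Determined's config schema and are not taken from the event:

# Hypothetical sketch: field names approximate Determined's experiment
# config schema; the trial class and values are made up for illustration.
experiment_config = {
    "name": "hf-transformer-chatbot",
    "entrypoint": "model_def:TransformerTrial",  # hypothetical trial class
    "resources": {"slots_per_trial": 8},         # e.g., 8 GPUs for distributed training
    "hyperparameters": {
        "learning_rate": {"type": "log", "minval": 1e-5, "maxval": 1e-3, "base": 10},
        "per_device_batch_size": {"type": "categorical", "vals": [8, 16, 32]},
    },
    "searcher": {
        "name": "adaptive_asha",      # adaptive early-stopping hyperparameter search
        "metric": "validation_loss",
        "max_trials": 16,
    },
}

Scaling up or down is then a matter of editing slots_per_trial or the searcher settings rather than touching the model code.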

Please consider joining on Wednesday, June 30th at 10 AM PT for a hands-on tutorial led by Liam Li, Senior Machine Learning Engineer, and Angela Jiang, Product Manager, both at Determined AI (lunch included!).

This fantastic hands-on tutorial will cover the basics of using Determined with HuggingFace Transformers and walk through how to build a chatbot using a large Transformer language model with distributed training and spot instances.

Join in and come away with an understanding of how to train Transformers at scale with distributed training, experiment and artifact tracking, and resource management, all without needing to modify code.

Please make sure to register for the Meetup event and the Zoom meeting, and if you are on Slack, please consider joining Determined’s Slack community!


Scaling Training of HuggingFace Transformers With Determined was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published via Towards AI
