
The Best Alternative to GitHub Copilot: Continue.dev + Free AI

Last Updated on June 18, 2024 by Editorial Team

Author(s): Vishvaraj Dhanawade

Originally published on Towards AI.

Photo by Olia Danilevich: https://www.pexels.com/photo/man-sitting-in-front-of-three-computers-4974915/

In this article, we will use free AI services for code completion and save the cost of a GitHub Copilot subscription. First, we will configure Continue.dev with a Groq API key and see how to use it. Then, we will explore private and secure options in case you don't want to share your code. Finally, I have added my review of the best small LLM models for coding.

Now, let's take a look at GitHub Copilot and Continue.dev.

What is GitHub Copilot?

GitHub Copilot is a code completion tool developed by GitHub and OpenAI that assists developers by autocompleting code. (Wikipedia)

GitHub Copilot is also known as an AI pair programmer. It helps you code faster, find bugs sooner, and figure out the next step.

Simply put, it speeds up your development greatly.

How does GitHub Copilot work?

It relies on file data such as the filename, code, comments, and user-provided prompts to generate auto-completed code.

Under the hood, it uses OpenAI's Codex model to generate code. Codex is an LLM like ChatGPT or Llama, but Copilot feeds it context from your project directory, open editor tabs, and prompts so it can understand more and generate the proper code.

But GitHub Copilot costs $10 per month for individual developers, Copilot Business costs $19 per user per month, and Copilot Enterprise costs $39 per user per month.

What is Continue.dev?

Continue.dev is an extension for VS Code and JetBrains IDEs that lets you use any LLM as an AI coding assistant. It provides a wide range of features; please visit the official docs at https://docs.continue.dev/intro to learn more. In short, it is an alternative to the GitHub Copilot extension, and we can configure multiple models with it.

Continue + Groq: VSCode Setup

Visit the Groq Cloud Console and log in with your email or Google account. Click on API Keys in the left sidebar, then click "Create API Key" to create a new API key. Give it a name, and it will generate the key. Copy it and save it for later use.

Groq Cloud Console: API Keys

Now open VS Code and go to Extensions, then type continue.dev in the search box. You will be able to see multiple extensions. Click on "Continue - Codestral, GPT-4o, and more", published by continue.dev, as shown in the image below, and install it.

Continue β€” VS Code Extension setup

Let's set up the Continue.dev extension for use in VS Code.

Setup Continue in VS Code

Click on the Continue icon in the left sidebar. It will open the Continue panel and, on first use, offer two options.

Choose "Use your API Key" and click Continue. Select Groq as the provider and Llama or Mistral as the model. Then, paste the API key copied from the Groq console and click Add Model (as shown in the image above).
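Behind the UI, this step writes a model entry into Continue's config.json (by default at ~/.continue/config.json). A minimal sketch of what the resulting Groq entry can look like; the llama3-70b-8192 model name is just one example of a Groq-hosted model, and the key is a placeholder you replace with your own:

```json
{
  "models": [
    {
      "title": "Groq Llama 3 70B",
      "provider": "groq",
      "model": "llama3-70b-8192",
      "apiKey": "<YOUR_GROQ_API_KEY>"
    }
  ]
}
```

Editing this file directly is equivalent to adding the model through the panel.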

Now you can select code in VS Code, and a hint will appear above the selection: Cmd+L adds it to the chat, and Cmd+I edits the highlighted code.

Create a new empty file, press Cmd+I, and describe what you want. It will generate code in the empty file and wait for your permission to accept or reject it.

Continue Extension Shortcuts:

  • Alt + Cmd + Y = accept
  • Alt + Cmd + N = reject
  • Cmd + Shift + Enter = accept all
  • Cmd + Shift + Del = reject all

Setup Local LLM Server with Ollama

Visit ollama.com to download the binary and run it. You can also visit the Ollama GitHub repository for the Docker setup instructions.

Once downloaded, run it locally.

Here, we will set up the Qwen2 LLM, but if you want to try different models, please visit the Ollama models library and browse.

Open the terminal and run the command below to download the Qwen2 7B model:

ollama run qwen2:7b
Pulling the qwen2:7b model on a local machine

That’s it.

Ollama will automatically run the requested model and return a response; we just need to send the model name in the request data.
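To see what "send the model name in the request data" means concretely, here is a small Python sketch that builds the JSON body for Ollama's /api/generate endpoint (the default local server listens on port 11434); the prompt text is just a placeholder:

```python
import json
from urllib import request  # used by the commented-out live call below


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}


payload = build_generate_request("qwen2:7b", "Write a hello-world in Python")
print(json.dumps(payload))

# To actually query a running Ollama server, uncomment the following:
# req = request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# body = json.loads(request.urlopen(req).read())
# print(body["response"])
```

Continue does exactly this kind of call for you once the model is configured; the sketch is only to show which model name goes where.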

Setup: VS Code + Continue + Ollama

Open Continue in VS Code by clicking its icon in the left sidebar. Now click on the plus (+) icon next to the LLM model name at the bottom left, as shown in the screenshot below.

continue plus button to add new models

It will open the list of AI service providers. Select Ollama, and you will be redirected to its configuration page, which explains how to pull/download models and lists top models you can add directly to the config.

For now, select Autodetect to get the list of all locally available models, or click to open the config.json file and add the model manually.

ollama local server setup with continue.dev

For manual addition, add the dictionary below to the models list and save the file.

{
  "title": "phi3",
  "model": "phi3",
  "completionOptions": {},
  "apiBase": "http://localhost:11434",
  "provider": "ollama"
}
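For context, this dictionary goes inside the top-level "models" array of config.json, so the relevant part of the file can look roughly like this (any other keys already in your config.json should be left in place, and the model name assumes you have pulled phi3 with Ollama):

```json
{
  "models": [
    {
      "title": "phi3",
      "model": "phi3",
      "completionOptions": {},
      "apiBase": "http://localhost:11434",
      "provider": "ollama"
    }
  ]
}
```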

Change the model at the bottom left of the Continue panel to start using it.

Note: Remove Continue Extension Data

If you want to remove Continue's data and configuration, you can find its directory in your home directory, i.e., ~/.continue

rm -rf ~/.continue

While experimenting, I often mess up the configs and need to remove them.

Qwen2 vs CodeGemma vs Granite-Code: Review

While experimenting locally, I tried the qwen2, codegemma, and granite-code models, using the same prompt for each.

The Qwen2 model performed a lot better than codegemma and granite-code.

Prompt:

write fastapi application to serve as key-value store

Conclusion

I hope this tutorial helps you set up a free Copilot alternative on your machine and proves useful to you. AI tools are a great help for speeding up coding and fixing minor issues. There will be some differences in functionality and answers compared to GitHub Copilot, since Copilot uses the latest models and has a whole team dedicated to its functionality and developer experience.

The same setup can be done with PyCharm or other JetBrains tools.


Published via Towards AI
