The Best Alternative to GitHub Copilot: Continue.dev + Free AI
Last Updated on June 18, 2024 by Editorial Team
Author(s): Vishvaraj Dhanawade
Originally published on Towards AI.
In this article, we will use free AI services for code completion and save the cost of a GitHub Copilot subscription. First, we will configure continue.dev with a Groq API key and see how to use it. Then, we will explore private and secure options if you don't want to share your code. I have also added my review of the best small LLM models for coding.
Now, let's take a look at GitHub Copilot and Continue.dev.
What is GitHub Copilot?
GitHub Copilot is a code completion tool developed by GitHub and OpenAI that assists developers by autocompleting code. (Wikipedia)
GitHub Copilot is also known as an AI pair programmer. It helps you code faster, find bugs sooner, and figure out the next step.
Simply put, it speeds up your development greatly.
How does GitHub Copilot work?
It relies on file data such as filename, code, comments, and user-provided prompts to generate auto-complete code.
It uses OpenAI's Codex LLM to generate code. Codex is just another LLM like ChatGPT or Llama, but it draws on the project directory, code from open tabs, and prompts to build more context and generate better code.
But GitHub Copilot costs $10 per month for individual developers. Copilot Business costs $19 per user per month, and Copilot Enterprise costs $39 per user per month.
What is Continue.dev?
continue.dev is an extension for VS Code and JetBrains IDEs that lets you use any LLM as an AI coding assistant. It provides a wide range of features; please visit the official docs at https://docs.continue.dev/intro to learn more. In short, it's an alternative to the GitHub Copilot extension, and we can configure multiple models with it.
Continue + Groq: VSCode Setup
Visit the Groq Cloud Console and log in with your email or Google account. Click on "API Keys" in the left sidebar, then click "Create API Key" to create a new API key. Give it a name, and it will generate the key. Copy it and save it for later use.
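Before wiring the key into an editor, you can sanity-check it from Python. Groq exposes an OpenAI-compatible chat completions endpoint at api.groq.com/openai/v1; the helper function name and the model name below are my own illustrative choices, not from the official docs:

```python
import json
import os
import urllib.request

def groq_chat_request(prompt, model="llama3-8b-8192",
                      api_key=os.environ.get("GROQ_API_KEY", "")):
    """Build an OpenAI-style chat completion request for Groq's API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# With a valid GROQ_API_KEY set, send it like this:
# with urllib.request.urlopen(groq_chat_request("Say hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If the key is valid, the response contains the model's reply under `choices[0].message.content`.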
Now open VS Code and go to Extensions. Type continue.dev in the search box; you will be able to see multiple extensions. Click on "Continue - Codestral, GPT-4o, and more" created by continue.dev, as shown in the image below, and install it.
Let's set up the continue.dev extension for use in VS Code.
Click on the Continue icon in the left sidebar. It will open its panel and, if it's your first time, present two options.
Choose "Use your API Key" and click Continue. Select Groq as the provider and Llama or Mistral as the model. Then paste the API key you copied from the Groq console and click "Add Model" (as shown in the image above).
Now you can select code in VS Code, and a message will appear above the selection: Cmd+L to add it to chat, or Cmd+I to edit the highlighted code.
Create a new empty file, press Cmd+I, and describe what you want. It will generate code in the empty file and wait for your permission to accept or reject it.
Continue Extension Shortcuts:
- Alt + Cmd + Y = accept
- Alt + Cmd + N = reject
- Cmd + Shift + Enter = accept all
- Cmd + Shift + Del = reject all
Setup Local LLM Server with Ollama
Visit ollama.com to download the binary and run it. You can also visit the Ollama GitHub repository for Docker setup instructions.
Once downloaded, run it locally.
Here, we will set up the Qwen2 LLM. If you want to try different models, please visit the Ollama models library and browse the options.
Open the terminal and run the command below to download the Qwen2 7B model.
ollama run qwen2:7b
That's it.
Ollama will automatically load the model and serve responses; we just need to include the model name in the request data.
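That request can be sketched with Python's standard library. The /api/generate endpoint and default port 11434 come from Ollama's REST API; the helper function name is my own:

```python
import json
import urllib.request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build a request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,      # which local model to use, e.g. "qwen2:7b"
        "prompt": prompt,
        "stream": False,     # return one complete JSON response
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("qwen2:7b", "Write a Python hello world")
# With the Ollama server running locally, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Swapping the `model` field is all it takes to target a different downloaded model.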
Setup: VS Code + Continue + Ollama
Open Continue in VS Code by clicking its icon in the left sidebar. Now click on the plus (+) icon next to the LLM model name at the bottom left, as shown in the screenshot below.
It will open the list of AI service providers. Select Ollama, and you will be redirected to its configuration page, which explains how to pull/download models and lists top models you can add directly to your config.
For now, select Autodetect to get the list of all downloaded models, or click to open the config.json file and add the model manually.
For manual addition, add the dictionary below to the models list and save the file.
{
  "title": "phi3",
  "model": "phi3",
  "completionOptions": {},
  "apiBase": "http://localhost:11434",
  "provider": "ollama"
}
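For context, Continue keeps configured models in a top-level "models" array in config.json, so after the edit the relevant part of the file looks roughly like this (a sketch; your existing entries and other settings will differ):

```json
{
  "models": [
    {
      "title": "phi3",
      "model": "phi3",
      "completionOptions": {},
      "apiBase": "http://localhost:11434",
      "provider": "ollama"
    }
  ]
}
```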
Change the model at the bottom left of the Continue extension to start using it.
Note: Remove Continue Extension Data
If you want to remove its data and configuration, Continue keeps everything in a .continue directory under your home directory (for example, /home/vishvaraj/.continue on Linux or /Users/vishvaraj/.continue on macOS):
rm -rf ~/.continue
Often while experimenting, I mess up the configs and need to remove them.
Qwen2 vs CodeGemma vs Granite-Code: Review
While experimenting, I tried the qwen2, codegemma, and granite-code models locally, using the same prompt for each.
The Qwen2 model performed much better than CodeGemma and Granite-Code.
Prompt:
write fastapi application to serve as key-value store
Conclusion
I hope this tutorial helps you set up a free Copilot alternative on your machine. AI tools are a great help for speeding up coding and fixing minor issues. There will be some differences in functionality and answers compared to GitHub Copilot, as it uses the latest models and a whole team is dedicated to its functionality and developer experience.
The same thing can be done with PyCharm or JetBrains tools.
Published via Towards AI