Pencil Sketch Image With Python
Last Updated on January 6, 2023 by Editorial Team
Author(s): Rokas Balsys
Originally published on Towards AI, the World’s Leading AI and Technology News and Media Company.
In this tutorial, I’ll show you how we can create a “pencil” sketch image using Python with just a few lines of code.
I’ve always been fascinated by computer vision, especially its power to manipulate images through fast matrix operations. In Python, a picture is just an array of numbers, so we can apply various matrix manipulations to get exciting results. In the previous tutorials, we learned how to separate ourselves from the background, detect faces, and do all of this in real time.
The process is pretty simple:
- Grayscale the image;
- Invert the colors of it;
- Blur the inverted image;
- Apply the Dodge blend to the blurred and grayscale image.
We can pick any image we want for this, but I’ll demonstrate how to create an object we can apply to any image, video, or real-time stream. This expands the functionality of the background-removal project I am working on in this tutorial series.
OpenCV and NumPy are the only libraries needed for the project. We import them with the following two lines of code:
Here is one of the commands that can be used to read an image stored on disk with OpenCV:

This command reads the file “image.png” located in the current folder and stores it in memory as a frame. But as I mentioned, this could equally be a frame from a video sequence or an image loaded by other methods.
Show image using OpenCV
The next important step in our project is knowing how to quickly view the results without saving them to disk. The following OpenCV commands can be used to display the image on the screen:

When these lines are executed, the image opens in a new window titled ‘image’:
Grayscale the image
First, we need to grayscale our image (convert it to black and white). We can do this either with the cv2 library or with NumPy. NumPy has no built-in grayscaling function, but knowing the math behind it, the conversion is easy: we take a weighted sum of the channels, gray = 0.299·R + 0.587·G + 0.114·B.

Here we multiply the RGB image channels by the appropriate weights and sum them into a single channel. Because the rest of the pipeline expects a 3-layer image, we stack that channel back into three layers with NumPy’s stack function. This is what we get:
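The grayscale step can be sketched in NumPy like this (remember OpenCV loads images in BGR order, so the weights are applied accordingly):

```python
import numpy as np

def grayscale(frame: np.ndarray) -> np.ndarray:
    """Weighted sum of the channels (frame is BGR, as OpenCV loads it),
    then stack the single channel back into a 3-layer image."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    gray = (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)
    return np.stack((gray,) * 3, axis=-1)
```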
Invert the image
Now we need to invert the image. By invert, I mean white should become black and vice versa. It’s as simple as subtracting each pixel value from 255, because images are 8-bit by default and have a maximum of 256 tones:
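In code, the inversion is a one-liner thanks to NumPy broadcasting:

```python
import numpy as np

def invert(frame: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image: every pixel value v becomes 255 - v."""
    return 255 - frame
```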
When we display the inverted image or save it to disk, we get the following picture:
Blur the image
Now we need to blur the inverted image. Blurring is performed by applying a Gaussian filter to the inverted image. The most important parameter here is sigma, the standard deviation of the Gaussian function. As sigma increases, the image becomes blurrier; sigma controls the amount of dispersion and, therefore, the degree of blurring. A suitable value of sigma can be chosen by trial and error:
The blurred image looks as follows:
Dodge and Merge
The Colour Dodge blending mode divides the bottom layer by the inverted top layer. This brightens the bottom layer depending on the value of the top layer, leaving us with an image that highlights the boldest edges.
And that’s it! Here are the results:
Here is the complete pencil sketch code for the object:
Apart from the blur_sigma parameter, we don’t have much room to play with here. To solve this problem, I added an extra function to sharpen the image. The results of the sharpening can be seen in this animated GIF:

It is very similar to the blurring process, except that instead of creating a kernel that averages each pixel’s neighborhood, we create a kernel that boosts the pixel intensity, making edges more visible to the human eye.
Here is a basic code on how to use the PencilSketch object for our porch image:
The results of the above code look like this:
This was a pretty nice tutorial: no deep Python knowledge was required to achieve this amazing “pencil” sketch style on any image. Using my project files from GitHub and the Engine object, you can easily apply this effect to any image, video, or real-time webcam stream.

In the next tutorial, I’ll cover something even more exciting. I am thinking about face recognition: we already have face detection implemented, so what’s left is to identify the person behind that face.
Thanks for reading! As always, all the code given in this tutorial can be found on my GitHub page and is free to use!
Originally published at https://pylessons.com/pencil-sketch