Pencil Sketch Image With Python
Author(s): Rokas Balsys
In this tutorial, I'll show you how we can create a "pencil" sketch image using Python with just a few lines of code.
I've always been fascinated by computer vision, especially its power to manipulate images through fast matrix operations. In Python, a picture is simply an array of numbers, so we can apply various matrix manipulations and get exciting results. In the previous tutorials, we learned how to separate ourselves from the background, detect faces, and do all of this in real-time. This time, we'll turn any image into a "pencil" sketch with just a few lines of code.
The process is pretty simple:
- Grayscale the image;
- Invert its colors;
- Blur the inverted image;
- Apply the Dodge blend to the blurred and grayscale images.
We can pick any image we want for this. But I'll demonstrate how to create an object we can apply to any image, video, or real-time stream. I'll do this to expand the functionality of the background-removal project that I am working on in this tutorial series.
Import libraries
OpenCV and Numpy are the only libraries needed for this project. We import them with the following two lines of code:
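```python
# The only two dependencies for this project
import cv2
import numpy as np
```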
Read Photo
Here is one of the commands that can be used to read an image stored on a disc using OpenCV:
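```python
# Read "image.png" from the current folder into memory as a NumPy array (our frame)
frame = cv2.imread("image.png")
```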
This command reads the file "image.png" located in the current folder on the disc and stores it in memory as a frame. But as I mentioned, this could just as well be a sequence of frames or an image loaded by other methods.
Show image using OpenCV
The next important step while creating such a sketch in our project is knowing how to quickly view the results without saving them to disc. The following OpenCV commands can be used to display the image on the screen:
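```python
# Show the frame in a window titled "image" and wait for a key press before closing
cv2.imshow("image", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```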
When these lines are executed, the image opens in a new window titled "image":
Grayscale the image
First, we need to grayscale the image (convert it to black and white). We can do this either with the cv2 library or with numpy. Numpy has no built-in grayscaling function, but knowing the math behind the conversion, we can easily do it ourselves. Without going into the derivation, the formula looks like this:
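Here is a minimal numpy version of it (the channel weights below assume a BGR frame, which is what cv2.imread returns):

```python
def grayscale(frame):
    # Weighted sum of the colour channels (luminosity method);
    # the weight order assumes a BGR frame as returned by cv2.imread
    gray = np.dot(frame[..., :3], [0.114, 0.587, 0.299])
    # Stack the single channel back into a 3-layer image
    return np.stack([gray, gray, gray], axis=-1)

gray_img = grayscale(frame)
```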
Here we multiply the image's colour channels by appropriate weights and sum them into a single channel. Because the rest of the pipeline expects a 3-layer image, we stack that channel back into three layers with numpy's stack function. This is what we get:
Invert the image
Now we need to invert the image. By invert, I mean white should become black and vice versa. It's as simple as subtracting each pixel value from 255, because, by default, images are 8-bit and have a maximum of 256 tones:
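```python
# Invert the grayscale image: each pixel becomes 255 minus its value
inverted_img = 255 - gray_img
```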
When we display the inverted image or save it on a disc, we receive the following picture:
Blur the image
Now we need to blur the inverted image. Blurring is performed by applying a Gaussian filter to it. The most important parameter here is the sigma (standard deviation) of the Gaussian function: it controls the amount of dispersion and, therefore, the degree of blurring. As sigma increases, the picture becomes blurrier. A suitable value can be chosen by trial and error:
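For example (blur_sigma = 5 here is just an illustrative starting value):

```python
# Blur the inverted image with a Gaussian filter;
# a kernel size of (0, 0) lets OpenCV derive the kernel from sigma
blur_sigma = 5  # tune by trial and error
blurred_img = cv2.GaussianBlur(inverted_img, (0, 0), sigmaX=blur_sigma)
```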
The blurred image looks like this:
Dodge and Merge
The Colour Dodge blending mode divides the bottom layer by the inverted top layer. This brightens the bottom layer depending on the value of the top layer. Blended with our blurred image, it produces a picture that highlights the boldest edges.
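A minimal dodge-blend helper built from these definitions could look like this (the small epsilon is just there to avoid division by zero on pure-white pixels):

```python
def dodge(front, back):
    # Divide the bottom (grayscale) layer by the inverted top (blurred) layer
    result = back * 255.0 / (255.0 - front + 1e-6)
    return np.clip(result, 0, 255).astype(np.uint8)

sketch_img = dodge(blurred_img, gray_img)
```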
And that's it! Here are the results:
Here is the complete pencil sketch code for the object:
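The actual object lives in my GitHub project linked below; a minimal reconstruction of it from the steps above could look like this (class and parameter names are illustrative):

```python
class PencilSketch:
    """Apply a pencil sketch effect to a frame (image, video frame, or webcam frame)."""

    def __init__(self, blur_sigma: int = 5):
        self.blur_sigma = blur_sigma

    def grayscale(self, frame):
        # Luminosity grayscale; weight order assumes a BGR frame from cv2.imread
        gray = np.dot(frame[..., :3], [0.114, 0.587, 0.299])
        return np.stack([gray, gray, gray], axis=-1)

    def dodge(self, front, back):
        # Colour Dodge blend: divide the grayscale layer by the inverted blurred layer
        result = back * 255.0 / (255.0 - front + 1e-6)
        return np.clip(result, 0, 255).astype(np.uint8)

    def __call__(self, frame):
        gray = self.grayscale(frame)
        inverted = 255 - gray
        blurred = cv2.GaussianBlur(inverted, (0, 0), sigmaX=self.blur_sigma)
        return self.dodge(blurred, gray)
```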
As you can probably guess, we don't have much room to play with here beyond the blur_sigma parameter used during blurring. To give us another knob, I added an extra function that sharpens the image. The results of the sharpening can be seen in this animated GIF:
Sharpening is very similar to the blurring process, except that now, instead of creating a kernel that averages each pixel's intensity, we create a kernel that boosts pixel intensity and therefore makes edges more visible to the human eye.
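The exact kernel used in the project may differ; a common sharpening convolution looks roughly like this:

```python
def sharpen(frame):
    # A typical 3x3 sharpening kernel: boosts each pixel relative to its neighbours
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]])
    return cv2.filter2D(frame, -1, kernel)
```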
Here is basic code showing how to use the PencilSketch object on our porch image:
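Assuming the photo is saved as "porch.jpg" (the filename here is just a placeholder):

```python
# Load the photo, apply the pencil sketch effect, and display the result
frame = cv2.imread("porch.jpg")  # placeholder filename for the porch image
pencil_sketch = PencilSketch(blur_sigma=5)
sketch_frame = pencil_sketch(frame)

cv2.imshow("pencil sketch", sketch_frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```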
The results of the above code look like this:
Conclusion:
This was a pretty nice tutorial that didn't require any deep Python knowledge to achieve this amazing "pencil" sketch style from any image. Using my project files from GitHub and the Engine object, you can easily apply this effect to any image, video, or real-time webcam stream.
In the next tutorial, I'll cover something even more exciting. I am thinking about face recognition, because we already have face detection implemented. What is left is to identify the person behind the detected face.
Thanks for reading! As always, all the code given in this tutorial can be found on my GitHub page and is free to use!
Originally published at https://pylessons.com/pencil-sketch