# K-Nearest Neighbor (KNN) Algorithm

In this article, we will learn about a supervised learning algorithm popularly known as KNN, or k-Nearest Neighbours.

## What is K-Nearest Neighbors Algorithm?

K-Nearest Neighbours is one of the most basic yet essential classification algorithms in Machine Learning. It belongs to the supervised learning domain and finds intense application in pattern recognition, data mining, and intrusion detection.

It is widely applicable in real-life scenarios since it is non-parametric, meaning it makes no underlying assumptions about the distribution of the data (as opposed to algorithms such as GMM, which assume a Gaussian distribution of the given data). We are given some prior data (also called training data), which classifies coordinates into groups identified by an attribute.

As an example, consider a table of data points containing two features, where each point is classified as ‘Red’ or ‘Green’.

[Figure: KNN algorithm working visualization]

Now, given another set of data points (also called testing data), allocate these points to a group by analyzing the training set. Note that the unclassified points are marked as ‘White’.

## Intuition Behind KNN Algorithm

If we plot these points on a graph, we may be able to locate some clusters or groups. Now, given an unclassified point, we can assign it to a group by observing what group its nearest neighbors belong to. This means a point close to a cluster of points classified as ‘Red’ has a higher probability of getting classified as ‘Red’.

Intuitively, we can see that the first point (2.5, 7) should be classified as ‘Green’ and the second point (5.5, 4.5) should be classified as ‘Red’.

## Distance Metrics Used in KNN Algorithm

As we know, the KNN algorithm identifies the nearest points or groups for a query point. But to determine which groups or points are closest, we need a metric. For this purpose, we use the distance metrics below:

### Euclidean Distance

This is simply the Cartesian distance between two points in the plane/hyperplane: the length of the straight line joining the two points under consideration. It measures the net displacement between two states of an object.

d(x, y) = √( Σᵢ (xᵢ − yᵢ)² )

### Manhattan Distance

This distance metric is generally used when we are interested in the total distance traveled by the object rather than its displacement. It is calculated by summing the absolute differences between the coordinates of the points in n dimensions.

d(x, y) = Σᵢ |xᵢ − yᵢ|

### Minkowski Distance

Both the Euclidean and the Manhattan distance are special cases of the Minkowski distance:

d(x, y) = ( Σᵢ |xᵢ − yᵢ|ᵖ )^(1/p)

From the formula above, setting p = 2 gives the Euclidean distance, and setting p = 1 gives the Manhattan distance.

The metrics discussed above are the most common in Machine Learning problems, but there are other distance metrics as well, such as the Hamming distance, which comes in handy for problems requiring element-wise comparisons between two vectors whose contents may be boolean or string values.
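
As a minimal sketch (the function names below are mine, not from any library), the three metrics can be written in a few lines of Python; `minkowski_distance` reduces to the Euclidean and Manhattan distances at p = 2 and p = 1 respectively:

```python
# Sketch of the three distance metrics discussed above,
# for points given as equal-length coordinate sequences.

def euclidean_distance(a, b):
    # Straight-line (net displacement) distance
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def manhattan_distance(a, b):
    # Sum of absolute coordinate differences (total distance traveled)
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def minkowski_distance(a, b, p):
    # Generalisation: p = 1 gives Manhattan, p = 2 gives Euclidean
    return sum(abs(ai - bi) ** p for ai, bi in zip(a, b)) ** (1 / p)

a, b = (2.5, 7), (3, 6)
print(euclidean_distance(a, b))     # ≈ 1.118
print(minkowski_distance(a, b, 2))  # same as Euclidean
print(minkowski_distance(a, b, 1))  # same as Manhattan: 1.5
```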

## How to choose the value of k for KNN Algorithm?

The value of k is crucial in the KNN algorithm, as it defines the number of neighbors considered. It should be chosen based on the input data: if the data has many outliers or much noise, a higher value of k is usually better. It is also recommended to choose an odd value of k to avoid ties in binary classification. Cross-validation methods can help select the best k for a given dataset.
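
One simple cross-validation scheme is leave-one-out: classify each training point using the remaining points and count the mistakes for each candidate k. The sketch below (helper names are mine) runs this on the toy dataset used in the example program later in this article:

```python
# Leave-one-out cross-validation to pick k for a small
# two-group dataset (groups labelled 0 and 1).

def knn_predict(train, query, k):
    # Majority vote among the k nearest training points
    dists = sorted((((x - query[0]) ** 2 + (y - query[1]) ** 2) ** 0.5, label)
                   for label, pts in train.items() for (x, y) in pts)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

points = {0: [(1, 12), (2, 5), (3, 6), (3, 10), (3.5, 8),
              (2, 11), (2, 9), (1, 7)],
          1: [(5, 3), (3, 2), (1.5, 9), (7, 2), (6, 1),
              (3.8, 1), (5.6, 4), (4, 2), (2, 5)]}

def loo_errors(points, k):
    # Classify each point with itself held out; count mistakes
    errors = 0
    for label, pts in points.items():
        for p in pts:
            held_out = {l: [q for q in ps if not (l == label and q == p)]
                        for l, ps in points.items()}
            errors += knn_predict(held_out, p, k) != label
    return errors

for k in (1, 3, 5, 7):
    print(k, loo_errors(points, k))
```

The k with the fewest leave-one-out errors is a reasonable choice for this dataset.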

## Applications of the KNN Algorithm

• Data Preprocessing – When dealing with a Machine Learning problem, we first perform EDA; if we find that the data contains missing values, multiple imputation methods are available. One such method is the KNN Imputer, which is quite effective and commonly used for sophisticated imputation.
• Pattern Recognition – KNN works very well for pattern recognition; for example, a KNN classifier trained on the MNIST handwritten-digit dataset achieves remarkably high accuracy on evaluation.
• Recommendation Engines – The main task a KNN algorithm performs is assigning a new query point to a pre-existing group built from a huge corpus of data. This is exactly what recommender systems need: assign each user to a group and then recommend items based on that group’s preferences.
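
The KNN Imputer idea mentioned above can be sketched without any library: fill a missing value in one column with the mean of that column over the k rows nearest on the observed columns. This is a simplified, hypothetical sketch of the approach, not the actual `KNNImputer` implementation:

```python
# Simplified sketch of KNN-based imputation: a missing value
# (None) is replaced by the mean of that column over the k
# rows that are nearest on the fully observed columns.

def knn_impute(rows, row_idx, col_idx, k=2):
    target = rows[row_idx]
    # Columns that are observed in the target row
    obs = [j for j, v in enumerate(target) if v is not None]

    def dist(other):
        return sum((target[j] - other[j]) ** 2 for j in obs) ** 0.5

    # Candidate donors: other rows with the needed column present
    donors = [r for i, r in enumerate(rows)
              if i != row_idx and r[col_idx] is not None]
    donors.sort(key=dist)
    return sum(r[col_idx] for r in donors[:k]) / k

rows = [[1.0, 10.0],
        [1.1, 11.0],
        [5.0, 50.0],
        [1.05, None]]  # second feature missing

filled = knn_impute(rows, row_idx=3, col_idx=1, k=2)
print(filled)  # mean of the two nearest rows' second feature: 10.5
```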

## Advantages of the KNN Algorithm

• Easy to implement as the complexity of the algorithm is not that high.
• Adapts Easily – KNN stores all the training data in memory, so whenever a new example or data point is added, the algorithm adjusts automatically and the new example contributes to future predictions as well.
• Few Hyperparameters – The only parameters required when training a KNN algorithm are the value of k and the choice of distance metric.

## Disadvantages of the KNN Algorithm

• Does Not Scale – KNN is often called a lazy algorithm: it defers all computation to prediction time, so it requires a lot of computing power as well as data storage. This makes the algorithm both time-consuming and resource-intensive.
• Curse of Dimensionality – Due to the peaking phenomenon, KNN is affected by the curse of dimensionality: the algorithm has a hard time classifying data points properly when the dimensionality is too high.
• Prone to Overfitting – Because it is affected by the curse of dimensionality, KNN is also prone to overfitting. Feature selection and dimensionality reduction techniques are generally applied to deal with this problem.
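
The distance-concentration effect behind the curse of dimensionality can be seen in a small experiment on random data: as the dimensionality grows, the gap between the nearest and farthest neighbour shrinks relative to the nearest distance, so "nearest" becomes less meaningful. This is an illustrative sketch with synthetic uniform data:

```python
import random

def relative_contrast(dim, n_points=200, seed=0):
    # (farthest - nearest) / nearest distance from a random query
    # to n_points random points in the unit hypercube.
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(sum((a - b) ** 2 for a, b in zip(query, p)) ** 0.5)
    return (max(dists) - min(dists)) / min(dists)

# Contrast is large in low dimensions and collapses in high dimensions
for dim in (2, 10, 100, 1000):
    print(dim, round(relative_contrast(dim), 3))
```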

Example Program:

Assume 0 and 1 as the two classifiers (groups).

## Python3

```python
# Python3 program to find the group of an unknown
# point using the K nearest neighbour algorithm.

import math


def classifyAPoint(points, p, k=3):
    '''
    Classifies point p using the k nearest neighbour algorithm.
    It assumes only two groups and returns 0 if p belongs to
    group 0, else 1 (belongs to group 1).

    Parameters -
        points: dictionary of training points with two keys, 0 and 1;
                each key maps to a list of (x, y) points in that group
        p: a tuple, the test data point of the form (x, y)
        k: number of nearest neighbours to consider, default is 3
    '''
    distance = []
    for group in points:
        for feature in points[group]:
            # Calculate the Euclidean distance of p from the training point
            euclidean_distance = math.sqrt((feature[0] - p[0]) ** 2
                                           + (feature[1] - p[1]) ** 2)
            # Add a tuple of form (distance, group) to the distance list
            distance.append((euclidean_distance, group))

    # Sort the distance list in ascending order
    # and select the first k distances
    distance = sorted(distance)[:k]

    freq1 = 0  # frequency of group 0
    freq2 = 0  # frequency of group 1
    for d in distance:
        if d[1] == 0:
            freq1 += 1
        elif d[1] == 1:
            freq2 += 1

    return 0 if freq1 > freq2 else 1


# driver function
def main():
    # Dictionary of training points with two keys - 0 and 1
    # key 0 holds points belonging to class 0
    # key 1 holds points belonging to class 1
    points = {0: [(1, 12), (2, 5), (3, 6), (3, 10), (3.5, 8),
                  (2, 11), (2, 9), (1, 7)],
              1: [(5, 3), (3, 2), (1.5, 9), (7, 2), (6, 1),
                  (3.8, 1), (5.6, 4), (4, 2), (2, 5)]}

    # testing point p(x, y)
    p = (2.5, 7)

    # Number of neighbours
    k = 3

    print("The value classified to unknown point is: {}".format(
        classifyAPoint(points, p, k)))


if __name__ == '__main__':
    main()

# This code is contributed by Atul Kumar (www.fb.com/atul.kr.007)
```

## C++

```cpp
// C++ program to find the group of an unknown
// point using the K nearest neighbour algorithm.
#include <bits/stdc++.h>
using namespace std;

struct Point {
    int val;         // Group of point
    double x, y;     // Co-ordinates of point
    double distance; // Distance from test point
};

// Used to sort an array of points by increasing
// order of distance
bool comparison(Point a, Point b)
{
    return (a.distance < b.distance);
}

// This function finds the classification of point p using
// the k nearest neighbour algorithm. It assumes only two
// groups and returns 0 if p belongs to group 0, else
// 1 (belongs to group 1).
int classifyAPoint(Point arr[], int n, int k, Point p)
{
    // Fill distances of all points from p
    for (int i = 0; i < n; i++)
        arr[i].distance =
            sqrt((arr[i].x - p.x) * (arr[i].x - p.x)
                 + (arr[i].y - p.y) * (arr[i].y - p.y));

    // Sort the points by distance from p
    sort(arr, arr + n, comparison);

    // Now consider the first k elements and only
    // two groups
    int freq1 = 0; // Frequency of group 0
    int freq2 = 0; // Frequency of group 1
    for (int i = 0; i < k; i++) {
        if (arr[i].val == 0)
            freq1++;
        else if (arr[i].val == 1)
            freq2++;
    }

    return (freq1 > freq2 ? 0 : 1);
}

// Driver code
int main()
{
    int n = 17; // Number of data points

    // Training points: { val, x, y, distance }
    Point arr[] = {
        { 0, 1, 12, 0 },  { 0, 2, 5, 0 },   { 1, 5, 3, 0 },
        { 1, 3, 2, 0 },   { 0, 3, 6, 0 },   { 1, 1.5, 9, 0 },
        { 1, 7, 2, 0 },   { 1, 6, 1, 0 },   { 1, 3.8, 3, 0 },
        { 0, 3, 10, 0 },  { 1, 5.6, 4, 0 }, { 1, 4, 2, 0 },
        { 0, 3.5, 8, 0 }, { 0, 2, 11, 0 },  { 1, 2, 5, 0 },
        { 0, 2, 9, 0 },   { 0, 1, 7, 0 }
    };

    /* Testing point */
    Point p;
    p.x = 2.5;
    p.y = 7;

    // Parameter to decide group of the testing point
    int k = 3;
    printf("The value classified to unknown point"
           " is %d.\n", classifyAPoint(arr, n, k, p));
    return 0;
}
```

## Java

```java
// Java program to find the group of an unknown
// point using the K nearest neighbour algorithm.
import java.io.*;
import java.util.*;

class GFG {
    static class Point {
        int val;         // Group of point
        double x, y;     // Co-ordinates of point
        double distance; // Distance from test point

        Point(int val, double x, double y)
        {
            this.val = val;
            this.x = x;
            this.y = y;
        }
    }

    // Used to sort an array of points by increasing
    // order of distance
    static class Comparison implements Comparator<Point> {
        public int compare(Point a, Point b)
        {
            return Double.compare(a.distance, b.distance);
        }
    }

    // This function finds the classification of point p using
    // the k nearest neighbour algorithm. It assumes only two
    // groups and returns 0 if p belongs to group 0, else
    // 1 (belongs to group 1).
    static int classifyAPoint(Point arr[], int n, int k, Point p)
    {
        // Fill distances of all points from p
        for (int i = 0; i < n; i++)
            arr[i].distance = Math.sqrt(
                (arr[i].x - p.x) * (arr[i].x - p.x)
                + (arr[i].y - p.y) * (arr[i].y - p.y));

        // Sort the points by distance from p
        Arrays.sort(arr, new Comparison());

        // Now consider the first k elements and only
        // two groups
        int freq1 = 0; // Frequency of group 0
        int freq2 = 0; // Frequency of group 1
        for (int i = 0; i < k; i++) {
            if (arr[i].val == 0)
                freq1++;
            else if (arr[i].val == 1)
                freq2++;
        }

        return (freq1 > freq2 ? 0 : 1);
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 17; // Number of data points

        // Training points: (group, x, y)
        Point[] arr = {
            new Point(0, 1, 12),  new Point(0, 2, 5),
            new Point(1, 5, 3),   new Point(1, 3, 2),
            new Point(0, 3, 6),   new Point(1, 1.5, 9),
            new Point(1, 7, 2),   new Point(1, 6, 1),
            new Point(1, 3.8, 3), new Point(0, 3, 10),
            new Point(1, 5.6, 4), new Point(1, 4, 2),
            new Point(0, 3.5, 8), new Point(0, 2, 11),
            new Point(1, 2, 5),   new Point(0, 2, 9),
            new Point(0, 1, 7)
        };

        /* Testing point (group not yet known) */
        Point p = new Point(-1, 2.5, 7);

        // Parameter to decide group of the testing point
        int k = 3;
        System.out.println(
            "The value classified to unknown point is "
            + classifyAPoint(arr, n, k, p));
    }
}

// This code is contributed by Karandeep1234
```

## C#

```csharp
// C# program to find the group of an unknown
// point using the K nearest neighbour algorithm.
using System;
using System.Collections.Generic;

class Point {
    public int val;         // Group of point
    public double x, y;     // Co-ordinates of point
    public double distance; // Distance from test point

    public Point(int val, double x, double y)
    {
        this.val = val;
        this.x = x;
        this.y = y;
    }
}

class HelloWorld {

    // This function finds the classification of point p using
    // the k nearest neighbour algorithm. It assumes only two
    // groups and returns 0 if p belongs to group 0, else
    // 1 (belongs to group 1).
    public static int classifyAPoint(List<Point> arr, int n, int k, Point p)
    {
        // Fill distances of all points from p
        for (int i = 0; i < n; i++)
            arr[i].distance = Math.Sqrt(
                (arr[i].x - p.x) * (arr[i].x - p.x)
                + (arr[i].y - p.y) * (arr[i].y - p.y));

        // Sort the points by distance from p
        arr.Sort(delegate(Point a, Point b) {
            return a.distance.CompareTo(b.distance);
        });

        // Now consider the first k elements and only
        // two groups
        int freq1 = 0; // Frequency of group 0
        int freq2 = 0; // Frequency of group 1
        for (int i = 0; i < k; i++) {
            if (arr[i].val == 0)
                freq1++;
            else if (arr[i].val == 1)
                freq2++;
        }

        return (freq1 > freq2 ? 0 : 1);
    }

    static void Main() {
        int n = 17; // Number of data points

        // Training points: (group, x, y)
        List<Point> arr = new List<Point> {
            new Point(0, 1, 12),  new Point(0, 2, 5),
            new Point(1, 5, 3),   new Point(1, 3, 2),
            new Point(0, 3, 6),   new Point(1, 1.5, 9),
            new Point(1, 7, 2),   new Point(1, 6, 1),
            new Point(1, 3.8, 3), new Point(0, 3, 10),
            new Point(1, 5.6, 4), new Point(1, 4, 2),
            new Point(0, 3.5, 8), new Point(0, 2, 11),
            new Point(1, 2, 5),   new Point(0, 2, 9),
            new Point(0, 1, 7)
        };

        /* Testing point (group not yet known) */
        Point p = new Point(-1, 2.5, 7);

        // Parameter to decide group of the testing point
        int k = 3;
        Console.WriteLine("The value classified to unknown point is "
                          + classifyAPoint(arr, n, k, p));
    }
}

// The code is contributed by Nidhi goel.
```

## Javascript

```javascript
// Javascript program to find the group of an unknown
// point using the K nearest neighbour algorithm.
class Point {
  constructor(val, x, y, distance) {
    this.val = val;           // Group of point
    this.x = x;               // X-coordinate of point
    this.y = y;               // Y-coordinate of point
    this.distance = distance; // Distance from test point
  }
}

// This function finds the classification of point p using
// the k nearest neighbour algorithm. It assumes only two
// groups and returns 0 if p belongs to group 0, else
// 1 (belongs to group 1).
function classifyAPoint(arr, n, k, p) {
  // Fill distances of all points from p
  for (let i = 0; i < n; i++) {
    arr[i].distance = Math.sqrt(
      (arr[i].x - p.x) * (arr[i].x - p.x) +
      (arr[i].y - p.y) * (arr[i].y - p.y)
    );
  }

  // Sort the points by increasing order of distance from p
  arr.sort((a, b) => a.distance - b.distance);

  // Now consider the first k elements and only two groups
  let freq1 = 0; // Frequency of group 0
  let freq2 = 0; // Frequency of group 1
  for (let i = 0; i < k; i++) {
    if (arr[i].val === 0) freq1++;
    else if (arr[i].val === 1) freq2++;
  }

  return freq1 > freq2 ? 0 : 1;
}

// Driver code
const n = 17; // Number of data points

// Training points: (group, x, y)
const arr = [
  new Point(0, 1, 12),  new Point(0, 2, 5),  new Point(1, 5, 3),
  new Point(1, 3, 2),   new Point(0, 3, 6),  new Point(1, 1.5, 9),
  new Point(1, 7, 2),   new Point(1, 6, 1),  new Point(1, 3.8, 3),
  new Point(0, 3, 10),  new Point(1, 5.6, 4), new Point(1, 4, 2),
  new Point(0, 3.5, 8), new Point(0, 2, 11), new Point(1, 2, 5),
  new Point(0, 2, 9),   new Point(0, 1, 7)
];

// Testing point
const p = {
  x: 2.5,
  y: 7,
  val: -1, // uninitialized
};

// Parameter to decide group of the testing point
const k = 3;

console.log(
  "The value classified to unknown point is " + classifyAPoint(arr, n, k, p)
);
```

Output:

The value classified to unknown point is 0

Time Complexity: O(N * logN) per query (computing N distances, then sorting them)
Auxiliary Space: O(N) for the list of distances
