ML | Introduction to Data in Machine Learning

  • Difficulty Level : Easy
  • Last Updated : 24 Aug, 2022

DATA: Data is any unprocessed fact, value, text, sound, or image that has not yet been interpreted and analyzed. It is the most important ingredient of Data Analytics, Machine Learning, and Artificial Intelligence: without data we cannot train any model, and modern research and automation would come to nothing. Large enterprises spend a great deal of money simply to gather as much relevant data as possible.

Example: Why did Facebook acquire WhatsApp for the huge price of $19 billion?
The answer is simple and logical: to gain access to user information that Facebook might not have had but WhatsApp did. This information about users is of paramount importance to Facebook, as it helps the company improve its services.

INFORMATION: Data that has been interpreted and manipulated so that it now carries meaningful inferences for its users.

KNOWLEDGE: The combination of inferred information, experience, learning, and insight. It results in awareness or concept building for an individual or organization.

How do we split data in Machine Learning?

  • Training Data: The part of the data we use to train the model. This is the data the model actually sees (both input and output) and learns from.
  • Validation Data: The part of the data used for frequent evaluation of the model as it fits to the training dataset, and for tuning hyperparameters (parameters set before learning begins). This data plays its part while the model is training.
  • Testing Data: Once the model is completely trained, the testing data provides an unbiased evaluation. We feed in the inputs from the testing data and the model predicts values without seeing the actual outputs; we then evaluate the model by comparing its predictions with the actual outputs present in the testing data. This shows how much the model has learned from the training data.
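The three-way split above can be sketched in a few lines of plain Python. The 70/15/15 ratios and the fixed seed below are illustrative assumptions, not a rule; in practice the split ratios depend on the size of the dataset.

```python
import random

def split_data(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a dataset and split it into train/validation/test parts.

    The 70/15/15 default ratios are an illustrative assumption; the
    remaining records after train and validation go to the test set.
    """
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = data[:]                 # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_data(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

Shuffling before splitting matters: if the data is ordered (say, by date or by class), a naive slice would give the model a biased view of the problem.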

Consider an example: 
A shopping-mart owner conducts a survey and ends up with a long list of questions and answers collected from customers; this list of questions and answers is DATA. Whenever he wants to infer something, he cannot go through every question from thousands of customers, as that would be time-consuming and unhelpful. To reduce this overhead and make the work easier, the data is manipulated through software, calculations, graphs, and so on; the inferences drawn from this manipulated data are INFORMATION. So data is a prerequisite for information. KNOWLEDGE then differentiates between two individuals who hold the same information: it is not technical content but is linked to the human thought process. 


Different Forms of Data 

  • Numeric Data: If a feature represents a characteristic measured in numbers, it is called a numeric feature.
  • Categorical Data: A categorical feature is an attribute that can take on one of a limited, usually fixed, number of possible values on the basis of some qualitative property. A categorical feature is also called a nominal feature.
  • Ordinal Data: An ordinal feature is a nominal variable whose categories fall in an ordered list. Examples include clothing sizes such as small, medium, and large, or a measurement of customer satisfaction on a scale from “not at all happy” to “very happy”.
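The distinction between these forms matters when features are encoded for a model. The sketch below, using hypothetical customer records invented for this example, maps an ordinal feature to ordered integers and one-hot encodes a nominal feature so that no artificial order is implied.

```python
# Hypothetical customer records illustrating the three feature forms:
# "age" is numeric, "color" is nominal, "size" is ordinal.
records = [
    {"age": 34, "color": "red", "size": "small"},
    {"age": 52, "color": "blue", "size": "large"},
]

# Ordinal: categories have an inherent order, so ordered integers are fine.
SIZE_ORDER = {"small": 0, "medium": 1, "large": 2}

# Nominal: categories have no order; one-hot encoding avoids implying one.
# This category list is an assumption made for the example.
COLORS = ["red", "green", "blue"]

def encode(record):
    """Turn one record into a flat numeric feature vector."""
    one_hot = [1 if record["color"] == c else 0 for c in COLORS]
    return [record["age"]] + one_hot + [SIZE_ORDER[record["size"]]]

print(encode(records[0]))  # [34, 1, 0, 0, 0]
```

Encoding "size" as 0/1/2 preserves its order, whereas doing the same for "color" would wrongly tell the model that blue is somehow "greater than" red.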

Properties of Data 
  1. Volume: The scale of the data. With a growing world population and ever-expanding technology, huge volumes of data are generated every millisecond.
  2. Variety: The different forms of data: healthcare records, images, videos, audio clips, and so on.
  3. Velocity: The rate at which data is streamed and generated.
  4. Value: The meaningfulness of the data in terms of the information researchers can infer from it.
  5. Veracity: The certainty and correctness of the data we are working with.

Some facts about Data:  

  • An estimated 40 zettabytes (1 ZB = 10^21 bytes) of data were generated by 2020, about 300 times the amount generated in 2005.
  • By 2011, the healthcare sector had accumulated about 161 billion gigabytes of data.
  • About 200 million active users send roughly 400 million tweets per day.
  • Each month, users stream more than 4 billion hours of video.
  • 30 billion pieces of content are shared by users every month.
  • Around 27% of data is reported to be inaccurate, so 1 in 3 business leaders do not trust the information on which they base their decisions.

The facts above are just a glimpse of the enormous volume of data that exists. In real-world terms, the amount of data currently present, and being generated at every moment, is almost beyond imagination. 

