What are the different types of machine learning? -
1. Supervised Learning: In supervised learning, the computer is given data that already has the correct answers (called labels). For example, you show it pictures of dogs and cats, and each picture is labeled as "dog" or "cat." The computer learns from these labeled examples so it can correctly identify new pictures on its own. It's like teaching with a quiz where the answers are already provided.
2. Unsupervised Learning: In unsupervised learning, the computer is given data without any answers. It has to figure out patterns or groups by itself. For example, you might give it a bunch of photos, and the computer might group all the dog pictures together and all the cat pictures together, even though you didn't tell it what they were.

What is linear regression? - Linear Regression is a type of supervised learning in which we model a linear relationship between the predictor and response variables. It is based on the linear equation:

ŷ = β₁x + β₀

where
* ŷ = response / dependent variable
* β₁ = slope of the regression line
* β₀ = intercept of the regression line
* x = predictor / independent variable(s)

What are the different assumptions of linear regression algorithms? - There are four assumptions we make about a linear regression problem:
1. Linear relationship: There is a linear relationship between the predictor and response variables, i.e., as the predictor variable changes, the response variable increases or decreases at a constant rate.
2. Normality: The residuals (the errors between predicted and observed values) are normally distributed, i.e., symmetric about their mean.
3. Independence: The features are independent of each other; there is no correlation among the features/predictor variables of the dataset.
4. Homoscedasticity: The variance of the residuals is constant across all values of the predictor variables; the values of the independent variables have no effect on the spread of the errors.
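The closed-form least-squares estimates for ŷ = β₁x + β₀ can be sketched in a few lines of plain Python; the function name and toy data below are illustrative, not from any particular library:

```python
def fit_simple_linear_regression(xs, ys):
    """Closed-form least-squares estimates for y = b1*x + b0."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the fitted line passes through the point of means.
    b0 = mean_y - b1 * mean_x
    return b1, b0

# Toy data generated from y = 2x + 1, so the fit recovers it exactly.
slope, intercept = fit_simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

On noiseless data like this the estimates match the generating line exactly; with real, noisy data they are the best-fit values in the least-squares sense.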
Logistic Regression is a classification technique, so why is it named "regression" and not "logistic classification"? - While logistic regression is used for classification, it still maintains a regression structure underneath. The key idea is to model the probability of an event occurring (e.g., class 1 in binary classification) as a linear combination of features, and then apply a logistic (sigmoid) function to transform this linear combination into a probability between 0 and 1. This transformation is what makes it suitable for classification tasks. In short, while logistic regression is indeed used for classification, it retains the mathematical and structural characteristics of a regression model.

What is the logistic function (sigmoid function) in logistic regression? - The logistic function, or sigmoid function, is used in logistic regression to predict probabilities. It takes any real number as input and maps it to a value between 0 and 1, which makes it well suited to predicting binary outcomes like "yes" or "no." The formula is:

f(z) = 1 / (1 + e^(−z))

The sigmoid function helps us predict the probability of an event happening. If the output is close to 1, we predict one class, and if it is close to 0, we predict the other.

What is overfitting and how can we overcome it? - Overfitting occurs when a model fits the training data so closely that it fails to generalize to unseen/future data. This happens when the model is trained on noisy data, which causes it to learn the noise from the training set as if it were signal. To avoid overfitting, one can apply the following techniques:
1. Feature selection: Sometimes the training data has too many features that are not necessary for the problem statement. In that case, we use only the features that serve our purpose.
2. Cross Validation: This is a very powerful technique for detecting and overcoming overfitting. The training dataset is divided into a set of folds (mini training/validation splits) which are used to tune and evaluate the model.
3. Regularization: Regularization supplements the loss with a penalty term so as to reduce overfitting. This penalty term regulates the overall loss function, producing a better-generalizing model.
4. Ensemble models: These models combine the results from several different trained models into a single prediction.
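The sigmoid transformation and the regularization remedy described above can be sketched together in a tiny one-feature logistic regression trained by gradient descent; the function names, toy data, and hyperparameters below are illustrative assumptions, not any particular library's API:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lam=0.1, lr=0.1, epochs=1000):
    """One-feature logistic regression fit by gradient descent.
    An L2 penalty (lam * w**2) is added to the loss as a
    regularization term to discourage overfitting."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)   # predicted probability of class 1
            gw += (p - y) * x
            gb += (p - y)
        # The penalty term contributes 2*lam*w to the weight gradient.
        w -= lr * (gw / n + 2 * lam * w)
        b -= lr * (gb / n)
    return w, b

# Toy data: points below 0 are labelled 0, points above 0 are labelled 1.
w, b = train_logistic([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
p_neg = sigmoid(w * -2.0 + b)   # probability of class 1 for x = -2
p_pos = sigmoid(w * 2.0 + b)    # probability of class 1 for x = 2
```

After training, p_neg falls below 0.5 and p_pos above it, so thresholding the sigmoid output at 0.5 classifies both points correctly; the L2 penalty keeps the weight small rather than letting it grow without bound on this perfectly separable toy data.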
