Introduction to Machine Learning Week 3 NPTEL Assignment Answers 2025


1. Which of the following statement(s) about decision boundaries and discriminant functions of classifiers is/are true?

Options:
a. In a binary classification problem, all points x on the decision boundary satisfy δ₁(x) = δ₂(x)
b. In a three-class classification problem, all points on the decision boundary satisfy δ₁(x) = δ₂(x) = δ₃(x)
c. In a three-class classification problem, all points on the decision boundary satisfy at least one of δ₁(x) = δ₂(x), δ₂(x) = δ₃(x), or δ₃(x) = δ₁(x)
d. Let the input space be ℝⁿ. If x does not lie on the decision boundary, there exists an ε > 0 such that all inputs y satisfying ||y−x|| < ε belong to the same class

Answer :- a, c, d
Explanation:

  • a: True – In binary classification, the decision boundary is precisely the set of points where δ₁(x) = δ₂(x)
  • b: False – A boundary point only needs to tie the two competing classes; all three δ values need not coincide there
  • c: True – Every boundary point satisfies at least one pairwise equality δᵢ(x) = δⱼ(x)
  • d: True – Away from the boundary the prediction is locally constant: a small enough ε-ball around x lies entirely in one decision region (see the sketch below)
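To make the discriminant-function picture concrete, here is a minimal sketch with made-up linear discriminants (not taken from the assignment), showing that the predicted class is the argmax of the δₖ values and that a boundary point only needs one pairwise tie:

```python
import numpy as np

# Three hypothetical linear discriminants: delta_k(x) = w_k . x + b_k
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.zeros(3)

def discriminants(x):
    """Return [delta_1(x), delta_2(x), delta_3(x)]."""
    return W @ x + b

x = np.array([0.5, 0.5])                   # chosen so that delta_1(x) = delta_2(x)
d = discriminants(x)
print(d)                                   # [ 0.5  0.5 -1. ]
print("on 1-2 boundary:", np.isclose(d[0], d[1]))   # True  (statement c)
print("all three tied: ", np.isclose(d[0], d[2]))   # False (statement b fails)
```

Note that δ₃ is strictly smaller at this boundary point, which is exactly why statement b is false while statement c still holds.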

2. A logistic regression model outputs the probability p(xᵢ) that sample xᵢ belongs to the positive class. What is the likelihood of a dataset of 4 samples with labels yᵢ? (The predicted probabilities are given in the assignment but not reproduced here.)

a. 0.346
b. 0.230
c. 0.058
d. 0.086

Answer :- b (0.230)
Explanation:
The likelihood of the data is

L = ∏ᵢ₌₁⁴ p(xᵢ)^(yᵢ) · (1 − p(xᵢ))^(1−yᵢ)

Multiplying the four factors determined by each label yᵢ and predicted probability p(xᵢ) gives 0.230, which matches option b (see the helper below).
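As a sanity check, here is a small helper that evaluates this product. The p(xᵢ) values below are placeholders, since the assignment's table of predicted probabilities is not reproduced here; plugging in the actual values should give 0.230.

```python
import numpy as np

def bernoulli_likelihood(p, y):
    """L = prod_i p_i^{y_i} * (1 - p_i)^{1 - y_i}."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.prod(p**y * (1.0 - p)**(1.0 - y)))

# Placeholder probabilities for illustration only (labels 1, 1, 0, 1):
print(bernoulli_likelihood([0.8, 0.6, 0.3, 0.9], [1, 1, 0, 1]))
# 0.8 * 0.6 * 0.7 * 0.9 = 0.3024
```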


3. Which of the following statement(s) about logistic regression is/are true?

Options:
a. It learns a model for the probability distribution of the data points in each class
b. The output of a linear model is transformed to the range (0,1) by a sigmoid function
c. The parameters are learned by optimizing the mean-squared loss
d. The loss function is optimized by using an iterative numerical algorithm

Answer :- b, d
Explanation:

  • a: False – Logistic regression is discriminative: it models the posterior p(y|x), not the distribution of the data points in each class
  • b: True – The linear score β₀ + βᵀx is passed through the sigmoid, which maps it into (0, 1)
  • c: False – The parameters are learned by maximizing the likelihood, i.e. minimizing the cross-entropy loss, not the mean-squared error
  • d: True – There is no closed-form minimizer, so an iterative numerical method such as gradient descent is used (see the sketch below)
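A minimal sketch of points b and d, fitting a one-feature logistic regression by gradient descent on the cross-entropy loss (toy data, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (x + 0.3 * rng.normal(size=200) > 0).astype(float)   # toy binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # maps the linear score into (0, 1)

b0 = b1 = 0.0
lr = 0.1
for _ in range(2000):                      # iterative numerical optimization
    p = sigmoid(b0 + b1 * x)
    # Gradients of the mean cross-entropy loss w.r.t. b0 and b1:
    g0, g1 = np.mean(p - y), np.mean((p - y) * x)
    b0, b1 = b0 - lr * g0, b1 - lr * g1

print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")     # b1 comes out large and positive
```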

4. Consider a modified form of logistic regression given below, where k is a positive constant and β₀ and β₁ are parameters. (The model expression and answer options are not reproduced in the source.)

Answer :-
Explanation:


5. A Bayesian classifier for a three-class problem uses class-conditional densities fₖ(x) and class priors πₖ (the density figure from the assignment is not reproduced here). Which of the following statement(s) is/are true?

Options:
a. If the three classes have equal priors, the prediction must be class 2
b. If π₃ < π₂ and π₁ < π₂, the prediction may not necessarily be class 2
c. If π₁ > 2π₂, the prediction could be class 1 or class 3
d. If π₁ > π₂ > π₃, the prediction must be class 1

Answer :- a, c
Explanation:

  • a: True – With equal priors the posterior is proportional to the likelihood, and class 2 has the highest density at the query point
  • b: False – If class 2 has both the highest prior and the highest likelihood, its posterior πₖfₖ(x) is necessarily the largest, so the prediction must be class 2
  • c: True – Priors can outweigh likelihoods: a large enough π₁ (or π₃) makes the corresponding posterior the largest
  • d: False – A larger prior does not guarantee a larger posterior; class 2's higher likelihood can still win (see the toy example below)
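A toy illustration of the rule behind these answers, predict argmax over k of πₖfₖ(x): the density values below are made up, since the assignment's figure is not reproduced, but they show how changing the priors can flip the prediction.

```python
import numpy as np

f = np.array([0.10, 0.30, 0.05])     # hypothetical f_k(x) at the query point;
                                     # class 2 has the highest likelihood

for priors in ([1/3, 1/3, 1/3],      # equal priors
               [0.80, 0.15, 0.05]):  # heavily skewed toward class 1
    posterior = np.array(priors) * f          # unnormalized posteriors pi_k * f_k(x)
    print(priors, "-> predicted class", int(np.argmax(posterior)) + 1)
# Equal priors pick class 2; a large enough pi_1 flips the prediction to class 1.
```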

6. (Question content not provided in the source – skipped)

Answer :-


7. Which of the following about two-class LDA is/are true?

Options:
a. It is assumed that the class-conditioned probability density of each class is Gaussian
b. A different covariance matrix is estimated for each class
c. At a given point on the decision boundary, the class-conditioned densities of both classes must be equal
d. At a given point on the decision boundary, the class-conditioned densities may or may not be equal

Answer :- a, c
Explanation:

  • a: True – LDA assumes each class-conditional density is Gaussian, with a covariance matrix shared across classes
  • b: False – Estimating a separate covariance per class is QDA, not LDA
  • c: True – On the boundary the posteriors are equal, so with equal priors the class-conditional densities must be equal there as well (see the sketch below)
  • d: False – Follows from c: at the boundary the densities must be equal
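A sketch of the LDA discriminant from point a, using the standard pooled-covariance formula on toy data (the data and priors are illustrative, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # class 1 samples
X2 = rng.normal([2.0, 2.0], 1.0, size=(50, 2))   # class 2 samples

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
# One pooled (shared) covariance for both classes -- unlike QDA:
S = ((X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)) / (100 - 2)
Sinv = np.linalg.inv(S)

def delta(x, mu, prior):
    # LDA discriminant: x' S^-1 mu - 0.5 mu' S^-1 mu + log(prior)
    return x @ Sinv @ mu - 0.5 * mu @ Sinv @ mu + np.log(prior)

x = np.array([1.0, 1.0])                       # midpoint of the two class means
print(delta(x, mu1, 0.5), delta(x, mu2, 0.5))  # roughly equal: x is near the boundary
```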

8. (Question content not provided in the source – skipped)

Answer :-


9. Which of the following about LDA is/are true?

Options:
a. It minimizes the between-class variance relative to the within-class variance
b. It maximizes the between-class variance relative to the within-class variance
c. Maximizing the Fisher information results in the same separating hyperplane as the one obtained by equating the class posteriors
d. Maximizing the Fisher information results in a different separating hyperplane from the one obtained by equating the class posteriors

Answer :- b, c
Explanation:

  • a: False – LDA maximizes, not minimizes, this ratio
  • b: True – Fisher's criterion is exactly the between-class variance relative to the within-class variance, which LDA maximizes
  • c: True – Under LDA's shared-covariance Gaussian assumption, the Fisher direction S_w⁻¹(m₁ − m₂) is parallel to Σ⁻¹(μ₁ − μ₂), the normal of the equal-posterior boundary (see the numerical check below)
  • d: False – Follows from c: both constructions give the same separating hyperplane
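A quick numerical check of point c (everything here is simulated under LDA's shared-covariance Gaussian assumption): the Fisher direction S_w⁻¹(m₁ − m₂) and the normal of the equal-posterior boundary are parallel, because S_w is just (n − 2) times the pooled covariance estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
cov = [[2.0, 0.5], [0.5, 1.0]]                       # shared covariance
X1 = rng.multivariate_normal([0, 0], cov, size=200)
X2 = rng.multivariate_normal([2, 1], cov, size=200)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter

w_fisher = np.linalg.solve(Sw, m1 - m2)                  # Fisher's direction
Sigma_hat = Sw / (len(X1) + len(X2) - 2)                 # pooled covariance
w_bayes = np.linalg.solve(Sigma_hat, m1 - m2)            # equal-posterior normal

cos = w_fisher @ w_bayes / (np.linalg.norm(w_fisher) * np.linalg.norm(w_bayes))
print(cos)   # 1.0: same direction, hence the same separating hyperplane
```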

10. Comparing logistic regression and LDA for binary classification, which of the following statement(s) is/are true?

Options:
a. Both algorithms learn same decision boundary
b. Outliers affect LDA’s boundary more than logistic regression
c. Outliers affect both similarly
d. Logistic regression performs better if class distributions aren’t Gaussian

Answer :- b, d
Explanation:

  • a: False – The boundaries coincide only in the idealized Gaussian, shared-covariance case; in general the two methods fit different hyperplanes
  • b: True – LDA estimates class means and a covariance from every point, so a single extreme point can shift its boundary substantially
  • c: False – Logistic regression's loss gives almost no weight to correctly classified points far from the boundary, so it is more robust to outliers
  • d: True – Logistic regression makes no assumption about the class-conditional distributions, so it degrades less when they are not Gaussian (see the experiment below)
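A small experiment for points b and d, using scikit-learn (the data and the outlier location are made up): one extreme, correctly labeled outlier barely moves logistic regression's coefficients but noticeably shifts LDA's, because LDA re-estimates class means and covariance from every point.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([3, 3], 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Add one extreme class-0 outlier, far from the boundary but correctly labeled:
X_out = np.vstack([X, [[30.0, -30.0]]])
y_out = np.append(y, 0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    w_clean = clf.fit(X, y).coef_.ravel().copy()
    w_noisy = clf.fit(X_out, y_out).coef_.ravel()
    print(f"{name}: coefficient shift = {np.linalg.norm(w_clean - w_noisy):.3f}")
# Expect a much larger shift for LDA than for logistic regression.
```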