Need help with this week’s assignment? Get detailed and trusted solutions for Introduction to Machine Learning Week 7 NPTEL Assignment Answers. Our expert-curated answers help you solve your assignments faster while deepening your conceptual clarity.
✅ Subject: Introduction to Machine Learning (NPTEL ML Answers)
📅 Week: 7
🎯 Session: NPTEL 2025 July-October
🔗 Course Link: Click Here
🔍 Reliability: Verified and expert-reviewed answers
📌 Trusted By: 5000+ Students
For complete and in-depth solutions to all weekly assignments, check out 👉 NPTEL Introduction to Machine Learning Week 7 Assignment Answers
🚀 Stay ahead in your NPTEL journey with fresh, updated solutions every week!
NPTEL Introduction to Machine Learning Week 7 Assignment Answers 2025
1. Define active learning:
- A learning approach where the algorithm passively receives all training data at once
- A technique where the model learns from its own predictions without human intervention
- An iterative learning process where the model selects the most informative data points for labeling
- A method where the model randomly selects data points for training to reduce bias
Answer : See Answers
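To see what "selects the most informative data points" means in practice, here is a minimal sketch of uncertainty sampling, one common active-learning query strategy. The pool and its probabilities are hypothetical model outputs, not part of the assignment:

```python
# Uncertainty sampling: query a label for the unlabeled point whose
# predicted probability is closest to 0.5 (the model's least certain call).
# The probabilities below are hypothetical model outputs.
pool_probs = {"x1": 0.95, "x2": 0.51, "x3": 0.10, "x4": 0.70}

query = min(pool_probs, key=lambda x: abs(pool_probs[x] - 0.5))
print(query)  # x2 is the least certain point, so it gets labeled next
```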
2. Given 100 distinct data points, if you sample 100 times with replacement, what is the expected number of distinct points you will obtain?
- Approximately 50
- Approximately 63
- Exactly 100
- Approximately 37
Answer : See Answers
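The "approximately 63" intuition: each point is missed by all 100 draws with probability (1 − 1/n)^n ≈ 1/e ≈ 0.37, so about 63% of points appear at least once. A quick simulation sketch confirms it:

```python
import random

random.seed(0)
n, trials = 100, 2000

# Simulate: draw n points with replacement, count how many are distinct.
avg_distinct = sum(
    len({random.randrange(n) for _ in range(n)}) for _ in range(trials)
) / trials

# Theory: a given point is missed with probability (1 - 1/n)^n ≈ 1/e,
# so about n * (1 - 1/e) ≈ 63.2 distinct points survive.
theory = n * (1 - (1 - 1 / n) ** n)
print(round(avg_distinct, 1), round(theory, 1))
```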
3. What is the key difference between bootstrapping and cross-validation?
- Bootstrapping uses the entire dataset for training, while cross-validation splits the data into subsets
- Cross-validation allows replacement, while bootstrapping does not
- Bootstrapping creates multiple samples with replacement, while cross-validation creates subsets without replacement
- Cross-validation is used for model selection, while bootstrapping is only used for uncertainty estimation
Answer : See Answers
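A minimal sketch of the difference: bootstrapping draws with replacement (repeats allowed, some points left out), while cross-validation partitions without replacement (disjoint folds that cover everything). The tiny dataset here is illustrative only:

```python
import random

random.seed(1)
data = list(range(10))

# Bootstrapping: sample WITH replacement -> repeats allowed, some points omitted.
bootstrap_sample = [random.choice(data) for _ in data]

# 5-fold cross-validation: partition WITHOUT replacement -> disjoint folds
# that together cover every point exactly once.
k = 5
folds = [data[i::k] for i in range(k)]
covered = sorted(p for fold in folds for p in fold)

print(bootstrap_sample)  # may contain duplicates
print(covered)           # [0, 1, ..., 9], each point exactly once
```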
4. Consider the following confusion matrix for a binary classification problem:
[Confusion matrix image not reproduced in this transcript.]
What are the precision, recall, and accuracy of this classifier?
- Precision: 0.81, Recall: 0.85, Accuracy: 0.83
- Precision: 0.85, Recall: 0.81, Accuracy: 0.85
- Precision: 0.80, Recall: 0.85, Accuracy: 0.82
- Precision: 0.85, Recall: 0.85, Accuracy: 0.80
Answer : See Answers
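The matrix itself is not reproduced above, so the counts below are hypothetical placeholders; the formulas are the part that matters, and you can substitute the actual TP/FP/FN/TN from the assignment:

```python
# Hypothetical confusion-matrix counts (the original matrix is not shown here).
tp, fp, fn, tn = 85, 15, 15, 85

precision = tp / (tp + fp)                  # correct positives / predicted positives
recall = tp / (tp + fn)                     # correct positives / actual positives
accuracy = (tp + tn) / (tp + fp + fn + tn)  # all correct / all predictions

print(precision, recall, accuracy)  # 0.85 0.85 0.85
```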
5. The AUC of your newly trained model is 0.5. Are your model's predictions completely random?
- Yes
- No
- ROC curve is needed to derive this conclusion
- Cannot be determined even with ROC
Answer : See Answers
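AUC = 0.5 only means positives and negatives are ranked no better than chance on average; it does not prove the model is random. A sketch with a hand-rolled pairwise AUC (ties counted as half) shows a fully deterministic scorer landing exactly at 0.5:

```python
def auc(labels, scores):
    # AUC = P(a random positive outranks a random negative); ties count half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A deterministic (not random) scorer can still score AUC = 0.5:
labels = [0, 0, 1, 1]
scores = [0.1, 0.9, 0.1, 0.9]
print(auc(labels, scores))  # 0.5
```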
6. You are building a model to detect cancer. Which metric will you prefer for evaluating your model?
- Accuracy
- Sensitivity
- Specificity
- MSE
Answer : See Answers
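Why sensitivity rather than accuracy: with heavily imbalanced screening data, a model that never flags cancer still looks highly accurate while catching zero cases. A toy sketch with hypothetical class counts:

```python
# Hypothetical screening set: 990 healthy, 10 with cancer.
labels = [0] * 990 + [1] * 10
preds = [0] * 1000  # a useless model that always predicts "healthy"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
sensitivity = tp / (tp + fn)  # fraction of cancer cases actually caught

print(accuracy, sensitivity)  # 0.99 0.0
```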
7. You have two binary classifiers A and B: A has 0% accuracy and B has 50% accuracy. Which classifier is more useful?
- A
- B
- Both are good
- Cannot say
Answer : See Answers
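The intuition: a binary classifier that is wrong 100% of the time is perfectly informative, because inverting its output is always right, whereas 50% accuracy on a binary task carries no information. A quick sketch with hypothetical predictions:

```python
labels = [0, 1, 1, 0, 1]

# Classifier A is wrong on every point (0% accuracy) -- hypothetical outputs.
preds_a = [1 - y for y in labels]
acc_a = sum(p == y for p, y in zip(preds_a, labels)) / len(labels)

# Inverting A's output yields a perfect classifier, so A carries full information.
acc_flipped = sum((1 - p) == y for p, y in zip(preds_a, labels)) / len(labels)

print(acc_a, acc_flipped)  # 0.0 1.0
```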
8. Consider a special case where your data has 10 classes and is sorted by target label. You perform 5-fold cross-validation by selecting the folds sequentially. What can you say about the resulting model?
- It will have 100% accuracy.
- It will have 0% accuracy.
- It will have close to perfect accuracy.
- Accuracy will depend on the compute power available for training.
Answer : See Answers
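A sketch of why sequential folds fail on label-sorted data: with 10 classes and 5 folds, every test fold contains exactly the classes that are absent from its training split, so the model can never predict them correctly:

```python
# 100 points from 10 classes, sorted by label (10 of each class).
labels = [c for c in range(10) for _ in range(10)]
k = 5
fold_size = len(labels) // k  # 20 points -> exactly 2 classes per fold

unseen_per_fold = []
for i in range(k):
    test = labels[i * fold_size:(i + 1) * fold_size]
    train = labels[:i * fold_size] + labels[(i + 1) * fold_size:]
    # classes in the test fold that never appear in the training split
    unseen_per_fold.append(set(test) - set(train))

print(unseen_per_fold)  # every test class is unseen during training
```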
NPTEL Introduction to Machine Learning Week 7 Assignment Answers 2024
1. Which of the following statement(s) regarding the evaluation of Machine Learning models is/are true?
- A model with a lower training loss will perform better on a test dataset.
- The train and test datasets should represent the underlying distribution of the data.
- To determine the variation in the performance of a learning algorithm, we generally use one training set and one test set.
- A learning algorithm can learn different parameter values if given different samples from the same distribution.
Answer :- b, d
2. Suppose we have a classification dataset comprising 2 classes, A and B, with 100 and 50 samples respectively. Suppose we use stratified sampling to split the data into train and test sets. Which of the following train-test splits would be appropriate?
- Train: {A: 80 samples, B: 30 samples}, Test: {A: 20 samples, B: 20 samples}
- Train: {A: 20 samples, B: 20 samples}, Test: {A: 80 samples, B: 30 samples}
- Train: {A: 80 samples, B: 40 samples}, Test: {A: 20 samples, B: 10 samples}
- Train: {A: 20 samples, B: 10 samples}, Test: {A: 80 samples, B: 40 samples}
Answer :- c
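A minimal sketch of why option (c) works: stratified sampling splits each class separately, so both the train and test sets keep the original 2:1 ratio of A to B:

```python
from collections import Counter

# Class A: 100 samples, class B: 50 -> a 2:1 ratio to preserve.
data = ["A"] * 100 + ["B"] * 50

# Stratified 80/20 split: take 80% of EACH class separately.
train = data[:80] + data[100:140]  # A: 80, B: 40
test = data[80:100] + data[140:]   # A: 20, B: 10

print(Counter(train), Counter(test))  # both keep the 2:1 ratio
```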
3. Suppose we are performing cross-validation on a multiclass classification dataset with N data points. Which of the following statement(s) is/are correct?
- In k-fold cross validation, each fold should have a class-wise proportion similar to the given dataset.
- In k-fold cross-validation, we train one model and evaluate it on the k different test sets.
- In LOOCV, we train N different models, using (N-1) data points for training each model.
- In LOOCV, we can use the same test data to evaluate all the trained models.
Answer :- a, c
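A small sketch of the LOOCV bookkeeping behind option (c): N separate splits, each training on N − 1 points and testing on the single held-out point (note the test point differs every time, which is why option (d) is wrong):

```python
n = 5
data = list(range(n))

# LOOCV: n train/test splits -- one model per held-out point,
# each trained on the remaining n - 1 points.
splits = [(data[:i] + data[i + 1:], [data[i]]) for i in range(n)]

print(len(splits))  # n models are trained in total
print(all(len(train) == n - 1 for train, test in splits))  # True
```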
7. Consider the following statements.
Statement P : Boosting takes multiple weak classifiers and combines them into a strong classifier.
Statement Q : Boosting assigns equal weights to the predictions of all the weak classifiers, resulting in a high overall performance.
- P is True. Q is True. Q is the correct explanation for P.
- P is True. Q is True. Q is not the correct explanation for P.
- P is True. Q is False.
- Both P and Q are False.
Answer :- c
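Why Q is false: boosting does not weight the weak classifiers equally. In AdaBoost (the canonical boosting algorithm), learner m with weighted error err_m gets the vote alpha_m = 0.5 · ln((1 − err_m) / err_m), so more accurate learners get a bigger say. A sketch assuming that standard AdaBoost update:

```python
import math

# AdaBoost combines weak learners with UNEQUAL weights:
# alpha = 0.5 * ln((1 - err) / err), larger for more accurate learners.
def alpha(err):
    return 0.5 * math.log((1 - err) / err)

print(round(alpha(0.1), 3))  # accurate weak learner -> large weight
print(round(alpha(0.4), 3))  # near-chance weak learner -> small weight
print(alpha(0.5))            # exactly chance -> zero weight
```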
8. Which of the following statement(s) about ensemble methods is/are correct?
- The individual classifiers in bagging cannot be trained parallelly.
- The individual classifiers in boosting cannot be trained parallelly.
- A committee machine can consist of different kinds of classifiers like SVM, decision trees and logistic regression.
- Bagging further increases the variance of an unstable classifier.
Answer :- b, c (bagging trains its classifiers on independent bootstrap samples, so they *can* be trained in parallel; it is boosting, where each learner depends on the previous one's errors, that cannot)