Need help with this week’s assignment? Get detailed and trusted solutions for Deep Learning – IIT Ropar Week 3 NPTEL Assignment Answers. Our expert-curated answers help you solve your assignments faster while strengthening your conceptual understanding.
✅ Subject: Deep Learning – IIT Ropar
📅 Week: 3
🎯 Session: NPTEL 2025 July-October
🔗 Course Link: Click Here
🔍 Reliability: Verified and expert-reviewed answers
📌 Trusted By: 5000+ Students
For complete and in-depth solutions to all weekly assignments, check out 👉 NPTEL Deep Learning – IIT Ropar Week 3 NPTEL Assignment Answers
🚀 Stay ahead in your NPTEL journey with fresh, updated solutions every week!
NPTEL Deep Learning – IIT Ropar Week 3 Assignment Answers 2025
1. What is the correct dimension of W2 in this setup?
- 4×3
- 3×4
- 3×3
- 3×1
Answer : See Answers
2. Which of the following loss–activation combinations are correctly used in AgroScan?
- Softmax activation and squared error loss
- Sigmoid activation and cross-entropy loss
- Softmax activation and cross-entropy loss
- Linear activation and cross-entropy loss
Answer :
3. Compute the pre-activation values for the hidden layer for the input x = [0.2, 0.6, 0.1, 0.5] with the given weights and biases.

- [0.9,0.7,0.8]
- [0.8,0.9,0.7]
- [0.8,0.7,0.9]
- [0.7,0.9,0.8]
Answer :
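The weight values for Question 3 appear only in the assignment figure, so the NumPy sketch below only illustrates the mechanics of the computation; W1 (assumed 3×4, matching the three-element answer options) and b1 are placeholders to be replaced with the values from the figure.

```python
import numpy as np

# Input from Question 3 (4 features).
x = np.array([0.2, 0.6, 0.1, 0.5])

# Hypothetical placeholders: substitute the actual W1 (3x4) and b1 (3,)
# values shown in the assignment figure before running.
W1 = np.array([[0.5, 0.5, 0.5, 0.5],
               [0.5, 0.5, 0.5, 0.5],
               [0.5, 0.5, 0.5, 0.5]])
b1 = np.array([0.1, 0.1, 0.1])

# Pre-activation of the hidden layer: a1 = W1 x + b1.
a1 = W1 @ x + b1
print(np.round(a1, 2))
```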
4. In the AgroScan model, after performing the forward pass for a given input sample, the pre-activation values at the output layer (corresponding to the three classes: Healthy, Pest-infected, and Nutrient-deficient) are: z = [0.96, −0.27, 1.19]
Use the softmax function to convert these into probabilities.
Based on the computed probabilities, what class will the network predict for this input?
- Healthy
- Pest-infected
- Nutrient-deficient
- All classes have equal probability
Answer :
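As a quick check on the softmax step in Question 4, here is a minimal NumPy sketch that converts the given pre-activations into probabilities and picks the class with the largest one:

```python
import numpy as np

# Output-layer pre-activations from Question 4.
z = np.array([0.96, -0.27, 1.19])
classes = ["Healthy", "Pest-infected", "Nutrient-deficient"]

# Softmax: subtract the max for numerical stability, then normalize.
p = np.exp(z - z.max())
p /= p.sum()

print(np.round(p, 3))              # class probabilities
print(classes[int(np.argmax(p))])  # predicted class = largest probability
```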
5. In a classification task, the true label is “Pest-infected”, and the predicted probability for class “Pest-infected” is 0.55. Using the cross-entropy loss function, compute the loss.
Fill in the blank with the answer rounded to two decimal places: _____________
Answer :
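For a one-hot true label, categorical cross-entropy reduces to the negative log of the probability assigned to the true class, which is what this short sketch computes for Question 5:

```python
import math

# Predicted probability assigned to the true class "Pest-infected".
p_true = 0.55

# Cross-entropy with a one-hot label reduces to -ln(p_true).
loss = -math.log(p_true)
print(round(loss, 2))
```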
6. Given softmax output [0.35, 0.55, 0.10], what is the predicted class in the AgroScan model?
- Healthy
- Pest-infected
- Nutrient-deficient
- None of the above
Answer :
7. How many learnable parameters are there in the above neural network in total?
- 12
- 15
- 21
- 24
Answer : See Answers
8. If the hidden layer used tanh instead of sigmoid, what would change?
- Activation outputs could be negative
- Output layer would become invalid
- Pre-activations would change
- Number of parameters would increase
Answer :
9. Which of the following can lead to higher cross-entropy loss during training?
- Predicting low probability for the true class
- Predicting uniform probabilities for all classes
- Predicting 1.0 for the correct class
- Predicting a class different from the true one
Answer :
10. Which of the following are always true for softmax output?
- Output values lie in [0, 1]
- Output values sum to 1
- Softmax is invariant to the order of inputs
- Softmax is sensitive to relative magnitudes of logits
Answer :
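The properties listed in Question 10 are easy to verify numerically. The sketch below uses an arbitrary set of logits (an assumption, chosen only for illustration) to show the range, the sum-to-one constraint, and the fact that softmax depends only on the relative magnitudes of the logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift by max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])   # arbitrary example logits
p = softmax(logits)

print(p)                  # every value lies between 0 and 1
print(p.sum())            # values always sum to 1
# Adding the same constant to all logits leaves the probabilities
# unchanged: only relative magnitudes matter.
print(np.allclose(p, softmax(logits + 10.0)))
```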
Case study for Questions 11 to 20
A research team is developing a neural network model named LeafYield, designed to predict the crop yield (in kg) from leaf-level features extracted via sensors and imaging.
Each sample contains 5 normalized numerical features: Leaf length, Leaf width, Color intensity, Water content, Light absorption level
The neural network contains two hidden layers: the first hidden layer has 4 neurons with tanh activation, and the second hidden layer has 3 neurons with sigmoid activation. The output layer has one neuron with linear activation.

11. Why is the output activation in LeafYield chosen to be linear?
- Because softmax is unsuitable for regression
- Because we need raw values, not probabilities
- Because sigmoid would squash the output range
- All of the above
Answer :
12.

- 0.70
- 0.71
- 0.72
- 0.73
Answer :
13. Which loss function is most appropriate for LeafYield?
- Cross-entropy
- Binary cross-entropy
- Mean squared error
- Kullback–Leibler divergence
Answer :
14. Given the number of neurons in each layer and that a bias is used in every layer, compute the total number of learnable parameters.
Fill in the blank : ____________
Answer :
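For a fully connected network like LeafYield (5 inputs, hidden layers of 4 and 3 neurons, 1 output, biases everywhere), each layer contributes fan_in × fan_out weights plus fan_out biases. A minimal sketch of that counting rule:

```python
# LeafYield layer sizes from the case study: 5 inputs, hidden layers
# of 4 and 3 neurons, and a single linear output neuron.
layer_sizes = [5, 4, 3, 1]

# Each fully connected layer has (fan_in * fan_out) weights plus fan_out biases.
total = sum(n_in * n_out + n_out
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(total)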
15. Given that the output layer activation is linear and the final pre-activation value is 3.75, what will be the model’s prediction?
Fill in the blank : ______________
Answer : See Answers
16. Which of the following statements are true about the activation functions used in LeafYield?
- tanh outputs values between -1 and 1
- sigmoid outputs values between 0 and 1
- linear activation has no bounds
- sigmoid is more centered around zero than tanh
Answer :
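The ranges asked about in Question 16 can be checked directly; the sample pre-activations below are arbitrary and only there to probe each function's output range:

```python
import numpy as np

z = np.linspace(-5, 5, 11)          # arbitrary sample of pre-activations

tanh_out = np.tanh(z)               # stays within (-1, 1)
sigmoid_out = 1 / (1 + np.exp(-z))  # stays within (0, 1)
linear_out = z                      # unbounded: passes values through

print(tanh_out.min(), tanh_out.max())
print(sigmoid_out.min(), sigmoid_out.max())
```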
17. Given the input vector x = [0.1, 0.5, 0.3, 0.7, 0.2]ᵀ and the weights and biases shown below,

what is the pre-activation vector a[1] for hidden layer 1?
- [0.6,1.2,0.5,1.5]
- [0.6,1.2,0.3,1.5]
- [0.6,1.3,0.5,1.5]
- [0.6,1.3,0.3,1.5]
Answer :
18. Apply the tanh activation to the vector obtained in Question 17 and round to two decimals.
- [0.54,0.83,0.46,0.91]
- [0.54,0.83,0.29,0.91]
- [0.54,0.86,0.46,0.91]
- [0.54,0.86,0.29,0.91]
Answer :
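Question 18 is just an elementwise tanh applied to the result of Question 17. In the sketch below, the first option listed in Question 17 is used purely as a placeholder; substitute whichever pre-activation vector that question actually yields:

```python
import numpy as np

# Placeholder: use the pre-activation vector obtained in Question 17.
# The first option listed there is used here only for illustration.
a1 = np.array([0.6, 1.2, 0.5, 1.5])

h1 = np.tanh(a1)                 # elementwise tanh activation
print(np.round(h1, 2))
```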
19. The LeafYield model was tested on 5 plant samples. The true and predicted crop yields (in kg) are:

What is the Mean Squared Error?
- 0.12
- 0.15
- 0.25
- 0.30
Answer :
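Since the table of true and predicted yields for Question 19 appears only in the assignment figure, the values below are placeholders; the sketch just shows how the mean squared error over the 5 samples is computed:

```python
import numpy as np

# Hypothetical placeholders: substitute the five true and predicted
# yields from the table in Question 19 before running.
y_true = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
y_pred = np.array([2.1, 2.8, 4.3, 4.9, 6.2])

# Mean squared error over the samples.
mse = np.mean((y_true - y_pred) ** 2)
print(round(mse, 2))
```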
20. Which layer’s weights will receive gradients first during backpropagation?
- Output layer
- Hidden layer 2
- Hidden layer 1
- Input layer
Answer :
Case study for Questions 21 to 29
A sports analytics company has developed PlayPredict, a neural network model that classifies football players into one of four roles (Defender, Midfielder, Forward, or Goalkeeper) based on physical and tactical attributes.
From video and tracking data, 5 numerical features are extracted: Speed, Pass accuracy, Defensive actions, Dribble attempts, Position heatmap score.
The model has one hidden layer with 4 neurons using the tanh activation function, followed by an output layer.
21. Which activation is best suited for the above scenario?
- Linear activation
- Sigmoid activation
- Tanh activation
- Softmax activation
Answer :
22. What is the predicted class if the output vector is [0.2, 0.1, 0.6, 0.1]?
- Defender
- Midfielder
- Forward
- Goalkeeper
Answer : See Answers
23. Given the logits [1.2, 0.8, 2.0, 1.0], compute the softmax output (rounded to 2 decimals).
- [0.22,0.14,0.48,0.17]
- [0.21,0.14,0.47,0.17]
- [0.28,0.16,0.58,0.27]
- [0.51,0.54,0.47,0.17]
Answer :
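The softmax in Question 23 can be checked with the same stable computation used earlier, applied to the given logits:

```python
import numpy as np

# Logits from Question 23.
logits = np.array([1.2, 0.8, 2.0, 1.0])

e = np.exp(logits - logits.max())   # numerically stable softmax
p = e / e.sum()
print(np.round(p, 2))
```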
24. Why use tanh in the hidden layer?
- Keeps outputs in [0,1]
- Introduces non-linearity
- Allows negative outputs
- Helps output softmax
Answer :
25.

Compute the pre-activation vector a[1].
- [0.9,0.9,0.8,0.7]
- [0.9,0.8,1.1,0.6]
- [0.9,0.9,1.1,0.7]
- [1.0,0.9,1.1,0.8]
Answer :
26. Compute the hidden activation vector h[1] from the a[1] computed above.
- [0.72,0.72,0.66,0.60]
- [0.72,0.66,0.80,0.54]
- [0.72,0.72,0.80,0.60]
- [0.76,0.72,0.80,0.66]
Answer :
27. The PlayPredict model produced the following softmax output for a football player sample:
y^=[0.1,0.7,0.1,0.1]
What is the categorical cross-entropy loss for this prediction (rounded to 3 decimals)?
- 0.105
- 0.357
- 0.500
- 0.845
Answer :
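The true role for Question 27 is not reproduced in the text above, so the one-hot label in this sketch is an assumption (Midfielder, i.e. the second position, is used only for illustration); the formula itself is the standard categorical cross-entropy:

```python
import numpy as np

# Softmax output from Question 27.
y_hat = np.array([0.1, 0.7, 0.1, 0.1])

# Placeholder one-hot label: put the 1 at the index of the true role
# given in the assignment (Midfielder is assumed here for illustration).
y_true = np.array([0, 1, 0, 0])

# Categorical cross-entropy: -sum(y_true * ln(y_hat)) = -ln(p of true class).
loss = -np.sum(y_true * np.log(y_hat))
print(round(loss, 3))
```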
28. When a bias is used in both layers, what is the total number of learnable parameters in the PlayPredict network?
Fill in the blank _________________
Answer :
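Using the PlayPredict sizes stated in the case study (5 input features, a 4-neuron hidden layer, and one output neuron per role, assuming the four roles map to four output neurons), the same counting rule as before applies:

```python
# PlayPredict layer sizes: 5 inputs, 4 hidden neurons, 4 output classes
# (one neuron per role is an assumption consistent with the softmax output).
layer_sizes = [5, 4, 4]

# Weights plus biases per fully connected layer.
total = sum(n_in * n_out + n_out
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(total)
```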
29. Which of the following can be a tanh activation output?
- [–0.9, 0.2, 0.8, –0.4]
- [1.5, –1.2, 2.3, 0.5]
- [0.0, 1.0, 2.0, –1.0]
- [–2, –1, 0, 1]
Answer : See Answers


