Need help with this week’s assignment? Get detailed and trusted solutions for Deep Learning – IIT Ropar Week 5 NPTEL Assignment Answers. Our expert-curated answers help you complete your assignments faster while strengthening your conceptual understanding.
✅ Subject: Deep Learning – IIT Ropar
📅 Week: 5
🎯 Session: NPTEL 2025 July-October
🔗 Course Link: Click Here
🔍 Reliability: Verified and expert-reviewed answers
📌 Trusted By: 5000+ Students
For complete and in-depth solutions to all weekly assignments, check out 👉 NPTEL Deep Learning – IIT Ropar Week 5 NPTEL Assignment Answers
🚀 Stay ahead in your NPTEL journey with fresh, updated solutions every week!
NPTEL Deep Learning – IIT Ropar Week 5 Assignment Answers 2025
1.

Which of the following best describes the geometric effect of this matrix transformation on the vector 𝑥?
- The vector 𝑥 is scaled but remains in the same direction — no rotation occurs.
- The vector 𝑥 is transformed into a vector orthogonal to itself.
- The vector 𝑥 is mapped to the origin — it lies in the null space of 𝐴.
- The vector 𝑥 is rotated to a new direction and scaled — the direction changes.
Answer : See Answers
2.

- Yes, 𝑥 is an eigenvector of 𝐴 with eigenvalue 𝜆 = 2.
- No, the transformation changes the direction of 𝑥, so it cannot be an eigenvector.
- Yes, 𝑥 is an eigenvector with eigenvalue 𝜆 = 3.
- No, 𝑥 maps to a zero vector under 𝐴, indicating it lies in the null space.
Answer :
3.

Each day, users switch between these apps according to the matrix. If on day 0, 300 users are on AppX and 200 on AppY, which of the following best describes what will eventually happen to the distribution of users?
- The number of users will keep oscillating without settling
- The users will all eventually shift to AppX
- The distribution will stabilize in the ratio of the dominant eigenvector
- The user numbers will keep increasing exponentially
Answer :
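
A quick way to see why the distribution settles: repeatedly apply a switching matrix to the day-0 counts and compare the result with the dominant eigenvector. The matrix P below is only an illustrative stand-in, since the actual matrix is given in the question image.

```python
import numpy as np

# Hypothetical column-stochastic switching matrix (the real one is in the question
# image); column j gives where users of app j go the next day.
P = np.array([[0.7, 0.4],
              [0.3, 0.6]])

x = np.array([300.0, 200.0])   # day-0 users on AppX and AppY
for _ in range(50):
    x = P @ x                  # one day of switching

eigvals, eigvecs = np.linalg.eig(P)
v = eigvecs[:, np.argmax(eigvals.real)].real
print(x / x.sum())             # long-run share of users on each app
print(v / v.sum())             # dominant eigenvector, normalised to sum to 1
```

Both printed vectors match: the user counts stabilise in the ratio given by the dominant eigenvector (eigenvalue 1 for a stochastic matrix).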
4.

Answer :
5. Which of the following statements are true?
- The eigenvectors corresponding to different eigenvalues are linearly independent.
- The eigenvectors of a square symmetric matrix are orthogonal.
- The eigenvectors of a square symmetric matrix can thus form a convenient basis.
- A statement which is wrong.
Answer :
6.

- A random vector
- A zero vector
- A multiple of the dominant eigenvector of A
- The vector with the smallest eigenvalue
Answer : See Answers
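
The idea being tested here is power iteration: repeatedly multiplying a vector by A (and renormalising) pulls it toward the dominant eigenvector. A minimal sketch, using an illustrative matrix rather than the one in the question:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # placeholder matrix for illustration

v = np.random.rand(2)             # start from a random vector
for _ in range(100):
    v = A @ v
    v = v / np.linalg.norm(v)     # renormalise so the iterate doesn't blow up

eigvals, eigvecs = np.linalg.eig(A)
dominant = eigvecs[:, np.argmax(eigvals)]
print(v, dominant)                # same direction, possibly up to a sign flip
```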

7. What kind of matrix is 𝑀, and why is this classification important for understanding the long-term behavior of 𝑣𝑘 as k→∞?
- 𝑀 is an orthogonal matrix, so the vectors 𝑣𝑘 remain unchanged in length.
- 𝑀 is a symmetric matrix, which ensures all eigenvalues are real.
- 𝑀 is a stochastic matrix, so it preserves probability distributions and has a dominant eigenvalue equal to 1.
- 𝑀 is a diagonal matrix, so it converges faster under repeated multiplication.
Answer :
8.

Which of the following best interprets this result in the context of web ranking?
- Page B has the highest rank, followed by A and C, based on steady-state visit probability.
- Page A is most likely to be visited, so it should be assigned lowest priority in ranking.
- All pages are equally ranked because the initial vector had equal probability.
- This indicates the convergence failed, as stochastic matrices must lead to uniform distributions.
Answer :
9.

Would the resulting vector after many iterations be different?
- No, because the convergence behavior is governed only by the dominant eigenvalue and eigenvector.
- Yes, because the initial vector strongly favors page A.
- No, because the matrix is symmetric, so initial vector doesn’t matter.
- Yes, because the matrix has complex eigenvalues and depends on initial phase.
Answer :
10.

which of the following best describes the long-term behavior of the sequence xk?
- The sequence will vanish toward zero.
- The sequence will oscillate between states.
- The sequence will converge to a steady-state probability vector.
- The sequence will explode due to the growing norm.
Answer : See Answers
11.

- The sequence will converge to a zero vector because both eigenvalues have magnitude < 1.
- The sequence will oscillate due to eigenvalue signs.
- The sequence will explode since the matrix has entries > 1.
- The vector will converge to a steady state with unit magnitude.
Answer :
12.

Answer :
13.

- The vectors form a basis for R3.
- The vectors are linearly dependent and hence do not form a basis
- The vectors span R3 but are not independent
- Cannot conclude anything from the determinant
Answer :
14. Which of the following statements is always true for a square matrix A∈Rn×n that has distinct eigenvalues?
- The eigenvectors of 𝐴 form a linearly independent set
- All eigenvectors of 𝐴 are orthogonal
- The eigenvectors of 𝐴 are linearly dependent
- 𝐴 must be a symmetric matrix
Answer :
15. Why are the eigenvectors of a square symmetric matrix considered special?
- They are always zero vectors
- They are complex even when the matrix has real entries
- They are orthogonal to each other and can form a basis
- They cannot be used to diagonalize the matrix
Answer : See Answers
16. Let 𝐴 be a real symmetric matrix. Which of the following statements is true?
- The vector that maximizes xTAx under the constraint ∥x∥=1 is the eigenvector of A corresponding to its smallest eigenvalue.
- The vector that minimizes xTAx under the constraint ∥x∥=1 is the eigenvector of A corresponding to its largest eigenvalue.
- The maximum and minimum of xTAx under ∥x∥=1 are both achieved by arbitrary unit vectors.
- The vector that maximizes xTAx under the constraint ∥x∥=1 is the eigenvector corresponding to the largest eigenvalue of A.
Answer :
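
The fact behind this question is the Rayleigh-quotient result: for a real symmetric A, the maximum of xTAx over unit vectors equals the largest eigenvalue and is attained at the corresponding eigenvector. A numerical check on an arbitrary symmetric example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])                 # illustrative symmetric matrix

eigvals, eigvecs = np.linalg.eigh(A)       # eigh: symmetric eigendecomposition, ascending
v_max = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue

# Evaluate the quadratic form on many random unit vectors for comparison.
xs = np.random.randn(10000, 2)
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
values = np.einsum('ij,jk,ik->i', xs, A, xs)

print(values.max(), eigvals[-1], v_max @ A @ v_max)   # all ≈ the largest eigenvalue
```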
17. Which of the following statements are true regarding eigenvectors and symmetric matrices?
- Eigenvectors corresponding to different eigenvalues are linearly independent.
- Eigenvectors of any matrix are always orthogonal.
- Eigenvectors of a square symmetric matrix are orthogonal.
- Eigenvectors of a square symmetric matrix can form a basis.
Answer :
Data for questions 18 and 19
WebTrack Inc. is analyzing session data from users of their e-learning platform. For each session, they log:
x: Time spent in hours
y: Number of video modules visited
z: Number of clicks made during the session
The analytics team is trying to identify if the number of clicks gives any additional information beyond modules visited. Here’s the revised data from a random sample of user sessions:

They calculate the Pearson correlation coefficient ρyz to assess the similarity between y and z.
18. What does the correlation coefficient ρyz tell us about the relationship between modules visited (y) and clicks (z)?
- They are unrelated
- They are moderately correlated
- They are strongly positively correlated
- They are inversely related
Answer :
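
A minimal sketch of how ρyz is computed; the y and z values below are made-up stand-ins for the table shown in the question.

```python
import numpy as np

# Hypothetical session data: y = modules visited, z = clicks in the session.
y = np.array([2, 4, 5, 7, 8])
z = np.array([20, 39, 52, 70, 81])

rho_yz = np.corrcoef(y, z)[0, 1]   # Pearson correlation coefficient
print(rho_yz)                      # a value close to 1 means strong positive correlation
```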
19. How should the analytics team handle column 𝑧 when building a predictive model?
- Drop one of y or z to reduce redundancy
- Keep both y and z to increase accuracy
- Use only z because it is slightly higher in some rows
- Drop both y and z since they are similar
Answer : See Answers
20. You are working on a dataset with 100 features. After applying PCA, you reduce it to 2 dimensions. These new dimensions:
- Are linear combinations of original features and orthogonal
- Represent the mean of the original features
- Are highly correlated with each other
- Have lower variance than all other dimensions
Answer :
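
A short sketch of what those two new dimensions look like in practice, using random placeholder data: each principal component is a linear combination of the 100 original features, the component directions are orthogonal, and the projected scores are uncorrelated.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 100)                # placeholder data: 500 samples x 100 features

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                     # scores in the 2-D principal subspace

print(pca.components_ @ pca.components_.T)   # ≈ identity: the two directions are orthogonal
print(np.cov(Z, rowvar=False))               # ≈ diagonal: the two scores are uncorrelated
```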
Common data for Q21 and Q22:
A data scientist is analyzing EEG signals recorded from 100 channels. The signals are known to contain some background noise. She decides to apply Principal Component Analysis and computes the eigenvalues and eigenvectors of the covariance matrix of the data. She retains only the top 10 eigenvectors and discards the remaining 90 components.
21. What is the most principled justification for discarding the 90 components?
- The discarded components correspond to the directions where the data varies the most, so they are likely to be noisy.
- The retained components are likely to contain only noise, so discarding the rest ensures the signal is preserved.
- The discarded components represent directions with very low variance, which often contain noise rather than meaningful structure.
- All PCA components contribute equally to reconstruction, so discarding any of them does not affect signal quality.
Answer :
22. Which mathematical property of the eigenvectors ensures that the new PCA basis does not mix signal and noise directions?
- Eigenvectors of a matrix are always parallel to each other, preserving component alignment.
- Eigenvectors of a symmetric matrix are orthogonal, allowing projection without interference.
- Eigenvectors are complex-valued, making them useful for separating signal from noise.
- Eigenvectors are unit vectors, ensuring normalization of signal power.
Answer :
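
The procedure described above, sketched on synthetic data: eigendecompose the covariance matrix of the 100-channel recordings and keep only the 10 directions of largest variance. The data here is random and stands in for real EEG signals.

```python
import numpy as np

X = np.random.randn(2000, 100)               # 2000 time samples x 100 EEG channels (synthetic)
Xc = X - X.mean(axis=0)                      # centre the data

C = np.cov(Xc, rowvar=False)                 # 100 x 100 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues for a symmetric matrix

top10 = eigvecs[:, -10:]                     # directions of largest variance
scores = Xc @ top10                          # data expressed in the retained basis
print(scores.shape)                          # (2000, 10)
```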
Information for the questions 23 and 24:
A researcher is working on compressing a dataset of grayscale face images, each of size 64 × 64 pixels (4096 features per image). Using PCA, she finds that the top 150 eigenvectors retain 95% of the total variance.
Out of curiosity, she reconstructs some images using only the remaining 3946 components (those corresponding to the smallest eigenvalues).
23. What is the most likely visual outcome of reconstructing images using only these 3946 low variance components?
- The images will appear distorted or noisy, lacking meaningful structure.
- The images will retain essential facial features with only minor loss in brightness.
- The images will be highly detailed, as fine-grained features are captured in low variance components.
- The images will look the same, since orthogonality ensures perfect reconstruction from any subset of components.
Answer : See Answers
24. Why is it mathematically sound to use only the top-k eigenvectors for dimensionality reduction in PCA?
- Because the top-k eigenvectors span the entire null space of the original matrix.
- Because eigenvectors with smaller eigenvalues are linearly dependent and redundant.
- Because PCA only works with orthogonal matrices of full rank.
- Because these directions correspond to maximum variance, preserving most information with minimal components.
Answer :
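
To see why reconstructing from only the low-variance directions fails, compare reconstruction errors on synthetic data whose variance is concentrated in a few directions (a rough stand-in for the 4096-pixel face vectors; the sizes here are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
scales = 10 * np.exp(-np.arange(200) / 10.0)          # sharply decaying standard deviations
X = rng.standard_normal((500, 200)) * scales          # most variance lives in a few directions
Xc = X - X.mean(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))   # ascending eigenvalues
top, rest = eigvecs[:, -20:], eigvecs[:, :-20]

err_top = np.linalg.norm(Xc - (Xc @ top) @ top.T)     # keep only high-variance directions
err_rest = np.linalg.norm(Xc - (Xc @ rest) @ rest.T)  # keep only low-variance directions
print(err_top, err_rest)                              # err_top is far smaller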
Data for the questions 25 and 26
A fintech startup is building a real-time fraud detection system based on thousands of transaction features (e.g., time, location, merchant type, device, customer history, etc.). The high dimensionality of the data is slowing down model training and inference. To resolve this, the data science team applies PCA and retains only the top 25 principal components from the original 300 features.
After transformation, they observe the following:
- The components are uncorrelated.
- The system runs much faster.
- The fraud detection performance remains largely unaffected.
25. Which of the following best explains why PCA was effective in this case?
- PCA minimized the variance within each individual feature, making the data easier to compress.
- PCA transformed the features into a new space where components are uncorrelated and high-variance directions are preserved, reducing dimensionality with minimal loss of information.
- PCA converted all non-linear features into linear ones, which improved model interpretability.
- PCA forced all features to have zero mean and unit variance, which improved classification accuracy.
Answer :
26. What mathematical guarantee offered by PCA ensures that the model is not negatively affected by redundant or overlapping features?
- PCA maximizes correlation between transformed features to preserve group structure.
- PCA aligns features with the original feature axes, making reconstruction error zero.
- PCA ensures that the covariance between the new dimensions is minimized, thereby removing redundancy.
- PCA clusters features into distinct groups based on their variance contribution.
Answer :
Common data for Q27 and Q28:
You are given a dataset of grayscale human face images. Each image is of size 100 × 100 pixels and the total number of images is 500. PCA is applied to reduce the dimensionality by computing the top 100 eigenvectors of XTX. Each eigenvector is reshaped into a 100 × 100 image and is referred to as an eigenface. Each original image is now approximated using just 100 scalar coefficients corresponding to these eigenfaces.
27. What is the role of eigenfaces in compressing the face image dataset?
- They duplicate all faces to save space
- They reduce resolution of images for faster display
- They form a lower-dimensional basis to represent face images efficiently
- They are used to colorize grayscale images
Answer :
28. You have 500 images, each originally 10,000 dimensions. After projecting onto the top 100 eigenfaces, how many scalar values are needed to store all 500 compressed images?
- 10000
- 50000
- 5000000
- 600
Answer :
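
A sketch of the compression described above, with random placeholder images of the same sizes as in the question: each image is reduced to its coefficients on the top 100 eigenfaces, so the compressed dataset is one row of 100 scalars per image.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10000))        # each row is a flattened 100x100 image (synthetic)
Xc = X - X.mean(axis=0)                      # centring is a common extra step, assumed here

# Right singular vectors of Xc are the eigenvectors of Xc^T Xc (stable via SVD).
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:100]                        # shape (100, 10000): one row per eigenface

coeffs = Xc @ eigenfaces.T                   # 100 scalar coefficients per image
print(coeffs.shape)                          # (500, 100): the values stored per compressed image
```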
29. What is the correct form of the Singular Value Decomposition of a real matrix A∈Rm×n ?
- A=UΣVT
- A=UDUT
- A=VΣVT
- A=PΛPT
Answer :
30. In the SVD of a real matrix A=UΣVT, what is the nature of the matrix Σ ?
- A diagonal matrix with real values
- A diagonal matrix with non-negative real numbers
- An upper triangular matrix
- A symmetric matrix
Answer : See Answers
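
A quick check of both SVD questions with NumPy: the factorization is A = UΣVT, and the singular values on the diagonal of Σ are real, non-negative, and sorted in decreasing order. The matrix A below is an arbitrary example.

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])            # any real m x n matrix works

U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)        # rebuild the m x n diagonal matrix Σ

print(s)                                    # non-negative real singular values
print(np.allclose(A, U @ Sigma @ Vt))       # True: A = U Σ V^T
```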


