<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>dimensionality-reduction on AI Logs</title><link>https://ai.ragv.in/tags/dimensionality-reduction/</link><description>Recent content in dimensionality-reduction on AI Logs</description><generator>Hugo</generator><language>en-us</language><copyright>2025 Raghava Dhanya · License</copyright><lastBuildDate>Tue, 21 Apr 2026 20:54:06 +0530</lastBuildDate><atom:link href="https://ai.ragv.in/tags/dimensionality-reduction/index.xml" rel="self" type="application/rss+xml"/><item><title>My Intuition of PCA</title><link>https://ai.ragv.in/posts/intuitive-understanding-of-pca/</link><pubDate>Tue, 21 Apr 2026 20:54:06 +0530</pubDate><guid>https://ai.ragv.in/posts/intuitive-understanding-of-pca/</guid><description>&lt;p&gt;Principal Component Analysis (PCA) is an algorithm that I first learnt in a pattern recognition class in college. I understood the motivation and how to use it, but never really understood why we do what we do in PCA.&lt;/p&gt;
&lt;p&gt;We compute some big matrix, then do singular value decomposition on it, and then filter out some of the components. Why? What does that actually mean? What are we doing to the data? But again, life moves on. I just put it in my bag of ML tools and moved on.&lt;/p&gt;</description></item></channel></rss>