preetsojitra

A no-nonsense guide on how to actually learn AI/ML without getting overwhelmed.

I’ve lost count of how many people have asked me for resources to get started with AI and Machine Learning. Instead of typing out the same reply over and over, I decided to just dump my thoughts here.

This isn't a formal syllabus. It's the actual path I took, plus a collection of the extra stuff — channels, blogs, and guides that helped me to connect the dots.

First things first: AI/ML is massive. It has insane breadth and depth. If you are a beginner, your strategy should be Breadth-First Search, not Depth-First.

Don't go down a rabbit hole on one single topic immediately. Explore the landscape first. In most industry jobs, you need to know all the algorithms up to a certain depth. You only need to go super deep if you plan to pivot into research or specialize in a specific domain later.

The Prerequisites: Math & Code

Let’s be real: AI is just Math. The CS part is implementation detail.

You don’t need to be a mathematician upfront. My advice? Learn on the go. Don’t try to finish a whole math degree before writing your first line of code. Start building, and if you feel a concept is a bit "rusty," pause, brush up on that specific math topic, and then resume.

The Math you actually need:

  1. Linear Algebra (vectors, matrices, matrix multiplication)

  2. Calculus (derivatives, gradients, the chain rule)

  3. Probability & Statistics (distributions, expectation, Bayes' theorem)

The Code:

You need to be good with Python. It’s the standard. If you ask me what to focus on specifically, I’d say strengthen your Object-Oriented Programming (OOP) concepts—classes, objects, inheritance. Believe me, this will be your foundation.
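To make that concrete, here is a minimal sketch of the OOP pattern that ML libraries lean on: a base class defining a shared interface, and subclasses that override it. The class names (`Model`, `MeanModel`) are made up for illustration, not a real library API, but the `fit`/`predict` shape mirrors how scikit-learn structures its estimators.

```python
class Model:
    """Base class: defines the interface every model shares."""

    def fit(self, X, y):
        raise NotImplementedError  # subclasses must override this

    def predict(self, X):
        raise NotImplementedError


class MeanModel(Model):
    """Inherits from Model and overrides fit/predict:
    always predicts the mean of the training targets."""

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self  # returning self allows chaining, sklearn-style

    def predict(self, X):
        return [self.mean_ for _ in X]


model = MeanModel().fit([[1], [2], [3]], [10, 20, 30])
print(model.predict([[4], [5]]))  # [20.0, 20.0]
```

Once this pattern clicks, reading library source code (or writing your own models) becomes much less intimidating.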

The "Bible" of ML Resources

If you only use one resource, make it this book: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurélien Géron.

This book covers everything: Classical ML, Deep Learning, NLP, Computer Vision, and even Generative AI (like building your own mini-GPT). It doesn't go unnecessarily deep into math, but it gives you enough to understand the "why" and provides the code to show you the "how."

Phase 1: Classical ML (Tabular Data)

Classical ML is mostly about dealing with CSV-style tabular datasets. You’ll spend 80% of your time just messing with data and 20% building models.
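The "messing with data" part usually means pandas. As a rough sketch (the column names here are invented for illustration), a typical first pass is imputing missing values and encoding categorical columns before any model sees the data:

```python
import pandas as pd

# A typical first pass over tabular data (column names are invented):
# fill missing numeric values, one-hot encode categorical columns.
df = pd.DataFrame({
    "age": [25, None, 40],
    "city": ["Pune", "Delhi", "Pune"],
    "bought": [0, 1, 1],
})

df["age"] = df["age"].fillna(df["age"].median())  # impute missing values
df = pd.get_dummies(df, columns=["city"])         # one-hot encode categories

print(df.columns.tolist())  # ['age', 'bought', 'city_Delhi', 'city_Pune']
```

Real datasets need far more of this than the two lines above, which is exactly why the 80/20 split holds.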

Crucial Advice: Scikit-learn abstracts everything away, which is great for productivity but bad for learning. Do not rely solely on the library. In interviews, people struggle because they know the jargon but can't explain how the algorithm works under the hood. Implement things from scratch at least once to understand the mechanics.
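Linear regression trained with gradient descent is a good first from-scratch exercise. A minimal NumPy sketch (the learning rate and step count are arbitrary choices, not tuned values):

```python
import numpy as np

# Linear regression trained with plain gradient descent: no library is
# doing the learning for us. Hyperparameters (lr, steps) are arbitrary.
def fit_linear(X, y, lr=0.1, steps=1000):
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        pred = X @ w + b
        error = pred - y
        w -= lr * (2 / n) * (X.T @ error)  # gradient of MSE w.r.t. w
        b -= lr * (2 / n) * error.sum()    # gradient of MSE w.r.t. b
    return w, b

# Recover y = 3x + 1 from noiseless data
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 4.0, 7.0, 10.0])
w, b = fit_linear(X, y)
print(round(w[0], 2), round(b, 2))  # close to 3.0 and 1.0
```

Writing those two gradient lines yourself is what makes "gradient descent minimizes the loss" stop being jargon.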

Phase 2: Deep Learning (The Fun Stuff)

This is where things get deep (pun intended). It covers NLP, Computer Vision, and Reinforcement Learning.

Even though everyone is hyped about Transformers and LLMs right now, don't jump straight to the latest trends. Master the basic algorithms first. Stick to the "Hands-On" book I mentioned earlier—it’s your bible. Follow it rigorously for a few months.
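"Mastering the basics" here means knowing what a forward pass actually computes before you touch a Transformer. For instance, a two-layer network in raw NumPy (layer sizes and the seed are picked arbitrarily for illustration):

```python
import numpy as np

# Forward pass of a tiny two-layer network: linear -> ReLU -> linear.
# Layer sizes and the fixed seed are arbitrary, just for illustration.
rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2              # output layer (no activation)

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))  # one input sample with 4 features
print(forward(x, W1, b1, W2, b2).shape)  # (1, 2)
```

If you can explain every shape in that snippet, frameworks like Keras and PyTorch become conveniences rather than magic.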

While I prefer books and blogs because they force you to read and think, here are the video resources that are actually worth your time.

For the Math & Concepts:

The Legends:

For Implementation (Deep Dives):

Tools of the Trade

You can't realistically train Deep Learning models on a CPU. You need GPUs.

  1. Kaggle: Offers datasets, an editor (notebooks), and free GPU access. Also hosts competitions which are worth checking out.

  2. Google Colab: Similar to Kaggle. Pro-tip: If you have a .edu email address, Colab often gives you free access to higher-end GPUs for a year.

Final Thoughts

Good luck. Now go write some code.