SQL Skills for Data Roles

Explore top LinkedIn content from expert professionals.

  • View profile for Venkata Naga Sai Kumar Bysani

    Data Scientist | 200K+ Data Community | 3+ years in Predictive Analytics, Experimentation & Business Impact | Featured on Times Square, Fox, NBC

    231,058 followers

90% of SQL interviews are built on these patterns. (If you know them, you're already ahead.)

SQL interviews aren't about syntax. They're about problem-solving and spotting patterns. If you master these 5 patterns, you won't just answer questions, you'll impress with clarity and confidence.

1. Joins & Data Combination
↳ Know how to connect multiple tables.
↳ Understand inner, outer, and self joins.
↳ Learn how filtering affects results post-join.

2. Aggregations & Group Analysis
↳ Use GROUP BY to uncover trends.
↳ Add HAVING to filter aggregated results.
↳ Go deeper with nested aggregations.

3. Window Functions
↳ Rank rows with ROW_NUMBER, RANK, DENSE_RANK.
↳ Compare values using LAG, LEAD.
↳ Partition data for running totals and comparisons.

4. Subqueries & CTEs
↳ Use subqueries to isolate logic.
↳ Break down complexity with CTEs.
↳ Write recursive queries for hierarchy problems.

5. Query Logic & Optimization
↳ Control flow with CASE, COALESCE, NULLIF.
↳ Filter efficiently using WHERE, IN, EXISTS.
↳ Optimize performance with indexes and EXPLAIN.

You don't need to memorize everything. Just understand these patterns deeply. That's how top candidates stand out.

Check out the full breakdown on "How to Ace SQL Interviews": https://lnkd.in/dVfhtz3V

Remember, practice is the key! I've attached a cheat sheet of the most common SQL functions to help you prep faster.

♻️ Save it for later or share it with someone who might find it helpful!

P.S. I share job search tips and insights on data analytics & data science in my free newsletter. Join 13,000+ readers here → https://lnkd.in/dUfe4Ac6
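To make pattern 3 concrete, here is a small runnable sketch using Python's built-in sqlite3 module (window functions need SQLite 3.25+, bundled with recent Python builds). The employees table and every value in it are invented for the demo; the point is how RANK() and DENSE_RANK() number rows differently after a tie.

```python
import sqlite3

# Hypothetical sample data, invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER);
INSERT INTO employees VALUES
  ('Ana', 'Eng', 120), ('Bob', 'Eng', 110), ('Cruz', 'Eng', 110),
  ('Dia', 'Sales', 90), ('Eli', 'Sales', 80);
""")

# RANK() leaves gaps after ties; DENSE_RANK() does not.
rows = conn.execute("""
SELECT name, dept, salary,
       RANK()       OVER (PARTITION BY dept ORDER BY salary DESC) AS rnk,
       DENSE_RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS drnk
FROM employees
ORDER BY dept, rnk, name;
""").fetchall()

for r in rows:
    print(r)
```

Bob and Cruz tie at 110, so both get rank 2 within Eng; with a fourth Eng row, RANK() would jump to 4 while DENSE_RANK() would continue at 3.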

  • View profile for Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    708,510 followers

Master the core SQL commands that drive 80% of tasks. This post focuses on practical, real-world applications of SQL for maximum impact.

Fundamental SQL Commands

1. SELECT: Retrieving specific data

   SELECT first_name, last_name, email
   FROM customers;

2. WHERE: Filtering results

   WHERE purchase_date >= '2023-01-01' AND total_spent > 1000;

3. GROUP BY: Aggregating data

   SELECT product_category, SUM(sales_amount) AS total_sales
   FROM sales
   GROUP BY product_category;

4. ORDER BY: Sorting data

   SELECT product_name, stock_quantity
   FROM inventory
   ORDER BY stock_quantity ASC;

5. JOIN: Combining related data

   SELECT o.order_id, c.customer_name, o.order_date
   FROM orders o
   INNER JOIN customers c ON o.customer_id = c.id;

Advanced SQL Techniques

1. Subqueries: Nested queries for complex conditions

   SELECT product_name, price
   FROM products
   WHERE price > (SELECT AVG(price) FROM products);

2. Common Table Expressions (CTEs): Simplifying complex queries

   WITH monthly_sales AS (
       SELECT EXTRACT(MONTH FROM sale_date) AS month, SUM(amount) AS total
       FROM sales
       GROUP BY EXTRACT(MONTH FROM sale_date)
   )
   SELECT month, total
   FROM monthly_sales
   WHERE total > 100000;

3. Window Functions: Calculations across row sets

   SELECT department,
          employee_name,
          salary,
          RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank
   FROM employees;

4. CASE Statements: Conditional categorization

   SELECT customer_id,
          CASE
              WHEN lifetime_value > 10000 THEN 'VIP'
              WHEN lifetime_value > 5000 THEN 'Premium'
              ELSE 'Standard'
          END AS customer_segment
   FROM customer_data;

Optimization Tips
- Use indexes on frequently filtered columns
- Avoid SELECT * and only retrieve necessary columns
- Use EXPLAIN ANALYZE to understand query execution plans

Learning Strategy
1. Start with simple SELECT queries on a sample database
2. Progress to filtering and sorting data
3. Practice joins with multiple tables
4. Explore advanced techniques with real datasets
5. Participate in online SQL challenges and forums

By mastering these SQL commands and techniques, you'll be well-equipped to handle a wide range of data analysis tasks efficiently. Regular practice with diverse datasets will solidify your skills.

What's your favorite SQL trick for streamlining data? Share your insights below!
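The CTE example above can be run end to end with Python's built-in sqlite3. One caveat: SQLite has no EXTRACT(), so this sketch substitutes strftime('%m', ...) for the same month-bucketing role; the sales table and its values are invented for the demo.

```python
import sqlite3

# Hypothetical sales data, invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (sale_date TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('2023-01-05', 60000), ('2023-01-20', 50000),
  ('2023-02-10', 40000), ('2023-03-01', 120000);
""")

# strftime('%m', ...) stands in for EXTRACT(MONTH FROM ...) on SQLite.
rows = conn.execute("""
WITH monthly_sales AS (
    SELECT strftime('%m', sale_date) AS month, SUM(amount) AS total
    FROM sales
    GROUP BY strftime('%m', sale_date)
)
SELECT month, total
FROM monthly_sales
WHERE total > 100000
ORDER BY month;
""").fetchall()
print(rows)
```

January's two sales sum to 110000 and March's single sale is 120000, so only those two months survive the HAVING-like filter on the CTE.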

  • View profile for Shakra Shamim

    Business Analyst at Amazon | SQL | Power BI | Python | Excel | Tableau | AWS | Driving Data-Driven Decisions Across Sales, Product & Workflow Operations | Open to Relocation & On-site Work

    191,845 followers

    Let's talk about 𝐒𝐐𝐋 concepts that not only help in interviews but also make your day-to-day job as a Data Analyst easier. In my experience of facing multiple interviews and working with SQL daily, I've found a few concepts extremely valuable in real-world analytics: 𝐂𝐨𝐦𝐦𝐨𝐧 𝐓𝐚𝐛𝐥𝐞 𝐄𝐱𝐩𝐫𝐞𝐬𝐬𝐢𝐨𝐧𝐬 (𝐂𝐓𝐄𝐬) These help simplify complex queries by breaking them into manageable parts. It makes your query readable and easy to maintain, especially when you're working in teams or on large projects. 𝐖𝐢𝐧𝐝𝐨𝐰 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬 (𝐑𝐎𝐖_𝐍𝐔𝐌𝐁𝐄𝐑, 𝐑𝐀𝐍𝐊, 𝐃𝐄𝐍𝐒𝐄_𝐑𝐀𝐍𝐊, 𝐋𝐄𝐀𝐃, 𝐋𝐀𝐆) These are game-changers. Instead of writing multiple subqueries, you can easily perform ranking, find running totals, compare rows, and calculate moving averages with one simple statement. 𝐒𝐮𝐛𝐪𝐮𝐞𝐫𝐢𝐞𝐬 (𝐍𝐞𝐬𝐭𝐞𝐝 𝐐𝐮𝐞𝐫𝐢𝐞𝐬) Subqueries allow you to perform complex operations step-by-step. They are great for scenarios where you need results from multiple queries combined into one. 𝐈𝐧𝐝𝐞𝐱𝐞𝐬 & 𝐐𝐮𝐞𝐫𝐲 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧 Understanding indexing helps your queries run faster. For instance, creating an index on columns frequently used in JOINs, WHERE, or GROUP BY clauses drastically improves performance, especially in large tables. 𝐉𝐨𝐢𝐧𝐬 𝐯𝐬. 𝐒𝐮𝐛𝐪𝐮𝐞𝐫𝐢𝐞𝐬 (𝐖𝐡𝐞𝐧 𝐭𝐨 𝐔𝐬𝐞 𝐖𝐡𝐚𝐭) Many of us get confused about using joins or subqueries. Typically, JOINs are more efficient for large datasets, while subqueries can be simpler to write for smaller or one-time analyses. 𝐂𝐀𝐒𝐄 𝐒𝐭𝐚𝐭𝐞𝐦𝐞𝐧𝐭𝐬 𝐟𝐨𝐫 𝐂𝐨𝐧𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐋𝐨𝐠𝐢𝐜 These are useful for categorizing your data without using multiple queries. A single CASE statement can simplify your logic and save processing time. 𝐀𝐠𝐠𝐫𝐞𝐠𝐚𝐭𝐢𝐨𝐧𝐬 & 𝐆𝐫𝐨𝐮𝐩𝐢𝐧𝐠𝐬 You should know how to effectively use GROUP BY along with aggregate functions like COUNT, SUM, AVG, MAX, MIN. Grouping data properly is fundamental to answering most analytical questions. 𝐃𝐚𝐭𝐞 & 𝐓𝐢𝐦𝐞 𝐌𝐚𝐧𝐢𝐩𝐮𝐥𝐚𝐭𝐢𝐨𝐧𝐬 Real analytics problems often involve time series data. Learn functions like DATE_TRUNC, DATE_PART, DATE_DIFF, DATE_ADD, and DATE_FORMAT to handle date-time data effectively. 
𝐒𝐞𝐥𝐟-𝐉𝐨𝐢𝐧𝐬 & 𝐑𝐞𝐜𝐮𝐫𝐬𝐢𝐯𝐞 𝐐𝐮𝐞𝐫𝐢𝐞𝐬 Not all data lives neatly in one table. Self-joins help you analyze hierarchical data like employee-manager relationships or user referral systems. 𝐇𝐚𝐧𝐝𝐥𝐢𝐧𝐠 𝐃𝐮𝐩𝐥𝐢𝐜𝐚𝐭𝐞𝐬 𝐚𝐧𝐝 𝐃𝐚𝐭𝐚 𝐈𝐧𝐭𝐞𝐠𝐫𝐢𝐭𝐲 Knowing how to identify and remove duplicate records using ROW_NUMBER() or DISTINCT ensures accurate and reliable analysis. SQL isn't just about writing queries; it's about efficiency, readability, and solving real business problems. The above topics cover essential areas that have personally helped me improve my productivity and provided great value during interviews. Did I miss any important topic? Drop your suggestions below. Follow Shakra Shamim for more such posts.!
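Two of the concepts above, self-joins on employee-manager data and ROW_NUMBER()-based deduplication, can be sketched in a few lines with Python's built-in sqlite3. All table and column names here are made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
INSERT INTO employees VALUES
  (1, 'Priya', NULL), (2, 'Omar', 1), (3, 'Lena', 1), (4, 'Sam', 2);

CREATE TABLE raw_events (user_id INTEGER, event TEXT);
INSERT INTO raw_events VALUES (7, 'click'), (7, 'click'), (8, 'view');
""")

# Self-join: pair each employee row with their manager's row.
# The top-level manager (NULL manager_id) drops out of the inner join.
pairs = conn.execute("""
SELECT e.name AS employee, m.name AS manager
FROM employees e
JOIN employees m ON e.manager_id = m.id
ORDER BY e.id;
""").fetchall()

# Deduplicate with ROW_NUMBER(): keep only the first row per (user_id, event).
dedup = conn.execute("""
SELECT user_id, event FROM (
    SELECT user_id, event,
           ROW_NUMBER() OVER (PARTITION BY user_id, event
                              ORDER BY user_id) AS rn
    FROM raw_events
)
WHERE rn = 1
ORDER BY user_id;
""").fetchall()
```

For exact duplicates DISTINCT would do the same job; ROW_NUMBER() earns its keep when rows differ in some columns and you want to keep, say, the most recent one per key.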

  • View profile for Andy Werdin

    Business Analytics & Tooling Lead | Data Products (Forecasting, Simulation, Reporting, KPI Frameworks) | Team Lead | Python/SQL | Applied AI (GenAI, Agents)

    33,341 followers

Are you ready to master SQL as a data analyst? Here are some tips to start your journey!

1. Understand the Basics: Start with the fundamental concepts like SELECT statements, WHERE clauses, and logical operations. These are your building blocks for querying your databases.

2. Hands-On Practice: Practice on platforms like LeetCode, HackerRank, and Mode Analytics to solve SQL problems and build your confidence.

3. Learn Joins and Subqueries: Mastering different types of joins (INNER, LEFT, RIGHT, FULL) and subqueries is important. These skills are needed for complex data manipulation over multiple tables.

4. Work with CTEs: Common Table Expressions (CTEs) can simplify your queries and make them more readable. Learn how to use CTEs to break down complex problems into manageable parts.

5. Use Real Data: Work with real datasets to understand the context and nuances of data analysis. Kaggle or governmental statistical sites are a great resource for finding interesting datasets to practice on.

6. Read Documentation: Familiarize yourself with the SQL documentation for the specific database management system (DBMS) you're using, whether it's MySQL, PostgreSQL, or SQL Server.

7. Optimize Your Queries: Learn about query optimization techniques. Efficient queries can significantly improve performance, especially with large datasets.

8. Version Control: Use version control systems like Git to manage your SQL scripts. This helps in tracking changes and collaborating with others.

9. Build Projects: Build small projects that interest you. Creating your own database and running queries on it makes learning more enjoyable and practical.

Follow these tips and you'll build a strong SQL foundation. While SQL is not the only skill you will need to start a career as a data analyst, it's the most important one for most positions.

What are your favorite resources for learning SQL?
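Tip 3's join types are easiest to internalize by comparing results side by side. Here is a minimal sketch with Python's built-in sqlite3 (customers/orders tables and values are invented): an INNER JOIN drops customers with no orders, while a LEFT JOIN keeps them with NULLs.

```python
import sqlite3

# Hypothetical data: Cy has no orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, name TEXT);
CREATE TABLE orders (id INTEGER, customer_id INTEGER, total INTEGER);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Ben'), (3, 'Cy');
INSERT INTO orders VALUES (10, 1, 50), (11, 1, 30), (12, 2, 20);
""")

# INNER JOIN: only customers that match at least one order.
inner = conn.execute("""
SELECT c.name, o.total
FROM customers c
INNER JOIN orders o ON o.customer_id = c.id
ORDER BY c.id, o.id;
""").fetchall()

# LEFT JOIN: every customer, with NULL (Python None) where no order exists.
left = conn.execute("""
SELECT c.name, o.total
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
ORDER BY c.id, o.id;
""").fetchall()
```

The extra ('Cy', None) row in the LEFT JOIN result is exactly the "customers who never ordered" signal that interviewers often ask you to find.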
---------------- ♻️ Share if you find this post useful ➕ Follow for more daily insights on how to grow your career in the data field #dataanalytics #datascience #sql #learningpath #careergrowth

  • View profile for Mariya Joseph

    Data Analyst at Comscore, Inc | Linkedin Top Voice 2025 | 15k+ followers

    16,584 followers

During my technical interview days, these were the SQL concepts I kept revising over and over to stay sharp and perform well in every interview:

📌 Joins (INNER, LEFT, RIGHT, FULL OUTER): I made sure I understood not just the syntax but also when to apply each type in real-world scenarios.

📌 Subqueries & Nested Queries: These were common in interviews, so I practiced using them for complex filtering and calculations.

📌 Aggregate Functions (SUM, COUNT, AVG, etc.): Summarizing data efficiently is crucial, and these functions were often tested.

📌 Group By & Having Clauses: Mastering these helped me group data and filter groups to get meaningful insights.

📌 Window Functions: These were a game changer! They allow you to perform calculations across rows related to the current row.

📌 Normalization & Database Design: A solid grasp of how to structure databases was often tested, and I made sure to get comfortable with these concepts.

📌 Indexes: Knowing how and when to use indexes helped me optimize query performance, especially for large datasets.

✏️ What I learned: Interviews are not just about knowing syntax, but understanding how to apply these concepts to solve real-world problems. I practiced consistently, worked on sample problems, and built projects to reinforce my skills.

🌐 If you found this helpful, like and repost to reach others who might need it.
✳️ Follow for more daily content!
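The GROUP BY & HAVING item above trips up many candidates: WHERE filters rows before grouping, HAVING filters the groups themselves. A minimal runnable sketch with Python's built-in sqlite3 (orders table and values invented for the demo):

```python
import sqlite3

# Hypothetical order data, invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount INTEGER);
INSERT INTO orders VALUES
  ('Ada', 40), ('Ada', 70), ('Ben', 20), ('Cy', 90), ('Cy', 30);
""")

# HAVING applies to the aggregated groups: keep customers whose
# combined spend exceeds 100 (a WHERE clause could not see SUM()).
rows = conn.execute("""
SELECT customer, COUNT(*) AS n, SUM(amount) AS total
FROM orders
GROUP BY customer
HAVING SUM(amount) > 100
ORDER BY customer;
""").fetchall()
```

Ben's single 20 order is grouped but then dropped by HAVING; Ada (110) and Cy (120) survive.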

  • View profile for David Langer

    Author. Analytics educator. Microsoft MVP. I help professionals and teams build better forecasts using machine learning with Python and Python in Excel.

    141,220 followers

I'm a machine learning consultant, and this might surprise you: I often write more SQL code than anything else. Here are 5 reasons why:

1) The best data is stored in databases.
The best data in most organizations is of the "small" variety. And this best data is stored in databases like Microsoft SQL Server. Even if it's not stored in a database, it's almost always stored in a technology that speaks SQL. That means...

2) Clients expect SQL knowledge.
I'm going to be completely honest here. I wouldn't land machine learning projects if I told clients I didn't know SQL. When it comes to data science, SQL skills are just assumed. Which leads to...

3) Many clients prefer SQL.
It's still common to find organizations where Python isn't "the thang." Clients often prefer as much SQL as possible because SQL is well-known to the staff. I've even encountered situations where SQL is still preferred even though Python is known.

4) Most machine learning code is about the data, not the models.
Believe it or not, it takes relatively few lines of code to train an ML model. Most of your time and code is spent working with the data you eventually feed into an ML algorithm. Given all this...

5) I use SQL extensively in machine learning projects.
I use SQL to explore and understand data. I use SQL to acquire and clean data. I use SQL to engineer features.

This is why, even in 2025, SQL is still a top data skill.

  • View profile for Rishabh Sharma

    Business Analyst | SQL | Python | Power Bi | Excel | Py Spark | Data Analyst | Sharing Daily Learnings in Data Field

    8,395 followers

Preparing for a SQL interview? 🤓 Here's a checklist to ensure you're ready to ace it:

1🔶 Joins: Master the art of joining tables to extract meaningful insights. Understand different types of joins and when to use them.

2🔷 Group By: Dive deep into grouping data to analyze trends and patterns. Know how to aggregate information effectively using GROUP BY.

3🔶 Window Functions: Level up your skills with window functions. Learn how to perform calculations across a set of rows related to the current row.

4🔷 Core Database Concepts: Brush up on the fundamentals - understand indexes, transactions, normalization, and other essential concepts that form the backbone of databases.

5🔶 Schema Design (Facts and Dimensions): Explore the art of designing effective database schemas. Grasp the importance of organizing data into facts and dimensions for optimal performance.

Remember, a strong foundation in these areas will not only help you crack the interview but also make you a more proficient SQL practitioner. Practice, understand the logic behind each concept, and don't hesitate to challenge yourself with real-world scenarios. Good luck! 🌟

#sqldeveloper #databricks #linkedin #powerbi #dataanalysis #businessanalytics #ai #growth #learningandgrowing #dataengineering
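Checklist item 5's facts-and-dimensions idea can be shown in miniature: a fact table of sales keyed to two dimension tables, rolled up by joining back to the dimensions. This sketch uses Python's built-in sqlite3, and the whole star schema (tables, keys, values) is invented for the demo.

```python
import sqlite3

# Hypothetical star schema: one fact table, two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER, category TEXT);
CREATE TABLE dim_date (date_id INTEGER, month TEXT);
CREATE TABLE fact_sales (product_id INTEGER, date_id INTEGER, revenue INTEGER);
INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
INSERT INTO dim_date VALUES (1, '2024-01'), (2, '2024-02');
INSERT INTO fact_sales VALUES (1, 1, 100), (1, 2, 150), (2, 1, 80);
""")

# Classic star-schema rollup: join the fact to its dimensions,
# then aggregate along the dimension attributes.
rows = conn.execute("""
SELECT p.category, d.month, SUM(f.revenue) AS revenue
FROM fact_sales f
JOIN dim_product p ON p.product_id = f.product_id
JOIN dim_date d ON d.date_id = f.date_id
GROUP BY p.category, d.month
ORDER BY p.category, d.month;
""").fetchall()
```

Keeping measures in the fact table and descriptive attributes in dimensions is what lets the same query template answer "revenue by category", "by month", or both at once.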

  • View profile for Priyanka SG

    Senior Data Analyst | 240K LinkedIn | Ex-Target | Always hang out with DATA & AI

    246,006 followers

Essential SQL Topics for Data Analysts: A Statistical Perspective

1. Aggregations and Statistical Functions
Basic aggregation functions: COUNT(), SUM(), AVG(), MAX(), MIN()
Advanced statistical functions: STDEV(), VARIANCE(), and MEDIAN() (though MEDIAN() is not natively supported in some SQL databases).

Examples:
Mean (average): SELECT AVG(column_name) FROM table_name;
Standard deviation: SELECT STDEV(column_name) FROM table_name;
Variance: SELECT VARIANCE(column_name) FROM table_name;

2. Percentiles and Quartiles
Calculating percentiles and quartiles to understand the distribution of data is important for outlier detection and other insights. In SQL, this is often done using the PERCENTILE_CONT() or NTILE() functions.

Example:
SELECT PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY column_name) AS first_quartile
FROM table_name;

3. Window Functions
Window functions like ROW_NUMBER(), RANK(), DENSE_RANK(), and NTILE() are essential for performing statistical analysis over specific windows of data. They allow you to calculate running totals, cumulative averages, and perform ranking within partitions.

Example: calculating a running total.
SELECT column_name,
       SUM(column_name) OVER (ORDER BY column_name) AS running_total
FROM table_name;

4. Group By and Having Clauses
Using GROUP BY to group data into different categories and then applying aggregate functions to each group is crucial for statistical summary. The HAVING clause helps in filtering data based on aggregate conditions.

Example:
SELECT department, AVG(salary)
FROM employees
GROUP BY department
HAVING AVG(salary) > 50000;

5. Correlation and Covariance
To identify relationships between variables, understanding correlation and covariance is important. Some SQL databases support this via the functions CORR() and COVAR_POP().

Example: checking the correlation between two columns:
SELECT CORR(column1, column2)
FROM table_name;

6. Hypothesis Testing
Although more complex hypothesis testing is usually done outside SQL (e.g., in Python or R), basic tests like chi-square or z-test can be approximated using SQL logic and aggregation functions. For instance, contingency tables can be built using GROUP BY and COUNT().

7. Outlier Detection
Detecting outliers using statistical measures such as standard deviations from the mean or the interquartile range (IQR) can be done using SQL queries.

Example: detecting outliers based on mean and standard deviation.
SELECT *
FROM table_name
WHERE column_name > (SELECT AVG(column_name) + 3 * STDEV(column_name) FROM table_name);

Follow Priyanka SG for more.
#DataAnalyst #SQLServer #Excel #PowerBI #Python #DataVisualization
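The outlier-detection idea in topic 7 can be made runnable even on a database with no STDEV(), by deriving the population variance from AVG() alone via Var(X) = E[X²] − E[X]². A sketch with Python's built-in sqlite3, on an invented measurements table with one obvious outlier:

```python
import sqlite3
from math import sqrt

# Hypothetical measurements with one planted outlier (95).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measurements (value REAL);
INSERT INTO measurements VALUES (10), (11), (9), (10), (12), (10), (95);
""")

# SQLite ships no STDEV(), so derive population variance from AVG():
# Var(X) = E[X^2] - E[X]^2.
mean, mean_sq = conn.execute(
    "SELECT AVG(value), AVG(value * value) FROM measurements"
).fetchone()
stdev = sqrt(mean_sq - mean * mean)

# Flag values more than 2 standard deviations above the mean.
outliers = conn.execute(
    "SELECT value FROM measurements WHERE value > ?",
    (mean + 2 * stdev,),
).fetchall()
```

Note the trap this small dataset exposes: the outlier itself inflates both the mean and the standard deviation, which is one reason IQR-based rules are often preferred for heavily skewed data.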

  • View profile for Jaret André

    Data Career Coach | LinkedIn Top Voice 2024 & 2025 | I Help Data Professionals (3+ YoE) Upgrade Role, Compensation & Trajectory | 90‑day guarantee & avg $49K year‑one uplift | Placed 80+ In US/Canada since 2022

    27,694 followers

90-Day SQL Roadmap that actually gets you hired.

This will help you go from beginner to job-ready in 3 focused phases.

Phase 1 (Days 1–30): Foundation + Confidence Boost
Goal: Learn how to write simple queries to answer real business questions.

1. What to Learn
- SELECT, FROM, WHERE
- ORDER BY, LIMIT
- DISTINCT, BETWEEN, IN
- Basic functions like COUNT(), SUM(), AVG()
- Intro to JOINs (INNER + LEFT)

2. What to Practice
- Analyze sales data: "Top 5 customers by revenue"
- Clean user logs from a music or video platform

3. What to Build
- 1 mini project per week
- Start posting wins on LinkedIn & GitHub
- Build a "SQL Reflection Log" to track mistakes + lightbulb moments

Phase 2 (Days 31–60): Intermediate + Storytelling
Goal: Build structured queries and communicate business insights.

1. What to Learn
- FULL OUTER, CROSS JOIN
- GROUP BY, HAVING
- CASE WHEN
- Subqueries (in SELECT, FROM, WHERE)
- Intro to CTEs
- Data cleaning in SQL

2. What to Practice
- "Lost revenue due to missing data"
- "User retention by weekly activity"

3. What to Build
- Polished GitHub README with insights
- Weekly content on lessons or visuals from queries
- Document your process like a real analyst

Bonus skill: Learn the basics of query optimization and indexing (just enough to speak about it in interviews)

Phase 3 (Days 61–90): Advanced + Job Prep Mode
Goal: Think like an analyst and prep for real interviews.

1. What to Learn
- Advanced CTEs
- Window functions: RANK(), LAG(), LEAD()
- Multi-step logic
- Build multi-filter dashboards
- Design simple pipelines (high-level)

2. What to Practice
- "Why are customers cancelling?"
- "Funnel breakdown: where do users drop off?"

3. What to Build
- 30 SQL interview questions in 30 days
- GitHub portfolio tagged by skill (Joins, Aggregations, CTEs, etc.)
- Final project: SQL case study with insights + recommendations

4. What to Share
- Final recap post on LinkedIn: "What I learned in 90 days of SQL"
- Walkthrough video or written summary for recruiters or hiring managers

Bonus habits to build along the way:
- Write SQL every day, even just 10 minutes
- Talk through your query logic out loud (or record it)
- Use GitHub like your personal proof-of-work portfolio
- Share small wins every week to build visibility

By Day 90, you'll have:
✅ 6+ small projects
✅ 30+ interview-style questions solved
✅ Clean, structured GitHub
✅ LinkedIn proof-of-work
✅ The confidence to walk into interviews and deliver

Follow Jaret André for data roadmap posts and job search tips
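Phase 1's "Top 5 customers by revenue" practice exercise is a nice first complete query: aggregate, sort, limit. A runnable sketch with Python's built-in sqlite3 (the sales table and single-letter customers are invented for the demo):

```python
import sqlite3

# Hypothetical sales data, invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (customer TEXT, revenue INTEGER);
INSERT INTO sales VALUES
  ('F', 10), ('A', 90), ('B', 80), ('A', 30), ('C', 70),
  ('D', 60), ('E', 50), ('B', 5), ('G', 40);
""")

# Top 5 customers by total revenue: GROUP BY to sum per customer,
# ORDER BY ... DESC to rank, LIMIT to keep the top 5.
top5 = conn.execute("""
SELECT customer, SUM(revenue) AS total
FROM sales
GROUP BY customer
ORDER BY total DESC
LIMIT 5;
""").fetchall()
```

Note that ranking on SUM(revenue) rather than the raw rows is what makes A (90 + 30 = 120) come out ahead of C's single 70 sale.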
