✅ Day 37/100 – Data Structures & Algorithms Consistency Challenge

Not every problem searches inside an array. Sometimes the search happens in the range of possible answers. That’s exactly what today’s problems explored.

📌 Today’s Topic:
• Binary Search on the answer

🧩 Problems I Solved:
1. Find Square Root of a Number
2. Find Nth Root of a Number

📘 What I Learned:
• Monotonic functions allow binary search even without arrays
• Instead of iterating over values, we search within a valid numerical range
• Precision and termination conditions are critical in numerical problems

🌍 Real-World / Industry Perspective:
This technique is widely used in systems that involve numerical approximation, such as optimization engines, scientific computing, and performance tuning, where exact answers are impractical and efficient estimation is required.

⏱️ Efficiency Note (using Binary Search):
• Square Root: O(log n)
• Nth Root: O(log n × log m), depending on precision
• Space Complexity: O(1)

💡 Key Insight:
Binary search is not limited to data — it’s a strategy for narrowing uncertainty.

Day 37 completed. Consistency continues.

#DSA #100DaysOfCode #BinarySearch #LearningInPublic #ProblemSolving #SoftwareEngineering
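The two problems above share the same skeleton: binary search over the answer range, keeping the feasibility test `mid^k ≤ n` monotonic. A minimal Python sketch (function names are mine, not from the post):

```python
def sqrt_floor(n: int) -> int:
    """Integer square root via binary search on the answer range [1, n]."""
    if n < 2:
        return n
    lo, hi, ans = 1, n, 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            ans = mid          # mid is feasible; try a larger candidate
            lo = mid + 1
        else:
            hi = mid - 1
    return ans

def nth_root_floor(n: int, k: int) -> int:
    """Floor of the kth root of n — the same monotonic search idea."""
    lo, hi, ans = 1, n, 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid ** k <= n:
            ans = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return ans
```

Using integer arithmetic throughout sidesteps the floating-point precision issues the post mentions; for real-valued roots, the loop would instead terminate once `hi - lo` drops below a chosen epsilon.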
Binary Search in Numerical Problems
More Relevant Posts
✅ Day 38/100 – Data Structures & Algorithms Consistency Challenge

Some problems don’t ask for the best value directly. They ask for the smallest value that still works.

📌 Today’s Topic:
• Binary Search on feasibility

🧩 Problem I Solved:
1. Find the Smallest Divisor Given a Threshold

📘 What I Learned:
• The feasibility condition creates a monotonic search space
• Binary search can efficiently narrow down the minimum valid divisor
• Careful handling of integer division and overflow is essential

🌍 Real-World / Industry Perspective:
This pattern reflects real-world optimization problems, such as distributing workloads, allocating resources, or adjusting system parameters to stay within capacity limits while minimizing cost or effort.

⏱️ Efficiency Note (using Binary Search):
• Time Complexity: O(n log m)
• Space Complexity: O(1)

💡 Key Insight:
When feasibility is monotonic, binary search becomes a powerful optimization tool.

Day 38 completed. Consistency continues.

#DSA #100DaysOfCode #BinarySearch #LearningInPublic #ProblemSolving #SoftwareEngineering
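The feasibility pattern described above can be sketched as follows — a minimal Python version (naming is mine), using pure-integer ceiling division to avoid the precision and overflow pitfalls the post mentions:

```python
def smallest_divisor(nums, threshold):
    """Binary search the answer range [1, max(nums)] for the smallest
    divisor whose rounded-up quotient sum stays within the threshold."""
    def feasible(d):
        # (x + d - 1) // d is ceiling division without floats
        return sum((x + d - 1) // d for x in nums) <= threshold

    lo, hi = 1, max(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # mid works; an even smaller divisor might too
        else:
            lo = mid + 1
    return lo
```

Because `feasible` is monotonic in the divisor (a larger divisor never increases the sum), the loop converges on the minimum valid answer.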
🚀 Day 28 — Where do embeddings actually live? (Vector Databases)

Yesterday I talked about embeddings — how text becomes vectors in high-dimensional space. But a question naturally follows: if every document becomes a 768-dimensional vector… where do we store thousands or millions of them?

The answer: Vector Databases.

A vector database is designed specifically to store and search embeddings efficiently. Unlike traditional databases that match exact values, vector databases perform similarity search.

Here’s what happens in a RAG system:
• Documents → converted into embeddings
• Embeddings → stored in a vector database
• User query → converted into an embedding
• Database → finds the closest vectors (using similarity metrics)
• Retrieved context → sent to the LLM

Under the hood, it’s not simple linear search. Vector DBs use:
• Approximate Nearest Neighbor (ANN) algorithms
• Indexing structures for high-dimensional search
• Cosine similarity / dot product distance metrics

This is what makes retrieval fast — even with millions of vectors.

What clicked for me today: embeddings give meaning. Vector databases give speed and scale. Without a vector database, RAG doesn’t scale. With it, retrieval becomes real-time.

Next: how retrieval actually selects the “right” chunks before generation.

#Day28 #VectorDatabase #RAG #Embeddings #AIEngineering #LearningInPublic
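To make "similarity search" concrete, here is a brute-force linear scan with cosine similarity — the baseline that ANN indexes in real vector databases are built to replace. This is an illustrative Python sketch (names are mine), not any particular database's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, vectors, k=1):
    """Linear-scan similarity search: score every stored vector against
    the query and return the indices of the k most similar ones."""
    order = sorted(range(len(vectors)),
                   key=lambda i: cosine(query, vectors[i]),
                   reverse=True)
    return order[:k]
```

This is O(n·d) per query — exactly the linear growth that makes ANN indexing necessary at scale.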
💡 “In DSA, recognizing mathematical patterns can turn an expensive solution into an efficient one.”

🚀 #geekstreak60 — Day 19

Another day of the streak, and today’s challenge was about identifying Pythagorean triplets in an array.

📌 Problem Statement
Given an array of integers, determine whether there exists a triplet (a, b, c) such that

a² + b² = c²

where a, b, and c come from different indices of the array.

A naive brute-force approach would check all triplets, resulting in O(n³) time complexity — clearly not practical for large arrays.

🧠 My Thought Process
The key insight was to work directly with the squared values. This allowed me to:
1️⃣ Square all elements in the array
2️⃣ Sort the squared values
3️⃣ Fix c² and search for a² + b² using the two-pointer technique

This reduces the search space significantly.

🛠️ Optimized Approach
✅ Squared all elements of the array
✅ Sorted the squared values
✅ Fixed the largest element as c²
✅ Used two pointers (left, right) to check if a² + b² = c²

This pattern efficiently identifies whether a valid triplet exists.

⚡ Complexity Analysis
Time Complexity: O(n²)
Space Complexity: O(1)

A significant improvement compared to brute force.

💡 Key Learning
Many array problems become easier once you convert them into mathematical relationships. Today strengthened my understanding of:
✅ Pythagorean triplet detection
✅ Two-pointer search technique
✅ Reducing brute-force complexity

Consistency continues…! 🚀

#geekstreak60 #npci #DSA #CPP #ProblemSolving #CodingJourney #TechGrowth #ContinuousLearning
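The square-sort-two-pointer approach above can be sketched like this (a Python illustration; the post's own solution was presumably C++ per its hashtags, but the logic is identical):

```python
def has_pythagorean_triplet(arr):
    """Square, sort, then fix the largest value as c^2 and use two
    pointers to look for a^2 + b^2 = c^2 — O(n^2) after the sort."""
    sq = sorted(x * x for x in arr)
    n = len(sq)
    for c in range(n - 1, 1, -1):      # candidate c^2, largest first
        left, right = 0, c - 1
        while left < right:
            total = sq[left] + sq[right]
            if total == sq[c]:
                return True
            if total < sq[c]:          # sum too small: grow the left term
                left += 1
            else:                      # sum too large: shrink the right term
                right -= 1
    return False
```

For each fixed c², the two pointers sweep the remaining squared values in linear time, which is what brings O(n³) down to O(n²).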
✅ Day 57/100 – Data Structures & Algorithms Consistency Challenge

Having two pointers makes operations easier. But it also doubles the responsibility. Today was about controlled deletion in a Doubly Linked List.

📌 Today’s Topic:
• Targeted deletion in Doubly Linked List

🧩 Problems I Solved:
1. Delete the Kth Element in a Doubly Linked List
2. Remove a Given Node in a Doubly Linked List

📘 What I Learned:
• Both `next` and `prev` pointers must be updated correctly
• Order of pointer reassignment matters to maintain integrity
• Edge cases (deleting head or tail) require special handling
• Doubly linked lists simplify certain operations compared to singly linked lists

🌍 Real-World / Industry Perspective:
These operations are fundamental in systems like:
• LRU Cache implementations (removing least recently used nodes)
• Task managers removing specific tasks
• Undo/Redo systems updating state history
• Dynamic playlist or navigation structures
When nodes need to be removed quickly without full traversal, doubly linked lists become extremely powerful.

⏱️ Efficiency Note:
• Time Complexity → O(n) for locating the node, O(1) once the node is identified
• Space Complexity → O(1)

💡 Key Insight:
More flexibility means more responsibility — every pointer must be updated carefully.

Day 57 completed. Structural control improving steadily.

#DSA #100DaysOfCode #LinkedList #LearningInPublic #ProblemSolving #SoftwareEngineering
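The "both pointers, both edge cases" discipline described above looks like this in a minimal Python sketch (class and function names are mine):

```python
class DNode:
    """A doubly linked list node."""
    def __init__(self, val):
        self.val = val
        self.prev = None
        self.next = None

def delete_node(head, node):
    """Unlink `node` and return the (possibly new) head. Both neighbours
    must be rewired; head and tail are the special cases."""
    if node.prev:
        node.prev.next = node.next
    else:
        head = node.next              # deleting the head
    if node.next:
        node.next.prev = node.prev    # for the tail this branch is skipped
    node.prev = node.next = None      # detach fully
    return head

def delete_kth(head, k):
    """Delete the kth node (1-indexed): O(n) to locate, O(1) to unlink."""
    cur = head
    for _ in range(k - 1):
        if cur is None:
            return head               # k beyond list length: unchanged
        cur = cur.next
    if cur is None:
        return head
    return delete_node(head, cur)
```

Note that `delete_node` itself never traverses — once a reference to the node exists (as in an LRU cache), removal is O(1).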
𝐃𝐚𝐲 𝟓 𝐨𝐟 𝐛𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐚 𝐫𝐞𝐬𝐞𝐚𝐫𝐜𝐡-𝐩𝐚𝐩𝐞𝐫 𝐑𝐀𝐆 𝐬𝐲𝐬𝐭𝐞𝐦

Most teams use HNSW. Very few actually understand why it’s fast. Yesterday we talked about scaling RAG to millions of PDFs. Today, let’s open the black box.

𝐓𝐡𝐞 𝐛𝐫𝐮𝐭𝐞-𝐟𝐨𝐫𝐜𝐞 𝐩𝐫𝐨𝐛𝐥𝐞𝐦
Naive vector search does something simple:
→ compare query with every vector
→ compute cosine similarity
→ return top-k
It works at small scale. But as data grows, latency grows linearly. That’s exactly what production systems cannot afford.

𝐓𝐡𝐞 𝐤𝐞𝐲 𝐢𝐝𝐞𝐚 𝐛𝐞𝐡𝐢𝐧𝐝 𝐇𝐍𝐒𝐖
HNSW (Hierarchical Navigable Small World) changes the game. Instead of scanning everything, it builds a multi-layer graph of vectors. The intuition I like:
• Top layers → highways (fast long jumps)
• Lower layers → city streets (precise local search)
Search becomes navigation — not brute force.

𝐇𝐨𝐰 𝐬𝐞𝐚𝐫𝐜𝐡 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐰𝐨𝐫𝐤𝐬
At query time, HNSW roughly does this:
1️⃣ Start at the top layer
2️⃣ Greedily move to closer neighbors
3️⃣ Drop down one layer
4️⃣ Repeat until the bottom
5️⃣ Return nearest candidates
Each step eliminates huge parts of the search space. That’s where the speed comes from.

𝐖𝐡𝐲 𝐢𝐭’𝐬 𝐟𝐚𝐬𝐭 𝐢𝐧 𝐩𝐫𝐚𝐜𝐭𝐢𝐜𝐞
⚡ Because HNSW:
• avoids full scans
• exploits locality
• uses small-world connectivity
• achieves sub-linear growth (in practice)
Millions of vectors… but only a tiny fraction are actually visited.

𝐓𝐡𝐞 𝐦𝐢𝐧𝐝𝐬𝐞𝐭 𝐬𝐡𝐢𝐟𝐭
Vector search is no longer about brute force. It’s about graph navigation at scale. And once you see that, most vector database design decisions start making a lot more sense.

If you're building large-scale RAG systems, what retrieval bottleneck are you hitting right now?

#RAG #HNSW #VectorSearch #AIEngineering #ANN
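The greedy navigation step at the heart of the description above can be sketched for a single layer. This is a heavily simplified illustration (my own toy code, not HNSW itself — real HNSW keeps a candidate beam, has probabilistic layer assignment, and repeats this descent across layers):

```python
def greedy_search(graph, vectors, query, entry):
    """One-layer sketch of HNSW-style navigation: from an entry point,
    hop to whichever neighbour is closer to the query, stopping at a
    local minimum of the distance."""
    def dist(i):
        # squared Euclidean distance from vector i to the query
        return sum((a - b) ** 2 for a, b in zip(vectors[i], query))

    cur = entry
    while True:
        nearest = min(graph[cur], key=dist, default=cur)
        if dist(nearest) >= dist(cur):
            return cur            # no neighbour improves: local optimum
        cur = nearest
```

Each hop discards every vector not adjacent to the current node, which is where the sub-linear behaviour comes from; the hierarchy of layers exists to make those first hops very long.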
✅ Day 39/100 – Data Structures & Algorithms Consistency Challenge

Optimization problems are not about eating faster. They’re about finding the minimum speed that still gets the job done. That’s exactly what today’s problem was about.

📌 Today’s Topic:
• Binary Search on answer (Optimization under constraints)

🧩 Problem I Solved:
1. Koko Eating Bananas

📘 What I Learned:
• The search space is not the array — it’s the possible eating speeds
• The feasibility condition (can finish within H hours?) is monotonic
• Binary search efficiently finds the smallest valid speed

🌍 Real-World / Industry Perspective:
This problem models systems operating under limited resources. For example, in job scheduling in computer systems, there are n tasks and limited processing capacity, and designers must determine the minimum required execution speed to complete all tasks within a deadline. Similarly, in databases and indexing, binary search is used to quickly find optimal positions, minimizing unnecessary work and improving efficiency.

⏱️ Efficiency Note (using Binary Search on possible speeds):
• Time Complexity: O(n log m)
• Space Complexity: O(1)

💡 Key Insight:
When constraints are fixed, the real question becomes — “What is the minimum required to satisfy them?”

Day 39 completed. Consistency continues.

#DSA #100DaysOfCode #BinarySearch #LearningInPublic #ProblemSolving #SoftwareEngineering
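A minimal Python sketch of the approach described above (function name is mine): the search space is the speed range [1, max(piles)], and the monotonic feasibility test is "can Koko finish within h hours at this speed?".

```python
def min_eating_speed(piles, h):
    """Binary search over eating speeds. Feasibility is monotonic in the
    speed, so the smallest feasible speed is found in O(n log m)."""
    def hours(speed):
        # ceiling division per pile: partial hours still cost a full hour
        return sum((p + speed - 1) // speed for p in piles)

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if hours(mid) <= h:
            hi = mid          # fast enough; try a slower speed
        else:
            lo = mid + 1
    return lo
```

Note how this mirrors the Day 38 smallest-divisor problem: only the feasibility predicate changes, the search skeleton is identical.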
𝗗𝗮𝘆 𝟯/𝟲𝟬 – 𝗔𝗿𝗿𝗮𝘆𝘀 & 𝗖𝗼𝗿𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗣𝗮𝘁𝘁𝗲𝗿𝗻𝘀

Today I revisited one of the most fundamental data structures: Arrays.

An array is a contiguous block of memory where elements are stored sequentially. Because of this structure:
• Index access → O(1)
• Iteration is predictable
• Memory locality improves cache efficiency
This is why arrays are extremely fast for read-heavy operations.

However, trade-offs exist:
• Insert/Delete in the middle → O(n)
• Fixed size allocation
• Resizing requires copying elements

Arrays may look basic, but they shape how we think about performance and iteration discipline.

𝗖𝗼𝗺𝗺𝗼𝗻 𝗔𝗿𝗿𝗮𝘆 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗣𝗮𝘁𝘁𝗲𝗿𝗻𝘀
While practicing, I focused on identifying reusable patterns rather than isolated solutions:
• Two Pointer Technique
• Sliding Window (Fixed & Variable)
• Prefix Sum
• Kadane’s Algorithm
• Hashing with Array / Dictionary
• Sorting + Linear Traversal
• Binary Search
• Cyclic Sort (Value-Index Mapping)
• Monotonic Stack / Deque
• Merge Intervals
• Difference Array (Range Updates)
• Frequency Counting / Bucket Technique

The deeper I explore fundamentals, the clearer it becomes:
• Patterns reduce complexity.
• Recognition reduces brute force.
• Discipline improves scalability.

Revisiting basics with better perspective.

#DSA #Arrays #BackendEngineering #SystemThinking #SystemDesign #ContinuousLearning
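As one concrete instance of the patterns listed above, here is Kadane's Algorithm — the maximum-subarray pattern — as a short Python sketch:

```python
def max_subarray_sum(nums):
    """Kadane's Algorithm: at each element, either extend the current
    run or restart from here, whichever is larger. O(n) time, O(1) space."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)     # a negative running sum is never worth keeping
        best = max(best, cur)
    return best
```

The same "carry a running state, update it in one pass" shape underlies prefix sums and sliding windows as well, which is why recognizing the pattern matters more than memorizing any single solution.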
Searching is one of the most fundamental operations in computer science. Whether you’re working with arrays, trees, or hash tables, the right search technique can make all the difference in efficiency.

Here are some simple definitions to keep in mind:
Linear Search → Check each element one by one.
Binary Search → Split a sorted list in half repeatedly.
Jump Search → Hop ahead in fixed steps, then search within the block.
Interpolation Search → Estimate the position based on the value (best for evenly distributed data).
Exponential Search → Double the step size to find a range, then apply Binary Search.
DFS (Depth-First Search) → Explore a tree/graph deeply along one branch before backtracking.
BFS (Breadth-First Search) → Explore a tree/graph level by level.
Hashing → Use a hash function to jump directly to the index (average O(1) time).

👉 Think of them like this:
Linear = walk step by step
Binary = cut in half
Jump = hop then walk
Interpolation = smart guess
Exponential = sprint then search carefully
DFS/BFS = explore deep vs. wide
Hashing = teleport directly

#HappyLearning
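Two of the definitions above, side by side, as minimal Python sketches — "cut in half" versus "hop then walk":

```python
import math

def binary_search(arr, target):
    """Halve a sorted range until the target is found. O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def jump_search(arr, target):
    """Hop ahead in sqrt(n)-sized steps, then scan linearly inside the
    block where the target must lie. O(sqrt n)."""
    n = len(arr)
    step = max(1, math.isqrt(n))
    prev = 0
    while prev < n and arr[min(prev + step, n) - 1] < target:
        prev += step              # keep hopping past blocks that are too small
    for i in range(prev, min(prev + step, n)):
        if arr[i] == target:
            return i
    return -1
```

Both assume a sorted array; jump search is mainly interesting on media where jumping backward is expensive, since unlike binary search it only ever moves forward.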
Most Binary Search Tree problems become easier when you realize one simple thing.

𝑰𝒏𝒐𝒓𝒅𝒆𝒓 𝑻𝒓𝒂𝒗𝒆𝒓𝒔𝒂𝒍 𝒐𝒇 𝒂 𝑩𝒊𝒏𝒂𝒓𝒚 𝑺𝒆𝒂𝒓𝒄𝒉 𝑻𝒓𝒆𝒆 𝒘𝒊𝒍𝒍 𝒂𝒍𝒘𝒂𝒚𝒔 𝒓𝒆𝒔𝒖𝒍𝒕 𝒊𝒏 𝒂 𝒔𝒐𝒓𝒕𝒆𝒅 𝒔𝒆𝒒𝒖𝒆𝒏𝒄𝒆.

While solving a number of Binary Search Tree problems recently, I noticed this concept coming up again and again. Once you realize it, the problems become much easier to approach. For instance:
• Finding the Kth smallest element in a Binary Search Tree
• Checking whether a given Binary Tree is a Binary Search Tree or not
• Finding the predecessor/successor of a given node in a Binary Search Tree
• Converting a Binary Search Tree into a sorted sequence

Rather than treating these as separate problems, this one property gives a common way to approach them all.

This reminds me of a lesson I learned while doing DSA:

𝑺𝒐𝒎𝒆𝒕𝒊𝒎𝒆𝒔 𝒕𝒉𝒆 𝒔𝒐𝒍𝒖𝒕𝒊𝒐𝒏 𝒕𝒐 𝒂 𝒑𝒓𝒐𝒃𝒍𝒆𝒎 𝒊𝒔 𝒏𝒐𝒕 𝒘𝒓𝒊𝒕𝒊𝒏𝒈 𝒎𝒐𝒓𝒆 𝒄𝒐𝒅𝒆, 𝒃𝒖𝒕 𝒖𝒏𝒅𝒆𝒓𝒔𝒕𝒂𝒏𝒅𝒊𝒏𝒈 𝒕𝒉𝒆 𝒑𝒓𝒐𝒑𝒆𝒓𝒕𝒊𝒆𝒔 𝒐𝒇 𝒕𝒉𝒆 𝒅𝒂𝒕𝒂 𝒔𝒕𝒓𝒖𝒄𝒕𝒖𝒓𝒆 𝒅𝒆𝒆𝒑𝒍𝒚.

#DSA #learning #BinarySearchTree #SoftwareEngineering #BuildInPublic #insights
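The inorder property makes two of the listed problems nearly one-liners. A minimal Python sketch (a recursive version for clarity; iterative inorder with early exit avoids building the full list):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(root):
    """Inorder traversal: left subtree, node, right subtree.
    On a BST this yields the values in sorted order."""
    if root is None:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

def kth_smallest(root, k):
    """Kth smallest element: just the kth value of the inorder sequence."""
    return inorder(root)[k - 1]

def is_bst(root):
    """A binary tree is a BST iff its inorder sequence is strictly increasing."""
    seq = inorder(root)
    return all(a < b for a, b in zip(seq, seq[1:]))
```

Predecessor/successor queries reduce to the same idea: they are simply the neighbours of a value in the inorder sequence.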
✅ Day 53/100 – Data Structures & Algorithms Consistency Challenge

Deleting the first or last node is easy. Deleting the *right* node requires precision. Today was about controlled pointer manipulation.

📌 Today’s Topic:
• Linked List targeted deletion

🧩 Problems I Solved:
1. Delete the Kth Node in a Linked List
2. Delete Node with Given Value X

📘 What I Learned:
• Maintaining a reference to the previous node is crucial
• Edge cases (k = 1, empty list, value not found) must be handled carefully
• Pointer reassignment must happen before losing access to nodes
• Traversal + controlled update ensures structural integrity

🌍 Real-World / Industry Perspective:
Target-based deletion reflects real-world operations such as:
• Removing a specific task from a job queue
• Deleting a user session by ID
• Removing a record from dynamic datasets
In many backend systems, operations are value-driven rather than position-driven — which makes this pattern highly practical.

⏱️ Efficiency Note:
• Time Complexity: O(n)
• Space Complexity: O(1)

💡 Key Insight:
In linked lists, logic errors rarely appear in traversal — they appear in pointer updates.

Day 53 completed. Stronger fundamentals every day.

#DSA #100DaysOfCode #LinkedList #LearningInPublic #ProblemSolving #SoftwareEngineering
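Both deletions above hinge on keeping a reference to the previous node and rewiring before the target is dropped. A minimal Python sketch (names are mine) covering the edge cases the post lists:

```python
class Node:
    """A singly linked list node."""
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def delete_kth(head, k):
    """Delete the kth node (1-indexed). k = 1 (the head) and k past
    the end are the edge cases."""
    if head is None:
        return None
    if k == 1:
        return head.next              # new head
    prev, cur = head, head.next
    while cur and k > 2:
        prev, cur = cur, cur.next
        k -= 1
    if cur:
        prev.next = cur.next          # rewire before the node is lost
    return head

def delete_value(head, x):
    """Delete the first node holding value x; unchanged if x is absent."""
    if head is None:
        return None
    if head.val == x:
        return head.next
    prev = head
    while prev.next and prev.next.val != x:
        prev = prev.next
    if prev.next:
        prev.next = prev.next.next
    return head
```

Returning the head from each function keeps head deletion uniform with the general case, which is the usual way to avoid special-casing at the call site.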