New York, New York, United States
2K followers
500+ connections
Other similar profiles
-
Erin O'Donnell, CBP, CMM, DES
Association for Cannabis Banking
7K followers • Placitas, NM
Explore more posts
-
Josh Pollara
Stategraph • 8K followers
A competitor just told our lead they charge 10x for drift detection. 10x. For a cron job that runs terraform plan.

That's not a pricing model. That's contempt for your operational safety.

Drift detection is 50 lines of code. Schedule. Compare. Alert. Your intern built this last week. But the moment you need it reliable, documented, at scale, suddenly it's "enterprise functionality."

They're charging you $50 for airport WiFi because they know you're trapped. They KNOW drift kills. They KNOW you need this. And instead of making it table stakes, like anyone who actually cared about your infrastructure would, they're betting you're too scared of compliance to push back.

We've normalized vendors treating our basic safety as a luxury good. Stop accepting this.
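The post's "schedule, compare, alert" claim can be sketched directly. A minimal version, assuming `terraform` is on PATH and using a hypothetical `SLACK_WEBHOOK_URL` environment variable as the alert sink; the scheduling itself would live outside this script, e.g. in cron or CI:

```python
# Minimal drift detector: compare desired state (terraform plan) against
# reality and alert on divergence.
import json
import os
import subprocess
import urllib.request

CLEAN, DRIFT, ERROR = "clean", "drift", "error"

def classify_plan_exit(code: int) -> str:
    # `terraform plan -detailed-exitcode`: 0 = no changes, 2 = changes pending
    return {0: CLEAN, 2: DRIFT}.get(code, ERROR)

def check_drift(workdir: str = ".") -> str:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    return classify_plan_exit(result.returncode)

def alert(message: str) -> None:
    # Post to a Slack-style webhook; any alerting sink works here.
    payload = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"], data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Wire `check_drift` into a scheduled job and call `alert` when it returns `"drift"`; the part the post waves away (reliability, docs, scale) is everything around these few lines.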
47
8 Comments -
Pritam Kudale
Vizuara • 6K followers
Claude Code with Opus 4.6 is an incredibly powerful tool, but building enterprise-grade applications requires more than vibe coding. It demands a thoughtfully engineered orchestration layer.

To address this, I designed a structured orchestration framework built on Claude Skills, Sub-agents, and a coordinated agent team that guides Claude Code through a production-ready development workflow.

𝗦𝗸𝗶𝗹𝗹𝘀: add-feature · build-app · create-repo · fix-bug · implement-issue · merge-phase · plan-project · resume-build · review-pr · run-tests

𝗔𝗴𝗲𝗻𝘁𝘀: conflict-resolver · implementer · orchestrator · planner · reviewer · tester

𝗔𝗴𝗲𝗻𝘁 𝗧𝗲𝗮𝗺: dev-team: a collaborative unit combining implementer, reviewer, and tester to ensure iterative delivery with quality gates.

𝗠𝗖𝗣 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻: GitHub

If you’d like to explore the framework in action, check out the repository: https://lnkd.in/drmPqjEx

At @FirstPrincipleLabs.ai, our focus is on building AI applications that operate reliably in real production environments. To explore some of the projects we’ve delivered: https://lnkd.in/dJguW2xF

I’d love to hear your thoughts and feedback on structured orchestration for AI-driven development.
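As a rough illustration of one layer of such a framework, here is a hypothetical skill-to-agent dispatch table using the names from the post. The pairings are my own guesses for illustration; the repository's actual wiring may differ:

```python
# Hypothetical dispatch table: which agent owns which skill invocation.
AGENT_FOR_SKILL = {
    "plan-project": "planner",
    "build-app": "orchestrator",
    "fix-bug": "implementer",
    "review-pr": "reviewer",
    "run-tests": "tester",
    "merge-phase": "conflict-resolver",
}

def dispatch(skill: str) -> str:
    # Route a skill to its owning agent; fail loudly on unknown skills.
    try:
        return AGENT_FOR_SKILL[skill]
    except KeyError:
        raise ValueError(f"unknown skill: {skill!r}") from None
```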
23
-
Juraci Paixão Kröhling
OllyGarden • 5K followers
I have been collecting OpenTelemetry pain points for the past few months. What is yours?

As a maintainer, I see patterns in the issues, Slack conversations, and conference hallway chats. Some problems come up constantly:

* SDK maturity across languages. Some SDKs are stable, others are not. Some implement the full specification, others cover only parts. Choosing a language often means accepting different levels of #OpenTelemetry support.
* Collector configuration complexity. YAML that grows into hundreds of lines. Pipelines that become difficult to reason about. Debug sessions that take hours when something breaks silently.
* Semantic conventions churn. You adopt a convention, then it changes. You update your dashboards, then it changes again.

But the less obvious ones interest me more:

* Multi-signal correlation. You have traces, metrics, and logs, but connecting them in practice remains harder than it should be. The promise of unified observability meets the reality of fragmented tooling.
* Sampling consistency. Head sampling, tail sampling, probabilistic sampling. Getting consistent behavior across services in a distributed system is its own distributed systems problem.

I am genuinely curious: what is your biggest OpenTelemetry challenge right now? Drop it in the comments. Not the easy ones, the ones that actually slow you down.
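The sampling-consistency point has a neat core idea worth sketching: if every service derives the decision deterministically from the shared trace ID, all hops of a trace agree without coordination. A toy version of trace-ID ratio sampling (illustrative, not the OpenTelemetry SDK's exact algorithm):

```python
# Toy trace-ID ratio sampling: every service applies the same pure
# function to the propagated trace ID, so the head-sampling decision is
# consistent across the whole distributed trace.

def should_sample(trace_id: int, ratio: float) -> bool:
    # Treat the low 64 bits of the trace ID as a uniform random value
    # and sample when it falls below ratio * 2^64.
    bound = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < bound
```

Because the decision depends only on the trace ID, a downstream service that receives a propagated ID makes the same call as the service that created it, which is the property consistent head sampling relies on.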
55
40 Comments -
Giuseppe Stuto
Offscript • 9K followers
🚀 The “cracked engineers” craze isn’t a moment; it’s a *structural shift.*

I’ve been spending a lot of late nights with hacker houses across the Northeast → Bay Area, and the level of builder brilliance (AND ability!) in these rooms is undeniable. The magic is very real, and it’s changing how I calibrate signal, community, and where the next generation of category-defining teams will come from.

Here’s why I think the hacker house movement is quietly but quickly taking the innovation economy by storm:

🏠 Co-living collapses the “startup distance.” Standups in the kitchen. Pair debugging after dinner. Shipping loops that would normally take weeks… happen in days.
🧠 Talent wants signal-rich rooms (not big noisy networks). Younger founders are opting into smaller, more intense circles where it’s obvious who’s building and who’s just talking.
🧲 Hacker houses are the “intimacy layer” on top of accelerators. Accelerators compress time on company-building; houses compress time on craft, taste, and tempo, the stuff that makes engineers “cracked.”
🕯️ Culture matters: hacker houses make “building” the default. When everyone around you is shipping, you ship. When everyone around you is learning, you learn. That’s the compounding.

My take: the next legendary companies will look less like “networked on Twitter” and more like “forged in a house where everyone builds.”
44
2 Comments -
Paul Byrne
Razoyo • 5K followers
I built the same app using Replit, Lovable, DataButton, Phoenix.new, and Cursor to see how these AI coding tools stack up. Each one had its strengths. Some handled auth well, others had solid UI or setup. But when the project got more complex, most of them started to struggle. They're great for quick demos or prototypes, but not quite ready for production. If you're testing these tools too, I'd love to compare notes. #AItools #AppDevelopment #NoCode #DevTools #TechExperiment #Razoyo
14
3 Comments -
Yan Cui
THEBURNINGMONK Limited • 50K followers
You can now get CloudFront and a bunch of associated services (WAF, Route53, CloudFront Functions, logging, etc.) for a flat monthly fee. This sounds tempting, but there are some nuances you must consider.

Announcement post 👉 https://lnkd.in/erVpkJsf

Pay-per-use pricing removes waste, but teams often dislike it because it’s difficult to budget and it can produce sudden cost spikes, whether from a successful launch or a DoS attack.

However, the phrase "serverless edge compute" in the announcement is misleading, because Lambda@Edge functions are not supported on the flat-rate plan; only CloudFront Functions are.

There’s also this line: "You may experience reduced performance if you exceed your allowance, but you won’t incur overage charges"

And the developer guide (linked below) says: "AWS may take appropriate action, which may include reducing your performance (for example, throttling) or requiring a change to your pricing structure."

Unsupported features list: https://lnkd.in/eqa6WJRD

So when you go over the usage quota, your service can slow down or become unavailable. That means the trade-off looks like this:

* Below quota -> you waste money.
* Above quota -> you risk throttling.

This is fine for personal projects or non-critical apps where downtime is not costly and you care more about avoiding a nasty billing surprise. But it's a bad deal if availability and elasticity matter more to you than budget certainty, which describes most business-critical systems.

In effect, this is the closest thing AWS has offered to a direct spending cap. Service quotas have always acted as an indirect cap, but they’re a rough one because they cover concurrency and request rate, not monthly usage.

I'm curious: would you consider this for a production workload?
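The waste-vs-throttling trade-off can be made concrete with a toy model. Every price and quota below is hypothetical, not an actual AWS number:

```python
# Toy model of the flat-fee trade-off: below quota you overpay relative
# to pay-per-use; above quota you keep the flat price but risk throttling.

def pay_per_use_cost(requests: int, price_per_million: float) -> float:
    return requests / 1_000_000 * price_per_million

def flat_plan_outcome(requests: int, quota: int, flat_fee: float) -> tuple[float, bool]:
    # Flat plan: cost is capped at the fee, but exceeding the quota
    # exposes you to reduced performance instead of overage charges.
    return flat_fee, requests > quota

def wasted_spend(requests: int, quota: int, flat_fee: float,
                 price_per_million: float) -> float:
    # Below quota, "waste" is the gap between the flat fee and what the
    # same traffic would have cost pay-per-use.
    usage_cost = pay_per_use_cost(min(requests, quota), price_per_million)
    return max(0.0, flat_fee - usage_cost)
```

Running the numbers either way shows why this only suits workloads where a billing surprise hurts more than a slowdown: the flat plan never costs more than the fee, but it also never gives capacity back when you exceed the quota.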
133
19 Comments -
Ivan Lee
Datasaur • 11K followers
A couple of months ago I posted that there are two very different types of vibe coding: coding an app from scratch vs. assisting professional engineers writing production-level code. This ambiguity was causing a lot of confusion.

AWS's new Kiro agent introduces the idea of spec-driven development: "By using specs, Kiro works alongside you to define requirements, system design, and tasks to be implemented before writing any code. This approach explicitly documents the reasoning and implementation decisions, so Kiro can implement more complex tasks in fewer shots."

I think this could start truly splitting the space along a meaningful difference in use cases. Looking forward to trying this out! https://kiro.dev/
27
3 Comments -
Daria Soboleva
Cerebras Systems • 2K followers
One of my favorite moments from a recent TNG Technology Consulting Big TechDay talk was this question during Q&A: "Do MoE experts actually specialize?"

As a field, we have spent most of our effort on optimizing load balancing. But we forgot MoE's original premise: experts need to specialize so they don't introduce redundancy. As a community, we haven't even defined what specialization means. This is both a limitation and a challenge.

Some people think domain specialization should emerge; others think that experts should learn how to use different tools. I think specialization is something lower-level: experts should learn separate mathematical functions.

As a community, we need to agree on this definition and optimize for it. Otherwise we introduce redundancy that deployment teams pay for.

Full talk: https://lnkd.in/eREQxqJa
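The "experts as separate mathematical functions" view can be shown with a deliberately literal toy: a hand-written router sending each input to one of several genuinely different functions. Real MoE experts are learned feed-forward blocks with a jointly trained gate; this sketch only makes the proposed definition of specialization concrete:

```python
# Deliberately literal toy MoE: each "expert" is a distinct mathematical
# function, and the gate partitions the input space between them.
import math

EXPERTS = [math.sin, math.exp, abs]  # three non-redundant functions

def gate(x: float) -> int:
    # Stand-in for a learned router: partition the input space.
    if x < 0:
        return 2   # negatives -> abs
    if x < 1:
        return 0   # small positives -> sin
    return 1       # everything else -> exp

def moe(x: float) -> float:
    # Top-1 routing: evaluate only the selected expert.
    return EXPERTS[gate(x)](x)
```

Under this definition, redundancy becomes measurable: if two experts compute near-identical outputs over the gate's partition of the input space, one of them is wasted capacity that deployment teams pay for.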
83
2 Comments -
Ozan Unlu
Edge Delta • 19K followers
There's been a lot of talk recently about platforms like Datadog, Splunk, New Relic, and others making it difficult to understand, at a low level, what is contributing to your bill. I don't believe it's malicious; there just isn't enough business justification for those companies to give you that clarity or control.

Enter Edge Delta, with telemetry pipelines designed to give you visibility into all your data and where it ends up, across every view and every tool you use. If you're going to test out one of these observability/security platforms, do yourself a favor and put telemetry pipelines in place first.
41
-
Marcos Heidemann
symphony.is • 13K followers
While everyone was talking about Opus 4.6, for me the true killer feature of the recent Claude Code updates is agent teams. It's something I've been trying to achieve through customization for a while: custom agents, orchestration scripts, specific CLAUDE.md instructions to coordinate work, all with some degree of success. But what Anthropic shipped natively is on a WHOLE different level.

What makes this stand out is the inter-agent communication. We're not talking about simple fan-out/fan-in where you spawn workers and collect results. These agents talk to each other: peer-to-peer messaging, dependency-aware task graphs that auto-unblock, agents that self-claim work from a shared task list. The lead can even enter Delegate Mode, where it does ZERO implementation, only coordination.

The image below is from one of my setups: a team manager orchestrating a librarian agent, a PhD lead, and 5 research sub-tasks with blocking dependencies. The librarian unblocks the research tasks, and the PhD lead aggregates everything. All coordinated autonomously.

With this, a whole new world of orchestration has opened up. Distributing work across agents is the "easy" part: you break down tasks, assign owners, define dependencies. The HARD part, and what stands out most now, is aggregation. How do you take the output of 5 parallel agents, each with their own context window, and synthesize it into something coherent? That's the new skill.

Anthropic themselves used 16 parallel agents to build a 100,000-line Rust C compiler that compiles the Linux 6.9 kernel. No human actively coding. ~$20,000 in API costs over ~2,000 sessions.

We went from pair-programming with AI to managing AI engineering teams. The skills that transfer are the ones from engineering management: task decomposition, context management, knowing when to intervene vs. letting the team self-organize. This is a new paradigm, and I think it opens up several possibilities we haven't fully explored yet.
ref.: https://lnkd.in/dVCe344z
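The "dependency-aware task graphs that auto-unblock" pattern is essentially topological scheduling. A toy sketch of the pattern (my own illustration, not Anthropic's implementation), mirroring the librarian/research/lead setup from the post:

```python
# Toy dependency-aware task graph with auto-unblocking (Kahn-style
# topological scheduling): a task becomes runnable the moment every
# task it is blocked on has finished.
from collections import deque

def run_order(deps: dict[str, set[str]]) -> list[str]:
    # deps maps each task to the set of tasks it is blocked on.
    remaining = {task: set(blockers) for task, blockers in deps.items()}
    ready = deque(sorted(t for t, blockers in remaining.items() if not blockers))
    order = []
    while ready:
        done = ready.popleft()
        order.append(done)
        for task, blockers in remaining.items():
            if done in blockers:
                blockers.remove(done)
                if not blockers:       # last blocker finished: auto-unblock
                    ready.append(task)
    if len(order) != len(remaining):
        raise ValueError("cycle detected: some tasks can never unblock")
    return order
```

With a graph like `{"librarian": set(), "r1": {"librarian"}, ..., "lead": {"r1", ...}}`, the librarian runs first, the research tasks unblock together, and the lead runs last, which is exactly where the aggregation bottleneck the post calls the hard part shows up.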
68
12 Comments -
Jean-Paul Smets
5K followers
Unlike what many believe, Docker containers are not portable. Full explanation here: https://lnkd.in/egQwi3mD

Bonus: Nix packages are not portable either. But with binary packages built for each (glibc, kernel) version pair, they could become portable. Full explanation here: https://lnkd.in/ePjQyp7q
27
26 Comments -
Michael Ritchie
Definite • 7K followers
Talked to a founder on Monday who swore Codex was better than Claude. They had 11 MCPs configured in Claude that chewed up over 50% of the context window. Their Codex setup had 0 MCPs. Their Claude config was like telling someone about railroads, Japan, and Apple, then asking them a calculus problem.
7
2 Comments