New York, New York, United States
2K followers
500+ connections
Experience & Education
- Bloomberg LP, ******** ********
- ********** ** ******** **********, ****** ** ******* * ** ******** ******* 3.95
- ****** **********, ******** ** *********** * ** ******** ******** ***********
Explore more posts
Sita Lakshmi Sangameswaran
Google • 4K followers
✨ Feeling refreshed and energized after a vacation, and I'm excited to share two in-depth technical sessions I had the pleasure of recording. If you're building with LLMs and AI agents, these deep-dives are for you.

🤖 1) Building with ADK & Vector Search, with Kaz Sato
👉 Key Takeaways: How the Agent Development Kit (ADK) works with Vector Search to create sophisticated, production-ready AI systems.
👉 Beyond Basic RAG: We move past simple Q&A to discuss architectural patterns for building powerful semantic search and Retrieval-Augmented Generation (RAG) pipelines.
👉 Practical Implementation: A look at the code and components needed to bring these advanced search agents to life.
🔗 Watch here: https://lnkd.in/gjKvq-MM

📈 2) Mastering AgentOps, with Dr. Sokratis Kartakis
👉 The "Day Two" Problem: Why traditional DevOps and observability tools fall short for monitoring complex, non-deterministic AI agents.
👉 Metrics That Matter: The key metrics for tracking agent performance, cost, and reliability to ensure your agents are effective and efficient.
👉 A Framework for Reliability: Dr. Kartakis shares a practical framework for debugging, evaluating, and continuously improving your agents post-deployment.
🔗 Watch here: https://lnkd.in/gpdC_Jk9

Which topic is more critical for you right now: building new agent capabilities or managing them in production? Let me know in the comments!

#AI #GenerativeAI #LLM #VectorSearch #RAG #AgentOps #MLOps
freeCodeCamp
2M followers
If you're a Senior Engineer looking to move up, the next role will likely be a Staff Engineer. And in this guide, Shruti shares tips from her own experience of getting promoted to Staff Engineer at PayPal and Slack. She talks about what Staff Engineers do (and how it's different from Seniors), why you might not be getting promoted, and how to take that next step. https://lnkd.in/g58dnFEG
Artem Mirzabekian
Sovcombank • 7K followers
When math draws a hard line for large language models

A recent research paper by Vishal Sikka and his son Varin Sikka takes a very different approach to evaluating large language models. Instead of benchmarks and demos, it uses mathematics.

The authors show that LLMs operate within a fixed computational limit. Once a task requires more computation than the model can perform during inference, two things happen: the model cannot reliably solve the task, and it cannot verify correctness either. Incorrect output becomes unavoidable.

This applies directly to many "agentic AI" scenarios: long planning chains, multi-step decision making, global optimization, and autonomous workflows.

The paper does not argue that LLMs are useless. Quite the opposite: within their computational domain, they are incredibly powerful tools. What it shows is that some classes of problems sit permanently beyond what transformer-based models can handle, no matter how much data or training we add.

AI can become a phenomenal accelerator for code, analysis, automation, and knowledge work. At the same time, treating it as a universal reasoning engine or a fully autonomous problem solver creates real risk. The research is a good reminder that understanding the limits of a tool is just as important as admiring its strengths.

You can read about it in more detail here: https://lnkd.in/dqQBQZxd
Larry Lansing
Waymo • 1K followers
In lieu of a resume, I present my changelist to add zero-copy "buffer donation" to Tensorflow's tstring class via tstring::assign_as_shared_view(): https://lnkd.in/dDaSfkdV This removes the need for some memcpy calls in the critical path of training data flow. Now you can "donate" any string-like type (e.g. std::string, Cord) to a tstring for use as its memory buffer, with lifetime managed by the tstring itself. This is essentially a tstring::view, but with ownership transfer and reference counting. Example usage: https://lnkd.in/dcYnf6am I've made all manner of neat performance optimizations in my career. This is a rare opportunity to share one externally. Enjoy. :)
Vishesh Sharma
Google • 11K followers
In his latest video, Nithish Kannen breaks down the AI Residency program he was a part of.

What he covers:
1. Mentorship: How residency programs provide access to frontier AI infrastructure and mentorship from world-class researchers.
2. Resume: Nithish shares the exact resume that got him in.
3. Statement of Purpose: How to write a Statement of Purpose that signals research maturity by discussing failures, not just successes.
4. The Interview: The interview process for AI residencies at top frontier labs.

If you are a student or professional looking for a career in AI Research, do check out the video. Google AI Residency application and YouTube video link in the first comment. 🖇️

#AIResidency #PreDoc #AIResearch #MachineLearning #Google #CareerGrowth #Engineering #DeepMind #Tech #Fundamentals
Ankit Singh
FITTR • 400 followers
🦺 🛟 Agent Safety > Model Safety

LLMs didn't break your app. Agents can. As we wire LLMs to tools, browsers, payments & code, the risk shifts from bad text to bad actions.

Top failure modes (seen in the wild & in papers):
🚩 Prompt/content injection -> tool misuse. Malicious web pages or data instruct agents to steal secrets or make unintended API calls. This is now a primary risk for browsing and MCP-style tool agents.
🚩 Over-permissioned tools & long-lived tokens. Agents get "god mode" scopes, so a single injection becomes account takeover or irreversible ops. Security notes for MCP emphasise least privilege and short-lived auth.
🚩 Unsafe web autonomy. Benchmarks show web agents will attempt harmful tasks unless constrained (posting misinformation, illicit sales, etc.). You must measure misuse, not just assume guardrails.
🚩 Supply-chain & retrieval poisoning. Agents trust plugins, third-party tools & indexed data that an attacker can taint. OWASP's newest GenAI Top 10 calls this out explicitly.
🚩 Process safety gaps. NIST's GenAI Profile (AI 600-1) warns that organizations ship agents without role clarity, human-in-the-loop review, or incident playbooks.

✅ Model safety reduces bad text. Agent safety prevents bad state. Treat the agent like a junior SRE: narrow permissions, audited actions, and clear escalation paths.

If you're building agents, what's the one control you won't ship without? Also drop a comment if you'd like me to share a follow-up post on how to tackle each of these risks.

#AgenticAI #AISafety #AIAgents #GenAI #CyberSecurity #MCP #OWASP #NIST #AITrust #ResponsibleAI #AIForBusiness
Henry M.
Epic • 2K followers
Waymo is reportedly finalizing a monumental $16 billion funding round that would value the company at approximately $110 billion. This would be more than double its valuation last year and one of the largest financings in autonomous vehicle history.

This remarkable vote of confidence reflects growing belief in autonomous mobility's potential to reshape how people move through cities. Since launching commercial robotaxi service, Waymo has steadily expanded its footprint across major U.S. markets. The company now operates thousands of fully autonomous vehicles and delivers hundreds of thousands of paid rides each week, with annual recurring revenue recently climbing above $350 million.

The safety outcomes have also been impressive (despite some highly covered failures). Independent data shows its autonomous driving systems continue to outperform human drivers across major safety metrics and reduce collision rates in real-world deployments.

Looking ahead, this capital infusion is likely to accelerate robotaxi expansion into additional cities in the U.S. and internationally, while driving continued innovation in electric, driverless mobility. It'll be interesting to see their growth in 2026.

#Waymo #AutonomousVehicles #FutureOfMobility #TransportationInnovation #UrbanMobility
Lisa Bildsten
LinkedIn • 16K followers
Llama is officially in orbit! 🚀 We’re excited to announce that our teams at Meta have collaborated with Booz Allen Hamilton to deploy a fine-tuned version of Meta’s open-source Llama 3.2 AI model aboard the International Space Station’s National Laboratory. Together, we’re empowering astronauts to access digital tools and technical knowledge without relying on Earth-based connectivity, marking a new era for AI in space. ➡️ Discover how Space Llama is driving the future of space exploration: https://bit.ly/4dAirWA #LifeAtMeta #SpaceTech #AI #SpaceExploration #OpenSource
James Verbus
LinkedIn • 3K followers
🚀 New workshop recording: Reinforcement Learning for Orbital Transfers (Brown University Physics AI Winter School 2026)

The Brown University Department of Physics / Center for the Fundamental Physics of the Universe just posted the public recordings from their 2026 AI Winter School, including my 2.5-hour hands-on module on reinforcement learning (RL) for orbital transfers.

In the session we:
▪️ Used the Hohmann transfer as an analytic benchmark (minimum-Δv two-burn transfer under ideal assumptions)
▪️ Formulated the task as an RL problem (state / action / reward / termination)
▪️ Trained and debugged policies (discrete + continuous thrust), and analyzed classic failure modes
▪️ Compared learned trajectories vs. the analytic baseline using Δv efficiency + stability diagnostics

This work bridges physics intuition, modern RL, and the practical workflow of problem framing + debugging.

🙏 Huge thanks to the Brown organizers for inviting me for a second year in a row, especially Ian Dell'Antonio, Rick Gaitskell, Ariel Green, and Chongwen Lu. If you're curious, the recording, slides, and code (notebook) are now public (link in comments).

#ReinforcementLearning #RL #BrownUniversity #Physics #AI
Chandra Shekhar Joshi
Amazon • 27K followers
I recently helped a Senior Engineer land a critical role when the odds were stacked against him.

He is on a work visa in the US. He comes from a country where the current geopolitical situation is a little tricky. And the job market is brutal. He was under pressure. He needed a win.

When we started, his experience was solid, but his delivery was hurting him. His stories started with technology and ended with technology. That works for a junior engineer. It is a red flag for a Senior or Staff role. The impact was missing.

We had to tear down his preparation and rebuild it. We dug into his projects to find the real substance:
- How he handled disagreements with stakeholders.
- How he managed difficult customers.
- The trade-offs he made between quality and speed.
- How he justified complexity to his managers.
and many more...

We drilled his System Design (HLD) not just on "what" he built, but on the "why":
- Why that specific data store?
- How did he handle single points of failure?
- What were the architectural trade-offs?

The result? He cracked a Senior Engineer role at a healthcare startup. It isn't a FAANG company. But it is a place where he creates real impact and feels valued.

Most people would stop there. They would relax. He didn't. Two weeks into the new job, he sent me the message in the screenshot below.

"My goal is not to be promoted. It is to act and think like a tech lead in my current role."

- He isn't asking for a promotion checklist.
- He isn't asking for a roadmap.
- He isn't waiting for permission.

He wants to build the skills needed to think like a Tech Lead in his current role. He understands that the title is just a side effect of competence.

This is how you own your career. While others wait for a sign that they are ready, engineers like him take action.
- They move fast.
- They fix their gaps.
- And they focus on the next important thing.
If you are preparing for mid-senior/staff SDE, EM System Design, Behavioural interviews, and need help, DM me COACH.