We've added a new skill: Deep Learning! The Deep Learning test evaluates knowledge of designing, training, and deploying neural network–based models to solve complex pattern-recognition problems across structured data, images, video, and text. Check it out here: https://lnkd.in/e3zEbTRV
Deep Learning Skill Added: Neural Network Expertise
Cursor's new Subagents and Agent Skills can turn #SoftwareDevelopment into a truly collaborative workflow between multiple specialized #AIs and humans. Instead of a single "do‑everything" assistant, you spin up focused #subagents for tasks like #refactoring, #codeReview, #testing, and #deployment, each powered by reusable skills versioned in your repo. This means your #architecturePatterns, #reviewChecklists, and #deployment runbooks become living, executable documentation that every engineer and agent can consistently follow. Over time, teams can build an internal "skill stack" that encodes their best practices, so new projects and new hires instantly benefit from the same high‑quality workflows. Ref: https://lnkd.in/dqdRANew
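The "skill stack" idea above can be sketched as a registry of reusable task handlers that specialized subagents share. This is a minimal illustrative sketch only, assuming hypothetical skill names and handlers; it is not Cursor's actual Subagents/Skills format:

```python
# Hypothetical "skill stack": a versioned registry of reusable task handlers
# that focused subagents dispatch into. All names here are illustrative.

SKILLS = {
    "refactoring": lambda task: f"refactor plan for: {task}",
    "code_review": lambda task: f"review checklist applied to: {task}",
    "testing":     lambda task: f"test suite outline for: {task}",
}

def dispatch(skill_name: str, task: str) -> str:
    """Route a task to the subagent skill registered for it."""
    handler = SKILLS.get(skill_name)
    if handler is None:
        raise KeyError(f"no skill registered: {skill_name}")
    return handler(task)

print(dispatch("code_review", "payment module"))
```

The point of keeping the registry in the repo is that the same checklist runs identically for every engineer and every agent.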
Our IT Director, Jim Webster, and one of our PMs, Shawn Tatum, talked to n8n about how Field is integrating AI to streamline processes and increase responsiveness to our clients. #AI #thoughtleadership https://lnkd.in/e_39erYH
What if your dev team could collaborate more efficiently while keeping security at the core of your workflow? This post talks about building a multi-agent workflow for secure development in Cursor AI
My colleague Brett Crawley has been doing some awesome work in this space. Multi-agent workflows with parallel codegen, security scanning, performance profiling, all orchestrated autonomously in Cursor. But there's always a but: every one of those agent calls is burning through tokens. Opus, GPT-5, Codex. At scale, the cost of a single workflow run could exceed what you pay Cursor in a month. Right now VC money covers the gap. When it doesn't, the economics of AI-assisted development get a lot more interesting. Do we skip thorough testing to keep costs down? Do we fall back to offshore development? Or does Moore's Law kick in fast enough to save everyone? Nobody knows yet. But the bill is coming…
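The cost concern above is easy to make concrete with a back-of-envelope estimate. All prices and token counts below are illustrative assumptions, not real model rates:

```python
# Back-of-envelope cost estimate for a multi-agent workflow run.
# Prices and token counts are made-up assumptions for illustration only.

PRICE_PER_MTOK = {"frontier-model": 15.00, "mid-tier-model": 3.00}  # USD per 1M output tokens

def run_cost(agent_calls):
    """Sum the cost of a list of (model, output_tokens) agent calls."""
    return sum(tokens / 1_000_000 * PRICE_PER_MTOK[model]
               for model, tokens in agent_calls)

# One workflow run: codegen, security scan, profiling, each calling a model.
workflow = [("frontier-model", 80_000),
            ("mid-tier-model", 200_000),
            ("frontier-model", 40_000)]

cost = run_cost(workflow)
print(f"one run: ${cost:.2f}, 500 runs/month: ${cost * 500:.2f}")
```

Even with modest assumed per-call usage, the monthly total scales linearly with how often the workflow fires, which is exactly where subsidized pricing hides the real economics.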
Principal AppSec Engineer @ Mimecast | Author: Threat Modeling Gameplay with EoP (Packt) | Creator of SBOM-Graph | Creator of CAPEC STRIDE Mappings | OWASP Project Lead | Speaker | CISSP, CSSLP, CCSP
There is nothing quite like the soul-crushing experience of a minor SDK update taking down production. We've all been there. You bump a version from 1.2.1 → 1.2.2, thinking it's a routine patch. Instead, you spend hours chasing a ghost error because:
• A field was renamed
• A response schema shifted
• A streaming format changed
• Default retry logic was altered
And none of it was clearly communicated. SDK updates shouldn't turn engineers into detectives.
The Reality
In distributed systems — especially AI pipelines — the "small" stuff matters.
• Schema shifts — Response shapes changing mid-flight
• Silent contract breaks — Renaming fields without deprecation
• Invisible behavior changes — Timeouts, retries, or defaults quietly modified
When backward compatibility is violated, your logic isn't the problem. The tool you trusted is.
To the Maintainers
If you're building the tools we rely on:
• Treat SemVer as sacred. If it breaks the contract, it's a major version.
• Deprecate before deleting.
• Publish explicit migration notes.
• Treat response shape as part of the API surface.
Backward compatibility isn't a "nice-to-have." It's reliability engineering. Let's make boring updates boring again.
#SoftwareEngineering #SDK #BackendDevelopment #DevOps #SystemReliability #DeveloperExperience #AI
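On the consumer side, "treat response shape as part of the API surface" can be enforced defensively: validate the contract at the boundary so a renamed field fails loudly at the call site instead of surfacing hours later as a ghost error. A minimal sketch, with illustrative field names:

```python
# Fail fast when a dependency's response drifts from the schema you expect.
# Field names and types here are illustrative, not from any real SDK.

REQUIRED_FIELDS = {"id": str, "status": str, "retries": int}

def validate_response(payload: dict) -> dict:
    """Raise immediately on a missing or retyped field (a contract break)."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"contract break: missing field '{field}'")
        if not isinstance(payload[field], expected_type):
            raise TypeError(
                f"contract break: '{field}' is not {expected_type.__name__}")
    return payload

validate_response({"id": "abc", "status": "ok", "retries": 3})  # passes
```

It doesn't fix the maintainer's SemVer discipline, but it turns an hours-long ghost hunt into a one-line stack trace at the exact boundary that broke.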
Here’s the uncomfortable truth most people are about to learn the hard way. Vibe engineering is here to stay. Vibe coding made shipping cheap and fast. Prompts in. Features out. But speed without trust breaks products. The next big unlock is testing and QA. AI aware. Prompt aware. Built for scale. When code is generated fast, bugs scale faster. Security gaps hide. Logic fails quietly. Vibe engineering builds it. Vibe testing decides if it survives. If you’re learning how to test AI generated systems now, you’re early.
I've never been a programmer, but I'm always willing to learn something new. Lately I've been playing around with #Claude #Code on a use case. I am incredibly amazed at what it is able to do, but as the code is flying past the screen I have no real idea what it's actually doing. Which I understand; that's part of the point, to make creating and developing easier. However, when something goes wrong, I am completely relying on #AI to troubleshoot. Earlier this week, I listened to The Art of Network Engineering, in which Andy Lapteff 🛠️💬 was interviewing Erika Dietrick about all things dev and how to help #network engineers get comfortable with coding and automation. Erika noted that even in the world of AI tools, understanding fundamentals is still important, so that you can tell, for instance, what went wrong when something breaks and why. Erika also brought up an interesting point. For her, building is more fun and exciting than troubleshooting. If AI builds all of your code and you don't understand the basics, then you're letting AI do all of the fun stuff and you're stuck only troubleshooting. That was an interesting concept. When learning new things with AI, I try not to just accept what it gives me but to keep asking questions. I force myself to understand, at least at a high level, the what and the why. Great episode, you two! #code #coding #dev #development #automation #networking #AONE Now, just for old time's sake, here's a picture of Andy and me back in the day. Even when I had hair, his was still better.
Stating the obvious. If you are and/or want to be at the top of your game, GenAI-based tools and technologies are a killer addition to your arsenal. Use the time, effort, and mental bandwidth saved to do things that you couldn't do in the past instead of staying in your comfort zone and slacking. Go learn and practice that work-related skill you always wanted to acquire, try a new technology, go learn piano, go for a walk, heck, learn to vibe code if you are not a software professional. GenAI has made possible many things that you couldn't try in the past. Usual caveat applies: make sure you are acting sensibly while using it...
Everyone is complaining about "AI Hype". 😒 I think what's happening is a lot of software engineers experienced a legitimately HUGE boost in productivity in recent months thanks to Opus 4.5, MCP servers, persistent context layers... 🚀🤯 Then, (myself included) imagined a glorious future that turns out to be kind of a pain in the ass to actually realize and maintain... ✨🦾🤖✨ Especially when you have real work to do, too... 👩💻 And the outcomes aren't deterministic... 🤷♀️ And measuring whether changes are actually improving velocity & quality is new and unfamiliar... 📈 And keeping up with increased velocity (code reviews, QA, deployment, feedback) exposes new bottlenecks that didn't previously exist... 😮💨 I'm constantly finding myself needing to decide whether I stop and improve my agentic systems or log an issue to return to later so I can stay focused on my actual work. 🤔 It's the most ENGINEERING INCEPTION I've experienced in my whole career! 😵💫 So I was pretty hyped! Still am. 🤩 AND the biggest gains I experienced initially now feel like boring table stakes again. 😴 So I don't think people are hyping AI maliciously. I suppose some might, but it's a really frothy (am I using that term correctly?) environment. It's exciting and confusing and boring and exciting. I don't have a CTA for this post... Just water-cooler talk, I guess. ✌️💦 Maybe join me in the comments if you like chatting about this stuff 👋😅
Growth has a face. We’re living through one of the biggest shifts in software engineering history. From testing applications ➡️ To testing intelligence. AI is no longer just a tool in the stack. It’s becoming the system we must validate, govern, and trust. As testing evolves, so must we. The future belongs to engineers who: • Understand AI behavior, not just APIs • Validate intelligence, not just workflows • Think in systems, not scripts I’m deeply passionate about shaping this shift — where quality meets intelligence. We’re not just building software anymore. We’re building trust in AI. Let’s build it responsibly. #LearnWithRohit #AITesting #QualityEngineering #AgenticAI #SoftwareTesting #FutureOfQA