Brantford, Ontario, Canada
10K followers
500+ connections
Courses by Dave
-
API Test Automation with SoapUI (1h 59m)
By: Dave Westerveld
76,405 viewers
Activity
-
Dave Westerveld shared this: Check out my interview. All the fun of API testing, with of course a smattering of my thoughts on AI.
Shared post: Deep Engineering #2: Dave Westerveld on Scalable API Testing and Orchestration
Postman’s 2024 State of the API report shows 74% of teams now follow an API-first model. But with the rise of AI agents, gRPC, and GraphQL, testing strategy—not just tooling—is what separates good teams from great ones. In this issue, we speak with Dave Westerveld—testing expert and author of API Testing and Development with Postman—on scaling quality with clarity, contract-first thinking, and system-aware test design. We also unpack:
✅ Postbot and the limits of AI-generated tests
✅ CI-driven contract enforcement
✅ Test parallelization and orchestration trade-offs
✅ Emerging tools like Bruno for Git-native workflows
📖 Read the full issue here: https://lnkd.in/gnxSBbqH
📬 Subscribe to receive future issues in your inbox: https://lnkd.in/gGTYT8TJ
#APItesting #DevTools #SoftwareQuality #Postman #GraphQL #gRPC #DeepEngineering #LLMops
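The CI-driven contract enforcement mentioned in the issue above can be sketched in a few lines. This is a minimal illustration, not Postman's or Dave's actual approach: the contract format, field names, and check logic are all assumptions made up for the example.

```python
# A "contract" declared up front: each field name maps to the type
# consumers expect. In a real pipeline this would live in the repo
# and be enforced in CI before every deploy.
USER_CONTRACT = {
    "id": int,
    "email": str,
    "active": bool,
}

def check_contract(payload: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (empty means the payload conforms)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors

# A conforming response passes; a drifted one is flagged before release.
good = {"id": 1, "email": "a@example.com", "active": True}
bad = {"id": "1", "email": "a@example.com"}  # id became a string, active dropped

print(check_contract(good, USER_CONTRACT))  # []
print(check_contract(bad, USER_CONTRACT))
```

In a real setup the same check would run against recorded or live responses in CI, failing the build on any drift, which is the "contract-first" point the interview makes.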
-
Dave Westerveld shared this: I was prepping for the Ministry of Testing #APIChallenge and recorded an exploratory testing session while I tried to figure out how the API worked. I made a brand new YouTube channel just to share it. Let me know what you think! https://lnkd.in/ePdSgDXK via @YouTube
-
Dave Westerveld shared this: Recently, BugRaptors Project Manager - QA, Rajeev V, interviewed Mr. Dave Westerveld, Senior Test Developer working for D2L. We were glad to have Dave on our #QATalks, where he shared exciting and imperative insights on the latest trends in software testing; he also talked about testing methodologies, testing tools, and myths about automation testing. Moreover, he shared ways to optimize the costs associated with software testing automation. Read all these interesting insights on testing shared by Dave here: https://bit.ly/3mzC0FP Hope to have another session soon with you!
#softwaretesting #softwaretestingcompany #qualitytestingservices #BugRaptors #QASolutions #QAServices #techtalks #technology #serviceprovider #automationtesting #softwaretestingtrends #automationanywhere #testautomation #businesssolutions #startup #BugRaptorsQARevolution #Don’tLetDownOnQuality #BeWithBugRaptors #QualityRevolution #WeAreRaptors #BugRaptorsIndia #QualityMatters
-
Dave Westerveld shared this: Ever feel like your test automation is a waste of time? Maybe you're trying to solve the wrong problem! Check out my latest post on the TestProject blog: #testautomation https://lnkd.in/ezhmGqK
-
Dave Westerveld shared this: I've been flattered and blown away by the book reviews that have come in so far. Writing a book is a lot of work, but being able to help people makes it all worth it. Thanks for the kind words, Larry!
Shared post: I've enjoyed my time with a great new book, API Testing and Development with Postman, by Dave Westerveld. Please read my review on Amazon: https://lnkd.in/dYfu6tQ #testing #packt #development #apitesting
-
Dave Westerveld shared this: Queuing Theory and Testing
-
Dave Westerveld liked this: Community Partner Spotlight: Whetstone High School is honored to have Mr. Damian Synadinos as our Family Ambassador. He mentors our CTE students regularly in our Business Foundations classes as they prepare for the Business Professionals of America competitions. Mr. Synadinos travels the world doing motivational speaking and owns the company Ineffable Solutions. https://lnkd.in/gihZv9vA Mr. Synadinos is a valued member of the Whetstone community! THANK YOU! #CTEinCCS #CCSCTEMonth #CTEMonth #LeadersGrownHere #IneffableSolutions
-
Dave Westerveld liked this: The Journey is the Reward! 6 years ago today I signed the letter of intent to join forces with Terminus. Last week a group of SigStars came from all over the country for the Sigstr Reunion in Indy. The Reward? Seeing everyone's personal and professional success! Taking what they learned at Sigstr and growing into bona fide HR, PM, Eng, CS, Marketing and Sales leaders and professionals in SaaS. A toast to the team and our time at Sigstr. Sláinte! PS - A special thank you to Justin Keller, Emily Wolfington SHRM-SCP and Kevin Vanes for helping to organize, plan and execute this event!
-
Dave Westerveld reacted to this: I guess my time as a semi-pro walker is at a (temporary?) end. On Saturday, September 13, I touched the northern terminus of the Pacific Crest Trail, ending my 2,655-mile walk that began at the Mexican border on April 20. I'm going to need some time to readjust to civilization, but at some point I'd like to do a little more of what I'm best at: leading highly engaged and productive organizations. If you know me or have worked with me, let me know if you know of any roles that may be a good fit. Also consider writing a recommendation. Being back in civilization is weird right now, and it will take me a bit to get re-engaged, but I'm looking forward to it.
-
Dave Westerveld reacted to this: Tonight our company had a scavenger hunt social event. We used an app whereby you had to walk to a specific location based on a clue, and once there it used geo-fencing to give you a task. We were instructed to be creative and use AI and so forth. So, what did I do? I informed the team that my mobile testing skills meant I could fake our location, and we could skip the walking part... perhaps we go grab a pint and 'do the scavenger hunt' from there. So that's what we did. Was it in the spirit? Absolutely! It was creative, and we bonded over a pint and my 'incredible' mobile testing skills 🤣 But it gets better. The challenges included taking pictures to prove you were at the location, so we took a picture in the pub and sent it to AI to put us in that location 😁 So did we win? No 😢 Why? Because I spelt Padel as Paddle 🤣🤣🤣
-
Dave Westerveld liked this: Today is going to be a productive day! The agenda:
1) Complete and review my slides for my upcoming webinar with Qt Quality Assurance about what's next in test automation, and designing your strategy. Register -> https://lnkd.in/eSfnS9Jq
2) Finish my Automation Strategy Canvas. This is a canvas to assess/score your current automation efforts and review them against the Automation in Testing principles and suggestions, to identify where to focus next.
3) Review and tweak my slides for my upcoming keynote at the Testit conference in Malmo - https://lnkd.in/e99kE3F6. This talk focuses on how testers/quality engineers have some of the most desirable skills to thrive in an AI context.
4) I'm collaborating with Manybrain LLC / Mailinator to produce some articles/how-to guides on email testing and how to implement it with Mailinator.
5) If I have time, I'll be doing some vibe coding on my test automation playground app in preparation for the Agile Testing Days | Nov. 24 - 27, 2025 Automation Deep Dive track. The app and code need to support workshops on targeted testing, patterns, and data generation. I've not updated it since last ATD, so there's some updating required!
How am I going to achieve this? Well, I'm working from my favourite brunch spot right now, and I'm going to move to my favourite craft beer bar after this. Working in public places really helps my focus/ADHD; I find I'm less distracted (you're posting on LinkedIn, Richard!!!). I am, but this was one of the tasks. One, to hold me accountable, but two, to promote all that I'm up to, so have that Richard!!! Have a great day!
-
Dave Westerveld liked this: I'm seeing, and being invited to, a lot of AI agent tooling demos at the moment. Tools that can automatically test your website, generate tests, automatically generate POMs, generate code, build data, refactor your tests for you, and so forth. It's really cool to see the innovation and the buzz. Our options are rapidly increasing, and it's exciting. HOWEVER. All these demos I'm seeing have one thing in common: a very simple application under test, or a simple existing code base. So what, Richard? It can lead to the illusion that these tools are all-conquering, that you could just lift and shift them into your team. We went on this EXACT same journey with test automation tools, especially in the low-code/no-code space. So keep your sceptical hat on is my warning: watch these tools, play with them, absolutely get hands-on with them. But pay close attention to the context they are being demoed in, and compare that to yours. When the code base, or in AI terms the context, gets bigger, such as a more complex application or a larger code base, the underlying AI architecture has to change to handle it. There will simply be too much context for a single agent to handle in an effective way. These tools, or at least the ones I'm seeing, aren't built to handle this yet! It's obviously a journey for them as well. Anyhow, I'm not sure what my conclusion is, just felt compelled to write something. It's very exciting times, just don't be fooled by some of these demos; be sceptical and experiment.
Experience & Education
-
Movable Ink
******** *********** *******
-
*********
******** ******** ******
-
***
****** **** *********
-
********** ** ********
******** ** ******* ***********
-
Publications
-
Scripting For Testers
LinkedIn Learning
A course on Scripting that is focused on helping testers stand out in the world of modern testing.
Other similar profiles
-
Clare So
I'm an experienced software test engineer focused on building software testing and delivery solutions for mobile apps and web applications.

Tip for reaching out: If we have not spoken to each other before, please include a note in the connection request explaining why you'd like to connect.
539 followers · Waterloo, ON
Explore more posts
-
Ben F.
Loop Software & Testing… • 17K followers
Scrolling LinkedIn this morning and saw multiple posts from QA folks saying some version of: “AI is great at generating test scaffolding and boilerplate… but the thinking hasn’t changed.” I get why these posts exist. They’re trying to defend the value of QA. They’re trying to say: “You still need us.” And they’re right, you absolutely still need strong QA thinkers. But saying the thinking hasn't changed is naive. The thinking has changed. Substantially. AI isn’t just good at scaffolding anymore. It’s increasingly good at:
- Suggesting what to test
- Mapping how pieces fit together
- Generating unit, integration, and API tests
- Surfacing edge cases you didn’t initially consider
So the job is no longer “What are the test cases for this feature?” It’s:
- What level should this be tested at: unit, integration, contract, E2E?
- How was this system actually built?
- What context does the AI need to reason correctly?
- How do I evaluate whether the AI did a good job?
- How do I prompt it to go deeper when it’s wrong?
- What documentation must exist so AI can operate effectively?
- Where are the architectural risks?
That’s not less thinking. That’s bigger thinking. Yes, you still need a deep understanding of the product. But so do developers. So does product. That alone isn’t what makes QA special. The value now is thinking like a director of the full stack. Understanding:
- System design
- Test strategy across layers
- Risk modeling
- AI strengths and failure modes
- Feedback loops
It’s no longer about writing test cases. It’s about orchestrating intelligence. If you’re in QA and acting like nothing has changed, you’re going to get left behind. If you lean into the shift? You become exponentially more valuable. The tools changed. The surface work changed. And yes, the thinking changed too. And pretending it didn’t is the most dangerous move of all.
46
17 Comments
-
Scilife
86K followers
😱 “Can QA just quickly approve this?” In their minds, QA is a straight line:
➡️ Send document
➡️ Get approval
➡️ Move on
Nice. Simple. And completely fictional. Because the real QA path isn’t a straight shot from A to B. It’s a series of well-considered steps. Before anything gets the seal of approval from QA, someone has already mentally walked through a whole chain of questions:
🤔 What is this really changing, and why now?
🤔 What does this touch — process, equipment, risk, training, validation, suppliers?
🤔 Does this clash with an existing SOP, WI, or form somewhere else in the system?
🤔 If an auditor asks “show me the story behind this,” will the evidence make sense?
We’re not “being difficult.” We’re just trying to make sure today’s quick win doesn’t become tomorrow’s deviation, recall, or regulatory slip-up. Good QA work can often appear invisible because the bad thing… never happens. No batch failure. No deviation. No “how did this get approved?” email. No warning letters. Just a quiet system that keeps functioning, safely, for patients and for the business. So when somebody says, “Can you just quickly approve this?”, we chuckle. Not because it’s really that funny. (It’s really not very funny.) But because the contrast between the straight line in their head and the reality of the carefully connected line QA is drawing is, let’s say… striking. Yes, it’s not as simple as a single straight line. But it’s this line that connects today’s shortcut to tomorrow’s consequence. It’s this line that chooses patient safety and long-term quality instead of quick wins. 💭 What’s one thing you wish people knew about what it really takes for QA to approve something? Share it below and help demystify that squiggly line 👇
1,493
60 Comments
-
Angad S.
LeanSuite • 31K followers
Your CI program isn't failing because of bad tools. It's failing because of bad thinking. Most continuous improvement efforts treat symptoms, not systems. They see a problem and immediately jump to quick fixes.
- Quality issues? Add more inspectors.
- Safety incidents? Blame the worker.
- Missed deadlines? Schedule more meetings.
- Low productivity? Create new KPIs.
This is symptom fixing. It feels productive but creates more problems. Systems thinking works differently.
- Instead of reacting to problems, you analyze root causes.
- Instead of blaming people, you improve processes.
- Instead of quick workarounds, you design out problems.
- Instead of tracking more metrics, you create feedback loops.
- Instead of fixing symptoms, you address underlying culture.
The difference is everything. Symptom fixing creates temporary relief. Systems thinking creates permanent solutions. Symptom fixing keeps you busy. Systems thinking makes you effective. Symptom fixing treats problems as isolated events. Systems thinking sees problems as connected patterns. Most CI programs fail because they never make this shift. They stay stuck in reactive mode. They celebrate putting out fires instead of preventing them. They mistake activity for progress. Real continuous improvement happens when you stop fixing symptoms and start changing systems. Which approach is driving your improvement efforts?
137
40 Comments
-
Chris Gregan
TwentyFiveTen Consulting • 1K followers
Week 5 - QA with Claude Code: Took last week off for a little skiing up in Canada, but I'm back at it this week. One of the most difficult aspects of QA testing is capturing the real-world usage of an application. Anticipating how users may use (and abuse) a piece of software is challenging as a professional because we often know too much about the underlying design and implementation to see through the eyes of a novice user. So this week I thought I would spin up an instance of my kanban app and see if I could get Claude to unleash a team of agents to use the app as if they were novice users and output some tests to mimic those use cases. I prompted Claude to spin up a few agent processes to act like novice users of my app, record what they do, then translate those actions into automated Playwright tests. I divided the agents into the following "personas":
- Explorer: verifies the app is understandable at first glance - layout, labels, discoverability
- Planner: validates real-world workflows - creating a project board, setting priorities/dates, tracking progress
- Stumbler: checks resilience - empty inputs, whitespace, special characters, rapid interactions, data persistence across reloads
- Curious: does feature discovery - keyboard shortcuts, labels system, WIP limits, combined search+filter
In around 3 minutes, each persona had completed a series of actions against my app and provided output for the creation of new automated tests. Claude then created a new .js file for each persona and populated it with that persona's actions. Claude then prompted me to approve the execution of the tests to ensure everything was done properly by each persona and that the tests passed. It then kicked off a headless test run, running each suite simultaneously and outputting the results to the Playwright results folder in my branch. The tests it came up with were an interesting mix of actions. For example:
test('Wait, what happens if I press the question mark key?
test('Oh cool, slash focuses the search bar!'
test('accidentally pressing Enter on an empty card input'
test('trying the search box - typing in search when the board is empty finds nothing'
The tests were well commented and written in the latest version of JavaScript. I was rather impressed with the tests it came up with in such a short time. I specified the number and type of agents to cover the most common type of novice user, but I imagine more focused agent personas could be crafted to explore very specific areas and user actions. If you lack real user input and would rather not rely on "fuzz testing" to gather random user actions and results, Claude can create user agents easily with custom personas to give you better coverage of your application using real-world actions, then convert those actions to automated tests in a matter of minutes. I plan to explore this ability a bit more this week.
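The translation step this post describes (recorded persona actions becoming test code) can be sketched roughly as follows. The recorded-action format and the emitted Playwright-style stubs are illustrative assumptions for this sketch, not Claude's actual output:

```python
def actions_to_test_stub(persona: str, actions: list[dict]) -> str:
    """Turn one persona's recorded actions into a Playwright-style test body."""
    lines = [f"test('{persona} session', async ({{ page }}) => {{"]
    for a in actions:
        # Each recorded action becomes one line of the generated test.
        if a["kind"] == "click":
            lines.append(f"  await page.click('{a['selector']}');")
        elif a["kind"] == "fill":
            lines.append(f"  await page.fill('{a['selector']}', '{a['value']}');")
        elif a["kind"] == "press":
            lines.append(f"  await page.keyboard.press('{a['key']}');")
    lines.append("});")
    return "\n".join(lines)

# Actions a "Stumbler" persona might record while poking at a kanban board.
# Selectors and values are made up for illustration.
stumbler = [
    {"kind": "click", "selector": "#new-card"},
    {"kind": "fill", "selector": "#card-title", "value": "   "},  # whitespace-only input
    {"kind": "press", "key": "Enter"},
]

print(actions_to_test_stub("Stumbler", stumbler))
```

The real value in the workflow above is the persona layer that decides *which* actions to record; the mechanical action-to-test translation itself, as the sketch shows, is simple.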
122
10 Comments
-
Neeraj Wasan
Pulse-Smart Checklists… • 15K followers
QA Manager: "Everyone checks temperatures properly now."
Me: "Show me yesterday's logs."
QA: "All 4°C, perfect!"
Me: "Every single reading?"
QA: "Yes..."
Me: "So your cooler maintains exactly 4°C for 8 hours straight?"
QA: "..."
Here's what's really happening: Your team learned to write 4°C. They didn't learn to measure it. When every reading is perfect, no reading is real. Real temperature logs look messy:
- 3.8°C
- 4.3°C
- 3.9°C
- 4.1°C
Because real coolers fluctuate. Perfect logs mean one thing: Someone's better at fiction than food safety. Are you collecting data or collecting lies?
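The post's point lends itself to a simple automated check: real sensors fluctuate, so a long run of near-identical readings is a red flag. A minimal sketch, where the spread threshold and the sample readings are illustrative assumptions:

```python
from statistics import pstdev

def looks_fabricated(readings: list[float], min_spread: float = 0.05) -> bool:
    """Flag a log whose readings vary less than any real cooler would."""
    if len(readings) < 2:
        return False  # too little data to judge
    # Population standard deviation near zero means suspiciously uniform data.
    return pstdev(readings) < min_spread

suspicious = [4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0]  # "all 4°C, perfect"
plausible = [3.8, 4.3, 3.9, 4.1, 4.0, 4.2, 3.9, 4.1]   # a cooler that fluctuates

print(looks_fabricated(suspicious))  # True
print(looks_fabricated(plausible))   # False
```

A variance check like this is a screening heuristic, not proof of fabrication; some instruments round aggressively, so the threshold has to be tuned to the sensor's real resolution.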
207
28 Comments
-
Qambar Rizvi ( SQA)
EXCEED IT Services • 7K followers
On a recent discovery call, a CTO asked us: “Why is everyone obsessed with early QA? A bug is a bug… we’ll fix it anyway.” So we walked him through this image. Same bug. Same root cause. Completely different outcomes.
🪲 Bug found in Development → $1. A quick fix. Minimal disruption. Almost invisible.
🪲 Bug found in Testing → $10. Some rework, a few retests. Still works.
🪲 Bug found in UAT → $100. More teams are involved. Timelines start slipping.
🪲 Bug found in Production → $1000+. Hotfixes. Downtime. Support tickets. Angry users. Lost trust.
This is where bugs stop being “technical issues” and become "business risks". And this is why mature product teams don’t ask: “Do we need QA?” They ask: “How early can QA get involved?” Strong QA is about keeping bugs small, cheap, and forgettable. If your bugs are expensive and visible to your customers..
❌ You don’t have a testing problem.
✅ You have a late-testing problem.
Teams that shift this mindset early protect their reputation long before they scale. #QA #SoftwareTesting #QualityAssurance #QAEngineer #TestingCommunity #AgileTesting #CEO #CTO #BugFree #Technology #TechCareers #Automation #Product #Softwares
34
1 Comment
-
XTM
14K followers
QA is where good intentions go to die. 💀 It’s essential... but slow, expensive, and frustrating to scale. Traditional QA tools catch surface-level errors: typos, missing text, broken grammar. But they’re rigid. Want to change tone or domain? You’ll need to rewrite a mess of rules. Every time. That’s changing thanks to AI. New AI-powered QA tools don’t just ask “Is this correct?” They ask:
🧠 Does this reflect the source intent?
🗣️ Does it sound natural in the target language?
👥 Is it right for the audience?
Even better? They adapt. Feed in your glossaries, preferences, or brand tone, and the system tunes itself. No more rule-writing marathons! 🎉 Human reviewers still have the final say, of course. But with AI as a first layer, teams get to move faster and cheaper. 👉 Watch the full discussion: https://hubs.la/Q03B8bCS0
13
-
Eliya Hasan
ABHI Microfinance Bank, Ltd. • 6K followers
Most QA problems aren’t about tools or people, they’re about processes. If your testers are chasing bugs manually, your devs are guessing coverage, and your releases feel like fire drills, that’s not a resource issue. That’s a design flaw. Process is the real framework. Get that right, and everything else starts to click. #ProcessEngineering #SoftwareTesting #QualityOps #TestAutomation
24
2 Comments
-
Michael Bolton
36K followers
Here we are in 2026, and here are all these articles about how AI is changing the landscape of testing and rendering manual testing obsolete. They look... strangely familiar. That's because they're *exactly the same* as those articles from 2018, talking about how *automation* is changing the landscape of testing and rendering manual testing obsolete. Most of them could have been rewritten and rereleased eight years later with a simple search and replace — although maybe the new versions have been extruded from a GPT. All of them are based on the idea that testing is about using machinery to click buttons on screens in faster and faster ways. They're all about how to "automate the product" — and none of them is about *testing* the product. None of these articles — and none of the tools they refer to — get to the real point of testing: the enactment of critical thinking about the software and the services that it delivers to people. None of the articles mention risk. None of them talk about finding problems that matter to the business or to its customers. None of them address the need to challenge the product by performing experiments to expose trouble. None of them recognize that the graphical user interface is for humans to use — not machines. The articles and the tools focus almost entirely on demonstrating how the product can be driven by rote through a simplistic workflow. The principal motivation is distraction and avoidance: making sure that no one sets eyes or hands on the product. Testing is the process of evaluating a product by learning about it through experiencing, exploring and experimenting, with a special focus on finding problems that matter before it's too late. The product that's being evaluated can be any of the precursors to the deployed product. We can perform testing — via thought experiment — on ideas and plans and documents and artifacts and mockups and simulations.
But when risk is on the line, we need to get experience with the real, built, running product too—unless the business and the customers are okay with unpleasant surprises. Tools can definitely help, but testing cannot be automated. Testing is not a manual process, either; it's a cognitive, analytical, empathetic, social process. When you're reading an article, watching a demo, or assessing a tool, ask: will this help us to find trouble? Will it help us to avoid fooling ourselves into believing that everything is okay? If the answer is No: whatever it's about, it's not about *testing*.
281
50 Comments
-
Vivek Vardhan
RaftLabs • 13K followers
Think QA is just about squashing bugs? Think again. There's a bigger picture... Quality assurance isn't just bug-hunting. It's so much more. It's about alignment. Understanding how things affect the business, engaging the right people, and making sure testing meets user needs. It's not just about finding problems. It's about stopping them before they even start. Bugs? Just symptoms. The real issue? Poor quality strategy. Want to avoid common QA pitfalls? Here’s what to watch for: - Lack of clear communication - Ignoring user feedback - Rushing the testing phase - Not involving stakeholders - Overlooking long-term impact Quality can't be an afterthought. So, what's the biggest QA mistake you see every day? Let's talk! #QualityAssurance #SoftwareTesting #TechIndustry
9
-
Anshul Jain
ThinkSys Inc • 4K followers
3 Metrics That Reveal if Your QA Organization is Actually Effective At ThinkSys, we've helped dozens of companies transform their QA practices. These three metrics quickly separate high-performing QA teams from those just checking boxes: 1. Critical + Serious Bugs in Production (Plus Hotfixes Per Release) This is your ultimate scorecard. Every severe bug that escapes to production represents a QA failure—and a potential hit to revenue and reputation. Deploying more than 2-3 emergency hotfixes per release? Your quality gates need immediate attention. 2. Test Coverage For manual teams, track what percentage of your product is covered by your regression suite at the module level. For automated teams, measure code coverage percentage using your testing tools. Without this visibility, you're flying blind on what's actually being tested versus what's being assumed. 3. Regression Cycle Time How long does a full regression take? Best-in-class teams complete it in under 24 hours through automation—we consider 48 hours acceptable. Taking 3-5 days for manual regression? You've created a release bottleneck that's slowing your entire delivery pipeline. The Bottom Line: If you’re lagging on any of these three metrics, there could be critical gaps in your QA process that you need to address to truly become a Center of Excellence. What metrics does your team track? Drop a comment below.
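The three thresholds above can be folded into a simple scorecard. A minimal sketch assuming the post's rules of thumb for hotfixes and regression time; the 80% coverage cutoff and all names are my own illustrative choices, not ThinkSys guidance:

```python
def qa_health(hotfixes_per_release: int, coverage_pct: float, regression_hours: float) -> list[str]:
    """Return the list of metrics that breach the suggested thresholds."""
    flags = []
    if hotfixes_per_release > 3:  # post: "more than 2-3 emergency hotfixes" is a problem
        flags.append("too many hotfixes per release")
    if coverage_pct < 80.0:  # illustrative cutoff, not from the post
        flags.append("regression coverage is low")
    if regression_hours > 48.0:  # post: 48 hours is the acceptable upper bound
        flags.append("regression cycle is a release bottleneck")
    return flags

# A healthy release vs. one breaching all three thresholds.
print(qa_health(hotfixes_per_release=1, coverage_pct=85.0, regression_hours=20.0))  # []
print(qa_health(hotfixes_per_release=5, coverage_pct=60.0, regression_hours=96.0))
```

Encoding the thresholds as code makes them a CI gate rather than a slide, which is the point of tracking metrics at all.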
26
1 Comment
-
Marcus Merrell
2K followers
How about a change of pace? Rather than tear down years' worth of collective effort to provide a free and open tool for testing web applications, let's talk about how it should be better understood. Put another way: why would you throw away years of effort to implement tests in a new tool if you don't really know what trade-offs you're making, or the pros and cons of each? In this post, I explore how Selenium and Playwright should BOTH exist in your test strategy. They have different capabilities, are designed for different purposes, and are implemented differently. It's time to celebrate those differences, not tear them down.
136
44 Comments
-
Jonas Menesklou
AskUI • 15K followers
Most QA engineers I talk to are anxious about 2026, but there’s a mindset shift that some will be able to make. The roles getting eliminated are the ones that were always meant to be temporary. The repetitive clicking. The selector maintenance. The same smoke tests running on loop. That work was never the point of QA. It was the overhead we tolerated because automation was too fragile and too expensive to scale. I've seen teams where testers spend most of their week debugging flaky scripts and maintaining test suites that break with every UI change. They're not doing quality assurance anymore. They're doing infrastructure babysitting. And when leadership looks at the cost of that babysitting versus what an agent can do, the math writes itself. Here’s the mindset shift that makes or breaks a QA engineer. The QA engineers who will matter in 2026 are the ones who stopped defining themselves by test execution a long time ago. They're the ones who understand that their real value was never clicking through forms. It was knowing which forms to click, in what order, under what edge conditions, to expose the bugs no one else would find. Test design. Risk assessment. Exploratory thinking. Understanding the product deeply enough to anticipate where it will break before it does. These are the skills that get more valuable as automation handles the routine coverage. Think about what happened in data analytics. SQL experts didn't vanish when Looker and Tableau arrived. But the ones who only knew how to write repetitive queries got absorbed into teams that needed strategic thinking instead. QA is following the same pattern. The role doesn't disappear - it shifts up the stack. The testers who make it through this transition will be the ones who position themselves as architects of test strategy rather than executors of test scripts. They'll define what needs to be tested and why, while agents handle the how. 
If you're in QA leadership and feeling the pressure on your team DM me to talk about the transition that’s going on now. Happy to share what I'm seeing work across the teams we work with.
21
-
Oleg Sivograkov
TestFort • 7K followers
Sometimes people ask if QA is keeping up. But more often, the real issue is that no one knows what QA is actually doing. I don’t mean the reports. I mean: who’s testing what? How deep? What’s left out on purpose? What’s getting flagged but pushed anyway? When none of that is visible, even a strong QA team ends up looking passive. Or worse — late.
10
2 Comments