Scaling Workflow Automation

Explore top LinkedIn content from expert professionals.

Summary

Scaling workflow automation means expanding automated systems to handle more tasks, users, or complexity without breaking or slowing down. This involves redesigning processes, choosing the right tools and infrastructure, and ensuring automations run smoothly as your business grows.

  • Review your process: Take a step back to look at the entire workflow, not just individual tasks, and identify where automation can create a bigger impact.
  • Choose robust tools: Select queueing and orchestration tools suited for high-volume tasks and integrate monitoring to keep everything running reliably.
  • Add safety checks: Include validation, logging, and human approvals at critical steps so automations remain dependable even as they scale.
Summarized by AI based on LinkedIn member posts
  • Shreekant Mandvikar

    I (actually) build GenAI & Agentic AI solutions | Executive Director @ Wells Fargo | Architect · Researcher · Speaker · Author

    7,793 followers

    Not all AI agents are built to scale. This brings us to part 6 — Scale and Automate. Most agents work great as demos — but fail in production. The difference? Architecture, automation, and continuous improvement. Here’s how to take your AI agents from prototype → production → enterprise:

    Step 1: Scale from Single Agent → Multi-Agent Systems
    Don’t overload one agent. Break workflows into specialized roles:
    • Planner → Executor → Reviewer
    • Researcher → Writer → Validator
    Use frameworks like LangGraph or CrewAI to orchestrate. Pass state safely between agents with shared memory stores.
    Example: A 3-agent workflow for market analysis — Research → Write → Review.

    Step 2: Automate the Entire Workflow
    Stop triggering agents manually. Use event-driven automation:
    • Task queues (RabbitMQ / SQS) for async execution
    • Webhooks and polling for real-time triggers
    • Redis for caching and speed optimization
    • Checkpoints for long-running tasks
    Example: New ticket → Research → Summarize → Email update — all automated.

    Step 3: Deploy for Production
    Turn your agents into APIs. Deploy with Docker on:
    • Render, Railway, AWS Lambda, or ECS
    • Add OAuth + rate limiting + authentication
    • Use horizontal scaling for high-load tasks
    • Distribute work with Celery or Lambda workers
    Example: Dockerized LangGraph workflow that auto-scales during traffic spikes.

    Step 4: Build Observability & Guardrails
    You can’t scale what you can’t see. Add monitoring from day one:
    • Log aggregation (CloudWatch, Datadog, ELK)
    • Prompt tracing with LangSmith
    • Store outputs for audits and compliance
    • Safety guardrails with Pydantic schemas and MCP tools
    • Track API usage and model drift
    Example: LangSmith traces every agent step and triggers retries on errors.

    Step 5: Continuous Improvement Loops
    Your agent should get smarter over time. Build self-improving workflows:
    • Reviewer agents catch low-quality outputs
    • Agent feedback → memory writeback
    • Continuous learning workflows
    • Cron-based automation (AWS EventBridge / GitHub Actions)
    Example: An “Agent Health Monitor” reviews outputs every 24 hours, identifies failure patterns, and suggests improvements.

    Why This Matters
    • Single agents are toys. Systems are powerful.
    • Automation isn’t just running tasks — it’s creating self-improving workflows.
    • Scaling requires: structure, orchestration, observability, cost control, security.

    Pro Tip: Start modular. Add orchestration early. Ship with observability baked in. Then layer continuous improvement.

    Final Thought: The agent isn’t your system. The system is what makes your agent production-grade. Build workflows that collaborate, self-improve, and handle real-world workloads. That’s next-level automation.
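The Planner → Executor → Reviewer hand-off described in Step 1 can be sketched framework-agnostically in plain Python. This is an illustrative stand-in, not LangGraph's or CrewAI's actual API: the "agents" are ordinary functions, and a shared dict plays the role of the shared memory store.

```python
# Minimal sketch of a Planner -> Executor -> Reviewer pipeline with shared
# state. The "agents" are plain functions standing in for LLM-backed agents.

def planner(state: dict) -> dict:
    # Break the task into steps and record them in shared state.
    state["plan"] = [f"research {state['topic']}", f"write summary of {state['topic']}"]
    return state

def executor(state: dict) -> dict:
    # Execute each planned step; a real agent would call tools or an LLM here.
    state["outputs"] = [f"done: {step}" for step in state["plan"]]
    return state

def reviewer(state: dict) -> dict:
    # Validate outputs before the workflow is allowed to finish.
    state["approved"] = all(out.startswith("done:") for out in state["outputs"])
    return state

def run_pipeline(topic: str) -> dict:
    state = {"topic": topic}
    for agent in (planner, executor, reviewer):  # orchestration order
        state = agent(state)
    return state
```

The point of the shape: each role only reads and writes the shared state, so roles can be swapped or re-ordered without rewriting the others.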

  • Nina Fernanda Durán

    AI Architect · Ship AI to production, here’s how

    58,511 followers

    The fastest way to scale AI agents in production is to split the stack: one tool for orchestration, another for reasoning. Here’s exactly how the best teams do it today:

    ⬛ Use n8n for Fast, No-Code Orchestration
    🔹 Drag-and-drop workflows
    🔹 Connect easily across your stack
    🔹 Instant triggers and actions
    🔹 Deploy in minutes, no code needed
    Limitations: Not optimized for high-volume LLM calls or complex state management.
    Example: Trigger: new lead in HubSpot → n8n calls LangGraph API → Agent: generate pitch → Action: send via Gmail
    Alternatives:
    ▪️ Airflow (for Python-centric teams)
    ▪️ Prefect (for dynamic workflows)
    ▪️ Custom scripts (if you need full control)

    ⬛ Use LangGraph for Deep Multi-Agent Reasoning
    🔹 Manage agent memory and logic
    🔹 Design complex decision flows
    🔹 Coordinate multiple agents
    🔹 Fine-tune every thought step
    Limitations: Requires Python knowledge; LLM costs can increase rapidly at scale.
    Example: Agent A: research ↔ Memory: retrieve sources ↔ Agent B: generate summary ↔ Decision: approve or refine
    Alternatives:
    ▪️ Autogen (for conversational agents)
    ▪️ CrewAI (for task-based teams)
    ▪️ Custom DAGs with LlamaIndex

    Combine Both for Full Power
    ▪️ n8n runs your workflows.
    ▪️ LangGraph handles your logic.

    ⬛ Best Practices for Scaling
    1. Scaling requires infrastructure:
    ▪️ Deploy n8n workers on Kubernetes for high-volume triggers
    ▪️ Batch LangGraph calls to optimize LLM costs
    2. Not every agent needs both:
    ▪️ Simple agents: use n8n alone
    ▪️ Cognitive-heavy agents: add LangGraph
    3. Monitor key metrics:
    ▪️ n8n: trigger latency
    ▪️ LangGraph: LLM cost and accuracy

    🔗 Start here: n8n → n8n·io · LangGraph → langchain·com/langgraph

    ⚡ I'm Nina. I build with AI and show how it’s done. Real code. Real tools. Every week. #aiagents #orchestration #langchain #n8n
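The "batch LLM calls to optimize costs" practice above can be sketched generically. Here `call_model` is a hypothetical stand-in for whatever model client a team uses; the idea is simply to amortize per-call overhead by sending one request per batch instead of per prompt:

```python
# Sketch: batch many small reasoning requests into fewer model calls.
# call_model is a hypothetical stand-in for a real LLM client.

from typing import Callable

def call_model(prompts: list[str]) -> list[str]:
    # Stand-in: a real implementation would send one batched API request here.
    return [f"answer to: {p}" for p in prompts]

def batched(items: list[str], size: int) -> list[list[str]]:
    # Split the prompt list into chunks of at most `size`.
    return [items[i:i + size] for i in range(0, len(items), size)]

def run_in_batches(prompts: list[str], batch_size: int = 8,
                   model: Callable[[list[str]], list[str]] = call_model) -> list[str]:
    results: list[str] = []
    for chunk in batched(prompts, batch_size):
        results.extend(model(chunk))  # one call per batch instead of per prompt
    return results
```

With 20 prompts and a batch size of 8, this makes 3 model calls instead of 20, which is where the cost and latency savings come from.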

  • Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,440,858 followers

    How can businesses go beyond using AI for incremental efficiency gains to create transformative impact? I write from the World Economic Forum (WEF) in Davos, Switzerland, where I’ve been speaking with many CEOs about how to use AI for growth. A recurring theme is that running many experimental, bottom-up AI projects — letting a thousand flowers bloom — has failed to lead to significant payoffs. Instead, bigger gains require workflow redesign: taking a broader, perhaps top-down view of the multiple steps in a process and changing how they work together from end to end.

    Consider a bank issuing loans. The workflow consists of several discrete stages:

    Marketing -> Application -> Preliminary Approval -> Final Review -> Execution

    Suppose each step used to be manual. Preliminary Approval used to require an hour-long human review, but a new agentic system can do this automatically in 10 minutes. Swapping human review for AI review — but keeping everything else the same — gives a minor efficiency gain but isn’t transformative.

    Here’s what would be transformative: Instead of applicants waiting a week for a human to review their application, they can get a decision in 10 minutes. When that happens, the loan becomes a more compelling product, and that better customer experience allows lenders to attract more applications and ultimately issue more loans. However, making this change requires taking a broader business or product perspective, not just a technology perspective. Further, it changes the workflow of loan processing. Switching to offering a “10-minute loan” product would require changing how it is marketed. Applications would need to be digitized and routed more efficiently, and final review and execution would need to be redesigned to handle a larger volume. Even though AI is applied only to one step, Preliminary Approval, we end up implementing not just a point solution but a broader workflow redesign that transforms the product offering.

    At AI Aspire (an advisory firm I co-lead), here’s what we see: Bottom-up innovation matters because the people closest to problems often see solutions first. But scaling such ideas to create transformative impact often requires seeing how AI can transform entire workflows end to end, not just individual steps, and this is where top-down strategic direction and innovation can help.

    This year's WEF meeting, as in previous years, has been an energizing event. Among technologists, frequent topics of discussion include Agentic AI (when I coined this term, I was not expecting to see it plastered on billboards and buildings!), Sovereign AI (how nations can control their own access to AI), Talent (the challenging job market for recent graduates, and how to upskill nations), and data-center infrastructure (how to address bottlenecks in energy, talent, GPU chips, and memory). I will address some of these topics in future posts.

    [Original text: https://lnkd.in/gbiRs2mi ]

  • Agnius Bartninkas

    Operational Excellence and Automation Consultant | Power Platform Solution Architect | Microsoft Biz Apps MVP | Speaker | Author of PADFramework

    12,036 followers

    Power Automate Work Queues are not built for scale! That's a fact.

    When you think about scalability in Power Automate, one thing that will definitely come to mind at some point is queues and workload management. You might be able to survive without them in some event-based transactional flows that only process a single item at a time, but whenever you process tasks in batches, or when RPA gets involved, you'll need queues.

    Power Automate comes with Work Queues out of the box. And you would think that's your go-to queueing mechanism for scaling. After all, it's at scale that you really need those queues — to de-couple your flows and make them easier to maintain, support, and debug, as well as more robust and efficient. Queues are a must even at medium scale. Heck, we use them even in small-scale implementations.

    But the surprising thing about Power Automate Work Queues is that they are not fit for high-scale implementations. And that is by design! The docs themselves (link in the comments) explicitly state that if you have high volumes, or if you dequeue (pick up work items from the queue for processing) concurrently, you should either do it within moderate levels or use something else. If you try to use Power Automate Work Queues for high-scale implementations (more than 5 concurrent dequeue operations, or hundreds or thousands of operations of any type involving the queues), you'll get in trouble. All sorts of issues can happen: your data may get duplicated, you may accidentally dequeue the same work item in multiple concurrent instances, or your flows might simply get throttled or even crash. This is because of the way they're built and the way they utilize Dataverse tables for storing work items and work queue metadata.

    So, if you do want to scale, it's best to use an alternative. And, obviously, Microsoft wouldn't be Microsoft if they didn't have an alternative tool to do that. The docs themselves recommend Azure Service Bus Queues as a high-throughput queueing mechanism. Another alternative could be Azure Storage Queues, but that only makes sense if the individual work items in your queue can get large (lots of data or even documents) or when you expect your queue to grow beyond 80 GB (which is possible in very large-scale implementations). Otherwise, Azure Service Bus Queues are absolutely perfect for very large volumes of small transactions. On top of that, they have some very advanced features for managing, tracking, auditing, and otherwise handling your work items. And, of course, there's an existing connector in Power Automate to use them.

    So, while I do love Power Automate Work Queues, I'll only use them in relatively small-scale implementations. And for everything else — my queues will go to Azure. And so should yours.
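The duplicate-dequeue hazard described above is exactly the property a proper queueing mechanism is supposed to rule out: each work item goes to exactly one consumer. A stdlib Python sketch of that guarantee, with the broker simulated by `queue.Queue` (illustrative only, not the Power Automate or Service Bus API):

```python
# Sketch: 8 concurrent consumers draining a shared queue. queue.Queue hands
# each item to exactly one consumer thread, so nothing is processed twice.

import queue
import threading

work_queue: "queue.Queue[int]" = queue.Queue()
for item_id in range(100):          # enqueue 100 work items
    work_queue.put(item_id)

processed: list[int] = []
lock = threading.Lock()             # protects the shared results list

def consumer() -> None:
    while True:
        try:
            item = work_queue.get_nowait()  # atomic: no two threads get the same item
        except queue.Empty:
            return
        with lock:
            processed.append(item)
        work_queue.task_done()

threads = [threading.Thread(target=consumer) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every item processed exactly once: no duplicates, none lost.
assert sorted(processed) == list(range(100))
```

Azure Service Bus provides this exactly-one-consumer behavior (plus locking, dead-lettering, and auditing) as a managed service; the sketch just shows the invariant you are buying.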

  • Wilton Rogers

    Faith-Driven AI & Automation Thought Leader | Empowering Businesses to Scale Through Innovation by implementing “AI Agents” that never stop working | Follow my #AutomationGuy hashtag

    21,249 followers

    AI Agents in Business: Hype vs Practical Automations 🤖

    Most companies don’t need “autonomous agents.” They need reliable automations that use AI at the right moments, without breaking. Here’s what AI agents do well today 👇

    ✔️ Where agents win (practical automations)
    Triage: classify emails/tickets/leads and assign priority + owner
    Summarize: meetings, threads, docs → clear action items + owners
    Route: send the right task to the right tool/person (CRM, Slack, Jira)
    Draft: replies, SOPs, proposals, follow-ups (with human approval)
    Extract + structure: pull fields from messy text → clean CRM/Sheets updates

    ⚠️ Where agents break (the “hype zone”)
    Unclear goals: vague prompts = random outcomes
    Bad data: missing context, outdated SOPs, messy CRM = wrong decisions
    No guardrails: no validation rules, approvals, or fallback paths
    Edge cases: exceptions and “special customers” cause cascading errors
    Tool chaos: too many integrations, no monitoring, silent failures

    ✅ The simple rule: Let agents recommend and prepare. Don’t let them finalize and send without checks.

    How I build agent workflows that actually scale:
    1. Start with a single workflow (ex: inbound lead → qualify → route)
    2. Add human-in-the-loop at the risky step (send / update / approve)
    3. Add validation + logging (required fields, confidence threshold, audit trail)
    4. Measure time saved + error rate weekly

    ⏳ Why this matters now: Teams that treat agents like “magic employees” get burned. Teams that treat agents like automation components get compounding ROI.

    👉 Want my Agent Readiness Checklist (10 questions to decide if a process is agent-ready)? Comment “AGENT” or send a DM and I’ll drop it.

    #AutomationGuy #ScaleThroughAutomation #AIAutomation #BusinessAutomation #AIAgents Follow me for AI & Automation updates and resources: https://lnkd.in/gjG8gvRd
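Steps 2 and 3 of the build recipe above (human-in-the-loop at the risky step, plus validation, a confidence threshold, and an audit trail) can be sketched as a single gate function. The field names and the 0.85 threshold are illustrative assumptions, not part of the original post:

```python
# Sketch: gate an agent's drafted action behind validation, a confidence
# threshold, and an audit log. Field names and threshold are illustrative.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

REQUIRED_FIELDS = {"lead_email", "draft_reply"}
CONFIDENCE_THRESHOLD = 0.85  # below this, a human must approve

def gate(action: dict) -> str:
    """Return 'auto_send', 'needs_human', or 'rejected' for a drafted action."""
    missing = REQUIRED_FIELDS - action.keys()
    if missing:
        log.warning("rejected: missing fields %s", sorted(missing))  # audit trail
        return "rejected"
    if action.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        log.info("routed to human approval (confidence=%s)", action.get("confidence"))
        return "needs_human"
    log.info("auto-approved for sending")
    return "auto_send"
```

The agent still recommends and prepares everything; only the high-confidence, fully-validated path is allowed to finalize and send.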

  • Wayne Simpson

    Founder & CEO at nocodecreative.io | n8n Experts | Microsoft Partners | AI, Automation & Software Development

    11,338 followers

    Unlocking proper parallel execution in n8n is one of those small tweaks that can transform your automation efficiency. Night and day difference, really.

    By default, n8n runs subworkflows sequentially, like us Brits queueing up for a bus, but for high-volume tasks, this becomes a proper bottleneck. The trick lies in the 'Execute Sub-workflow' node: simply toggle off 'wait for subworkflow execution' and boom, each subworkflow runs asynchronously. Your tasks process in parallel rather than twiddling their thumbs in a queue.

    This approach shines when processing items independently, such as bulk data enrichment, multi-channel notifications, or parallel API calls. Instead of watching paint dry while your workflow crawls through each item, everything kicks off at once, slashing execution times. The n8n community has developed several patterns for this, including templates showing how to launch parallel subworkflows and then use a 'wait-for-all' loop to synchronise once the dust settles. Debugging becomes cleaner too, as failed subworkflows won't jam up the entire process.

    Fair warning though: parallelism demands more resources and adds complexity. Without solid error handling, you might find yourself chasing gremlins across multiple branches. Best suited for folks with some n8n experience, and keep an eye on your server resources.

    For teams scaling their automations, this parallel workflow pattern delivers substantial value. A practical way to squeeze more juice from your automation infrastructure while keeping operations ticking along nicely.
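The fire-everything-off-then-synchronise pattern described here is n8n-specific, but the general shape (launch independent tasks, then a wait-for-all step) looks like this in plain Python. The `enrich` function is a stand-in for a subworkflow; it is not n8n's API:

```python
# Sketch: run independent tasks in parallel, then synchronise with a
# "wait for all" step. enrich() stands in for an n8n subworkflow.

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def enrich(record: str) -> str:
    time.sleep(0.1)  # simulate an external API call
    return f"{record}-enriched"

records = [f"rec{i}" for i in range(10)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(enrich, r) for r in records]    # fire them all off
    results = [f.result() for f in as_completed(futures)]  # the wait-for-all loop
parallel_time = time.perf_counter() - start

print(f"{len(results)} records in {parallel_time:.2f}s")
```

Run sequentially, ten 0.1-second tasks take about a second; in parallel they finish in roughly the time of the slowest one, which is the "night and day" difference the post describes, at the cost of more concurrent load on whatever the tasks call.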

  • VINAY REDDY

    CEO | Agentic AI & RPA Transformation Leader | Building Enterprise AI Agents for Compliance, Finance & Energy | Driving the Enterprise Operating System for Agentic AI at Scale | INDIA | MALAYSIA | USA | UAE |

    31,782 followers

    You set up a Process Automation CoE to streamline workflows, boost ROI, and accelerate digital transformation—yet you’re still wrestling with low-impact initiatives, fragmented tech stacks, and skill gaps that stifle progress. Sound familiar? In every RPA CoE, these roadblocks are all too common. But what if you could unlock a blueprint that not only crushes these obstacles, but also turns your CoE into a well-oiled, innovation-driven powerhouse that consistently delivers tangible business value?

    Pain Points in Process Automation CoEs
    1. Lack of Vision and Strategy: Misaligned objectives and absence of a scalable automation roadmap.
    2. Limited Stakeholder Buy-In: Resistance to change and poor communication of the CoE’s value.
    3. Weak Governance: Lack of policies, standards, and compliance frameworks for automation.
    4. Skill Gaps: Inadequate technical expertise in advanced automation, RPA, AI, and ML tools.
    5. Fragmented Technology Stack: Poor integration with legacy systems and underutilization of AI and predictive analytics.
    6. Poor Process Selection: Automating low-impact processes with minimal ROI.
    7. Scalability Challenges: Limited reusability of automation components across business units.
    8. Change Management Issues: Resistance to automation and insufficient employee upskilling.
    9. Inadequate Performance Monitoring: Limited tracking of ROI, productivity gains, and KPIs.
    10. Security and Compliance Risks: Gaps in data governance and adherence to industry regulations.
    11. Leadership Deficiency: Absence of a skilled technical leader to align the CoE with business goals.

    Strategies to Strengthen the CoE for ROI and Growth
    1. Set Clear Goals: Align CoE objectives with organizational KPIs and define a phased automation roadmap.
    2. Build Robust Governance: Standardize policies, compliance frameworks, and success metrics for sustainable automation.
    3. Foster Stakeholder Engagement: Conduct workshops, showcase automation success stories, and secure leadership buy-in.
    4. Invest in Skills: Upskill teams in RPA and AI/ML.
    5. Modernize Technology: Integrate tools into a unified platform and leverage advanced capabilities like AI and IoT.
    6. Prioritize High-Impact Processes: Use data-driven methods to identify and automate processes with maximum ROI.
    7. Plan for Scalability: Develop reusable automation components and build a sustainable pipeline of opportunities.
    8. Manage Change: Reskill employees, address resistance, and communicate automation benefits effectively.
    9. Monitor Performance: Implement dashboards to track KPIs, optimize processes, and measure ROI.
    10. Ensure Security & Compliance: Strengthen data governance and adhere to industry-specific regulations.
    11. Appoint Skilled Leadership: Hire a seasoned CoE leader with expertise in process automation, AI, and strategy.

    #IntelligentAutomation #RPA #AI #ML #DigitalTransformation #CoE #AutomationROI #Leadership #cognitbotz #Innovation #AutomationStrategy #BusinessGrowth

  • James Haliburton

    Founder, Touch Grass Consulting · AI Strategy for Leadership Teams · 2 Exits · Building Agentic Systems in Production

    4,307 followers

    Only a year ago, I thought some processes were just too complex to automate. Take regulatory validation against ISO standards—a process we’ve been working on with a customer. It typically takes consultants 2+ working days per document because it’s not just about checking boxes. It’s a cognitive data process that requires expert judgment, structured analysis, and careful cross-referencing to ensure compliance. Automating something like this end to end seemed out of reach.

    But now, we’ve deployed an agentic workflow that processes 20,000 documents every 4 hours—a 10,000x+ speed improvement (!!) with higher accuracy, full auditability, and built-in citations. And this isn’t an isolated case. We’re seeing this across industries, helping businesses scale cognitive data processes that were once entirely manual.

    What’s surprised me most is that the key to this transformation isn’t just technical. Instead, it starts with human-led iterative process design—embedding expertise into systems that don’t just automate tasks, but amplify impact at scale.

    This shift is only just beginning, and it raises big questions: What other cognitive data processes are ready for transformation? I’d love to hear thoughts—and happy to share insights and frameworks if there’s interest.

    #CDPA #CognitiveDataProcessAutomation #AgenticWorkflows #CognitiveData #AI #Automation #Scalability #DigitalTransformation

  • Philip Lakin

    Director of AI Transformation at Zapier. Co-Founder of NoCodeOps (acq. by Zapier ’24).

    25,904 followers

    Your Automations Are Breaking More Than They’re Fixing.

    Here’s the uncomfortable reality: For every automation that saves time, there’s another one quietly breaking and costing you hours. You’ve seen it:
    👀 A field changes, and suddenly a critical workflow stops working.
    ⚠️ Nobody catches it until customers are angry, deals stall, or metrics go haywire.
    🚒 Firedrill time! A mad scramble to patch things up—until it breaks again.

    This isn’t a technology problem. It’s a strategy problem. Most automations are built fast and forgotten faster. They’re fragile, disconnected, and no one’s watching to make sure they still work. The result?
    🔄 More time fixing than building.
    🛠️ More manual work creeping back in.
    📉 Less trust in the system you’re trying to scale.

    Here’s the hard truth: automations fail when no one owns their lifecycle. The fix? Stop building and forgetting. Start managing and evolving. Adopt an Automation Development Lifecycle (ADLC):
    🗺️ Plan intentionally: Automations should serve a process, not just a task. Define how each fits into the big picture.
    🤝 Build collaboratively: Ops and IT co-create workflows that are scalable and governed.
    🚨 Track constantly: Changelogs and alerts flag issues before they become disasters.
    🌱 Evolve continuously: Automations should grow with your processes, not stay stuck in the past.

    When automations are managed like systems—not shortcuts—they work. They scale. They deliver. So, ask yourself: Are your automations working for you, or are you constantly working to fix them? Because if you’re stuck in firefighting mode, it’s not automation—it’s chaos.

  • #AutoCon3 From Clicks to Code: Optical Network Automation Journey at GARR

    Matteo Colantonio, Optical Network Engineer at GARR, shared their journey to automate the optical network at GARR, an Italian research network.

    They started by looking at widely adopted tools, including Ansible. It worked to help the team update 92 transponders. However, they realized Ansible has scaling limitations when things get complex. In the optical layer, some devices don’t support NETCONF, so you have to develop a module. If you have simple procedures, such as pushing config, Ansible is fine. But as you get into complex logic to configure services, not just boxes, you may want to reconsider your life choices.

    They also tried working with vendor controllers. Provisioning optical circuits can take 40 to 50 clicks across 4 GUIs. The vendor controllers sort of worked, but they didn’t replace all the manual clicks: the team still had to do manual pre-provisioning work, create cross-connections on some cards, fix non-meaningful names, and add descriptions. They also don’t have a single optical line system, so the controller API only works with one vendor.

    The Workflow Orchestrator Framework

    They discovered Workflow Orchestrator, developed by SURF, a Dutch research network. It’s been open-sourced, which lets other organizations adopt the framework. workfloworchestrator.org

    What do you get out of the box?
    - It’s a framework, not a turnkey solution, but it lets you define your network services, entities, or domain models for your organization
    - It lets you track instances
    - It defines clear procedures, or workflows
    Everything is stored and tracked in a database via object-relational mapping.

    You start by defining building blocks, such as an optical fiber. There’s a fiber name, terminations, OSS ID, etc. You turn these blocks into Products to manage the lifecycle of a Block. Workflows make things happen. They use Python functions, so you can do whatever you want, and they can handle very complex logic.

    They went from 50 clicks and 15 to 20 minutes to an automated workflow that takes 50 seconds.

    Was it easy? No. It’s harder than getting started with Ansible, but it was worth it. From this project they got:
    - Central service definitions
    - Consistent execution of service management
    - A consistent architecture
    - If new hardware comes in, they can modify clients without having to modify workflows

    Key Take-Aways:
    1. If you want to develop a scalable, maintainable solution, the best option is to go with abstract and composable models, and with stateful instances of those models.
    2. If you want your network to be programmable, use the devices’ programmable interfaces and YANG models, not just the CLI.
    3. Make sure your transformation is sustainable. Automate one service at a time to nudge people out of their comfort zones.
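The building-block idea above (define a domain model for something like an optical fiber, then drive stateful instances of it with workflow functions) can be sketched with plain Python dataclasses. This is illustrative only; Workflow Orchestrator's actual Product/Block API and database layer differ:

```python
# Sketch of the building-block pattern: a domain model for an optical fiber
# plus a workflow function that validates it and advances its lifecycle.
# Illustrative only — not the actual Workflow Orchestrator API.

from dataclasses import dataclass

@dataclass
class OpticalFiber:
    # The attributes the talk mentions: fiber name, terminations, OSS ID.
    name: str
    terminations: tuple[str, str]
    oss_id: str
    status: str = "initial"  # lifecycle state tracked per instance

def provision_fiber(fiber: OpticalFiber, inventory: list) -> OpticalFiber:
    """Workflow: validate the block, record it, and advance its lifecycle."""
    if not fiber.name or len(fiber.terminations) != 2:
        raise ValueError("fiber needs a name and exactly two terminations")
    fiber.status = "provisioned"  # a real workflow would push device config here
    inventory.append(fiber)       # stands in for the framework's ORM database
    return fiber
```

Because the model is abstract and the workflow is just a Python function, swapping in new hardware means changing the device-facing client code, not the workflow, which matches the benefit GARR reported.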
