
A Game of Patterns


Summary

Tiani Jones explains how recognizing organizational behaviors and their resulting patterns can reveal opportunities for improvement and lead to better performance. Using examples like chess and a cyber-physical team, she shares practical insights on identifying anti-patterns and fostering generative cultures through mindful adjustments to constraints and practices in software development.

Bio

Tiani Jones has spent over 20 years in Engineering and Technology. Her experience is centered on innovation through experiments that inform product design and XP technical practices. Her special interests are complexity theory, Theory of Constraints, and observability.

About the conference

Software is changing the world. QCon London empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Jones: First of all, why patterns? Why do I talk about patterns, why do I care about them, and why do I think they're so interesting to explore? These are the reasoning steps that I go through when I think about patterns and when I talk about them in organizations and teams. The first is that every system is producing patterns. That's a function of the interactions between people and the use of technology in the system. Every system is optimized to do something. Every system design is perfect, you could say, for whatever it's designed to do, whether we acknowledge it or realize it or not.

The behavior of the system is then revealed in patterns, so we can conclude that the system is optimized to produce patterns. What I've noticed in my work, across the many different domains I've worked in, about why patterns are important, is that many organizations pick practices and then think that those will be silver bullets to solve problems. Some of those problems might be slow delivery of value, or it could be that the product is expensive for the market and they're trying to figure out how to make it less expensive. Whatever the problem, they throw practices at it and just have a go at it from there.

Why A Game of Patterns?

Why do I call this a game of patterns in systems and organizations? In minimal gameplay, we ask, did we follow the rules? If you follow the rules in a game without thinking, are you really playing? I don't think you are. I think that the primary game in complex, adaptive systems, which I'll define in more detail shortly, is recognizing patterns: understanding the patterns that are produced in the organization, and how the organization is optimized to produce them. When you play games, you have good gameplay and you have poor gameplay, but you're practicing in order to perform. It's practice to performance.

The ultimate thing that you're looking for is performance. Performance is a sign of good play. In order to get to good performance, you need to know what to notice and what to measure to determine that something's well done. When something's not well done, or the performance is suffering, a deeper look into the patterns and behaviors generated by what you're doing and how you're interacting will give you a clue into where to start. It will give you insight and awareness.

I like to bring this to life with an example. A few months back, I started playing chess again. I had not played for many years. I could move all the pieces correctly, but very clunkily. I would get checkmated within minutes, because I really didn't know how to play. I felt confused, and sometimes I had low confidence. I couldn't recognize the patterns and use the grid on the board. I would get cornered. I would blunder. I knew how to move the rook, the knight, and the queen. I knew exactly how to move them and what those rules were.

My performance overall in the game was poor, even though I could move these pieces perfectly. I saw Nakamura play with a YouTuber. They had put a video out there of them playing together. After playing a round or two, Nakamura noticed what was missing. He said, you don't know the opening patterns. You haven't mastered any closing techniques. There were a few times you should have won the game. That's where I think it brings to life this idea of understanding what good performance is and what it actually means to play a game.

Systems

Now that I've laid out that idea a little bit, I'd like to talk about systems, and then get back to how we bring patterns and behaviors together. I think it's important to understand systems, their properties, and their nature to have a deeper conversation about patterns. The first thing about systems, and this holds true for all systems, is that they're never just the sum of their parts; they're a product of the interactions. No matter what kind of system, this holds true. Systems thinking generally focuses more on steady-state systems, with constraints on the parts, where interactions and behaviors are knowable.

The best example I could find was a feedback control system, which takes me back to my electrical engineering days. You have three kinds of control in a feedback control system: proportional, integral, and derivative. Proportional control applies a multiplier, known as a gain value, to the direct difference between the measured state and the desired state. This is the easiest type of feedback control to implement. Integral control integrates the error between the desired state and the measured state over time, and scales this by a gain value. When used in conjunction with proportional control, the integration acts as a low-pass filter: it eliminates steady-state errors in the system, reduces noise in the error, and therefore smooths out the control signal.

Then derivative control acts on the current rate of change of the error and is scaled by a gain value, which allows the controller to anticipate the future trend of the error. Derivative control amplifies signal noise. There's latency in systems, which is a time delay between when a real-world event occurs and when the data is fed back into the controller. There's noise in systems, which is a disturbance on the signal. Mechanical and electrical signals produce these, and it's unavoidable. Noise could come from the environment, from defects, or from design and implementation decisions. They produce tiny shifts in voltage. All these elements work together and interact to produce something. Looking at them in isolation doesn't really mean much. It's how they work together to perform the design intent that's the most important. It's the sum of the interactions that stands out boldly.
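To make those three terms concrete, here is a minimal sketch of a discrete PID controller in Python. This is my illustration rather than anything from the talk; the gain values and the toy first-order plant in the usage lines at the end are assumptions chosen just to show the loop converging.

```python
# Minimal discrete PID controller sketch. The gains and the toy plant
# below are illustrative assumptions, not values from the talk.

class PIDController:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd = kp, ki, kd  # per-term gain values
        self.dt = dt                            # sampling interval (s)
        self.integral = 0.0                     # accumulated error
        self.prev_error = 0.0                   # last error, for derivative

    def update(self, desired: float, measured: float) -> float:
        error = desired - measured
        # Proportional: gain applied to the direct difference between
        # the measured state and the desired state.
        p = self.kp * error
        # Integral: error integrated over time, scaled by a gain; combined
        # with proportional control it eliminates steady-state error.
        self.integral += error * self.dt
        i = self.ki * self.integral
        # Derivative: current rate of change of the error, scaled by a
        # gain; anticipates the error's trend but amplifies signal noise.
        d = self.kd * (error - self.prev_error) / self.dt
        self.prev_error = error
        return p + i + d

# Toy usage: drive a noiseless first-order system toward a setpoint of 1.0.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
state = 0.0
for _ in range(1000):
    state += pid.update(desired=1.0, measured=state) * 0.01
print(round(state, 3))  # settles near 1.0
```

Even in this toy, the point Jones makes is visible: no single term explains the behavior; it's the three terms interacting that produce the result.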

In terms of complex, adaptive systems, the critical aspects are not fully knowable. You can, for example, probably know that certain behaviors and patterns are present in an organization based on how they do budgeting, how they fund projects, or what they incentivize. Anthro-complexity is a key concept in a complex, adaptive system, which is the people element. Peter Checkland defined it in Soft Systems Methodology as the human activity system, with uncertainty and unknowability because of the human element. There are other types of complex, adaptive systems in nature. A beehive is complex. An anthill is complex. The ants respond to constraints. How they respond doesn't really change over time except through adapting to the environment.

If you put ants in different environments, they'll do what they do. They're not self-reflective. Humans, though, can actually change the rules of the game. We can change the dynamics. It's another layer of complexity in a system. It's not just reconfiguration, it's reshaping the constraints. Football is an example. Humans created the rules over time, but they've changed. We have an official rule book, but what happens live in a match? The referees interpret what they see live on the field.

Then recently, VAR was introduced. A ref can make a call, but they can now be challenged by a VAR ref. This gives deeper understanding to the game. Socio-technical systems are another way of describing a complex, adaptive human activity system. There's a really great white paper on socio-technical systems, "The Evolution of Socio-technical Systems", by Trist. The analysis of these systems includes three levels: the primary work system, the whole organization, and the macro-social phenomena. Organizations which are primarily socio-technical are dependent on their material means and resources for their outputs. Their core interface consists of the relations between a non-human system and a human system.

The macro-social phenomena, for example, are an interesting thing, because when you have work done, say, in a town or in a geographical area, people know each other, and perhaps they live in the same neighborhood. There are friendships. There are cultural elements. In the studies around socio-technical systems, there is an insight that those social elements actually influence the ability to do work and have a part to play in an organization. In a large conglomerate, you could also say that macro-social phenomena are manifest, or at play, when you analyze bigger bounded groupings of the organization, the geography, the company history. When one organization is purchased and melded into the larger group, the culture, the languages, all of that factors in.

I was speaking with a product leader one time who explained an example that brings this to life. We were actually chatting about Conway's Law, which is also known as the mirroring hypothesis. He mentioned that there was a company he was working with that had purchased another company from another state in the U.S., and that the smaller company handled some packaging of their data. The main company was larger and often forgot to communicate with the smaller company. They would forget to include them in meetings. Everyone was co-located in one location except the smaller company in Maine, and they were always frustrated. They would sometimes just be left voicemails in the hope that someone picked up. They would be forgotten in meetings. Calls would be dropped, and emails too. There was just no communication.

When he poked into the software architecture and the product architecture of this company, the product's performance perfectly mirrored this communication breakdown between the larger company and the purchased smaller company. There were data losses and defects directly mirroring that structure. That's an interesting way of thinking about macro-social phenomena, the whole organization, and the primary work system when analyzing what happens in the socio-technical system. There's also the idea of internal networks. Those could be considered maybe micro-social.

Another example of this is from Lean. A lot of times, people thought that Americans would not understand how to work in a Lean way and work in the Toyota Production System, because in Japan, from a cultural standpoint, you get lifelong employment. There are employment guarantees. Because of that, there was a feeling of safety, and an expectation that you can pull the Andon cord and report problems, because you don't fear retribution just because you happen to be the person standing next to the problem. When you do have that fear of retribution and you don't have that freedom to pull the Andon cord, you might bury mistakes rather than exposing them, because of low safety. Those are some of the things you see when you start analyzing the socio-technical system and what's actually happening.

Patterns and Behaviors

I'd like to talk a little bit about patterns and behaviors. I like to use a tool that gives us a way to start explaining them. A colleague of mine and I were thinking through this concept: what does it actually mean for an organization to behave in a certain way? What are the patterns, then, that you might see? How do you link those to practices? This is the beginning of that exploration, and this is what we came up with. We used Westrum's Typology, which I really like as a starting point, as a framework for recognizing organizational cultures. There are three types: pathological, bureaucratic, and generative. We thought this through a little bit, and concluded that the behavior in each case is either power-oriented, rule-oriented, or performance-oriented. Then there are constituent patterns that are manifest, that reinforce, and that are seen based on that behavior.

Way on the left, in the pathological environment, you see low cooperation, and bridging is discouraged: bridging across teams and groups and roles and responsibilities. Novelty is crushed. It's very much, stay in your lane and respond to the hierarchy. That's that type of organization. If you go all the way to the right, you see the opposite of that in the generative culture, where, if it's focused and centered on the mission and focused on performance, you see patterns of high cooperation, bridging, and co-working. If there's a failure, it leads to inquiry and curiosity: what's happening? How do we understand this, and how do we work together to work through these failures? Novelty is enacted. Of course, in a large organization you can have pockets of each, but this is a good way to start thinking and talking about behaviors and patterns.

The thing about patterns is that they're just patterns until they aren't. Consider this outcome statement. This is an outcome statement that an organization actually shared with us one time. This idea that leaders can enable a generative culture where technology, products, people, and the operating system continuously evolve to enact innovation in the marketplace. We took this idea from Westrum's Typology around rule-oriented behavior, and thought, what are some other ways you could think about patterns that you might see?

In an organization, if you see monitoring and complying patterns, a focus on and overemphasis of monitoring and complying throughout the organization, some of the practices that might be linked to that would be traditional project management and siloed information processing. This is where you see companies that are data-rich but information-poor, with lots of data collection. There's also designing for security, where you have things locked down in the name of security and safety in a way that covers over and buries where problems are, rather than surfacing them and designing in a different way. This behavior, as manifested in these patterns, is likely antithetical to this outcome, and therefore, in this case, you would say that these are now anti-patterns. Because the minute you have a stated outcome and you analyze what the patterns and behaviors are, if they don't coincide, then you say those are anti-patterns.

Let's reconsider the outcome. In this case, we have the performance-oriented organization. What you might see, in this case, are guiding and enabling types of patterns. The practices reinforced by that would be adaptive ways of working, designing for trust, being insights-focused, automation, digital threads, that kind of thing, and value stream convergence. The point is to recognize that there are a few elements at play when it comes to behaviors, patterns, and practices.

First of all, there's social practice. Many of the things we do in organizations, we practice socially, and that's an integration of meaning, know-how or skill, and technology and material. Patterns emerge from how we use technology and how we structure the organization. There's a relationship between your goal and the materialized patterns. That's really the point to take away from here. I would like to focus on agility as an example to bring this to life, now that we've gone through these higher concepts of how we understand good performance, patterns, and behaviors, and that it's not just following the rules, as we said at the outset. There's another white paper that I found when we were doing some research on this, called "Agile Base Patterns in the Agile Canon". It's this idea of base patterns of agility, based on pattern language.

Regardless of the organization, and whatever methods or frameworks or agility techniques they might be employing, these are the base patterns to look for. This idea of patterns came through very strongly in this paper, and I thought it was a really good way to think about it. The first three, measuring economic progress, proactively experimenting, and limiting work in process, give you agility in an organization. If you want a resilient organization, you need to embrace collective responsibility. If you want to move beyond the actor boundary to the whole organization, you solve systemic problems. That's less focus on local optimization and more focus on how the system is behaving, what's actually happening, and going from there.

I'll go through these just to give some explanation about why these patterns are interesting to me. The pattern about measuring economic progress is about being insight-focused versus data collection-focused, which is one of the patterns and practices links that we discussed before. Many organizations will measure and report everything that they think is interesting, all the way up to executives. This increases cognitive load and decreases decision-making quality. Optimal outcomes can only be achieved through informed decision-making. Therefore, it is crucial to employ improved methods of measuring progress that provide better insights. These insights are essential to clearly understand the situation at hand and make confident decisions that lead to success. One of the techniques that we've used before, to help teams and organizations start to design well thought-out metrics that help them measure economic progress, is the ODIM technique.

The idea behind the ODIM technique is an evolving metric suite with a low number of metrics, where measurements can be performed frequently, and where the metrics are balanced so you can expose or reduce the effect of gaming. You identify when metrics are subjective, reduce bias, and then consider other factors around measures. Those are the main points about that. As for proactively experimenting to improve: much product development is iterative and involves some experimentation to achieve value. Still, change in a complex, adaptive system must also involve evolutionary and revolutionary experiments on the system itself, to understand the nature of the system, what's happening, what the patterns are, what the behaviors are, and then running experiments and tests to achieve the outcomes you want in the system itself. I like that the white paper called out this idea of becoming an improvement scientist in the organization, and that everyone plays a part in that. Then there's limiting work in process.

If you're familiar with Kanban and Lean, this is a topic you probably know. Work in process is work that's started but not finished. It can be code that's waiting for testing. It could be a design that you've finished but that hasn't been approved. It could be a requirement you've started to look at but don't understand. It's anything that you've started but haven't finished, and it adds to your cognitive load. It's mentally exhausting, and it's another thing that causes us to mistake activity for accomplishment. It hides process problems and wastes that are lurking in the system. If we don't limit our work in process, we can't see where the bottlenecks in the system are. If we don't identify bottlenecks, then how can we fix them? These are probing questions that following an Agile method won't really answer for you; you answer them when you dive into these questions, solutions, and tactics, with the constraints that you modify as you look into the system.
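A quick way to see why limiting work in process matters is Little's Law, which relates average work in process, throughput, and cycle time. The law itself is standard queueing theory; the throughput number below is a made-up illustration, not a figure from the talk.

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Throughput here (5 items finished per week) is a made-up illustration.

def average_cycle_time(wip: float, throughput_per_week: float) -> float:
    """Average weeks an item spends between started and finished."""
    return wip / throughput_per_week

for wip in (5, 10, 20, 40):
    weeks = average_cycle_time(wip, throughput_per_week=5.0)
    print(f"WIP={wip:>2} -> ~{weeks:.0f} week(s) from started to finished")
```

At the same throughput, doubling what's in process doubles how long every item waits, which is one reason bottlenecks stay invisible until WIP comes down.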

Resiliency is around embracing collective responsibility. This is the domain of shared responsibility for the system and for the work. There are also a couple of ideas from a complexity and social practice standpoint, one of them being inter-predictability. Inter-predictability is the way that you increase the transmission, sharing, and development of skills and knowledge through how you work together. It could be through peer-to-peer co-working, not asynchronously, but working together, building together at the same time. The reason this is important is because of this idea of tacit knowledge.

Say you had a project or something that you were working on, and someone asked, how do you do X, Y, or Z? You explain it to them. Then when they watch you do that thing, they say, "But you didn't mention that. I just saw you do something different". Then you sit back and you think, yes, there are some things I know how to do that I don't know how to explain, and it just comes to me when I'm doing that task or doing that work. That's tacit knowledge, and that's what you gain with inter-predictability: when you're working together with your teammates, you're able to build bridges through that tacit knowledge, through observing and sharing, and then building new things together. Deming was also famous for saying that quality is everyone's responsibility, so that's another way of embracing collective responsibility. In the Agile world, we hear a lot about collective code ownership, where the team is the base unit of value delivery. This is also the domain of skills liquidity.

Just like financial liquidity, where you can move money around and use it how you wish and how you need to, imagine the same thing with skills in an organization: being able to move them around, because you've created this inter-predictability. Also, if you've read "The Phoenix Project", there's this concept of de-Brenting the organization, where you have single points of failure. In The Phoenix Project, Brent had his hands on the keyboard and he did everything, and the minute he stepped back and helped others learn, it unblocked the organization, because that tacit knowledge was then being shared, and other people were learning how to do that work.

Then there's solving systemic problems, moving beyond the actor boundary. This is hunting for where things are out of balance. Where Lean says to eliminate waste everywhere, and paints waste with an equal brush, Theory of Constraints says to hunt for waste. If you think of the socio-technical system, looking through the organization, diving into the system, and looking for waste across it, or for how things are linked and connected in interesting and hidden ways, this is the domain where that comes to life.

Case Study - Patterns in a Cyber-Physical Environment (The Fan Team)

I'd like to give you an example now of a team that I worked with. It's in a cyber-physical product development environment, and it's a non-software example, so a step away from some of the typical work I've done, with a team of rocket scientists who wanted to figure out how to work in a different way. It brings to life this concept of patterns and behaviors and their link to practices. In this case study, I call them the fan team. They're research scientists in the aerospace industry. They did initial calculations and modeling around how to build an engine fan blade in a new material. They needed funding, and so they assembled a year-long plan on a Gantt chart with all the usual tasks, dependencies, and swags of how long each phase would take.

The leader who sponsored this effort was concerned. He said, could we present this plan in a different way? Our goal is, one, to prove that we can build a fan blade out of this different material, but it's also to prove that we can work in a completely different way in this research center, and to be more effective as an organization. Is Agile the answer? Should we follow some kind of Agile methodology? How do we do this? Where do we start?

I helped the team get together and, first of all, just describe a new plan for their funding. Instead of this Gantt chart, this rigid plan, where they tried to think of all the tasks in sequence going out for maybe a year, I helped them create a presentation on how they would perform the following experiments. The workshop was as simple as it could be. I had a whiteboard with risks, assumptions, issues, and dependencies on giant Post-its. They had a place for open questions, and then the whiteboard where they mapped out the experiments.

I asked them, what are you actually trying to accomplish here? Just start from that, stripping away everything that you've done so far, and how you had initially started and thought you were going to do this proposal. You've done some simulations. You've made some calculations. You think there's something here. What are you trying to do? They said, we're trying to prove that we can put a fan blade made out of this material in an aircraft and that it will work. I said, if you were going to test that in real life, and you're going to produce this fan blade, how would you do that? What would you do? He said, we can go over to this shop over here. They can actually manufacture it for us. We have a lab where we can throw that fan blade on a rotor and blast it at the power they would expect it to perform at. I said, write that down. They wrote that down.

Then the first question was, will the fan fail when pushed at the required power? They laid out, for their first experiment, the test bench they would set up, the experiment details, what they're measuring for, and where they're going to focus. I said, if test number one goes well, and experiment number one succeeds, what will you do in experiment number two? They said, the next step would then be to put a housing on it. Just like you see on an aircraft, the blade spins within a housing, and that would mimic the next step of the real-life test. I said, so the question is, if it doesn't fail, will it fail when put into a housing mimicking a typical engine fan housing? He said, yes, that's it. That was experiment number two, with some modifications, what they're going to measure, what they're going to do, and how they're going to run that experiment. I said, if experiment number two goes well, then what are you going to do? They said, if the housing doesn't fail, then we have some additional things we can do to get as close as possible to 100% efficiency.

We'll begin to do the tooling of the blades, refining and getting the blades smooth to get the perfect efficiency, and some other tweaks. I said, write that down. They presented this plan instead of the Gantt chart plan: three experiments, who the team was, where they'd do the tests, and where they'd source the fan blade, and they got funding. The other thing is, from a complexity standpoint, they preserved a little bit of optionality. They prioritized certain options based on their expiry. In this case, they deferred commitment on the blade tooling design, because they knew they didn't have to answer that up front, and they had found that they often got caught up and spent too long on blade tooling design.

Then we kicked off the project. The first thing we talked through was this idea of visualizing work in progress. They visualized their workflow on an Excel Kanban, and they had a system for updating it as a team. Why did they do it this way? First of all, this was a top-secret program. They couldn't put anything on the wall. In their context, they couldn't have a team room. They weren't used to working in team rooms anyway; they all had their own separate desks and would often just hand things off to each other. That was a completely foreign concept, and they couldn't actually do it given the top-secret nature of the product. They didn't have any electronic tools. They didn't have Trello. They didn't have Jira. They couldn't really use those either, because of security concerns. They asked me, could we use Excel? I said, of course. Let's just design a starting workflow together. We designed a starting workflow.

Two weeks later, I came in to help them again, and we discussed those challenges, and I gave them some rules of thumb. By then, they had added a color-coding system, and they had one person that would update the board in Excel when they got together in their morning session, a.k.a. standup, before their morning coordination meeting three times a week. They developed this system for knowing what was in progress, what was up next, what was blocked, and who was working on what, and that kind of thing. They just leveraged Excel, after some priming on how to use a Kanban.
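The board itself doesn't need to be anything elaborate. A rough sketch of the kind of structure the team kept in Excel might look like this in Python; the column names, WIP limit, and work item here are hypothetical stand-ins, not the team's actual setup.

```python
# Hypothetical sketch of a simple Kanban board like the team's Excel one:
# columns, work items with owners, a blocked flag, and a WIP-limit guard.

from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    owner: str
    blocked: bool = False

@dataclass
class Board:
    wip_limit: int
    columns: dict = field(default_factory=lambda: {
        "Up Next": [], "In Progress": [], "Blocked": [], "Done": []})

    def start(self, item: Item) -> None:
        """Pull an item into In Progress, respecting the WIP limit."""
        if len(self.columns["In Progress"]) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish something first")
        self.columns["In Progress"].append(item)

board = Board(wip_limit=3)
board.start(Item("Spin-test bench setup", owner="systems engineer"))
```

The point isn't the tool, Excel or otherwise; it's that the workflow, who's working on what, and what's blocked are visible to the whole team at a glance.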

The next thing is they worked cross-functionally, and met several times a week. I gave them a simple rule. I said, for work that you typically hand off between roles, look for opportunities to pair up, to parallelize, to work together, to sit next to each other literally, and collapse that work into one set of tasks, one moment of work together. Perform it simultaneously. If it helps, even mark that work on your board. If that sounds familiar, that's like pairing, maybe like mobbing, but definitely embracing collective responsibility and limiting work in process.

The other thing they did was have a rule for how many work items would be in progress at a given time. For them, this was so different, because they used to just throw a ton of things in progress and hand things off, from a systems engineer to a mechanical engineer. Now they were actually thinking intentionally about how to achieve flow, how to maximize flow, and how to maximize problem solving together with the tools they had at hand. The next thing is that they worked on solving systemic problems. This one was a complete surprise to me. There was one person that was responsible for, and had the knowledge of, how to set up the test lab. That person was overbooked. They were booked a month at a time. You couldn't get time with them to set up the lab, and it was blocking this team from doing their work, even when the lab was open.

One person from the team paired up with this person, documented the setup, and published it somewhere everyone could view it. That person gave them the thumbs up: yes, if you follow these steps for the setup, running the lab, and the shutdown of the lab, this is safe. It removed this bottleneck for the future. I couldn't have anticipated that. By getting together and working in this new way, the team learned how to solve a systemic problem together and to achieve value delivery together.

The other thing that happened involved the leaders who lit the spark, who put this team together and wanted to achieve this mission of solving the engine fan blade problem, and also of changing how they work and their methods for working. The leaders would just unblock the team when there was something in the way. They didn't tell them exactly what to do. They gave them no direction other than the strategic vision: demonstrate the new way of working based on principles of decentralization of some decision-making and control, small batches and experiments, and resiliency and responsiveness. The team, in collaboration with the leadership, rid themselves of review gates. They usually followed the V-model of engineering, which has lots of stage gates and review steps, lots of inspecting quality in versus building it in.

In this case, they had live demos in the lab with the physical product, with stakeholders, with leaders, and with other scientists working on the problem. They were focused on problem-solving together. In record time, they were able to prove the viability of this material in a real-world application. The other thing that happened is that they caught a major design flaw and were able to correct it very early. The chief rocket scientist said that on a similar project previous to this one, it took them months, and they were so far downstream when they uncovered a design flaw of a similar nature, that he reckoned they saved themselves maybe eight months by working in this different way. That was the success they had from that approach.

What were the observed patterns? What emerged? How did this actually work? In anthro-complexity, because humans work with constraints and change the constraints, I could not assume that this team would go into the room and build something. I couldn't assume that they would get somewhere. My small part was just to help set the conditions that would make them likely to be successful, through some simple rules and heuristics: constraints management, modulating the constraints rather than telling people what to do.

Then, once we established those heuristics and constraints and put them into action, the questions we ask are: are the patterns I expect to play out actually playing out? Can I make some changes to the constraints to see if the patterns I want begin to emerge? This is not just an ethical statement; the minimum amount of process produces better outcomes for everyone. The minimum constraints to get started equals better results. It's better to actually start somewhat under-constrained. What that means is this idea of incremental elaboration. Every intervention you make into a system causes other problems. When those problems happen, we begin to notice new things, if we practice by modulating constraints.

Now we might notice that the work in progress is going higher, or that the work item size is too big and we need to break the work down smaller. Or we have this external dependency that we need to grapple with: can we untangle from this dependency? Can we reduce dependencies in some way? It's the practice to performance that I mentioned at the outset. You start to notice things when you work in this way, when you're under-constrained and that problem solving on the system is actually within the team's grasp. When solutions are presented to teams and forced onto them, that's where you get things like learned helplessness, because they're trying solutions to things that they cannot yet observe or perceive.

I tried to map this on a Wardley map to think about it in a different way. The map is meant to explain value chains and the current state for an applicable market. I assumed that the organization, a whole organization, is an applicable market from this standpoint. From an intervention standpoint, we need to keep the value chain way on the right from functioning. The way we might do that is by intervening where you have the Gantt charts and the schedule data, the data collection focus. If we're talking about certainty and ubiquity in the organization, there are certain practices that are standard and assumed and just done, versus all the way over to the left in genesis, which are more emerging practices in an organization: something new, never been tried, or floundering. In the example I gave previously of rule-oriented behavior, it depends on the monitoring pattern, which depends on data-collection-focused practices.

Say you have perfect authority: you could outlaw the use of spreadsheet data in presentations, and you could outlaw Gantt charts and schedule data. For the fan team, for example, the outlawing would be presented as a challenge to them, and it would erode support for the data collection focus. That outlawing, or challenge, can then be framed as a constraint for a given context for the team. This opens up the opportunity space for alternative practices. It opens up opportunity space for doing things in different ways.

At the end of this project, the experienced rocket scientist said, I remember how we used to work. We'd just start building until we got it right, but somewhere along the way, all this red tape and standard work came in and we began to slow down. At some point, standard work made sense for this organization, but then it was used everywhere for everything, including engineering and research and development, and it slowed it down and stifled it. Imagine research and development being so focused on the schedule and scope. It seems a bit crazy when the whole point was problem solving a design problem for new technology and how to do that as effectively as possible.

Conclusion

In my conclusion, I state a few things, the same thing in different ways. The first complexity topic here would be bounded applicability. That is, beware of addiction to practices, the silver-bullet solution to problems, without knowing if a practice is meant for your context, and remember the fact that interventions cause problems. The point is to learn and build skills in what to measure and what to notice, all centered on the desired outcome: the relationship between your goal and the materialized pattern. There's a really great blog post about this by Jabe Bloom, where he wrote, the failure to recognize the bounded applicability of our tools results in less effective utilization of those tools. Teams using the wrong process for the wrong problem may lose confidence in the tool, which is very useful in other domains, and which they will now avoid.

The other concluding idea I had was around learning how to play well, and going back to the idea that performance is the sign of good play. You need to know what to notice and what to measure to determine something is well done. Principles, like Agile principles and practices, can create constraints that allow habitudes. You need to leverage these practices as constraints in order to observe and notice the patterns that emerge in your context, and what emerges as you use them is something unique to your organization.

The awareness of anthro-complexity in these systems changes our expectations: what we expect of ourselves and what we expect of others in product development, as leaders and team members. We can't expect each other always to have the right answers and consistently get results, especially if we haven't created the conditions for learning through an understanding of our patterns and through moving constraints to change the possibility space. I really can't emphasize enough the importance of creating conditions for learning. One brief anecdote about this: on a team I worked with very early in my Agile experience, the tech lead was on the verge of firing a young developer. He called him a poor performer. After some team training on XP, the lead, along with the team, decided to play with a couple of constraints, one being, let's test pair programming and rotating the pairs. That young developer began to flourish in the new collaboration format, to the point that the lead thought he was a whole new person.

The fact of the matter is that the constraints had been modulated so that this young developer could now develop and build together with his team, versus getting an assignment in isolation in their respective cubicles. It was a step towards being more performance-centered. There was less siloed information processing. Really, there is a human level to all this that can make work better for everyone.

 


 

Recorded at:

Apr 29, 2025
