Hi.
And this is sort of one of the episodes that we are going to do as a preparation for the Software Architecture Gathering, which will take place in Berlin.
And there is a discount code for 15% off, which is SATV_SAG2515, like 25 for the year and 15 for the 15% discount.
And we would be happy, both of us, I guess, would be happy to welcome you at that conference.
So first of all, Barry, can you say a few words about yourself?
Yes.
Hi, everyone.
My name is Barry O’Reilly.
I’m a software architect and researcher, and I’m based in Stockholm in Sweden.
I work primarily with research within software architecture, with teaching within software architecture, and also consulting within software architecture.
I’m previously a chief architect at Microsoft Europe, and I’ve been working in the industry since the late 90s, which is a very, very long time.
And yeah, good to be here.
Yeah, great to have you on the show.
And so one of the questions that came to my mind is, what made you start researching residuality theory?
So why are you interested in this subject?
## Barry’s Journey to Residuality Theory

So yeah, I got started with this about 12, 13 years ago, when I was given a project.
I was at Microsoft.
I was given a project, and I was asked to train up the next generation of software architects, to come up with a training program for them.
And I went to a meeting with some senior people, and I said, look, what do you want me to teach them?
What is it that you want me to teach them?
Enterprise architecture?
Should I teach them patterns?
What should I teach them?
And what they said to me was, well, we want you to teach them whatever it is that you do.
Whatever you’re doing works, so let’s teach that.
And this was during the big transformation at Microsoft, where we were going from the sort of on-premise world into cloud.
There’s a lot of nerves.
People weren’t really sure how this thing worked.
And I left that meeting.
I thought to myself, I have no idea what it is that I do as an architect.
I know that it works.
I know that my stuff is running.
I know that customers are happy.
I know that there’s a level of quality to this stuff, but what is it that I do every day?
And I went back through, you know, the first sort of 15 years of my career, and said, well, what is it that I do?
And I looked at the literature.
So there’s stuff like TOGAF, which says do this and do that, and there are patterns, which say do this and do that.
And I realized that I’m not doing any of these things.
I don’t follow anything that’s in any of these books.
I have come up with a process by myself, which I don’t understand, but I know that it works.
And I went to talk to some other senior architects, and I said, well, what do you do?
And, you know, these are senior people who I respected, who had built big stuff, including parts of Azure.
And I said, what do you do?
How do you do this?
And they didn’t know either, and I started to get a bit scared then, you know.
But when I talked to junior architects, I said, what do you do?
They had very clear, very definite answers.
They knew exactly what it was that they did, but their stuff wasn’t working, so it wasn’t really much help.
So I started to dig into this, and I found that there were things I was doing repeatedly in projects that didn’t make any sense, that weren’t in the books.
And so one of the things I discovered, there’s a book from 1983, it’s called The Reflective Practitioner by Donald Schön.
And Schön says that this is perfectly normal.
If you work in a complex environment, you don’t actually have a way of doing things.
What you do is you survive, and you do some things wrong, and you do some things right.
And over time, you build up this magical gut feeling that helps you navigate complex scenarios that would overwhelm anyone who hadn’t got your experience, but you’ve built up this gut feeling.
And that’s what I realized, what we were doing.
And what I was doing in my projects, and the thing I noticed I had in common with other senior architects was that we were all incredibly comfortable in uncertainty.
Whenever stakeholders couldn’t answer questions straight, whenever the goals weren’t entirely clear, whenever we didn’t really know what it was we were doing, senior architects would be comfortable in that space, and they’d relax and say, let’s figure this out.
Whereas the juniors would, the people who weren’t doing so well at architecture would say, no, I’m not doing anything until you tell me exactly what it is we have to do.
Until I can talk to customers, until I get a specific requirement, I’m doing nothing.
And I realized that this ability to handle uncertainty was the thing that was making the difference between successful architecture and unsuccessful architecture.
And I realized that what I’d been doing in my practice was that I would turn up, and the first thing I would do is I would listen.
I would say, what is the problem here?
And even if I didn’t really understand the problem, I would solve it.
I would draw an architecture, and I would say, here it is.
Here’s a solution to something that might be in the neighborhood of the problem we’re talking about.
And then I would start breaking the architecture.
I would start saying, well, what would happen if something in the environment changed?
What if I’m wrong about the number of customers?
What if I’m wrong about what customers actually want?
What if I’m wrong about lead times?
What if I’m wrong about the margins in this project?
What if I’m wrong about the social implications of what we’re doing?
And as I questioned myself, then I found that my architecture started to get stronger and stronger and stronger.
And the more I stressed the architecture, the more it matured and became stable and solid.
And I was able to do this without investing a lot of time asking my poor stakeholders, what are your non-functional requirements, or torturing them in that way, forcing them to answer questions they didn’t understand.
And I realized that what I was doing was I was starting off with a very simple model.
I was being happy with a simple model because I knew it was wrong.
And then I would try to break that model, that mental model I had of the business context and the relationship that I had to the code.
And as I pushed and pushed and pushed and broke it, eventually I would reach this tipping point where the architecture would start to survive no matter what I threw at it, no matter what way I pushed on the model, no matter what I changed.
I would have an architecture that seemed to survive things that weren’t in the specification.
And when I look back over the sort of history of the projects I’ve built, the stuff that I’ve done, you can see that these things lived for a very long time.
It was very hard to break them and they were able to absorb changes, sometimes massive changes, like the move to mobile and things like that, without breaking.
And so I started to put these ideas together.
And at the start it was like, well, there’s this weird thing.
If you’re really negative and pessimistic, then eventually you end up with a really good architecture, but I couldn’t understand it.
And then I left Microsoft, I went, moved on, did other things.
And people would call me occasionally and say, hey, those ideas you were talking about, do you want to come to this conference and talk about them?
Do you want to come here and talk about them?
And I did a little bit more work and I talked about them.
And then I went to this conference called Domain Driven Design Europe, where they did the whole YouTube thing and the ideas got pretty popular.
And more people started calling and saying, come here and come there.
And I started to, one of the things that started to worry me was, you know, these are just my ideas.
They could be nonsense.
I mean, it works for me.
And people seem to appreciate the ideas, but are they true or is this just another trend?
Is this just something that sounds good?
So I went back to university, found a professor who could help me along the way.
And I said, look, I have these ideas, this idea that if you randomly stress an architecture, it makes the architecture more likely to survive unknown forms of stress, which is what residuality theory is.
And I said, I want to test this idea.
And so for the last couple of years, I’ve been moving this from those initial ideas to something that is properly described in scientific terms and with proper experimental design and a validation that shows that this actually works.
And I guess sort of to sum it up, I think there are two very interesting thoughts here.
So one of them is that we are, let’s say, in this mess, like we don’t really have a good idea about how to do software architecture, and you want to tackle that by using proper science, literally science, as you said.
And the other idea that I find interesting is the one that basically says, okay, so if we build some architecture and we see how different requirements would make it break, then we come to some conclusion.
And I think even that, and we will probably discuss it later on, is a little bit counterintuitive, because we tend to build very specific solutions.
I would argue that quite often, the more general approach, something that does more than is required, is seen as over-engineering.
And your idea clearly contradicts that and for good reason.
So those are the two things that I sort of think are interesting or are very interesting.
The one that basically says, okay, we are going to do this in a scientific manner.
And the other one that basically says, no, it’s not over-engineering to build a system that can actually withstand some other requirements.
So I think this was a good introduction about what this is in a nutshell and also why you started.
So what is it really?
Like can you give a few more details and can you talk about residuality theory in more depth?
Yeah, so residuality theory can be stated really, really simply as I do in the book.
And what it says is that a random simulation of stress, stress being anything that’s outside your current model, your current understanding of a business system, not just a technical system, but the entire business system, a random simulation of stress is a better way of producing architecture than defining requirements or risks or iterating over code.
And when you do this, you produce an architecture that is more likely to survive off spec, an architecture that is more likely to survive when at some time in its future, something in the business context changes that wasn’t in the original specification.
And that was my original gut feeling around what I was doing.
So I would have said that 10 years ago: this is what you do to make sure that your architecture is going to survive over the longer period, because most of the time an architecture is going to have to meet requirements that you don’t know about as an architect.
And when that architecture crashes and burns because that requirement that we didn’t know about can’t be met by that architecture, it has to all be torn down, that will fall back on you.
They’ll say, what a terrible architecture, who designed it like this?
And it will be your fault.
And the whole point of architecture, the only reason that we have architecture, the only reason that we’re talking about it is that we have to build systems for futures that we don’t know.
If we only had to build functions, if we only had to collect requirements, if we only had to build features, then we would not need architecture, we would just build the features and we would, you know, that’s easy.
And if anyone who’s watching this, you know, you’re interested in architecture, by the time you get to architecture, building features, writing code, that should be easy for you.
That’s sort of the prerequisite to coming into architecture.
And so, residuality then makes the statement: a random simulation of stress will produce very strong, resilient architectures that will be able to meet undescribed needs.
And that’s what we want to do as architects.
And so, architecture, I believe, really came into being when Dijkstra discovered that we had a serious problem, that our code did not hold up to unknown changes in the business environment.
And so, residuality then is a theory that says why this is: we dig in and we give a scientific explanation.
Why would this happen?
Because it doesn’t make sense, right?
From the start, when I started saying these things, I was like, look guys and girls, this is weird.
I don’t know why this works, but when I do it, I get a good result.
And every time we do it, we get a good result and we don’t know why.
And so, I spent a bunch of years digging into why does this happen?
Why does a random simulation of stress, a bunch of made up stuff, rather than specific requirements, which we spent half a century working on before, why does it do a better job than that?
And what I started to do then, given that I had realized that the architects who could do this work were comfortable with uncertainty, was to dig into that.
I started to read a little bit around uncertainty, which led me to the work of Nassim Taleb, which led me into a huge rabbit hole called the complexity sciences.
And what I found in the complexity sciences was, through the work of, among others, Prigogine and Stuart Kauffman, a scientific explanation as to why a random simulation of stress would lead to an architecture that survived.
And Stuart Kauffman is a biologist whose work on this started in the 1960s.
And his body of work was trying to figure out how do we get to organisms?
How do we get from a bunch of amino acids floating about in a primordial soup?
How do they find each other?
How do they connect?
And how do we get to the beetles?
How does that happen?
And he described this as a series of networks, random Boolean networks, and the way that they connected to each other and the way that they behaved allowed systems to grow to a point that we call criticality.
Criticality is the point in any system where the system has a sufficient configuration to be able to move when things change, but not so complicated that it collapses under the effort of maintaining itself.
And you’ll find that pattern in biological systems and economic systems and social systems, and in software systems, and in software architectures.
And so you can draw straight lines from his work on the structure of biological systems and evolution straight to the structure of software systems, and the number of components that we have, and how linked they are to each other, and how that affects a system’s ability to respond to unknown unknowns that at some point may turn up in its environment.
Okay, so it is something that we can also observe in nature, so that’s great concerning in particular the scientific foundation.
I guess one of the questions that are more or less obvious is, so how do I do that?
So you’re talking about stresses or stressors, I think you call them in the proper terms.
So I take some architecture for some problem.
I don’t know, we could make something up, so we could take our average e-commerce system, or maybe you have a better idea.
So you actually talked about, in the talk that I saw, you talked about e-mobility and the charging stations.
So what would some stressors be, and how can I come up with them?
Yeah, so the way that it works is, the tool that we use is called a stressor analysis, and I deliberately designed the tools for this theory to be really, really simple, so that no software vendor would come along and try to sell you tools for doing it.
And so this is just an Excel spreadsheet, and we record stressors; stressors are basically model stress, which is what we’re looking for.
So I have a model of a system, so in the case of the fast car charging platform that we were building, we had a model of the system that was very naive, and your initial models you have for any system you’re working with are naive.
And in that naive model, someone was going to register to be a customer, they get a little key ring, they drive up, they hold up the key ring at the charger, and it sends a message up to a back-end system that checks their subscription and starts to charge remotely.
And so that’s the naive architecture, and it’s based on a model that we, as architects, because we’ve been around for a while, know is wrong: the initial picture we build up of a system is always wrong.
We’re always going to discover new things, we’re always going to understand it differently at the end of the project than we did at the start.
And so what we start to do is we start with the assumption that this model is wrong, and I’m not going to wait until I have code in production to find out that no, actually we misunderstood this, this is not how it works.
I’m going to start now as an architect stressing this system.
And I’m going to say, okay, well, we’ve assumed that customers are going to behave in this particular way, we’ve assumed that they’re going to pay for this, we’ve assumed that the market’s going to grow in a certain way, what if those assumptions are wrong?
And so you start to throw stress at the system, like we build this and nobody comes.
What do we do then?
Does that actually affect my software architecture?
What would happen if we built this and no one used it?
And so we don’t have an answer to that question, so we go to our business stakeholders and say, what would you do if no one used this?
Would you pivot?
Would you change?
Is there another way to make money out of this kind of thing that would then affect my architecture?
And so we start to look for things that might go wrong, and so we’re assuming that this little key ring is going to work all the time, and so we ask the question, what happens if someone drives up and the key ring doesn’t work?
It breaks, and we realize that this would be a huge problem for us because that person might not be able to leave, they might not have enough electricity in their car to go anywhere else, they’re going to be upset, there’s going to be queues, going to be problems, and that’s very much a business stressor.
It’s got nothing to do with the back-end technology, and we start to say, well, how are we going to solve this problem?
So we go to the business people and we say, what are you going to do if these key rings break?
And they say, yeah, actually, now that you’ve lifted that point, we need to change this, and we’re going to add license plate recognition so that people can charge and leave without having to identify themselves in any way, and then we’ll send them a bill afterwards.
Brilliant.
Now, just that little change completely changes our architecture.
There’s a whole set of new processes.
Billing has completely changed, registration has completely changed, and so we have to change our architecture.
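A minimal sketch of what a few rows of such a stressor analysis might look like, with invented column names and the charging-platform stressors from this conversation (Barry describes the real tool as just an Excel spreadsheet):

```python
# A hypothetical stressor-analysis table for the charging platform.
# Column names are my own; the real artifact is a plain spreadsheet.
stressor_analysis = [
    {"stressor": "Key fob fails at the charger",
     "impact": "Customer cannot start charging and cannot leave",
     "affected": ["identification", "billing"],
     "residue": "Add license plate recognition; bill after the session"},
    {"stressor": "Nobody uses the platform",
     "impact": "Business model fails",
     "affected": [],
     "residue": "Business decision: pivot or accept; no architectural change"},
    {"stressor": "Cars stay plugged in for hours",
     "impact": "Chargers blocked, revenue lost",
     "affected": ["billing"],
     "residue": "Sliding-scale pricing after 25 minutes"},
]

# A quick scan of which components the stressors keep pointing at.
for row in stressor_analysis:
    print(row["stressor"], "->", row["affected"] or "(business only)")
```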
But how do you come up with these?
I mean, basically, you’re describing two stressors.
So one stressor is you’re not successful in the marketplace whatsoever, and I would be surprised if there is any answer to that.
I mean, you said you could pivot, and the other one is… and you throw that at some business people or whoever it is, and then you take it from there.
Or is there a more structured approach where you say, okay, these are the things that usually are stresses that you should consider?
So what we try to do, first of all, is catch, in a structured way, all the things that you would expect to go wrong; those have to be there.
A server goes down, a queue gets full, all of this stuff, yeah, that has to be there.
And that should be a part of most people’s normal approach to architecture.
Unfortunately, it’s not, but that’s discipline.
We need that systems engineering.
Outside of that, we need to practice this.
A lot of technical people are very nervous about this.
They don’t feel qualified to do this.
But I will throw things at an architecture, like what happens if there’s a fall in the dollar against the euro, right?
Where does that turn up in my architecture?
What happens if a war breaks out somewhere, or a pandemic?
Or what happens if there’s a drop in the price of aluminum?
How does this impact?
Can I talk about this in a way that relates it back to my architecture?
And those things have to be fairly random.
And they have to be, because we do have a tendency, especially as technical people, as mathematically minded people, to zero in on the things that we believe are probable.
But the wider you make the scope, the better the result will be.
So when I run a standard project, we’re looking at 200 to 250 stressors.
They’re never the same.
They’re always driven by the context.
They’re driven by brainstorming, by talking to stakeholders.
And there’s little tricks that I teach in the courses that I’ll teach in the workshop at Software Architecture Gathering, where we pick up on assumptions and say, I’ve assumed that this is always going to happen.
What happens if my assumption is wrong?
I’ve assumed that a customer behaves in this way.
What if they don’t?
I’ve assumed that a competitor behaves in this way.
What if they don’t?
And this takes a bit of practice.
And what I find is a lot of technical people are very stiff.
They find this very difficult.
They find it challenging.
They’re afraid even.
They maybe feel like I’m not qualified to have this conversation.
And what I find in the workshops, in the longer form workshops, is that I have to get people to relax.
And once they relax, this becomes really quite easy.
But it’s that fear of being wrong.
And that’s a huge thing for architects because one of the things that I say about residuality is that when you’re a programmer, all of your effort goes towards being correct all the time.
But when you’re an architect, you have to be critical, and that is a very, very different thing from correctness.
So, there are a few comments or questions in the chat.
So, one is actually one that is quite – it’s like 15 minutes old.
So, Stefan Sonnenberg-Karsten said, every model is wrong, but some are useful.
And I mean, that’s a final citation from I don’t know who.
Then Michael Trapp said, are these just theoretical considerations about which stress scenarios would break the system?
In my experience, errors are often only found when new requirements are actually implemented.
Okay.
So, yes, these are theoretical considerations because we’re thinking about them before we’ve written any code.
You can get quite far with these stress scenarios without actually doing things, but there will always be errors that pop up in production that you won’t have thought of that you will have missed for some reason.
What we’re trying to do is to bring an architecture that’s going to make it easier to solve those problems.
So, one of the things that’s really important to remember about residuality is that it won’t solve all of your problems.
It’s not a silver bullet.
It won’t solve every possible future scenario.
Things will still break.
And I think there is this other thing.
So, before we talk about the next few questions, I think you should talk about that part where you basically say, well, if you take care of one stressor, then chances are that you also take care of a few other stressors because I think that’s sort of the missing link.
Yeah.
So, what Kauffman showed back in 1966 was that if you have a complex system with millions and millions of elements in that complex system, and you look at all the possible combinations, if you take an element in the system, it has a number of possible states, and the total combination of all the elements in all the different states is astronomical numbers for any even moderately complex system.
And what this means is that a complex system is, by virtue of its number of possible states, completely overwhelming for a human being.
It’s impossible for us to interact with a complex system and all of its possible states.
What Kauffman discovered was that when you put these elements of a system in and you create links between them, you create relationships between them, those relationships constrain the number of states that a system can arrive in.
And so, for example, Kauffman did this using something called random Boolean networks, which we won’t get into today, but in a random Boolean network, he used a random Boolean network with 100,000 elements, and he said, look, with 100,000 elements, and they’re Boolean, so they only have two possible states, there are two to the power of 100,000 potential states in this system.
So, if you’re a software engineer and you’ve got a system with two to the power of 100,000 requirements, it’s over, right?
You’re never going to capture them all.
You’re never going to be able to write them down before the universe ends.
But what he said was that when you start then connecting these nodes to each other and make them dependent on each other and make them able to affect each other, the number of potential states falls from two to the power of 100,000 down to 317.
Now, 317 is still nasty, but we can deal with 317.
We can’t deal with two to the power of 100,000.
And so, what this tells us as architects is that we need to stop thinking about every tiny little detail of every element in our systems, and we need to focus on the 317, not on the two to the power of 100,000.
And what we do when we come up with these stressors is that we’re trying to find those 317, or a much smaller number of states that the system is going to arrive in, and we make sure that our architecture is able to survive in those 317 states.
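As a quick illustration of the numbers mentioned here, assuming Kauffman’s original estimate that a random Boolean network of N elements settles into roughly the square root of N attractor states:

```python
import math

N = 100_000                       # Boolean elements in Kauffman's example network
total_states = 2 ** N             # each element has two states, so 2^N configurations
attractors = round(math.sqrt(N))  # Kauffman's estimate: roughly sqrt(N) attractors

print(f"2^{N} written out has {len(str(total_states))} decimal digits")  # about 30,103 digits
print(f"estimated attractor states: {attractors}")  # about 316, the '317' quoted in the talk
```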
Can you break that down to the example that you gave with the charging stations?
Yeah.
So, what happens is, so you have all of these possible things that could happen in your system.
There are far too many of them to capture.
You won’t get them.
You won’t capture them by doing risk management or talking to your stakeholders.
And so, what we do instead is we come up with these random stressors.
And so, I gave an example already.
What happens when the key fob fails?
And when the key fob fails, we shift our system a little bit.
We say, we’ll use this ALPR thing, this automatic license plate recognition.
We’ll bill people when they go home.
We make another couple of changes to the system.
One of the things we realize is that people are going to drive into our chargers, so we want to have some redundancy because they’re going to be damaged.
And we also want to have security cameras because we want to make insurance claims for those who drive into them.
Another thing we realize is that there’s nothing stopping people, or our initial assumption is that people will drive up, plug in their car, charge for 25 minutes, and go away.
And so, we question that assumption.
What if they don’t do that?
What if they plug their car in, and then they leave their car there for a very long time?
And, you know, initially people say, but why would you do that?
The car is charged.
And then we realize that at golf courses, this is exactly what people do.
They plug their car in, they go away, play a round of golf, and then they come back and our business model is screwed.
And so, what we do to solve this problem, then, we make a little change to our architecture.
And the change is that we’re going to charge on a sliding scale.
And so, if you’re there over the 25 minutes, then it gets progressively more expensive every minute, which leads to a $300 or $400 parking spot, which is good for us.
We will make more money out of this behavior, and we’ll also discourage it.
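An illustrative sketch of that sliding scale; all prices and rates here are made up, not taken from the actual platform:

```python
def session_cost(minutes_plugged_in: int, energy_kwh: float,
                 price_per_kwh: float = 0.40, included_minutes: int = 25,
                 base_overstay_fee: float = 0.30, escalation: float = 0.01) -> float:
    """Bill the energy normally, then add an overstay fee that grows every
    minute beyond the included window, so blocking a charger gets expensive."""
    cost = energy_kwh * price_per_kwh
    overstay_minutes = max(0, minutes_plugged_in - included_minutes)
    for minute in range(overstay_minutes):
        cost += base_overstay_fee + minute * escalation  # each extra minute costs a bit more
    return round(cost, 2)

print(session_cost(25, 40))   # a normal 25-minute charge: just the energy, 16.0
print(session_cost(240, 40))  # a four-hour "round of golf": roughly 310, the expensive parking spot
```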
And those are all point-to-point things, right?
So, we have a problem and a solution, and that’s the way we’ve been taught to think as architects.
So, what we’re trying to harness with the residuality theory is that as we add together all these little point-to-point things, as we discover these little things, what we’re doing is we’re discovering, every time we move, one of those 317 possible states that a system can return to.
And once we’ve discovered one of those states, then, and we build our architecture to survive it, not necessarily functionality, but our architecture to survive in those states, then what we notice is that random things will happen in the future of that system.
But because the system is destined to land in one of the limited number of attractor states that Kauffman describes, we will survive that stressor that we don’t know is there.
Yeah.
And to me, I’m not sure whether you would agree that this is a way to put it.
So basically, what I learned is, so there might be something like the automated license plate recognition that is something that I would implement as part of my architecture.
And I basically did that because of some stressors.
So as you said, because maybe the ring is broken, and therefore I do the billing because of the license plate recognition.
And then I realized that I can also use that to sue people who broke my system or just drove into it.
So it turns out that there are some things, like in this case, the automated license plate recognition, that are actually useful for multiple different things.
And therefore, there is just one thing that you sort of implement, and it has a lot of impact on different scenarios that I think of.
Yeah.
So chances are that I only think about one scenario, like the billing.
And then I realize, oh, wow, I have this license plate recognition, and I see that people are actually mistreating the charging stations, and now I have their identity so I can go to them and have them pay for that they drove into it and these kinds of things.
So there is one thing that I implement, and it turns out that it’s useful for multiple different things, even things that I didn’t think about.
So is that a way to put it?
Yes, absolutely.
Yeah.
And so when this happens, what happens is that we add enough components to our architecture.
We broaden our architecture out until it reaches this point of criticality where it’s able to survive the things that it hasn’t been designed for.
And this is what Kauffman sees in biological systems as well; in complexity theory, we call this the edge of chaos, or criticality: the ability of a system to survive things in its environment that it hasn’t been designed for and doesn’t know about.
Yeah.
And so there is this comment by David O. on YouTube, and he said, I cannot yet grasp the aspect of randomness regarding stressor identification.
It seems that this is highly dependent on the expertise of domain experts involved.
And I guess the answer, therefore, would be, well, if you get some stressors, you probably get the right ones because the ones that you’re missing will have the same impact on the architecture.
Yeah.
And so the thing is that you don’t have to get stressors that are correct.
They don’t have to be correct.
They just have to identify the attractor spaces that a business context can arrive in.
And this is one of the things that developers struggle with when they’re trying to use this method, is that they’re trying to be correct.
So they’re trying to only predict stressors that will actually happen, but you have to let go.
And this is what the randomness is about.
You have to let go of that need to be correct.
You have to be prepared to step outside of what we normally do, and all of our tools point us towards being correct in our predictions, correct in our assumptions all the time.
You don’t have to be correct with this method.
You just have to make a lot of noise.
If you’re very technically minded, the closest analogy to this is diffusion models in machine learning, where you introduce lots and lots of noise, and then you can actually produce pictures and solid things from it.
So Pascal made a remark.
So he said, Residuality theory reminds me of threat modeling.
I don’t just mean vulnerabilities, but also how the system can survive, remain usable and secure, and where I take or allow risks.
Yeah.
So there very definitely is some…
We tend to look at new ideas through the lens of old ideas.
So we try to say, well, this is just like this, in order to make it easier for us to understand.
Right.
So that happens with risk management, and when we’re talking about this, and we’ll maybe get to that later.
Threat modeling is very much what this is about.
But threat modeling, as it’s practiced today, tends to focus on realistic scenarios.
We don’t need these things to be correct.
We don’t need them to be realistic, because we’re changing the structure of our architecture based on them.
We’re not actually responding to each and every event as if it’s something that’s actually going to happen.
What we’re doing is we’re investigating where does our architecture crack?
Where does it break?
Where does it fall apart?
And so there are some similarities.
I would find that if you took someone who’s used to doing threat modeling, they would be a little bit too stiff when they worked with residuality, not able to bring the randomness that’s needed, and still very focused on being correct.
And there’s this focus on being correct and being efficient all the time.
And I think we have to get architecture right before we start thinking about being efficient.
I guess the problem with threat modeling is if you forget one threat, then that might be a problem.
Well, as you said here, it’s a different thing.
Yeah.
And so the goal in threat modeling is one-to-one, right?
So we’re getting a one-to-one mitigation of risk.
Residuality doesn’t really care about that one-to-one mitigation.
It’s looking to get this behavior of criticality when we have a system that has the property of being able to survive unknown sources of stress.
That’s what we’re trying to get to.
Then Pascal again, he said, the different security zones also remind me how I can combine components not only with a security focus, but also with a component focus.
Where do you see the big difference if there is one?
But I think you answered that.
Yeah, I think that’s a continuation of Pascal’s first question.
Yeah.
Okay.
Then Ingo Eichhorst said, after reading your two books, applying the ideas nearly every day and promoting them to my sometimes puzzled colleagues and friends, I have one question.
When is the next book released?
Okay.
So there is a third book, which is half written, which is on the technical aspects of architecture and the technical aspects of residues.
That is in the pipeline.
But the big book, which is essentially a rewritten version of my PhD thesis, should be coming around the end of the year or early next year.
And so the book Residues, that some of you might have read, is called the little book, but the big book is coming.
And it has the data and the experimental data and the results in it.
So I would definitely link to your books on LeanPub.
That makes some sense.
Peter Kolbe asks, who pays for the evaluation of these 100 or so random stress scenarios?
No offense, just from experience.
Okay.
I’m not offended at all.
So whenever we run the workshops, what you’ll find is that a team who’ve never met these ideas before, a team who’ve never met each other before, will be able to produce well over 100 stressors and they’ll be able to evaluate them and they’ll be able to move their architecture and get a result in the space of a couple of days.
When I do this, I can generate 200 stressors in the period of a couple of hours.
In every project, in every serious complex project that I’ve ever worked in, I’ve wasted at least 80 hours in the first few months on pointless meetings.
So while other people are doing these pointless meetings, I can do this work on a bit of paper while they’re talking.
And so the cost is virtually nothing.
And this work has to be done anyway.
If you go and find a senior architect and you look at what that senior architect is doing, right, what they’re doing is they’re walking around the architecture from a bunch of different perspectives using a bunch of different tools.
They’re actually stressing.
They’re doing everything that I’m talking about, but they put different labels on it, different names, whatever the hot tool of the minute is that they’re using.
They’re actually stressing their own models.
This work gets done anyway by a senior.
If you have a senior architect in your team, they just can’t articulate it.
So residuality, in my mind, is something that happens in the mind of every senior architect and it happens in every successful project.
We do this anyway.
Now we’re just being explicit about what it is that we do.
Yeah.
And also, I mean, what you said, I think it’s very important.
So there are other things where you can save more time.
And also, the investment is probably very worth it.
I have to admit that, I don’t know, it tells quite a lot about an industry that we have these kinds of questions, and I’m not arguing that you get these questions in particular from business people, but at some point it should be clear that the investment we make in architecture is returned many times over during the project and later on.
And I mean, the example that you gave with the charging station, I think makes it very explicit, right?
If you don’t think about that, then you will have a problem sooner or later and it’s going to be an expensive one.
Yeah.
So I’m not sure what to make of this.
So they said, isn’t it very expensive to expect my architecture to handle any kind of stress?
So I guess that’s the point that you made, that this is actually, well, cheap.
I think one of the things that comes up is, is it expensive to meet all these imaginary stressors that might not actually happen?
In my experience, when you come up with a stressor, you realize that, ah, this would break our architecture.
But if I move this boundary over here, then it won’t break my architecture.
And the cost of moving that boundary isn’t even transparent.
It’s not even a cost.
It’s a design change.
Sometimes we’ll come up with things that do cost a lot of money.
Then you’ve got to go and you’ve got to seek the business approval for that.
You don’t have to implement everything that comes up in your stressor analysis.
Yeah, so can you give an example for that?
Because if we stick to the automated license plate recognition, that is something that is an additional feature and, you know, maybe it’s not that expensive, but still it involves some costs.
So can you give an example where it’s actually almost for free?
So the automatic license plate recognition, for example, caused us to go into the architecture, where the naive architecture was built on the assumption that everyone identifies themselves up front.
Now, when customers come into the system, they can either identify themselves up front, or they can just plug the charger in, and we’ll let them charge and make sure that we have their license plate.
Those are two separate processes, and we had to break a whole bunch of stuff apart to allow those two separate processes to exist at the same time, and so we had to shift component boundaries and say, oh, actually, we’ve coupled the charging to the key ring, and what happens then?
And this is fascinating: 10 years later, 10 years after the fact, the European Union comes with a new set of regulations, called AFIR, which govern the electric charging of cars.
One of the regulations is that you have to, by law, allow for ad hoc charging, and ad hoc charging means that I can blip my credit card, charge, and move on.
Now, we never built for credit cards; we weren’t thinking about that; we didn’t want to do that because it’s expensive; but now it’s been mandated.
Because we broke that architecture up, because we decoupled from the authentication mechanism, the identification mechanism, we were able to just drop ad hoc charging into this architecture without doing anything, and that cost us nothing.
That breaking up of those components doesn’t cost anything in an architecture.
Most of your business people don’t know where you’re setting boundaries in your code, and they don’t care either.
If you say there need to be three components here, not two, there isn’t a business person on earth who cares about that.
So that’s a kind of invisible thing that doesn’t cost a great deal of money.
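A minimal sketch, with invented names, of the decoupling described here: once a charging session depends only on an identification interface, the key fob, license plate recognition, or a later ad hoc credit card flow can be dropped in without touching the rest.

```python
from typing import Protocol


class Identification(Protocol):
    """Anything that can tell the platform who, or which car, to bill."""
    def identify(self) -> str: ...


class KeyFob:
    def __init__(self, customer_id: str) -> None:
        self.customer_id = customer_id
    def identify(self) -> str:
        return self.customer_id


class PlateRecognition:
    def __init__(self, plate: str) -> None:
        self.plate = plate
    def identify(self) -> str:
        return self.plate


class AdHocCreditCard:
    """Added years later for the ad hoc charging rule; nothing else changes."""
    def __init__(self, card_token: str) -> None:
        self.card_token = card_token
    def identify(self) -> str:
        return self.card_token


def start_charging_session(who: Identification) -> None:
    # Charging no longer cares how the customer was identified.
    print(f"charging session started for {who.identify()}")


start_charging_session(KeyFob("customer-42"))
start_charging_session(PlateRecognition("B-AB 1234"))
start_charging_session(AdHocCreditCard("tok_0001"))
```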
And I think that’s also another great example of how your architecture is fit for things that you didn’t think about when you originally designed it, and it’s fit for that because of different stressors, as we mentioned.
So I think that’s good.
So let’s see, Michael Trapp said: what if a stressful situation were to break the entire system? However, it could be very expensive to cope with that scenario.
At what point would you ignore this scenario?
Is it okay to ignore scenarios at all?
Which basically asks, sorry, what if there is such a scenario that would break the entire system.
Yeah, so going back to the electric car example, one stressor that I had was: what happens if this market collapses, what happens if no one buys electric cars, and what happens to our software?
And the answer from our business people was, well, it’s over then, we go home, the software doesn’t matter, we don’t have to do anything.
And fine, that’s one way we handle it.
To answer Michael’s last question, which is very important: it is okay to ignore these scenarios.
It is okay to ignore a stressor, but the rule is that you’re not allowed to sweep it under the rug.
You have to write it down.
You have to say, how would I solve this stressor, if there’s anything that can be done at all.
And then it’s okay to not have it in the architecture.
It’s okay to say, look, we know this is here, we know that there’s an attractor here, the system can arrive in this attractor, and if the business context arrives in this attractor, then it’s over for us.
That’s a business decision, not a technical decision, and that’s perfectly okay.
But it’s not okay to say that will never happen, or YAGNI, and sweep something under the rug.
Those are excuses to stop us thinking.
You’ll notice that a lot of the questions and a lot of the things that come up around residuality are trying to stop us; they’re trying to say, no, look, don’t think about that.
And my argument as an architect is: thinking is cheap.
Thinking does not cost a great deal of money; it burns very few calories.
And when someone tries to stop you thinking, the impact of not thinking can be felt six months down the line, when a change comes to your architecture and you have to rip the whole thing down and put it back together.
Which is a good point, because you just mentioned YAGNI, you ain’t gonna need it, and that is one of the things where you basically say, okay, stop thinking here, because you’re not going to need this anyway.
And I’m not sure whether you agree, but it seems that your approach is quite different from that, right?
Because you’re basically saying, yeah, well, think about all the things that you can think of.
Yeah, I think that YAGNI is sometimes used as an excuse to stop thinking, but there are two ways to think about it.
There’s functional YAGNI, where you say, we need this feature, and that is a business question: do we really need this feature, or do we just want to build it?
But whenever I say our architecture needs to be able to move to these different attractors, that’s an architectural question, and it’s a very, very different thing.
It’s perfectly okay to think about it.
You might say that this could happen, we could arrive in this attractor, and this is what we need to do in our architecture, so we need to think about it.
Then, are we going to implement it? That’s going to be a business decision, but at least we have an architecture.
And what happens when you build an architecture this way is that, instead of having an architecture described as a set of components or a pattern, you end up with a stack of residues.
Each of those residues is going to survive in one or more of these attractors, and you can make the choice before you go to implementation.
You can say, we’re going to pull this one, we’re going to pull this one, we’re not going to do this, we’re going to consciously not survive in these attractors.
But as long as we get our architecture to the point of criticality, which is the point where it starts to survive things that aren’t in its specification, then we’ve done our job as architects.
And we can actually measure that using these methods.
I can show you numbers and say, this architecture is better than where we started off.
And that’s the big innovation here, the big thing that’s different with this method: we can scientifically show that our architecture is better than where we started.
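A minimal sketch (my own illustration, not Barry’s actual tooling) of the kind of comparison this points at: score the naive and the stressed architecture against a held-out set of stressors and check that the difference comes out positive.

```python
# Hypothetical capability sets for the two architectures of the charging example.
naive_architecture = {"registration", "charging", "billing"}
residual_architecture = {"registration", "charging", "billing",
                         "plate_recognition", "sliding_scale_billing",
                         "charger_redundancy"}

# A held-out "test" set of stressors, each mapped to the capabilities needed to survive it.
test_stressors = {
    "key fob fails":         {"plate_recognition"},
    "car parked for hours":  {"sliding_scale_billing"},
    "charger rammed by car": {"charger_redundancy", "plate_recognition"},
    "ad hoc charging law":   {"plate_recognition"},
}

def survival_score(components) -> int:
    # An architecture survives a stressor if it has every capability that stressor needs.
    return sum(needed <= components for needed in test_stressors.values())

delta = survival_score(residual_architecture) - survival_score(naive_architecture)
print(delta)  # positive means the stressed architecture survives more unknown stress
```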
And I think that’s quite something.
Then there’s a question: any upcoming workshops in Australia?
So, nothing planned, but I am getting a lot of email from Australia and New Zealand.
I have to navigate some difficult discussions at home; I was invited to do YOW! this year in Australia, but 16 days is too much time away from family.
So we’re trying to figure out if we can do something next summer, especially in New Zealand, because of the trout fishing, but there’s nothing planned.
And is that summer in Europe, or in Australia and New Zealand?
New Zealand, and if I’m in New Zealand, I might as well come to Australia as well.
But summer, like, because your summer is their winter.
Yes, European summer, like next July, so winter there.
Okay, well caught.
And the other question is: is there some content, apart from your talks, that sits between the books and the workshop?
So you will do the workshop at the Software Architecture Gathering, but I guess the question is, how do I get started?
Yes, so we’re working on this.
I’m doing a PhD on this stuff, and my wife’s PhD is in digital pedagogy, so she is helping me to build digital material that gets the message across.
And the thing we’re experiencing is that there is no one who takes the three-day course who doesn’t understand how this stuff works and how to get it to work.
So we’re trying to figure out what it is that’s happening in those three days that allows people to jump from being a little bit confused to understanding this, and how we package that in digital form.
And those digital courses are going to be directed at the United States and Australia and Asia, because it’s very, very difficult for me to cover the entire planet given the demand that’s out there at the minute for these ideas.
Okay, but that’s a great position to be in, right?
Yeah.
So, Michael asked, who pays when scenarios that were not considered actually come true?
I think that’s what we discussed before, how you would not want to save money in the architecture process.
And then there is Alexander Herald, and he says, this reminds me a lot of design thinking when I think of stressors.
Yeah, okay, so that’s probably another one of these cases where you try to understand this in terms of something that you understood before, like with the threat modeling.
And then Peter made a longer point.
He says: good point, a proper risk analysis is of course part of the architect’s job anyway; I was more referring to the explained, really random risk scenarios, which might not be that plausible; I’m just used to time limits as well as scope limits at some point, but I was surprised that Barry does the whole analysis, including impact on the solution design, for that many additional scenarios that quickly.
Yeah, and to me that boils down to what you said, which I think is quite powerful, about not stopping to think, and how thinking is actually, how should I put it, inexpensive in a way.
Yeah, and there is a good point there.
When we talk about risk analysis, people get risk analysis and risk management and residuality mixed up.
Risk analysis and risk management are point predictions: they’re saying, here is a risk, here is a mitigation, we must mitigate this risk.
And residuality is saying, here’s a bunch of stress; none of the stress that I have mentioned in this list is important or real or likely to happen; what I can do with this bunch of stress is use it to train my architecture into a very particular behavior, and that behavior is criticality, which is survival against unknown forms of stress.
They’re two very, very different things.
Yeah, I think that’s a great point.
And then someone asked about these digital courses: when can we expect them online?
I’ll ask my wife; we’re hoping for next year now.
Okay, then let’s see, there is another one in the forum: when you change your architecture to meet stressors, and that stressor never happens in real life, maybe you did upfront design and made other demands harder to implement.
Inevitably, it is so.
So that question, the person who has written this, is saying: what happens if we build for something that doesn’t actually happen, isn’t that going to cost us money, isn’t it going to make things difficult in the future?
When you build an architecture based on a set of requirements, based on a set of point risks that you’ve identified, you have built for things that aren’t going to happen, and you have not built for things that are going to happen.
There is no perfect world where I build an architecture that directly corresponds one-to-one to everything that’s actually going to happen in the future.
I cannot predict the future.
So the only option we have is to do what I’m doing in residuality, which is to train the architecture to the point of criticality.
We all have to train to criticality.
We all have two lungs, and the reason we have two lungs is that if you get stabbed in one of them, you can continue breathing with the other.
How many of us have been stabbed in the lung?
Probably very few of us on this call.
So has it been a complete waste of our energy, our biology, and all the food we’ve eaten to keep that other lung breathing all this time?
It’s a strange way to look at the problem, I think.
Yeah, I mean, the question basically says, if that kind of stressor never happens in real life, maybe you did upfront design and made other demands harder to implement.
From what you said, it seems to me that the answer would be: no, I actually made other demands easier to implement, because some stressor shows up in real life that I didn’t think of before, but my design took that other stressor into account and therefore ended up with something that made it easier.
Like you said with the, what is it, license plate recognition, where you basically said, okay, we are going to use that for payment, and then it turned out you can also use it to deal with people who actually damage the charging station.
So it actually made that requirement easier to implement, and I have to admit that it almost seems magical to me, because it’s so different from what we are probably used to.
Yeah, it does seem magical.
It seemed magical to me, until I dug in and found out that there is actually a scientific explanation for why it happens.
The thing that would worry me about that last comment is that the person seems to be accusing us of doing upfront design.
You should do upfront design.
That’s not a position I’ll bend on.
You should design your systems before you build them; that’s the point of architecture.
Yeah, and I mean, the original term was big design up front, where you’re trying to do these extensive designs that are over the top.
So, design up front: good point here.
Yeah.
And so, one of the things that’s going to happen in your project that might make you not be fond of the idea of big design up front is that there’s going to be flux; there’s going to be change in requirements.
So trying to do big design up front is a waste of time.
If you have an architecture that’s critical, that’s residual, then you’re going to be better able to cope with that flux in requirements that most of us experience.
There’s a whole other discussion to be had on under- and over-engineering there that we might not have time for now.
Yeah, so you said that this is as complex as object orientation, in one of your remarks, I think, or maybe when we prepared the episode.
So what does that mean?
So I said that the ideas in residuality are so big, and so different from what went before, that they are easily as intellectually demanding as object orientation.
When most of us learned object orientation, it took two university terms to grasp the ideas, and a few years out in the field before we understood which ones were good and which ones we didn’t want to use.
Residuality is easily that big.
So this isn’t a sort of five-minute thing that you learn from a YouTube video and then immediately implement.
This is a big idea, and I think it’s an important idea, and it’s a scientifically verified idea.
It has the potential to really lift your architectural practice and to make life easier for all of us across the industry, but it does require a lot of effort.
If you spent 45 minutes, or, you know, 22 and a half minutes watching my video on double speed on YouTube, that’s probably not going to be enough to grasp all the ideas.
So if you think that this is too much, that this is too big, then it just requires a bit more time; like I said, no one has left the three-day course not understanding how to use these ideas and how to get them into production.
Yeah, so I talked to someone at the beginning of this week, I think, and he basically said that he felt a little bit lost.
And I guess what you’re saying, in a way, is that this is to be expected, because it’s such a hugely different way of doing things.
So maybe that’s encouraging: it’s just that you’re not there yet, not that you’re too stupid to actually understand it.
Yeah.
And, you know, I could teach these ideas in 20 minutes and say, look, you take a spreadsheet, you think of stressors, you do this, and you get a result, but no one would believe me.
So we have to spend time going into the complexity science, going into complexity theory and a little bit of the mathematics behind it, before we can show why it actually works.
You don’t have to grasp all of those ideas perfectly in order to be able to make this work.
And so when we do the longer-form workshop, we do the theory and we do the practice, and people leave knowing they can do the practice, but they might not fully grasp all of the theory.
Just like some people might not be able to perfectly define polymorphism if you put them on the spot, but they can still work with object-oriented code.
Yeah, I think that’s a great analogy.
So how can you find out more and get started?
Yeah, there are the books, and there is a growing list of YouTube videos on these things, which can be watched multiple times.
And like I said, I’m doing a workshop at the Software Architecture Gathering in Berlin, and I’m also coming to Berlin to do a full three-day workshop in December through Avanscoperta.
You’ll get the link after this, I think.
And the three-day course seems to be the thing at the minute that gets people to move on this, to really understand it.
There are people who have completely understood these ideas; they’ve written blogs about it; they’ve said, we’ve done this, we’ve built this thing using residuality, it worked for us; and I’ve never met them.
So it is possible to get there from the books and from the videos, but it requires more; you won’t just watch the video once and then implement this.
It requires a lot of thought, a lot of work.
Yeah, and there is something that I told you when we did the preparation, and I would like to repeat it here.
So I watched one of your videos as a preparation for this episode, and then I did a call and had an architectural discussion, and my mind was wired differently in a way.
I recognized that we were actually talking about some stressors, some requirements, some things that are different, and I was like, yeah, if we take care of that, then probably we will also take care of this, this, and this.
It just opens a new perspective.
So I think that was quite interesting, and that’s just from watching one of your videos, so I think that’s great.
Yeah.
Oh, and here is a good final question.
The final question is by Alexander, and he asked: when I use this theory, how do I know that I’m on the right track?
So that’s a perfect question; it’s almost like we paid someone to ask that.
In the book, and in the talks as well, I’ve described the experiment that you can run, where you run a single set of stressors over your naive architecture.
This changes your architecture; it breaks it up into a number of residues.
You recombine those to produce a final architecture, and then you use a new set of stressors to see which of these two architectures is more likely to survive.
And there’s a little calculation there; you can get a positive or a negative number.
If you get a positive number, you’re on the right track, and you know that you’re going in the right direction.
And so this is analogous to the training and testing sets in machine learning.
So what we’re actually doing with residuality theory is that we’re not designing an architecture anymore, we’re training an architecture, which is a completely different concept.
Yeah.
And Alexander just reassured us that he was not being paid.
So thanks a lot, looking forward to seeing you at the Software Architecture Gathering, and thanks for spending the time; I hope to talk to you again soon.
Thanks for inviting me.
Thank you, and thank you, everyone, for your questions.