I cannot thank Randy enough for describing the velocity initiative to us. It is invaluable information that helps us understand how it's all put together!
Great to hear!
You are very welcome. Glad to hear that it was valuable. If you are interested in learning more details about this initiative, check out this talk from QCon SF 2021: www.infoq.com/presentations/ebay-velocity/.
absolutely great channel
Thanks again Mr. Farley (and Mr. Shoup on this occasion) for sharing your knowledge. Really valuable indeed.
Our pleasure!
This is a brilliant discussion, thanks so much.
Glad you enjoyed it!
I got my copy of Modern Software Engineering today! Thanks for the great channel!
Great stuff you two. Thanks!
incredible case study 📚 of continuous delivery 🚚
Yes, really interesting insight into SW dev at scale and how to introduce CD into a big existing operation.
Great episode. Thank you for sharing!
Glad you enjoyed it!
I love the content.
My only issue is the circulating frame and the moving background, which cause a seasickness-like nausea.
Poor you
Is it a good idea to merge many smaller repos into one big repo? What considerations need to be made when doing this? I came out of this talk wondering if my team of 4 should merge our 20-odd interrelated code repos into a single monolith repo. There’s so much code and so many design patterns we wish we could extract into libraries and share with the other projects, but the time required has proven just too prohibitive. A common problem is that when developing or changing features, we need to update the dependent libraries/services first and release them before working on the actual feature that will benefit users. This is time-consuming and error-prone. We need to be 100% accurate with our changes to the dependent libraries/services, otherwise we will need to perform multiple releases. With a monolithic repo I can see how it would reduce the barrier and time costs, as we could develop all the services and libraries in step and release them all at the same time. But I’m hesitant to take this step as I’m not sure if there are any negative or unintended side effects.
The risk of moving to a single repo is that some people may relax about boundaries between modules, and so increase coupling. I am not very convinced by this argument, but I have heard other people express it.
Why not try it in steps? Identify two components that nearly always change together, and merge those two, and see how it works out.
Have you seen this video, where I talk about this topic? The Monolith vs Microservices Debate
ua-cam.com/video/bWZVx6TgVvc/v-deo.html
@@ContinuousDelivery Thanks Dave for your reply. Smaller steps seem like a good idea. Yes, I’ve watched and now rewatched that linked video; it has useful discussion points. I think part of the anxiety is about unlearning “good” practices and getting comfortable with the proven better practices you are teaching us. I’ll report back on how we went once we’ve had some time to trial it and build up our experience. Thanks again.
@@br3nto Everything is a tradeoff. Multiple repos force clean separation between the modules; a monorepo permits easier cross-module refactoring. On your specific example of having to regularly change dependent libraries along with services, that might indicate that your domain decomposition isn't ideal. If things change together, you often want to put them together in a single logical module.
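Dave's "nearly always change together" test doesn't have to rest on intuition; you can mine it from version history. Below is a minimal sketch of that idea, purely an illustration rather than anything from the video: it scores pairs of top-level directories by how often a single commit touches both, so the highest-scoring pairs become candidates to merge first. It assumes a git repo whose first-level directories correspond to components.

```python
# co_change.py -- a rough heuristic for the "change together" test.
# Assumption: run inside a git repo whose top-level directories map
# to components; adjust the path depth for your own layout.
import subprocess
from collections import Counter
from itertools import combinations

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:--commit--"],
    capture_output=True, text=True, check=True,
).stdout

pair_counts = Counter()
for block in log.split("--commit--"):
    files = [line.strip() for line in block.splitlines() if line.strip()]
    components = {f.split("/", 1)[0] for f in files if "/" in f}
    for pair in combinations(sorted(components), 2):
        pair_counts[pair] += 1  # this commit touched both components

# Components that keep appearing together are merge-first candidates.
for (a, b), n in pair_counts.most_common(10):
    print(f"{n:5d}  {a} <-> {b}")
```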
@4:35 The one thing I would say is that I've found it is sometimes easier to iterate with microservices, because people are more willing to throw something away, or spin up something entirely new.
I think there’s a good argument for starting with a monolithic core that centers on what you identify as your most important business domain, and then progressively splitting off services from that core, or spinning up brand new services as you identify needs, with the readiness and willingness to throw anything away if need be.
Starting with a "monolith" is good, in my mind, as long as you take care not to let that monolith grow too large. In my experience that often means pushing back against "product design" departments with massive lists of features they've invented in their heads, or stolen from "competitor research", without any iteration or feedback from real users.
Another thing Randy doesn't go into here is that microservices aren't just about scaling. I often find the biggest benefit of a microservices architecture (and to some extent any distributed service architecture) is simply that it enforces boundaries between domains. You can get that with a disciplined monolith, but it is very hard to maintain that discipline.
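On that last point, the discipline doesn't have to rest on willpower; it can be checked in CI. Here is a hedged sketch of one way to do it: a script that fails when code in one domain package imports another domain's internals. The domain names (orders, billing, shipping) and the convention that each domain's public surface is its api module are hypothetical, not anything from the video.

```python
# boundary_check.py -- keep a "disciplined monolith" honest in CI.
# Hypothetical convention: domains live under src/<domain>/, and the
# only thing another domain may import is <domain>.api.
import ast
import pathlib

DOMAINS = {"orders", "billing", "shipping"}  # invented example domains
SRC = pathlib.Path("src")

def violations():
    for path in SRC.rglob("*.py"):
        owner = path.relative_to(SRC).parts[0]  # domain this file belongs to
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, (ast.Import, ast.ImportFrom)):
                names = ([alias.name for alias in node.names]
                         if isinstance(node, ast.Import)
                         else [node.module or ""])
                for name in names:
                    parts = name.split(".")
                    # Flag any cross-domain import that bypasses the api
                    # module (a bare package import counts as bypassing).
                    if (parts[0] in DOMAINS and parts[0] != owner
                            and parts[1:2] != ["api"]):
                        yield f"{path}: imports {name}"

if __name__ == "__main__":
    problems = list(violations())
    for problem in problems:
        print(problem)
    raise SystemExit(1 if problems else 0)
```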
thanks much
Great video as always, I really enjoy your interviews (as well as your regular videos).
Regarding the point about deploying more often: do you mean deploying to production in this context, or only to some internal dev system? How does "continuous deployment" (to production) relate to Scrum with its sprints?
Yes, deploying means to production. CD doesn't actually say "deploy more often"; it says "work so your software is always releasable", and then you have the option to deploy more often. So you could choose to release whatever you have at the end of each Sprint, or you could release after every commit. The second option gives you more, and better, feedback, so it is to be preferred, but both work fine.
@@ContinuousDelivery What Dave says. In the eBay case, we are talking about deploying all the way to production. In a world of two-week sprints, you would be deploying multiple times per sprint. The way we think about it is that ideally every PR becomes a production deployment.
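To make "always releasable" concrete: a deployment pipeline treats every commit as a release candidate and rejects it at the first failing stage, so nothing unreleasable ever accumulates on trunk. The sketch below is a toy driver with placeholder make targets; it is not eBay's pipeline or any real tool's API.

```python
# pipeline.py -- toy commit-to-production pipeline driver.
# The make targets are placeholders; substitute your real build steps.
import subprocess
import sys

STAGES = [
    ("commit stage", ["make", "build", "unit-test"]),
    ("acceptance stage", ["make", "acceptance-test"]),
    ("production deploy", ["make", "deploy-production"]),
]

for name, cmd in STAGES:
    print(f"==> {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        # Reject the candidate: nothing reaches production without
        # passing every stage, so trunk stays releasable.
        sys.exit(f"{name} failed -- release candidate rejected")
print("released to production")
```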
At 29:23 you mention the Standard Model of elementary particles, which interact "asynchronously" by exchanging "messenger particles" (virtual bosons). Could you please point out the source of this "quantum metaphor" as applied to software? The metaphor has also another aspect, and that is that the bound states of elementary particles (e.g., atoms) have discrete *states* and these states are naturally hierarchical. For example, the main energy states of a hydrogen atom (numbered by the quantum number 'n') have "substates" of angular momentum (numbered by the quantum number 'l'), and those states have further "substates" of the projection of the angular momentum (numbered by the quantum number 'm'). This would mean that event-driven components correspond to *state machines*, which are naturally hierarchical (like Harel Statecharts). Furthermore, the state nesting corresponds to the *symmetry* of the problem with respect to given events. This extended "quantum metaphor" could suggest a deeper connection between the actor model and state machines, in which actors *are* hierarchical state machines.
Not sure I follow your description, but Randy and I were talking about the Standard Model of particle physics: en.wikipedia.org/wiki/Standard_Model
@@ContinuousDelivery Yes, as I said in the first sentence, my question is about your mentioning of the standard model of particle physics (or perhaps you meant just the Feynman diagram representation of fundamental interactions). So my question again is about the origin of this "quantum analogy" as applied to *software*. I've described the "quantum analogy" in my book "Practical Statecharts in C/C++" published in 2002, but this is the first time I hear this analogy used in other areas of software development.
@@StateMachineCOM I wasn't quoting anything, and I don't think Randy was either. I think that there is a relationship at the level of "information theory". Some people in modern physics believe that "information" is really at the root of reality and everything else is somehow emergent from it. At this level, quantum and particle physics is really about the exchange of information, and so is all about message passing. It was merely an observation on what looks to me like a parallel; less an analogy, more something fundamental about the exchange of information in any context.
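For readers who want the actors-as-hierarchical-state-machines idea in running form, here is a minimal sketch, invented for illustration and not taken from either book or the video: each actor owns a mailbox, processes one message at a time, and a nested state defers events it does not handle to its parent state.

```python
# actor_hsm.py -- toy actor whose behaviour is a hierarchical state
# machine: "dim" and "bright" nest inside "on" and defer unhandled
# events upward. The lamp example and its events are invented.
import queue
import threading
import time

class LampActor:
    def __init__(self):
        self.inbox = queue.Queue()
        self.state = self.off              # current leaf state handler
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            event = self.inbox.get()       # asynchronous message passing
            print(f"{self.state.__name__} <- {event}")
            self.state(event)

    def on(self, event):                   # parent state: shared behaviour
        if event == "SWITCH_OFF":
            self.state = self.off

    def dim(self, event):                  # child of "on"
        if event == "BRIGHTER":
            self.state = self.bright
        else:
            self.on(event)                 # defer to parent state

    def bright(self, event):               # child of "on"
        if event == "DIMMER":
            self.state = self.dim
        else:
            self.on(event)

    def off(self, event):
        if event == "SWITCH_ON":
            self.state = self.dim

lamp = LampActor()
for e in ["SWITCH_ON", "BRIGHTER", "SWITCH_OFF"]:
    lamp.inbox.put(e)
time.sleep(0.1)                            # let the actor drain its mailbox
```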
When can you interview Dr. Forsgren?
🤔 that's a good idea...
The strange adulatory "It's Nicole!" for ten minutes at the end: it's not. Those metrics were already there for software ops teams. The big change is that the tools got good enough that DevOps works, and you have a software and ops team in one. And then ops metrics work for software teams and, boom, someone uncovers that by sending out surveys.
Accelerate is a good book, but there's no need to treat it like THE good book!
I think you have missed something very important about the value that Dr Forsgren added to our industry. Her work (with others) collated hundreds of thousands of data sets over the last ten years and, with sound scientific analysis, showed the correlation of these metrics with actual results and the impact on software delivery performance. Without that, the metrics can show a dev team what progress they are making, but not predict the impact of behaviours and techniques.
@@ContinuousDelivery understood - the legwork in conducting and tabulating those surveys should not be underestimated. (Although correlation is weak from a scientific perspective, I don't think it matters when it comes to the level of precision of surveys anyway.) My point is that the 100000s of FTE in creating the SRE/DevOps software tooling and culture that allowed the surveys to be filled in and analysed massively outweighs the size of the book's contribution, as good as it is.
@@centerfield6339 I think that the correlation is in line with other sociology, which is never as clear as physics. It's always messy statistical probabilities rather than six-sigma results where the variables have been strongly controlled.
@@ContinuousDelivery I will echo Dave's point here. As we both say in the video, Dr. Forsgren's contribution is putting those DevOps techniques on a solid scientific footing. She doesn't claim to invent any of the practices, but she shows why and how they work in a way no one had done before. In my direct experience, this solid foundation removes entire classes of objections and skepticism, and changes the conversation from "how do we measure success" to "how do we improve these metrics that matter".
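For anyone joining this thread without the book: the four measures being discussed (deployment frequency, lead time for changes, change failure rate, and time to restore service) reduce to simple arithmetic once deployments are logged. The sketch below uses an invented record format; only the metric definitions come from Accelerate.

```python
# dora_metrics.py -- the four key metrics over a hypothetical log.
from datetime import datetime, timedelta

# (commit_time, deploy_time, caused_failure, time_to_restore)
deployments = [
    (datetime(2022, 3, 1, 9),  datetime(2022, 3, 1, 11), False, None),
    (datetime(2022, 3, 1, 10), datetime(2022, 3, 2, 12), True,  timedelta(hours=1)),
    (datetime(2022, 3, 3, 14), datetime(2022, 3, 3, 15), False, None),
]
days_observed = 3

lead_times = [deployed - committed for committed, deployed, _, _ in deployments]
restores = [r for _, _, failed, r in deployments if failed]

print(f"deployment frequency: {len(deployments) / days_observed:.2f}/day")
print(f"lead time for changes: {sum(lead_times, timedelta()) / len(lead_times)}")
print(f"change failure rate: {len(restores) / len(deployments):.0%}")
print(f"time to restore service: {sum(restores, timedelta()) / len(restores)}")
```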