Unlike the branch plans, which the latest Rangers branching guidance documents in a very coherent and detailed way, I'm not aware of documentation of similar quality describing how to handle /MAIN/Bin. I'm sure somebody has written about this sort of thing; I just don't know where to refer you. But I will certainly share my own personal experience with binaries.
Let's start with what you already know. You know that it is annoying to wait for a compilation of 27 projects just to test the latest version of your application. First, are you aware that Visual Studio offers a variety of build commands? If you change a line in your application at the very top of the call stack, you can use the Build option, and it will compile only the module that changed and any modules in the solution that depend on it. In contrast, if you choose the Rebuild option, it will build all of the projects regardless of whether any of them have changed.
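From the command line, the same distinction looks roughly like this (a sketch; the solution name `MyApp.sln` is hypothetical, and it assumes `msbuild` is on your PATH, e.g. in a Developer Command Prompt):

```shell
# Incremental build: compiles only projects whose inputs changed,
# plus the projects in the solution that depend on them.
msbuild MyApp.sln /t:Build

# Full rebuild: cleans and recompiles every project in the
# solution, whether or not anything changed.
msbuild MyApp.sln /t:Rebuild
```

The `/t:Build` and `/t:Rebuild` targets are what the Build and Rebuild menu items invoke under the covers, which is why Rebuild on a 27-project solution takes so much longer.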
Second, you can create multiple solutions and save them as separate solution files, each containing a different subset of the relevant projects. We used this trick in several circumstances. Our full solution contained over 50 projects, and occasionally we would get tired of how long it took to build. In certain circumstances we were focused on just a few of the projects. We tested those projects aggressively with automated tests, so we kept a solution that contained only that small set. It was much faster and easier to work in. The downside of that approach was that all the applications depending on those lower DLLs could have their behavior changed by our changes, so we could introduce regressions into dependent applications and modules without knowing right away. In contrast, when we were working in the large, cumbersome solution, we could make a change to a lower DLL and run all of our automated tests; if we caused a regression, we could identify and correct it quickly. But keep in mind that for lower DLLs that are widely used utilities, it is uncommon to test every possible dependent application on your workstation during the normal development cycle. Those kinds of regressions can be uncovered by a continuous integration server that runs automated tests for all the software in your portfolio. Thus, there is no one best way to do it; each approach has its trade-offs.
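If you are on a newer .NET toolchain, carving out one of these partial solutions can be scripted with the `dotnet` CLI; here is a minimal sketch (the solution and project names are hypothetical):

```shell
# Create a small, fast-building solution containing only the
# handful of projects under active development.
dotnet new sln -n Utilities.Partial
dotnet sln Utilities.Partial.sln add src/StringUtils/StringUtils.csproj
dotnet sln Utilities.Partial.sln add tests/StringUtils.Tests/StringUtils.Tests.csproj
```

In older Visual Studio versions you would do the same thing interactively: File > New > Project > Blank Solution, then Add > Existing Project for each project you want in the subset.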
You may have already known both of the tricks listed above: the incremental compilation option in Visual Studio, and the partial solution approach. But since you were complaining about building all 27 projects, I thought I should start by pointing out the obvious ways to minimize that pain. Even if you know these two tricks, maybe some other developers will benefit from the advice.
Now let's talk about the Bin directory you see in some branching models. What I'm going to describe is just one way to use it, not the only way or necessarily the best way. In some cases, we would develop and test code in a module, and as part of a carefully controlled and tested release, we would take a copy of the binary output after testing and validation and place it in this special /MAIN/Bin folder. Then any project or solution in the MAIN branch could refer to /MAIN/Bin to reference custom and third-party DLLs. Because these projects reference a DLL rather than another project, you would not even include the project that produced the referenced DLL in your solution. There are pluses and minuses to this approach. If you take a lot of those lower modules, capture their carefully tested binaries, place them into the shared /MAIN/Bin area, and refer to them from your application project, then you are not recompiling those dependencies each time. Your application solution becomes much smaller and, as a result, compiles much more quickly. But you have given up the ability to rapidly incorporate changes from those other projects into your application solution. It all depends on the separation of responsibilities among the various modules and on the way you change them in the normal course of a development cycle. If you jump around a lot, working one moment on code in a much lower-level module and three minutes later on code way up in the application part of the stack, it would be annoying to have to rebuild the lower solution, take its binaries, place them in the shared area, and then go work on the other part of the application. In that scenario you would be better off relying on incremental compilation to speed things up as much as possible and keeping all the projects in one solution, as you do today.
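In a project file, the difference between the two styles looks roughly like this (a sketch; the project and assembly names and the relative paths are hypothetical):

```xml
<ItemGroup>
  <!-- Project reference: the dependency is included in your solution
       and gets recompiled as part of every build. -->
  <ProjectReference Include="..\StringUtils\StringUtils.csproj" />

  <!-- File reference: pick up the pre-built, tested binary from the
       shared /MAIN/Bin area instead of compiling it yourself. -->
  <Reference Include="StringUtils">
    <HintPath>..\..\Bin\StringUtils.dll</HintPath>
  </Reference>
</ItemGroup>
```

You would use one or the other for a given dependency, not both; switching a dependency from the first form to the second is what shrinks the solution and speeds up the build.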
But if you are referring to a utility module that is relatively stable, or perhaps one that changes but is changed by another team for different purposes, then you can probably just refer to it as a DLL. On a periodic basis, the suppliers of those modules can update them with enhanced functionality and push the updated versions into the shared area, where your solution will pick them up the next time you get the latest version. In honesty, though, we did not use that very often for our custom-made software. We typically reserved the /MAIN/Bin folder for third-party DLLs or open source libraries that we compiled. Those open source libraries did not change very often; we usually would get a version and then keep it fairly stable.
If you're going to do this binary thing, then you need to be aware that there is a whole host of complicated options that I'm not going to describe in depth, because I'm not an expert in them. But there are ways to tell your project to use a specific version of a DLL. That way, if a supplier team publishes a newer version of their DLL and you are not interested in taking it, you can simply have your application use the older DLL. One way to facilitate this is to use the global assembly cache. The global assembly cache can hold several versions of a DLL that differ in the bit width of their compilation (32-bit vs. 64-bit) as well as in version number. This increases the complexity of your deployment process, but it gives you some of the advantages that the .NET Framework provides for version selectivity.
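One of the standard mechanisms here is an assembly binding redirect in the application's app.config, which tells the .NET Framework loader which version of a strongly named assembly to bind to. A minimal sketch (the assembly name, public key token, and version numbers are all hypothetical):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Hypothetical strongly named assembly from a supplier team. -->
        <assemblyIdentity name="StringUtils"
                          publicKeyToken="abc123def4567890"
                          culture="neutral" />
        <!-- Any reference to an older version is redirected to the
             tested 2.0.0.0 build; to stay on an old version instead,
             you would redirect to that version. -->
        <bindingRedirect oldVersion="0.0.0.0-1.9.9.9"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

This is how you can let one application keep the older DLL while another moves forward, even when both versions live side by side in the global assembly cache.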
In conclusion, there is not just one way to use the /MAIN/Bin folder in your source tree. There are many, many ways to utilize it, and solution architects design the one that provides the best balance of convenience and safety for the anticipated development scenarios. But I think I've given you enough illustrations that you can begin to get an idea of what some common approaches look like.