Long story short: Atmel Studio's inflexibility forces quite a strict isolation, which forces you towards the purist side (potential advantage) and makes compile-time library configuration very difficult (clear actual disadvantage). So it's not a very clear win - though I am rather liking it.
(for clarity in the explanation: I have 2 libraries, each of them containing a different set of files. And when I mention Solutions and Projects, with capital letters, I mean Atmel Studio / Visual Studio's concepts)
In the beginning, I worked in Atmel Studio 6 with single-Project Solutions, which contained the Solution-specific files and linked to lib files in their own directories. This seemed ugly/hackish; given that Atmel Studio / Visual Studio supports multiple Projects inside a Solution, I had the feeling that everything would be cleaner if I used a Project for each library. And I had the nebulous desire of having well-defined library versions (not only that! eventually I aim to have API versions!), of being able to easily trace the evolution of a project, and of being able to easily rebuild them at any point in their history.
What exactly is the problem with a single-Project Solution which links to libraries? The problem is that it's too close to being a big bag of things that influence each other in not-too-well-defined ways. This being embedded development done by myself, with little coming in from the outside (even the Atmel libraries have been heavily modified by now), and given that everything ends up forming a single binary which goes directly "to the metal" (no OS rules to follow), there are very few apparent reasons to keep any restrictions. And so everything ends up depending on a config.h, and everything gets recompiled whenever I change a comma in it.
And that's plain ugly. And while I have managed to whip up sets of functions into libraries and a set of libraries into a framework which now makes the bulk of new projects in pretty elegant ways (if I may say so), the fact is that the framework is still hanging on a config.h that changes with every project in hard-to-justify ways. In a way, one could say that the framework was still keeping project-specific things, and was making every framework-using project somehow dependent on every other framework-using project. In summary, a mess.
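The shape of the problem, in a minimal sketch (all names hypothetical, not my actual config): one project-wide header that every library includes, so touching any single line dirties every translation unit in the build:

```c
/* config.h - the monolithic, project-wide configuration that everything
 * includes. Hypothetical names; the point is that the UART driver, the
 * timer library and the scheduler all depend on this one file, so changing
 * the baud rate recompiles the scheduler too. */
#define F_CPU            16000000UL  /* CPU clock, used by delay loops     */
#define UART_BAUD        38400       /* used only by the UART driver       */
#define TICKS_PER_MS     1           /* used only by the timer library     */
#define SCHED_MAX_TASKS  8           /* used only by the scheduler         */
```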
And this is not scalable. These are projects measured in tens of KB and tens of files, on an 8-bit, 16 MHz microcontroller. So it's rather small; but the libraries and framework keep evolving, and start showing traces of what would be an OS, though a pretty specific one at that. But how could this mess ever continue evolving without collapsing? Could a, say, Linux ever evolve from this? That'd be terrible! Even for version control it's hackish: the current way of reconstructing a given release was mostly checking out the revisions closest to its date.
There must be a better way. A change in a project shouldn't necessitate a library recompilation - and vice versa. A project that has to be upgraded to a new library version should have a good idea of what has changed and why - enough information, maybe, to even decide that the upgrade is unneeded. A regression should be traceable and containable - a change in the library necessitated by project A should not damage project B, at least not blindly. Etc, etc.
So, yes. I wanted library versioning, and API versioning.
So, I made an independent Atmel Studio Project for each library, and changed the Solutions to include those Projects + main Project. First win: no linked files, and no strangely empty folders in the filesystem when a snapshot of the Solution folders had to be sent somewhere.
But also first problem: no supported way for a Project to refer to a file from another Project. As in, make the library Project #include a config.h file from the main Project. At least, while using the standard Atmel Studio toolchain and makefile generation.
Remember that one goal is to have the library Projects unchanged while the main Project might configure them. So, no option to hardcode a path to the main Project's config.h into the library Project; that would mean keeping the chaos of the original scheme. Dealbreaker?
The truth is that the difficulty of referring to one Project from inside another has made me stop using a number of hacks and formalize the interfaces, be they .h files or runtime ones, sooner than I expected. Some ugly hacks that I was still using because, after all, everything was in the same compile bag finally got pushed past the too-ugly line, and motivated me to clean up / make a set of library initializers and callbacks. So in fact the rigidity has pushed me towards the formalized-API thing. That's a win.
Only, I expected some more carrot to go with that much stick. I wanted to be able to abandon some compile-time configuration, not to be forbidden from using it! ... though, even then, I must again say that the new constraints have made me more careful/conscious in a good way, instead of relying on compile-time configuration cleaning up after me at every step. For example, I was being overly generic in some places with arithmetic, and counting on the compiler to optimize the constant operations - like, say, "param/ticks_per_ms", where ticks_per_ms is a compile-time constant. Of course it is good to be able to be so generic, but once you start down that road there is the temptation to carry that genericity all over the place - losing perspective of the fact that "ticks_per_ms" is almost always 1, sometimes 10, and never anything else. So accepting that the main case is 1 and exceptionally 10 might be more realistic/productive in the long run, since that in turn clarifies a number of subproblems.
Another problem is that the compilation options are synchronized between Projects in a very coarse way: the Solution's configuration selects the Projects' configurations, and that's it. So any finer change in compile options usually means having to change them in every lib's Project manually, one by one. Pain in the ass. Doesn't happen too often, but for example when I started trying the link-time-optimization flags it was nasty, since any change affecting LTO has to be made in sync in all the compiled parts. Looks like Visual Studio has a way to copy settings easily via a Properties navigator or some such, but that is not available in Atmel Studio.
(in other news, LTO doesn't look like it's worth it for now. A couple of tries I did were actually slower with LTO on, and compiling/linking was slower too. And the generated .lss files end up not containing source information, because the LTO toolchain still does not handle debug information correctly, as of July 2013. So... not for me yet.)
One final disappointment about multi-Project Solutions: compiling is slower than single-Project! How on Earth? It looks like Atmel Studio regenerates the makefiles unconditionally every time, and that takes longer than the compilation itself - which is precisely what make avoids. Given that the important work is done by GNU tools, which are multiplatform and open source, why oh why did Atmel have to go with Visual Studio instead of, say, Eclipse?
So, to summarize: I am rather glad that I went down this road, but that is because I was hungry for reorganization and cleaning. Compile-time configuration is lost, which unnerved me at the beginning, but the fact is that I am not really missing it; not all the projects have been upgraded to the new format yet, but the worst expected problems have already been dealt with, and in some cases the new explicit solutions are faster than the old ones, which makes me optimistic. Also, dependencies are cleaner and explicit, documentability is much better (part of the reorganization allowed breaking libraries into smaller ones, which made them pretty much self-documenting), and so maintainability should be much, much better too.
The code and architecture were good before, but the messy organization could make you have doubts. Now the whole filesystem/library/Project/Solution organization reflects the architecture - indeed, one could say it shows it off!