Just read an interesting post related to testing:
After reading it, the thought that stuck with me is this: unit testing that considers only isolated units, chasing 100% coverage, is not the best approach to either test or design your APIs.
I agree with that. Personally, I think test-driven/test-first development is something you should have in your cookbook (for me it's something I use mostly when I have a problem that's hard to reason about: I start with a create-test-case/fix-test iteration and keep going until I reach the final solution).
Still, when designing a system I usually reason more about interfaces (note: API interfaces, not UI). The first thing I try to work out is what interface would be needed; then I go on to implement it (although I usually do test-driven development for the implementation itself). Once that is stable, I create a system test integrating it into the actual application, so every feature should usually have a system test accompanying it.
Also, I find that many times having a system test first helps to speed up the process. Say you open a project, create an entity, select some field and voila: a crash happens. The first thing I usually do is reproduce that scenario in a test, and only then start reasoning about it. If your app has a macro system this can be made very straightforward: you record a macro and just play it back in your test. For instance, on Kraken, a reservoir data processor I helped develop, having a macro system which writes the recorded macro as Python code helps a lot in writing tests... yes, it's hard to create a proper system like that, but in the end it's very useful for reproducing scenarios -- not to mention that your users can automate their own scenarios. There are solutions such as Squish which also automate UI tests, but I must say I'm not very fond of that route: your UI may change a lot and such tests become brittle, whereas macros should be kept backward compatible regardless (or at least give a reasonable error if something is no longer possible).
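That reproduce-first flow can be sketched like this (a minimal sketch: `App`, `open_project`, `create_entity` and `select_field` are invented stand-ins, not the real Kraken API):

```python
class App:
    """Toy stand-in for the application under test (not a real API)."""

    def __init__(self):
        self.project = None
        self.entities = []

    def open_project(self, name):
        self.project = name

    def create_entity(self, name):
        self.entities.append(name)

    def select_field(self, entity, field):
        # The bug used to live here; the test below pins the fix.
        if entity not in self.entities:
            raise ValueError("unknown entity: %s" % entity)
        return (entity, field)


def test_crash_on_select_field():
    # Replay the recorded user scenario step by step, exactly as reported.
    app = App()
    app.open_project("demo")
    app.create_entity("well-1")
    # This is the step that crashed; once it passes, it stays as a
    # regression test forever.
    assert app.select_field("well-1", "pressure") == ("well-1", "pressure")


test_crash_on_select_field()
```

With a macro system that records to Python, the body of the test is essentially the recorded macro pasted in, plus the final assertion.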
Sometimes, if something is very simple and straightforward, I may skip testing, but as things become more complex I find that system tests can't be skipped. Especially since programmers usually build things in layers: you change something in a core library which is later used by the UI layer somehow, and if you only had a unit test in the core and mocked your way out for the UI, you may break things and will only discover it if you actually have a system test.
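A toy illustration of that layering trap (all names invented): the mock freezes the old contract, so the unit test stays green after the core changes, and only a test wiring the real core in sees the break.

```python
from unittest import mock


def core_load(path):
    # Imagine this core function was changed to return a dict instead of
    # the list it used to return.
    return {"rows": [1, 2, 3]}


def ui_row_count(loader):
    # The UI layer still assumes the old list-returning contract.
    return len(loader("data.bin"))


def test_ui_with_mock_still_passes():
    # The mock preserves the old contract, so this unit test stays green
    # even though the real wiring is now broken.
    fake = mock.Mock(return_value=[1, 2, 3])
    assert ui_row_count(fake) == 3


def test_system_sees_the_break():
    # With the real core plugged in, len() now counts dict keys: we get 1
    # instead of the 3 rows the UI expects -- the bug the mock hid.
    assert ui_row_count(core_load) == 1


test_ui_with_mock_still_passes()
test_system_sees_the_break()
```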
Sure, I get that the test suite may get slower, but fortunately hardware has been keeping up -- also, it's important to make sure your tests can run in parallel in this situation: if you have an 8-core machine you can create 8 processes and multiplex your test cases across them to run up to 8 times faster.
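The multiplexing idea looks roughly like this (a sketch with made-up test names; threads keep the example self-contained, whereas a real Python runner such as pytest-xdist uses separate worker processes to get true 8-way parallelism):

```python
from concurrent.futures import ThreadPoolExecutor


def run_case(name):
    # A real runner would import and execute the test here; for the
    # sketch we pretend every case passes.
    return name, "ok"


def run_suite(cases, workers=8):
    # Workers pull cases from a shared pool, so slow and fast tests
    # balance naturally across the 8 workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_case, cases))


results = run_suite(["test_%02d" % i for i in range(32)])
assert all(status == "ok" for status in results.values())
```

The important design constraint is that cases must be independent (no shared mutable state, no fixed ports or files), otherwise they can't be multiplexed safely.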
In Django this means I created a structure where I don't use the final database (which would be PostgreSQL) but an in-memory SQLite, with just a few tests (properly marked to run serially) executing against the 'real' environment, plus a structure which creates only the tables I'll be accessing instead of all of them. This has the benefit of failing if I somehow try to access a table that shouldn't be accessed, and it makes things faster and parallelizable. It also means that Django test cases with fixtures can't be used, but that's probably for the better: fixtures aren't usually needed -- just programmatically do what you must, even if that includes populating the db (that usually makes things better anyway). So, it may not be a complete application test, but it makes system testing feasible without the test suite becoming too slow.
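As a sketch of the settings side (following Django's standard `DATABASES` convention; the serial-test marker name is invented), the test configuration points the default database at an in-memory SQLite:

```python
# Test settings sketch: swap PostgreSQL for in-memory SQLite when running
# the suite. Each test process gets its own private database, which is
# what makes the parallel multiplexing safe.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",  # fast, isolated, thrown away after the run
    }
}

# Hypothetical marker for the few tests that must hit the real PostgreSQL
# environment and therefore have to run serially.
SERIAL_TEST_TAG = "real-database"
```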
In a desktop app test suite some care has to be taken too... especially with tests that require focus (which have to be marked serial), but the UI definitely has to be tested, so I create functional tests which actually click buttons, type into line edits, check that buttons are disabled in the right situations, etc. (all programmatically, not through UI automation).
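The shape of such a programmatic functional test, with toy widget classes standing in for a real toolkit's (these are not a real UI framework's API):

```python
class LineEdit:
    """Toy stand-in for a toolkit line edit."""

    def __init__(self):
        self.text = ""


class Button:
    """Toy stand-in for a toolkit button: refuses clicks while disabled."""

    def __init__(self, enabled=False):
        self.enabled = enabled
        self.clicks = 0

    def click(self):
        if not self.enabled:
            raise RuntimeError("clicked a disabled button")
        self.clicks += 1


class SaveDialog:
    """Invented example dialog: 'Save' only enables once a filename is typed."""

    def __init__(self):
        self.filename = LineEdit()
        self.save = Button(enabled=False)

    def type_filename(self, text):
        self.filename.text = text
        self.save.enabled = bool(text)


def test_save_enables_after_typing():
    dialog = SaveDialog()
    assert not dialog.save.enabled       # disabled while there is no filename
    dialog.type_filename("report.txt")   # programmatic typing, no automation tool
    assert dialog.save.enabled
    dialog.save.click()
    assert dialog.save.clicks == 1


test_save_enables_after_typing()
```

The same pattern works with real widgets by calling the toolkit's click/setText methods directly instead of driving the screen.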
To sum it up, I agree with the main point, which is that you don't need 100%-coverage unit testing where you mock your way out just to test a given unit in isolation. But I definitely don't think TDD is dead: it's something that must be in your toolbox, so that you can selectively choose when to use it... and regression testing is something you definitely can't skip. So, if a unit test would require too much mocking, don't let skipping it bother you too much -- but don't skip the system test covering that code; and if a unit test doesn't require mocking or huge setups, don't skip that either (unit tests are usually a bit easier to debug when things fail).
In the end, as always, keeping things balanced is probably the best choice... not 100% TDD, not 100% system tests, but a healthy balance which helps catch errors before your users get them without becoming too burdensome for your team.