At Amazon we work in a total ownership model: each team is fully responsible for the software and services it writes. We own, develop, maintain, and operate.
I like this model for a number of reasons, but when it comes to QA it makes the situation very clear. The software is mine, and I am responsible for shipping quality software that is not only good but also doesn't page me in the middle of the night.
The software runs better in production when it has been well tested, and testing has to happen before the software ships. That is work that has to be done. Some facets of that work are better done by folks classically trained in QA; other parts are better handled by software engineers. Regardless of who does the work, it has to be done, and done well.
If your QA group is competent, they will find plenty of bugs if you give them tools and time. The more useful tools and APIs you give them, the more bugs they will find in less time. If you don't give them enough time to do their job, you get what you paid for, just like when you compress a software engineer's schedule so much that you get some clunky, rickety thing that barely works.
In the Amazon model, if I ship bad software, then _I_ shipped bad software. QA doesn't ship bad software, because they don't ship software. They advise me on the state of my software. Sometimes, time-to-market pressure requires software to be released with known non-critical flaws that will be fixed in a follow-up release. Most times, the date is slipped and the bugs are fixed.
In the case where QA gives the software the green light and it then fails terribly in production, you have really clear and specific things to discuss with the QA team, and clear and specific opportunities to adjust the process on the next iteration -- which usually comes pretty soon, given that you have to fix all the bugs you just launched with.