I've heard stories that some groups at Microsoft have a Dev-to-QA ratio of 1:2 (that's 2 QA for every Dev). On my team at Amazon that ratio is more like 7:1 (7 Devs for every QA). 7 SDEs can write a lot more software than 1 SDET can test, so we have to prioritize quite a bit to make the best use of our SDETs' time.
For a long while our QA spent much of their time running regression tests by hand. Mind you, our software doesn't have a UI, so the tests are all about setting state, applying data to the application, and verifying state.
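A test of that shape can be sketched roughly like this. The `Application` class and its inventory model are hypothetical stand-ins for illustration, not our actual system:

```python
import unittest

class Application:
    """Hypothetical stand-in for the system under test: no UI,
    just state going in, data applied, and state verified."""
    def __init__(self):
        self.inventory = {}

    def apply_order(self, sku, qty):
        # Decrement stock for the SKU; refuse to go negative.
        on_hand = self.inventory.get(sku, 0)
        if qty > on_hand:
            raise ValueError("insufficient stock")
        self.inventory[sku] = on_hand - qty

class RegressionTest(unittest.TestCase):
    def test_order_reduces_stock(self):
        app = Application()
        app.inventory["WIDGET-1"] = 10     # set state
        app.apply_order("WIDGET-1", 3)     # apply data
        self.assertEqual(app.inventory["WIDGET-1"], 7)  # verify state

    def test_oversell_is_rejected(self):
        app = Application()
        app.inventory["WIDGET-1"] = 2      # set state
        with self.assertRaises(ValueError):
            app.apply_order("WIDGET-1", 5) # apply data, verify rejection
```

Every test follows the same three beats, which is exactly what makes this kind of suite so mechanical to run by hand and so easy to automate.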
That wasn't such a good use of time. Computers are really good at monotonous, repetitive tasks, and tying up QA with manual regression runs was an egregious waste.
The first step was to automate the regression tests so that QA could kick them off and then work on other stuff. The next step was to fully automate the regression suite so that it could be run by anyone, not just QA -- like the Dev team. The third step was to simplify the machinery of the regression suite so that each developer could run it locally, as part of their development effort.
At this point, the SDEs can drop software into the regression suite before they even check it in!!! New test cases for new features are added to the automated suite alongside development, so the software is tested as it is being built. The automated suite can then easily be run again after the code is merged to mainline, and once more after we merge it up to the release branch. Our QA never has to get involved in this cycle, which is great because there are more valuable places for them to contribute.
QA now has a lot more time to bring their expertise to bear: finding mean and devious ways to break the software and expose the bugs.
The other benefit is that when the automated tests are easy to run, the developers will run them all the time and catch the easy bugs before QA gets a turn. That further reduces cycle time and raises the bar on the QA team to find the really good bugs.