"I wouldn't consider running a bunch of extra queries as part of an ETL process. I'd use SSIS's Lookup Transformation and Cache Transform to handle this for me. If there are key mismatches, I can route the mismatched rows to an error table. This should be a part of a template used by developers when creating new load processes."
If a one-off manual DML change is warranted for some reason (which was my original statement on this), such as manually inserting a new row, do you create an entirely new package for that? The original question here was: if a sysadmin or developer needs to change data in a table for some reason (insert a row, update a value, etc.), how do you continue enforcing relational integrity? Your original answer seemed to indicate that in this situation the person making the change would need to manually validate data integrity as part of your standard process.
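To make that concrete, here's a minimal sketch (the table and column names are hypothetical) of the kind of check that person would have to remember to run by hand after every one-off change, with no safety net if they forget:

    -- After manually inserting into a child table, the person making the
    -- change must run something like this to find orphaned key values:
    SELECT o.OrderID, o.CustomerID
    FROM dbo.Orders AS o
    LEFT JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID
    WHERE c.CustomerID IS NULL;  -- rows an FK constraint would have rejected outright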
Side Note: BTW, the Lookup Transformation (with or without the Cache Transform) is not necessarily the most efficient way to match up keys in all situations. I recently refactored a handful of Lookup Transformations in a single package that took 3.5 hours to run; it now takes 4 minutes 28 seconds. This brings us to another point I briefly mentioned earlier, about wall-clock speed being a simplistic measure of performance. If you're caching millions of large rows in a Cache Transform, you're using a lot of server resources just to cache that information, and if you're pulling all those millions of rows across the network, you're consuming network resources too. If other processes are running on the server or pulling data across the network, this could be degrading their performance... It's all about the tradeoffs. Of course the SCD Wizard is right out.
"The same way you prevent them from overriding or removing FK constraints... you can't. You have to trust your sysadmins and let them know what is allowed and what is not."
A U.S. President once said, "Trust, but verify." With DDL changes like FK constraint changes, you can monitor vigilantly in many different ways: regular polling of database metadata compared against the expected model metadata, DDL triggers, etc. What analogous verification/validation process do you have to confirm that your sysadmins have done the right thing? When it comes to my databases, I'm just not willing to gamble that others will always do the right thing or never make mistakes.
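For FK constraints specifically, a DDL trigger is one straightforward option. A minimal sketch, assuming a dbo.DDLAudit table you've created for this purpose:

    -- Logs any ALTER TABLE that touches a constraint; swap the INSERT
    -- for a ROLLBACK if you want to block the change rather than audit it.
    CREATE TRIGGER trg_ddl_fk_watch
    ON DATABASE
    FOR ALTER_TABLE
    AS
    BEGIN
        DECLARE @evt XML = EVENTDATA();
        DECLARE @cmd NVARCHAR(2000) = @evt.value(
            '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(2000)');
        IF @cmd LIKE '%DROP CONSTRAINT%' OR @cmd LIKE '%NOCHECK CONSTRAINT%'
            INSERT INTO dbo.DDLAudit (EventTime, LoginName, CommandText)
            VALUES (SYSDATETIME(), ORIGINAL_LOGIN(), @cmd);
    END;

My question stands for the ETL-only approach: what is the equivalent ongoing check?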
"No. Ongoing integrity is just as important. I often choose to ensure ongoing relational integrity through ETL processing, not FK constraints. Same destination, different road."
Negative on that. Your ETL processing stops checking data integrity once the data load has completed. FK constraints continue to enforce relational integrity on an ongoing basis, protecting you against (1) manual DML changes made directly to the database [purposely or accidentally], (2) bugs in your ETL process, and (3) bugs in other processes that manipulate the data [if there are any, of course]. One-shot validation of your relational integrity during ETL processing only gets you halfway down the road to ongoing relational integrity.
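With the same hypothetical tables as above, the contrast comes down to one statement:

    -- The constraint is declared once...
    ALTER TABLE dbo.Orders
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

    -- ...and from then on a bad row fails at the moment of the mistake,
    -- whether it comes from a person, an ETL bug, or some other process:
    INSERT INTO dbo.Orders (OrderID, CustomerID)
    VALUES (1001, 999999);  -- raises error 547 if no such customer exists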
"They did not degrade query performance, they would have degraded load performance."
It all depends on where your priorities are. As you indicate above, there's a tradeoff to be made here. If load performance is your top priority, then you'll choose it over the other factors.
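For what it's worth, there's a middle ground I'd reach for before dropping the constraints entirely; a sketch with the same hypothetical tables:

    -- Suspend FK checking for the duration of the bulk load...
    ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

    -- ...bulk load runs here...

    -- ...then revalidate; WITH CHECK rechecks the existing rows so the
    -- constraint is trusted again and enforcement is back to ongoing.
    ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;

You pay for the revalidation pass, but you get the fast load and you keep ongoing enforcement.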