Well, the issue is now resolved:
While it would seem logical to use filtered indexes (NOT NULL) to reduce database size and, as so many sources on the web claim, increase performance, the reality seems to be something else entirely.
In layman's terms, the SQL Server query optimizer resolves even your basic inner joins without making any assumptions about the content of the columns. Even though NULL values do not take part in a join, they must be included in the column's index for the optimizer to use it, unless the query says otherwise with a predicate such as WHERE joinCol_ID IS NOT NULL. Basically, SQL Server does not use filtered indexes for joins at all unless the queries themselves are modified to account for the filter condition. Instead, it will create new statistics on these columns and/or use a clustered index scan or other indexes that include the column, whichever it deems most effective. Using filtered indexes on foreign keys is therefore an absolutely horrid idea.
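To illustrate the behavior, here's a minimal sketch with hypothetical table, column and index names (the actual schema in question isn't shown here):

    -- Filtered index on a nullable FK column
    CREATE NONCLUSTERED INDEX IX_Orders_Customer_ID_NotNull
        ON dbo.Orders (Customer_ID)
        WHERE Customer_ID IS NOT NULL;

    -- A plain inner join generally will NOT use the filtered index, because the
    -- optimizer does not infer that the join implies Customer_ID IS NOT NULL
    SELECT o.Order_ID, c.Name
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.Customer_ID = o.Customer_ID;

    -- Repeating the filter condition explicitly lets the optimizer match the index
    SELECT o.Order_ID, c.Name
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.Customer_ID = o.Customer_ID
    WHERE o.Customer_ID IS NOT NULL;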
We still have no idea how months' worth of testing this in multiple other environments never produced the same results outside of this one single DB, but this is the way it's supposed to work. Apparently something that, as far as we know, is not related to cache, statistics or configuration caused the production DB to behave differently and correctly detect and use the filtered indexes, while all of the testing environments simply kept using the old indexes (seeing as the indexes were dropped and recreated with the same names, this seems a plausible theory even if there is no real proof).
So the lesson of the story: the web is filled with examples of how underused filtered indexes are and how awesome they can be. But this serious downside never popped up, except as a nagging thought in the back of my head saying, "if these are so great, then why aren't NULL values filtered out of indexes by default, since they only take up space and only serve a purpose in special circumstances?" Well, now I know why. :)
SQL Server does not save previous index definitions for reuse at a later time, so query plans are based on the most recent statistics used for the compile.
Although stored procedures do not recompile constantly, they will recompile in time. You can also recompile procedures individually: sp_recompile can be used to make a stored procedure recompile the next time it is used.
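For example (hypothetical object names; sp_recompile accepts a procedure, trigger, table or view name):

    -- Mark a single procedure so it recompiles on its next execution
    EXEC sp_recompile N'dbo.usp_GetOrders';

    -- Passing a table name flags every procedure and trigger that references it
    EXEC sp_recompile N'dbo.Orders';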
I see that you mention recreating statistics and recreating indexes, both of which help to get a correct set of statistics. You do not mention index reorganization, which can also be part of maintaining the health of the indexes.
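As a rough sketch of that kind of maintenance (hypothetical index and table names; thresholds and options will depend on your environment):

    -- Reorganize for light to moderate fragmentation
    ALTER INDEX IX_Orders_Customer_ID_NotNull ON dbo.Orders REORGANIZE;

    -- Rebuild for heavy fragmentation (this also refreshes that index's statistics)
    ALTER INDEX IX_Orders_Customer_ID_NotNull ON dbo.Orders REBUILD;

    -- Update the table's statistics so the optimizer sees the current distribution
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;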
However, if the definition of the index has been changed back to an earlier version, then it means that someone or some process changed it.
Best Answer
Welcome to DBA.StackExchange, and interesting question! It's one I've never thought of before. I had to double-check myself on what I thought the process was, and I think this StackOverflow answer should be what you're looking for. Specifically the first statement of the answer:
Essentially, any existing execution plans in the plan cache that reference the dropped index will automatically be marked as invalid. Then, the next time a query runs that involves the table the index was dropped from, a new execution plan is generated and cached in the plan cache because the old one was invalidated. (Eventually the invalidated plans are flushed from the plan cache to make room for new plans being cached.)
Here's an additional resource that concurs this is how it works: Dropping unnecessary indexes - effect on query plans in SQL Server 2005
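If you want to see which cached plans reference a given index before (or after) dropping it, one way is to search the plan XML in the plan cache. A sketch with a hypothetical index name (note that scanning the whole cache like this can be expensive on a busy server):

    SELECT cp.plan_handle, cp.objtype, cp.usecounts, qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%IX_Orders_Customer_ID_NotNull%';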