I am willing to bet the issue is that the first query does a full scan of the table, while the second can look things up from an index. This happens all the time. The reason, as Phil has said, is that there are an infinite number of possibilities.
If you want to make the first approach work better with more flexibility you need to use a recursive query structure. Something like:
WITH RECURSIVE sparse_scan AS (
    -- Anchor: start at the first metadata id
    SELECT m.Metadata,
           (SELECT COUNT(*)
              FROM TWG10MinData twg
             WHERE twg.Metadata = m.Metadata) AS Count
      FROM Metadata m
     WHERE m.Metadata = 1100
    UNION ALL
    -- Recursive step: probe the next id; recursion stops when no row matches.
    -- The count lives in a correlated subquery because PostgreSQL does not
    -- allow aggregates or GROUP BY in the recursive term itself.
    SELECT m.Metadata,
           (SELECT COUNT(*)
              FROM TWG10MinData twg
             WHERE twg.Metadata = m.Metadata) AS Count
      FROM Metadata m
      JOIN sparse_scan s ON m.Metadata = s.Metadata + 1000
)
SELECT * FROM sparse_scan ORDER BY Metadata;
This works differently, running the query for 1100, 2100, 3100, 4100, and so on, until a row is not returned. You can be a little fancier and skip cases where a row does not exist but the next one does, but the basic approach is something I use frequently, albeit on PostgreSQL.
I would strongly advise against this route. While it seems advantageous to only have 7 stored procedures rather than 28, it will turn into a troubleshooting, tuning, and maintenance nightmare.
I don't know which DBMS you are using, but this advice should apply to all of them.
From a troubleshooting and usage-analysis standpoint, you will have a hard time. Most of the reporting you get from your DMVs doesn't show the parameter values users submitted when executing the stored procedure, so you will have to capture them yourself or run traces to see them in real time. That makes it incredibly difficult to determine how these stored procedures are being used and how users are using the system. Are they reading from the database more than inserting or deleting? Without digging deep you won't know. If you break them out into separate stored procedures, you can see which ones are used, when, and how often.
From a maintainability standpoint, you would think fewer objects would be better, but it actually makes things worse. Now, whenever you want to change the insert portion of a stored procedure, you are touching the code that also updates and deletes, which means all of those paths are added to the regression-testing scope. It also means that when you deploy new stored procedure code, you impact everyone using that table, no matter what they are doing.
From a tuning standpoint, you are actually working against most DBMS query optimizers. The optimizer looks at the data in the tables involved and tries to determine the best way to retrieve it, but here the operation is not predictable because the plan depends on parameter sniffing. The one-stored-procedure-to-rule-the-table strategy also makes a big assumption: that the way you select, insert, update, and delete data will always stay the same. What happens when that changes? You won't want to keep adding branches to a procedure that is already hard to navigate. It is also hard to decide which indexes to create, since it is extremely difficult to predict how the optimizer will react.
From a usability standpoint, it makes things difficult. An insert needs every column passed in as a parameter, a select may filter only on the id, and an update may take only the columns you want to change, and the one query has to understand all of this. That normally leads to dynamic SQL, which is its own nightmare.
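To illustrate the problem, this is what the "one query that understands everything" usually degenerates into — the classic catch-all pattern (table, column, and parameter names here are invented for the example):

```sql
-- Catch-all SELECT: every predicate is made optional so a single procedure
-- can serve many call shapes. The optimizer must pick one plan for all of
-- them, which is exactly the parameter-sniffing trap described above.
SELECT *
FROM dbo.SomeTable                              -- hypothetical table
WHERE (@Id IS NULL OR Id = @Id)
  AND (@Name IS NULL OR Name = @Name)
  AND (@CreatedAfter IS NULL OR CreatedDate >= @CreatedAfter);
```

Each `(@p IS NULL OR col = @p)` pair keeps the procedure flexible, but it also prevents the optimizer from committing to an index seek on any one column, which is why these queries so often end up rewritten as dynamic SQL.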
The long and short of it is that stored procedures are really good at doing very specific things, and the more specific the better most of the time. This doesn't maximize code reuse and leaves you with a lot more stored procedures to work with, but it makes things much easier to troubleshoot, maintain, and tune as your application grows. It is also more intuitive for the developers who need to use the database.
If you do want to go this route, the only real way to do it would be IF statements trying to sniff out what the user wanted to do based on a parameter:
IF @action = 'insert'
    INSERT INTO......
I still would not recommend this though.
Best Answer
If these are in a live database, then you can use the DMV
sys.dm_exec_procedure_stats
to pull information for all the stored procedures that have been run and are cached on the instance. A basic query to pull all procedures ordered by average duration might look something like this:
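For example, a minimal sketch (the `TOP` limit and column list are arbitrary choices; the DMV reports times in microseconds):

```sql
-- Cached procedures ordered by average elapsed time per execution.
SELECT TOP (25)
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_time_us,
       ps.total_worker_time  / ps.execution_count AS avg_cpu_time_us,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_elapsed_time / ps.execution_count DESC;
```

Note that this only covers procedures whose plans are still in the cache; anything evicted (or run since a restart cleared the cache) won't appear.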
However, this doesn't show you all the queries within the stored procedure or break each one down by the same metrics. You can do that using
sys.dm_exec_query_stats
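A sketch of the per-statement breakdown, joining to `sys.dm_exec_sql_text` to recover each statement's text (the offset arithmetic is the standard pattern for slicing a statement out of its batch; offsets are in bytes of Unicode text, hence the division by 2):

```sql
-- Per-statement metrics for cached plans, ordered by average elapsed time.
SELECT TOP (25)
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)  -- -1 means "to end of batch"
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time_us
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```

You can narrow this to a single procedure by filtering on `st.objectid` (for example, `WHERE st.objectid = OBJECT_ID('dbo.YourProc')`, with `dbo.YourProc` standing in for whichever procedure you're investigating).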