This bit of nested dynamic SQL grants EXEC or SELECT on all of the various procedure and function types that were not shipped by Microsoft and that exist in any database outside of master, msdb, tempdb, and model. It is certainly not easy on the eyes and tedious to reverse engineer, and I haven't thoroughly tested it, but hopefully if you work through it you can figure out what I've done. All you really need to specify is the name of the login you want to grant the rights to - of course, this assumes that the login has been mapped to a database user with the same name in each database. That was a layer of complexity I wasn't prepared to add, sorry. :-)
DECLARE @principal SYSNAME = N'some_server_principal';

DECLARE @sql1 NVARCHAR(MAX) = N'',
        @sql2 NVARCHAR(MAX) = N'',
        @sql3 NVARCHAR(MAX) = N'';
SELECT @sql1 += N'
SELECT @sql += N''SELECT @sql += N''''
USE ' + QUOTENAME(name) + ';
GRANT ''''
+ CASE WHEN type IN (''''P'''',''''PC'''',''''FN'''',''''AF'''') THEN ''''EXEC'''' ELSE ''''SELECT'''' END
+ '''' ON '''' + QUOTENAME(SCHEMA_NAME(schema_id)) + ''''.'''' + QUOTENAME(name)
+ '''' TO ' + QUOTENAME(@principal) + ';'''' FROM ' + QUOTENAME(name) + '.sys.objects
WHERE is_ms_shipped = 0 AND type IN (''''P'''',''''PC'''',
''''FN'''',''''AF'''',''''TF'''',''''IF'''');'';'
FROM sys.databases
WHERE database_id > 4;
EXEC sp_executesql @sql1, N'@sql NVARCHAR(MAX) OUTPUT', @sql = @sql2 OUTPUT;
EXEC sp_executesql @sql2, N'@sql NVARCHAR(MAX) OUTPUT', @sql = @sql3 OUTPUT;
-- you won't be able to validate all of this, since Management Studio will
-- truncate the output quietly. But you should get a good sense of what the
-- end result will be:
SELECT @sql3;
-- When you are happy with the above, uncomment this:
-- EXEC sp_executesql @sql3;
This is kind of like Inception. In order to generate the right output, you have to figure out which layer every element is on. This is why the initial string is so ugly, and littered with escaped apostrophes in sets of 2 and 4. It can be a little unnerving at first, but you can become adept very quickly at writing dynamic SQL for metadata purposes.
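To see the quoting rule in isolation, here's a single level of nesting (a standalone illustration, not part of the script above): every apostrophe that should survive into the generated statement is doubled in the outer string, and each additional layer doubles them again.

```sql
-- one level deep: '' in the outer string becomes ' in the inner batch
DECLARE @inner NVARCHAR(MAX) = N'';

SELECT @inner += N'PRINT N''Hello from ' + QUOTENAME(name) + ''';'
  FROM sys.databases
  WHERE database_id > 4;

PRINT @inner;            -- inspect the generated batch first
EXEC sp_executesql @inner;
```

At two levels deep, the same apostrophe has to be written as four (`''''`), which is exactly why the script above looks the way it does.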
Noting that you didn't ask whether it was a good idea, but only whether it is possible :) I would venture to say it seems like it should be possible, since Drizzle speaks the MySQL protocol over TCP -- as evidenced by the fact that you can use the "mysql" CLI client and other MySQL client libraries to connect to a Drizzle server.
The "federated" storage engine in MySQL allows your "local" server to connect to a "remote" MySQL (Drizzle?) server using the standard MySQL C-client library (aha!), and remotely access one or more tables on the remote server as if they were tables on the local server... the "local" server essentially spoofs a client connection to the remote server... one connection per table, from what I've observed.
CREATE TABLE local_table_name (
-- column definitions on the local server
-- should be identical to the remote server
-- although the engine is more forgiving than
-- the documentation will lead you to believe
) ENGINE=FEDERATED DEFAULT CHARSET=utf8 CONNECTION='mysql://user:pass@remote_server:port/schema/table';
-- this is one of two ways to configure the actual connection
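The other way is to define the connection once with CREATE SERVER and then reference it by name in each federated table. Here's a sketch; the server name `fedlink` and all of the option values are placeholders you'd substitute for your own.

```sql
-- define the link once (values here are placeholders)
CREATE SERVER fedlink
  FOREIGN DATA WRAPPER mysql
  OPTIONS (USER 'user', PASSWORD 'pass', HOST 'remote_server',
           PORT 3306, DATABASE 'schema');

-- then each federated table names the link and the remote table:
-- ) ENGINE=FEDERATED CONNECTION='fedlink/table';
```

This keeps the credentials in one place instead of repeated in every table's CONNECTION string.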
I believe this will work because the remote server doesn't know anything unusual is going on -- it thinks it's being accessed by a client, issuing ordinary INSERT/DELETE/SELECT/UPDATE statements, and queries on the local server interact with the federated tables just as they would with any other table.
If the "local" server were MySQL with your stored procedures, and federated tables configured to connect to the "remote" server -- the Drizzle server -- where your data lives, this should accomplish what you want, if I understand the question correctly. I use "local" and "remote" to describe their roles, of course, not their placement. For this application, they would most likely be on the same machine, or a very fast LAN.
Your application would connect to the MySQL server to call the stored procedures, which would then manipulate data on the Drizzle server via the federated connections.
How efficient will it be? Not nearly as efficient as having the stored procedures and data on the same DBMS.
Is it a good idea? I'm not too sure about that. But in principle, it's probably worth a shot.
The federated engine has gotten some bad press, but at least in MySQL 5.5, I find the criticism to be unmerited. I have numerous federated connections from 5.5 → 5.5 and 5.5 → 5.1 in my production systems.
There are a number of caveats that go along with the federated engine, such as no support for transactions, and the fact that queries involving federated tables aren't cached on the local server (there's no way to invalidate them, after all). Federated tables don't directly support foreign keys, but the foreign keys are still enforced by the remote server, which will throw an error that propagates back to you locally if you violate them (though it may not be the same MySQL error number or SQLSTATE).
You also want to be sure you define at least the same indexes on the "local" server as exist on the "remote" server -- and sometimes it may even make sense to declare indexes on columns that aren't really indexed on the far end -- because if the optimizer decides a full table scan is needed, it actually requests the entire contents of the remote table over the wire... usually not what you want. The indexes don't actually "exist" locally or take up any space on the local server; the query sent to the far end includes a WHERE clause to fetch only the appropriate rows, but only if the optimizer realizes this is a possibility. (This fact seems less strange when you remember that federated is implemented as a storage engine, and the core of the MySQL server interacts with it as though it were indeed a storage engine.)
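As a sketch of that advice (the table, column, and connection details are all made up for illustration), the local definition might mirror the remote table's indexes like this:

```sql
CREATE TABLE orders (
  id          INT NOT NULL,
  customer_id INT NOT NULL,
  total       DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (id),
  KEY idx_customer (customer_id)  -- mirrors the remote index; takes no
                                  -- space locally, but lets the optimizer
                                  -- push a WHERE clause to the far end
) ENGINE=FEDERATED
  CONNECTION='mysql://user:pass@remote_server:3306/shop/orders';
```

Without `idx_customer` declared locally, a query filtering on `customer_id` could trigger a full fetch of the remote table.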
This document is several years old but provides some very good background on the federated storage engine.
Final note: yes, the federated engine is one of the things that got stripped out of Drizzle, but that's fine -- we only need it on the MySQL side for this application.
Best Answer
Unfortunately you'll probably have to handle the stored procedures by hand. If you're lucky, they're simple procedures following standard SQL syntax, and you'll be able to just copy the existing queries over to PostgreSQL, otherwise you'll have to handle any fallout on a case by case basis.
Side note: this is one benefit of database-agnostic ORMs like Dapper, if your database is mainly consumed by application code. By using the ORM framework's native calls, you decouple your database logic from the actual database system -- though there are pros and cons to that approach.