Does this get you started?
SELECT n.nspname AS schema,
       pg_catalog.format_type ( t.oid, NULL ) AS name,
       t.typname AS internal_name,
       CASE
         WHEN t.typrelid != 0 THEN CAST ( 'tuple' AS pg_catalog.text )
         WHEN t.typlen < 0 THEN CAST ( 'var' AS pg_catalog.text )
         ELSE CAST ( t.typlen AS pg_catalog.text )
       END AS size,
       pg_catalog.array_to_string (
         ARRAY ( SELECT e.enumlabel
                 FROM pg_catalog.pg_enum e
                 WHERE e.enumtypid = t.oid
                 ORDER BY e.oid ), E'\n'
       ) AS elements,
       pg_catalog.obj_description ( t.oid, 'pg_type' ) AS description
  FROM pg_catalog.pg_type t
  LEFT JOIN pg_catalog.pg_namespace n
    ON n.oid = t.typnamespace
 WHERE ( t.typrelid = 0
         OR ( SELECT c.relkind = 'c'
              FROM pg_catalog.pg_class c
              WHERE c.oid = t.typrelid ) )
   AND NOT EXISTS
       ( SELECT 1
         FROM pg_catalog.pg_type el
         WHERE el.oid = t.typelem
           AND el.typarray = t.oid )
   AND n.nspname <> 'pg_catalog'
   AND n.nspname <> 'information_schema'
   AND pg_catalog.pg_type_is_visible ( t.oid )
 ORDER BY 1, 2;
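As a quick sanity check, creating an enum type and re-running the query should show the new type with its labels aggregated into the elements column (the type and label names here are made up for illustration):

```sql
-- hypothetical example type; any enum in a visible, non-system schema works
CREATE TYPE mood AS ENUM ( 'sad', 'ok', 'happy' );

-- re-running the query above should now return a row for "mood",
-- with size = 4 (enums are stored as 4-byte OIDs) and the labels
-- 'sad', 'ok', 'happy' listed one per line under "elements"
```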
In psql you can \set ECHO_HIDDEN on to make psql show you the queries used to generate the output of the \d... commands. I've found these queries to be very useful as a starting point when digging metadata out of databases.
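For example, the query above is essentially what \dT+ runs; with ECHO_HIDDEN enabled, psql prints each hidden query in a banner before its result (session sketch; output abbreviated):

```
=> \set ECHO_HIDDEN on
=> \dT+
********* QUERY **********
SELECT n.nspname AS "Schema",
  pg_catalog.format_type(t.oid, NULL) AS "Name",
  ...
**************************
```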
Update: 2019-12-16
For composite types, the column metadata can be determined using something like the following:
WITH types AS (
  SELECT n.nspname,
         pg_catalog.format_type ( t.oid, NULL ) AS obj_name,
         CASE
           WHEN t.typrelid != 0 THEN CAST ( 'tuple' AS pg_catalog.text )
           WHEN t.typlen < 0 THEN CAST ( 'var' AS pg_catalog.text )
           ELSE CAST ( t.typlen AS pg_catalog.text )
         END AS obj_type,
         coalesce ( pg_catalog.obj_description ( t.oid, 'pg_type' ), '' ) AS description
    FROM pg_catalog.pg_type t
    JOIN pg_catalog.pg_namespace n
      ON n.oid = t.typnamespace
   WHERE ( t.typrelid = 0
           OR ( SELECT c.relkind = 'c'
                FROM pg_catalog.pg_class c
                WHERE c.oid = t.typrelid ) )
     AND NOT EXISTS (
           SELECT 1
           FROM pg_catalog.pg_type el
           WHERE el.oid = t.typelem
             AND el.typarray = t.oid )
     AND n.nspname <> 'pg_catalog'
     AND n.nspname <> 'information_schema'
     AND n.nspname !~ '^pg_toast'
),
cols AS (
  SELECT n.nspname::text AS schema_name,
         pg_catalog.format_type ( t.oid, NULL ) AS obj_name,
         a.attname::text AS column_name,
         pg_catalog.format_type ( a.atttypid, a.atttypmod ) AS data_type,
         a.attnotnull AS is_required,
         a.attnum AS ordinal_position,
         pg_catalog.col_description ( a.attrelid, a.attnum ) AS description
    FROM pg_catalog.pg_attribute a
    JOIN pg_catalog.pg_type t
      ON a.attrelid = t.typrelid
    JOIN pg_catalog.pg_namespace n
      ON ( n.oid = t.typnamespace )
    JOIN types
      ON ( types.nspname = n.nspname
           AND types.obj_name = pg_catalog.format_type ( t.oid, NULL ) )
   WHERE a.attnum > 0
     AND NOT a.attisdropped
)
SELECT cols.schema_name,
       cols.obj_name,
       cols.column_name,
       cols.data_type,
       cols.ordinal_position,
       cols.is_required,
       coalesce ( cols.description, '' ) AS description
  FROM cols
 ORDER BY cols.schema_name,
          cols.obj_name,
          cols.ordinal_position;
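To see what this returns, you can create a small composite type first (the type and column names are invented for illustration):

```sql
-- hypothetical composite type
CREATE TYPE inventory_item AS (
  name  text,
  price numeric(12,2)
);

-- the query above should then list inventory_item with two rows:
-- "name" (text) at ordinal_position 1 and "price" (numeric(12,2)) at
-- position 2; is_required is false since composite attributes have
-- no NOT NULL constraint
```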
When you use SQL CLR objects across databases, SQL Server will automatically convert them if certain conditions are met (https://technet.microsoft.com/en-us/library/ms178069(v=sql.105).aspx).
In my case, this automatic conversion was taking out the SCH-M locks. I fixed the problem by manually converting the contents of my SQL CLR type to binary and passing it to the second db as varbinary(max). I then deserialized it back to my CLR type in the second db.
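A sketch of that workaround, with made-up type and variable names (the exact bytes depend on the UDT's serialization Format / IBinarySerialize implementation):

```sql
-- in the first database: serialize the CLR UDT value to bytes yourself
DECLARE @blob varbinary(max);
SET @blob = CONVERT ( varbinary(max), @MyClrValue );

-- pass @blob to the second database (e.g. as a procedure parameter),
-- then rehydrate it there instead of letting SQL Server convert the
-- UDT across databases automatically:
DECLARE @restored dbo.MyClrType;
SET @restored = CONVERT ( dbo.MyClrType, @blob );
```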
Best Answer
I do not believe there is any issue here with regards to the User-Defined Type (UDT) as I am able to execute this successfully:
Please note that I am using the Free version of the SQL# library (which I wrote) to test with:
No errors. However, if I remove the stats object name (the error did mention a missing object name) but keep the square brackets, then I am able to reproduce the error:
returns:
And no square brackets at all works just fine (since it updates all stats on the table):
So it really needs to be empty square brackets. My guess is that your weekly Update Stats job (I am assuming it is a SQL Agent Job since you said "weekly") is somehow leaving out the stats object name.
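One way a job could end up doing that is dynamic SQL that concatenates a stats name which came back empty; a hypothetical sketch of the failure mode:

```sql
-- hypothetical: a lookup that was supposed to supply the stats name
-- returned an empty string
DECLARE @StatsName sysname = N'';
DECLARE @sql nvarchar(max) =
  N'UPDATE STATISTICS [dbo].[MyTable] [' + @StatsName + N'];';

-- @sql is now:  UPDATE STATISTICS [dbo].[MyTable] [];
-- executing it reproduces the "empty square brackets" error
PRINT @sql;
```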
AND, since the question actually mentions auto-created statistics, that should be tested as well, just to be sure that there isn't an issue with those (there shouldn't be, but it's still best to test and know for sure):
The code above also completes without errors :-).
The following test should be done to separate the UDT from this particular usage of it (i.e. this table). The example below again uses SQL#, but you should replace [SQL#].[Type_HashTable] with [dbo].[TParams] and then replace the string value in the first INSERT statement with the appropriately formatted initialization text. If this does not error then the issue is with that table, possibly with the column name itself, such as having a hidden character in it. If it does error then we need to look at the definition of that UDT.