I am developing a database with multiple tables. One of its tables will have a total of 50.05GB of data inserted as rows. Does the searching and inserting capability of the SQL Server 2008 engine decrease as the total number of rows in a table increases? Is there a per-row data size limit?
Sql-server – Does the searching and inserting capability of the SQL Server 2008 engine decrease with an increase in total rows in a table
performance sql-server-2008
Best Answer
Yes, performance will eventually be affected, but you can counter this with the addition of suitable indexes. Indexes will easily be sufficient for managing the performance of tables that are ~50GB. If you accumulate a lot more data in the future, columnstore indexes and partitioning may be worth a look, but they require the Enterprise Edition of SQL Server, and columnstore indexes would also require an upgrade to SQL Server 2012 or later.
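As a minimal sketch (the table and column names below are hypothetical), a supporting nonclustered index covers the common search case, and a nonclustered columnstore index is an option on SQL Server 2012+ Enterprise Edition:

```sql
-- Hypothetical table/columns for illustration only.
-- A nonclustered index to support searches on a frequently filtered column:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, TotalAmount);

-- On SQL Server 2012+ Enterprise Edition, a nonclustered columnstore index
-- can speed up large scans and aggregations over the same table:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
    ON dbo.Orders (CustomerId, OrderDate, TotalAmount);
```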
A table can contain a maximum of 8,060 bytes per row, but VARCHAR and NVARCHAR columns can be stored off row to provide more space. More details here
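To illustrate that off-row behaviour, here is a sketch with a hypothetical table: large value types such as NVARCHAR(MAX) are pushed off row when they do not fit within the 8,060-byte in-row limit, and can also be forced off row explicitly:

```sql
-- Hypothetical table for illustration: NVARCHAR(MAX) values that exceed the
-- available in-row space are stored off row automatically, so the row itself
-- stays within the 8,060-byte limit.
CREATE TABLE dbo.Documents
(
    DocumentId INT IDENTITY(1,1) PRIMARY KEY,
    Title      NVARCHAR(200) NOT NULL,
    Body       NVARCHAR(MAX) NOT NULL  -- moved off row when too large to fit in row
);

-- Optionally force all large value types out of row regardless of size:
EXEC sp_tableoption 'dbo.Documents', 'large value types out of row', 1;
```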