Your table is a "heap," which means that it doesn't have a clustered index. The short version of what that means is that your data isn't laid out on the disk in any logical order. A heap really isn't an optimal structure for an actively updated table.
So, my suggestions (please test them before deploying them in production) are:
BEGIN TRANSACTION
USE fp;
GO
ALTER TABLE fp_core ADD ID bigint NOT NULL IDENTITY (1, 1)
GO
CREATE CLUSTERED INDEX IX_fp ON fp_core (ID)
    WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    ON [PRIMARY]
GO
COMMIT
This will take some time to complete and it will block other queries while it runs, so do it during downtime.
This will give you a clustered index, though unfortunately the table will be laid out in the order of an arbitrary sequential number; but since you don't have any naturally unique values to cluster on, that may be the best course of action.
I would then include the columns note_id and timeoffset in your nonclustered index on HASHKEY.
BEGIN TRANSACTION
USE [fp]
GO
IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[fp_core]') AND name = N'IX_fp_core')
DROP INDEX [IX_fp_core] ON [dbo].[fp_core] WITH ( ONLINE = OFF )
GO
USE [fp]
GO
CREATE NONCLUSTERED INDEX [IX_fp_core] ON [dbo].[fp_core]
(
[hashkey] ASC
)
INCLUDE ( [note_id],[timeoffset])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
COMMIT
(Or you could do it through the GUI. If you right-click on the index and choose properties, there's an included columns pane.)
This will hopefully have the effect of replacing your heap scans with nonclustered index seeks, but since your query has a WHERE ... IN with [30,000 items], it might do an index scan instead. Either way, it won't be rooting around in the heap, which I expect will be a good thing.
Kin made a couple of suggestions which should be helpful.
First: Make sure that your statistics for the CallTime index are up to date.
Your plan shows that the time filter is being applied by seeks on the clustered index, so for some reason the CallTime index is not being used. What is the definition of that index? If it is a multicolumn index, make sure the most selective column comes first. Example:
IDX_Calls_CallTime
NOT: CallID, CallTime
USE: CallTime, CallID
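As T-SQL, the column-order advice above would look something like this (the `dbo.Calls` table name is assumed from the index name in the example, not taken from your schema):

```sql
-- CallTime, the filtered column, leads the key; an index that leads with
-- CallID cannot be seeked when the predicate is on CallTime alone.
CREATE NONCLUSTERED INDEX IDX_Calls_CallTime
    ON dbo.Calls (CallTime, CallID);
```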
Second: Consider using CONTAINSTABLE.
I doubt that you will ever want to return 1,200,000 rows for "efendim". By using CONTAINSTABLE and its ranking, you can cap the number of rows returned, thus reducing the number of full-text results to process to 10, 100, 1,000, or whatever fits your needs.
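A sketch of that approach; the table name, full-text column (`NoteText`), and unique key column (`ID`) are assumptions, not names from your schema:

```sql
-- The last argument (top_n_by_rank = 1000) caps how many full-text
-- matches CONTAINSTABLE returns for further processing.
SELECT c.*
FROM CONTAINSTABLE(dbo.Calls, NoteText, N'efendim', 1000) AS ft
JOIN dbo.Calls AS c
    ON c.ID = ft.[KEY]
ORDER BY ft.RANK DESC;
```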
1/ The first thing that comes to mind is that time is being spent checking FK constraints on your destination table while new rows are inserted. First, I would check whether the referenced tables have PKs defined (ideally clustered, from a performance point of view). Then give it a try by performing some bulk inserts after you have disabled those FKs.
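For that test, constraints can be toggled like this (the constraint and table names here are placeholders, not your actual names):

```sql
-- Disable the FK check for the duration of the test insert
ALTER TABLE dbo.fp_core NOCHECK CONSTRAINT FK_fp_core_parent;

-- ... run the bulk insert and time it ...

-- Re-enable and re-validate existing rows (WITH CHECK keeps the FK trusted)
ALTER TABLE dbo.fp_core WITH CHECK CHECK CONSTRAINT FK_fp_core_parent;
```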
2/ Statistics might be out of date on the referenced tables, in which case a statement like this should be executed for each referenced table:
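The statement itself was not included above; presumably something along these lines was meant (the table name is a placeholder):

```sql
-- Rebuild statistics on the referenced table from a full scan
UPDATE STATISTICS dbo.parent_table WITH FULLSCAN;
```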
3/ Long term, you should take into consideration partitioning that table. This is the perfect scenario for doing this.
4/ Disable indexes on the destination table, perform the bulk insert, then re-enable the indexes. Not recommended in general, but give it a try to see the results and to see whether the indexes are the root cause of your poor bulk-insert performance.
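This could be tried as follows (index and table names are illustrative; only disable nonclustered indexes, since disabling a clustered index makes the table inaccessible):

```sql
-- Disable the nonclustered index before the load...
ALTER INDEX IX_fp_core ON dbo.fp_core DISABLE;

-- ... bulk insert here ...

-- ...then rebuild it (REBUILD re-enables a disabled index)
ALTER INDEX IX_fp_core ON dbo.fp_core REBUILD;
```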
5/ Have a look at waiting threads while a bulk insert is executing.
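One way to watch those waits while the insert runs; this is a standard DMV query, not specific to your setup:

```sql
-- Tasks currently waiting, with what they wait on and who blocks them
SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms, wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id > 50;   -- filters out most system sessions
```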
6/ Post your wait statistics so we can see which wait types commonly arise on your server.
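Cumulative wait statistics since the last restart (or last stats clear) can be pulled with:

```sql
-- Top wait types by total accumulated wait time
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```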
Tell us the results and we'll come back with more details depending on the context.