Score: 0

You could put the contents of your CSV list into a table-valued parameter (TVP), then call a stored procedure that takes the TVP. The stored procedure can run a cursor over the 300 databases and, in each one, join the target table to the TVP using ad-hoc SQL. It's basically a loop that iterates 300 times, which isn't too bad. Something like this:

create procedure yournewprocedure
(
    @tablevalueparameter dbo.udttvp readonly
)
as

declare @dbname varchar(255)
declare @sql nvarchar(3000)

-- enumerate the databases to update by name pattern
declare db_cursor cursor local for
    select name
    from sys.databases
    where name like '%yourdbs%'
open db_cursor
fetch next from db_cursor into @dbname
while @@fetch_status = 0
begin
    set @sql = 'update t2
                set t2.field = t.field
                from @tablevalueparameter t
                join [' + @dbname + ']..tableyoucareabout t2 on t.field = t2.field'

    -- pass the tvp through to the dynamic sql; the parameter definition must repeat readonly
    exec sp_executesql @sql, N'@tablevalueparameter dbo.udttvp readonly', @tablevalueparameter

    fetch next from db_cursor into @dbname
end
close db_cursor
deallocate db_cursor
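
For completeness, here's a rough sketch of how you might call that procedure from .NET, loading the CSV values into a DataTable and passing it as the TVP. This assumes the type was created as a single-column table type (create type dbo.udttvp as table (field varchar(255))); the column name, csvValues, and connectionString are all placeholders:

using System.Data;
using Microsoft.Data.SqlClient;

// Shape a DataTable to match dbo.udttvp (assumed: one varchar column named "field").
var table = new DataTable();
table.Columns.Add("field", typeof(string));
foreach (var value in csvValues)    // csvValues: your parsed CSV list of strings
    table.Rows.Add(value);

using var conn = new SqlConnection(connectionString);
using var cmd = new SqlCommand("yournewprocedure", conn)
{
    CommandType = CommandType.StoredProcedure
};

// SqlDbType.Structured is how ADO.NET marks a table-valued parameter.
var tvp = cmd.Parameters.AddWithValue("@tablevalueparameter", table);
tvp.SqlDbType = SqlDbType.Structured;
tvp.TypeName = "dbo.udttvp";

conn.Open();
cmd.ExecuteNonQuery();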

Score: 1

You seem to have the basic idea right. Hitting the database once for every line in the CSV is going to be far too slow. You can generate a WHERE ... IN query via LINQ like so:

var addresses = GetEmailAddresses();
var entries = ctx.Entries.Where(e => addresses.Contains(e.EmailAddress));

However, if you have too many addresses in your list, the query will take a long, long time to generate and evaluate. I'd recommend splitting your input list into batches of a reasonable size (200 entries, say) and then using the trick above to handle each batch with a single database query, along the lines of the sketch below.
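
A minimal sketch of that batching, continuing the snippet above (it assumes GetEmailAddresses() returns a List<string> and that Entry is the entity type behind ctx.Entries):

const int batchSize = 200;        // a starting point; see point 1 below
var results = new List<Entry>();

for (int i = 0; i < addresses.Count; i += batchSize)
{
    // One round trip per batch: Contains over a small list translates to WHERE ... IN (...).
    var batch = addresses.Skip(i).Take(batchSize).ToList();
    results.AddRange(ctx.Entries.Where(e => batch.Contains(e.EmailAddress)).ToList());
}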

Once you've got that working, you can try a few other things to see if they make a measurable difference performance-wise:

  1. Tweak the batch size.
  2. Run the batches independently with varying degrees of parallelism (see the sketch after this list).
  3. Experiment with indexes on the database tables, particularly on the email address column.
  4. Sort the email addresses before breaking them into batches; the database queries may then take better advantage of disk caching.
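
For point 2, here's a minimal sketch of running the batches in parallel. MyDbContext is a placeholder for your actual context type; each task creates its own instance because a DbContext is not thread-safe, and MaxDegreeOfParallelism is the knob to vary:

using System.Collections.Concurrent;
using System.Threading.Tasks;

var results = new ConcurrentBag<Entry>();
var batches = addresses
    .Select((address, index) => (address, index))
    .GroupBy(x => x.index / batchSize)
    .Select(g => g.Select(x => x.address).ToList())
    .ToList();

Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = 4 }, batch =>
{
    // A DbContext is not thread-safe, so each parallel task uses its own instance.
    using var localCtx = new MyDbContext();
    foreach (var entry in localCtx.Entries.Where(e => batch.Contains(e.EmailAddress)))
        results.Add(entry);
});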
