Accepted answer

I think I figured out a solution that works for my situation. Rather than wrapping SqlConnection and overriding Open() to change databases, I'm passing the DbContext a new SqlConnection and subscribing to the connection's StateChange event. When the state changes, I check whether the connection has just been opened; if so, I call SqlConnection.ChangeDatabase() to point it at the correct database. I tested this solution and it seems to work - I see only one connection pool for all the databases rather than one pool per database that has been accessed.
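A minimal sketch of the approach described above, assuming EF6-style `DbContext(connection, contextOwnsConnection)` construction; the factory name and database names are illustrative:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class TenantConnectionFactory
{
    // Hypothetical helper: every connection uses the same base connection
    // string (so they all share one pool), then re-points itself to the
    // requested database the moment it transitions to Open.
    public static SqlConnection Create(string baseConnectionString, string databaseName)
    {
        var connection = new SqlConnection(baseConnectionString);
        connection.StateChange += (sender, e) =>
        {
            // Only act on the transition into the Open state.
            if (e.OriginalState != ConnectionState.Open &&
                e.CurrentState == ConnectionState.Open)
            {
                ((SqlConnection)sender).ChangeDatabase(databaseName);
            }
        };
        return connection;
    }
}

// Usage sketch: the context opens the connection lazily, and the handler
// switches the database on open.
// var context = new MyDbContext(
//     TenantConnectionFactory.Create(baseConnString, "Tenant42"), true);
```

Because the connection string never changes, pooling keys on a single string; `ChangeDatabase()` issues a server-side `USE` rather than creating a new pool.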

I realize this isn't the ideal solution for an ideal application, but given how this application is structured, I think it should make a decent improvement for relatively little cost.


I think the best way is to combine the Unit of Work pattern with the Repository pattern when working with Entity Framework. Entity Framework has FirstAsync and FirstOrDefaultAsync; these helped me fix the same bug.
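A minimal sketch of that combination, assuming EF Core with a hypothetical `AppDbContext` and `Customer` entity:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class CustomerRepository
{
    private readonly AppDbContext _context;
    public CustomerRepository(AppDbContext context) => _context = context;

    // Async query via FirstOrDefaultAsync instead of a blocking First().
    public Task<Customer> FindByNameAsync(string name) =>
        _context.Customers.FirstOrDefaultAsync(c => c.Name == name);
}

public class UnitOfWork : IDisposable
{
    private readonly AppDbContext _context;
    public CustomerRepository Customers { get; }

    public UnitOfWork(AppDbContext context)
    {
        _context = context;
        Customers = new CustomerRepository(context);
    }

    // One SaveChanges per unit of work commits all repository changes atomically.
    public Task<int> CommitAsync() => _context.SaveChangesAsync();

    public void Dispose() => _context.Dispose();
}
```

The repositories share a single context, so a unit of work maps to one short-lived connection and one commit.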


I don't think that's going to work off a single shared connection.

LINQ to SQL works best with unit-of-work style connections: create your connection, do your atomically grouped work, and close the connection as quickly as possible, reopening it for the next task. If you do that, passing in a connection string (or using a custom constructor that only passes a table name) is pretty straightforward.
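The lifetime described above can be sketched as follows, assuming a LINQ to SQL `DataContext` subclass named `AppDataContext` with an `Orders` table (both names are illustrative):

```csharp
using System.Linq;

public static class OrderService
{
    public static decimal GetOrderTotal(string connectionString, int orderId)
    {
        // One unit of work: construct, query, dispose. Dispose() closes the
        // connection, which returns it to the pool immediately rather than
        // holding it for the life of the application.
        using (var db = new AppDataContext(connectionString))
        {
            return db.Orders
                     .Where(o => o.Id == orderId)
                     .Select(o => o.Total)
                     .Single();
        }
    }
}
```

Each call opens and releases a pooled connection, so short-lived contexts cost little while avoiding a long-lived shared connection.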

If refactoring your application is a problem, you could use a getter to manipulate the cached DataContext "instance": create a new instance each time it is requested, instead of returning the cached/shared instance, and inject the connection string in the getter.
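A sketch of that getter change, again assuming a hypothetical `AppDataContext`; the provider delegate stands in for however the application resolves the current connection string:

```csharp
using System;

public class DataAccess
{
    private readonly Func<string> _connectionStringProvider;

    public DataAccess(Func<string> connectionStringProvider) =>
        _connectionStringProvider = connectionStringProvider;

    // Previously this getter would return a cached shared field; now it
    // builds a fresh context per request with the current connection string,
    // so callers need no code changes.
    public AppDataContext Context =>
        new AppDataContext(_connectionStringProvider());
}
```

Callers keep using `dataAccess.Context` exactly as before, which is why this avoids a larger refactor.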

But - I'm pretty sure this will not help with your pooling issue. The SQL Server driver pools connections keyed on the exact connection string value; since your values keep changing, you're right back to having lots of separate pools active, which likely results in lots of pool misses and therefore slow connection opens.
