Add support for MariaDB/MySQL or another DB engine for scalability

  • updated
  • Archived

The weakest link seems to be the SQLite DB. Once the DB reaches about 1GB in size, the SC application becomes unstable.

Duplicates 3
Actually Support MS SQL

When we piloted ScreenConnect several years ago, we worked with the SC team and installed using MS SQL Standard because of complications/limitations for our reporting and concurrent access needs with the built-in SQLite DB.


On two separate occasions, support has been utterly useless in supporting the application because of the DB flavor. While I would not expect the support team to support the DB layer itself, I would still expect full support of the application.


Please either officially support MS SQL or provide reference architecture or whatever else is necessary so that it is a "supported configuration."


Given that we've been running it for going on 3 years, I wouldn't think much is necessary from a technical perspective.

MSSQL

Official support for MSSQL database instead of SQLite would make large on-prem deployments much more manageable.  

Officially Support SQL Server
Sol

Right now MS SQL Server is not officially supported, but generally works with support in the forums. It would be nice to see full support for MS SQL including SQL Clustering and Always On HA.

2
Alex Heylin

** BUMPITY BUMP BUMP  ** 

We're only running 1000 agents, and every time I try to run an audit report for a 4 day period it times out and crashes the browser.  It's impossible to run this report, I have to run one day at a time. This NEEDS a proper DB. My vote's for mysql, but honestly I dont' care as long as it's not SQLShite which is totally unsuited to operating this at scale.  We'd like advanced auditing, but every time we've tried turning it on it makes the system unusable due to DB size.

1
Primary PC Solutions

Sadly, this product's major downfall seems to be the database. We, along with many others as I see, have issues with SQLite simply not being able to handle it. I personally would love to see a MySQL or PostgreSQL implementation on ScreenConnect to tighten up this product. 

0
Alex Heylin

If ported to MySQL, the devs would need to take care that full backups are handled properly (given MySQL's lack of VSS support). We recently had hell with LabTech because their DB backup process is only partial, and we didn't fully understand the implications of the lack of VSS support (the restored server had a completely useless DB). SC needs to design something better, even if it's just dumping the WHOLE database to a file-per-table zip at a configured time.
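The consistent-full-dump concern applies to the current SQLite backend too, and SQLite at least ships an online backup API for exactly this, exposed in Python's standard library. A minimal sketch (the file paths in the usage comment are hypothetical, not ScreenConnect's actual file names):

```python
import sqlite3


def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Take a consistent online copy of a SQLite database.

    sqlite3.Connection.backup() (Python 3.7+) drives SQLite's backup
    API, so the copy is transactionally consistent even while the
    application is writing to the source file. Copying the raw file
    with the app running can produce a corrupt snapshot.
    """
    src = sqlite3.connect(src_path)
    try:
        dest = sqlite3.connect(dest_path)
        try:
            src.backup(dest)
        finally:
            dest.close()
    finally:
        src.close()


# Hypothetical usage, run from a scheduled task:
#   backup_sqlite("Session.db", "Session.backup.db")
```

This avoids the "partial backup" trap described above: the destination file is a complete, self-contained database you can open and verify immediately after the copy.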

0
g drive

Need this as well. Any blip with that file and our company is dead in the water. My file gets to 2GB almost monthly, even with a maintenance plan in place.

Support for something else would be a huge help.

0
Alex Heylin

I still think this product needs this (vital for advanced auditing), but here are some steps that will help you if you're struggling with DB size issues. SQLite relies entirely on OS caching the file - there's no way to force it into RAM like with a DB engine. 

1. Put the whole server on SSD, local NVMe for preference.  You should probably use RAID but make your own priority judgement here.  This should allow you to get away with less RAM, and just make the whole server fly. 

2. If you've still got issues, throw RAM at the problem. I'd estimate what you need is:
RAM required by the OS at idle (after running for a few days - just take whatever GB it's using now)
+
1.2 x the size of your DB file
+
at least 2GB extra.
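The rule of thumb above is easy to turn into a quick calculator. A minimal sketch (the 1.2x multiplier and 2GB headroom are this poster's estimates, not vendor guidance):

```python
def estimate_ram_gb(os_idle_gb: float, db_size_gb: float,
                    headroom_gb: float = 2.0) -> float:
    """Rough RAM sizing per the rule of thumb above: OS idle usage,
    plus 1.2x the DB file size, plus at least 2GB of headroom, so the
    SQLite file can be served entirely from the OS file cache."""
    return os_idle_gb + 1.2 * db_size_gb + headroom_gb


# e.g. a server idling at 2GB with a 3GB DB file:
#   estimate_ram_gb(2.0, 3.0)  ->  7.6 GB minimum
```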

3. Consider whether you really need advanced auditing and/or to keep data for as long as you do now. Could you export more frequently and clear the DB down to keep the size in check?
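One way to act on step 3 is a periodic export-then-prune job. A hedged sketch (the table and timestamp column names are placeholders; the real ScreenConnect schema isn't documented in this thread):

```python
import csv
import sqlite3
import time


def export_and_prune(db_path: str, table: str, ts_column: str,
                     keep_days: int, export_path: str) -> int:
    """Export rows older than keep_days to CSV, delete them, then
    VACUUM to reclaim file space. `table` and `ts_column` are
    illustrative placeholders and assumed to be trusted identifiers
    (they are interpolated into SQL, so never pass user input).
    Assumes ts_column stores a Unix timestamp. Returns rows deleted."""
    cutoff = time.time() - keep_days * 86400
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            f"SELECT * FROM {table} WHERE {ts_column} < ?", (cutoff,))
        with open(export_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([d[0] for d in cur.description])  # header row
            writer.writerows(cur)
        deleted = conn.execute(
            f"DELETE FROM {table} WHERE {ts_column} < ?", (cutoff,)).rowcount
        conn.commit()
        conn.execute("VACUUM")  # shrink the file after the delete
    finally:
        conn.close()
    return deleted
```

Note the VACUUM: deleting rows alone leaves the file at its old size, so without it you export data but the DB file never actually shrinks.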

My rule for SC is if Windows resource monitor doesn't show at least 2GB of "free" (light blue - NOT the "available") RAM with the server under the highest load then you need to add more. Highest load may be when you're running reports etc. 


Disclaimer - some of what follows is probably not technically "true", but it's the best I can recall from the last time I had to touch this, combined in a way that emphasises how I worked around the issue.

Remember you won't see a process using the RAM the DB is using - it's using file-cache RAM, which isn't shown as belonging to any process but is included in the dark blue section of the graph. If the server is short on RAM, the first thing the OS will sacrifice is the file caching you're relying on to speed up DB access. Also check the Disk section of Resource Monitor while generating your "high load" scenario - if you see any significant read activity on the DB file, add RAM, because you want it able to serve all reads from cache. As a test, I just exported all our audit data and all you could see on the server was CPU at 80% with no file reads. It was quick, and returned so much data that Chrome ran out of memory and killed the tab.

If that all sounds expensive: we have 7GB RAM, 2 vCores & 50GB SSD in a public cloud for under GBP £20 (USD $25) per month, and this happily runs 1,500 agents as long as we don't turn on advanced auditing (which is suicidal in our experience). Shop around. If you want to use advanced auditing or keep the data for any length of time, size accordingly and hope.

0
Cody Arnold

Where are we at with this? 5 years old and still "Pending review".

Support for another DB engine like MSSQL or MySQL would be great, giving us more flexibility and better performance as well. I've heard stories of people getting it working with MSSQL on the backend as-is, but the problem is that eventually a patch breaks it because something changes.

We're running an instance with ~10k agents now. It does alright, but we've got over 100 people with access to the platform, so it's easy to have a lot going on at once; when a lot of people start cruising through the web UI, running searches to find stuff, it gets taxing on the server because of the delays processing the searches.

0
Mike Bannerman (Team Member)

Instances in our cloud, running current versions of CTRL, are using SQLite and are performing well. We have examples of instances of CTRL in our cloud running 30K+ agents, instances running 250+ concurrent connections, and instances with DBs over 200GB.

There are no plans to add additional DB options for the product at this time, but we'll continue to look for ways to improve the product to scale for Enterprise.  At the moment the DB is not the best opportunity to improve performance.

It would be helpful to learn more about your on-premises setup, @cody. Feel free to reach out to CTRLPM@connectwise.com.

0
Max Kaplan

There has been a lot of talk here about wanting a database such as MySQL or MSSQL instead of SQLite for performance and reporting, which are valid points, but what about redundancy and HA? We have been plagued by application-level issues lately which ultimately caused us to lose our 5GB database. Had this information been stored in MySQL or MSSQL, not only could the data have been preserved, but we could even have had a secondary server so we could continue supporting our clients while we fixed the application issues on the main server. If Control is to be a proper remote support application, we need to be able to ensure uptime regardless of whether the issue is server, application, or network related. This is simply impossible with the current database design. ConnectWise, please implement this.

0
Max Kaplan
Quote from Mike Bannerman

Instances in our cloud, running current versions of CTRL, are using SQLite and are performing well. We have examples of instances of CTRL in our cloud running 30K+ agents, instances running 250+ concurrent connections, and instances with DBs over 200GB.

There are no plans to add additional DB options for the product at this time, but we'll continue to look for ways to improve the product to scale for Enterprise.  At the moment the DB is not the best opportunity to improve performance.

It would be helpful to learn more about your on-premises setup, @cody. Feel free to reach out to CTRLPM@connectwise.com.

What sort of retention do you have for logs and extended logging? We require at minimum 90 days of extended auditing retention and would like to store more than that. This tends to bloat the database, and is where a more resilient database engine would shine. Also, how are you handling uptime? With each instance only able to run on a single server with read/write access to the SQLite database file, what happens when that server goes down or the application has issues? You can't simply fail over to another node that points to the same data.

0
Alex Heylin
Quote from Mike Bannerman

Instances in our cloud, running current versions of CTRL, are using SQLite and are performing well. We have examples of instances of CTRL in our cloud running 30K+ agents, instances running 250+ concurrent connections, and instances with DBs over 200GB.

There are no plans to add additional DB options for the product at this time, but we'll continue to look for ways to improve the product to scale for Enterprise.  At the moment the DB is not the best opportunity to improve performance.

It would be helpful to learn more about your on-premises setup, @cody. Feel free to reach out to CTRLPM@connectwise.com.

Mike - I have to disagree. Extended auditing was operationally useless for us, and support have told us the reason is excessive SQLite DB size. When we restarted the server, it took over an hour for SC to become usable, and even then it was slow for three more hours. IMO SQLite is not suited to anything like 200GB; it's suited to more like 200MB. Regardless of the maximum size listed on https://www.sqlite.org/whentouse.html, that doesn't mean it's a good idea to use it at that scale. One obvious problem is the lack of dedicated caching (relying on the OS file-system cache is not as performant as a DB engine taking n GB of RAM away from the OS and dedicating it to its table cache). Another is the single-writer limitation.

While SQLite certainly makes the system easy to deploy and work with, and while you may say you don't have issues running it at scale, it's very clear that many customers do have issues running it at scale (or at least think they do). If you're SURE all these issues aren't related to the DB engine, perhaps you'd like to explain what is causing them and how to resolve them, because support have told us it's a DB scaling issue.

 Commenting is disabled
