To maximise productivity and the amount of information proteomics laboratories can derive from precious samples, it is essential to keep mass spectrometers working at peak efficiency. Ideally, laboratories need a user-friendly method of monitoring instrument performance over time to allow early detection of any problems arising, thereby minimising instrument downtime and preventing samples from being wasted in sub-optimal runs.
We have designed a website which accesses stored data from routine quality control (QC) experiments on our SCIEX 5600 and 6600 TripleTOF mass spectrometers. It is hosted on R@CMon, the Monash node of the Nectar research cloud, and is accessible from Monash University client computers. The data come from standard SCIEX calibration runs, which execute automatically several times a day, as well as from complex biological standard experiments representative of our lab's typical immunopeptidomic samples, which are run manually several times a week and searched using ProteinPilot. Several times daily, the QC server accesses the resulting data files via a network shared drive and extracts the relevant data, along with the date and time, for storage in MySQL databases using a program written in C. The QC server uses Apache Tomcat to run Java servlets, which request information for specified dates from the MySQL databases and present useful graphs with Plotly.js. In this way, the website updates itself on the fly, allowing the user to see at a glance whether recent runs succeeded or failed, and to notice subtle deterioration or fluctuations in signal, ideally in time to pinpoint and address issues before they develop into major problems.
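To illustrate the servlet-to-Plotly.js hand-off described above, the following is a minimal sketch of the kind of helper such a servlet might use to assemble a Plotly.js time-series trace from QC readings retrieved from MySQL. The class and method names are hypothetical, and the sketch omits the JDBC query and servlet plumbing; it shows only the conversion of (timestamp, intensity) pairs into the JSON trace object Plotly.js consumes.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical helper: turn QC readings (timestamp -> signal intensity)
// into a Plotly.js scatter trace, i.e. {"x": [...], "y": [...], "type": "scatter"}.
public class QcTraceBuilder {

    static String buildTrace(Map<String, Double> readings) {
        // Build the parallel x (timestamps) and y (intensities) arrays.
        StringJoiner x = new StringJoiner(",", "[", "]");
        StringJoiner y = new StringJoiner(",", "[", "]");
        for (Map.Entry<String, Double> e : readings.entrySet()) {
            x.add("\"" + e.getKey() + "\"");
            y.add(String.valueOf(e.getValue()));
        }
        return "{\"x\":" + x + ",\"y\":" + y + ",\"type\":\"scatter\"}";
    }

    public static void main(String[] args) {
        // In the real server, these values would come from a MySQL query
        // over the date range the user selected.
        Map<String, Double> readings = new LinkedHashMap<>();
        readings.put("2019-05-01 09:00", 1200000.0);
        readings.put("2019-05-01 15:00", 1100000.0);
        System.out.println(buildTrace(readings));
    }
}
```

A servlet would write such a string into its response; on the client, passing the parsed object to `Plotly.newPlot` renders the intensity-over-time graph the abstract describes.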
While this QC server is tailored to our lab’s SCIEX instruments and specific needs, the individual components may be adapted to other mass spectrometers and other sample-specific readouts. Thus, it may be of interest to many groups seeking tools for monitoring their mass spectrometers for proactive quality control.