About the BrainIT Data

The BrainIT data was collected prospectively as part of the EU Project grants:
QLCT-2002-001160 & IST-2007-217049.
The KIdsBrainIT data was collected prospectively as part of the EU Project grant:
ERA-NET award MR/R004498/1.

Glasgow University, which hosted the EU grants for the BrainIT datasets during the data collection phase of these projects, qualifies as the Data Controller.

Edinburgh University, which hosted the EU grant for the KidsBrainIT datasets during the data collection phase of that project, qualifies as the Data Controller.

Using our best endeavours, and in accordance with GDPR, we have adopted the following data sharing and access protocol.

Data Sharing and Access Protocol

Any such "open" database can only work if it is based, to some extent, on trust. Criteria have been set and must be met by contributors to, and analysers of, the database. Contributors who fail to follow these guidelines will be prevented from future access to the database. All data is managed on a dedicated research data server. Data is pseudo-anonymised and contains only pseudo-anonymous "StudyIDs". Dates for the new KidsBrainIT datasets are also pseudo-anonymised. Only categorical data (from drop-down lists) is collected; no free text is collected or stored. Using our best endeavours, it will not be possible to tell from which centre the data originated, nor to identify a given patient. Each individual within each centre who was directly responsible for collecting data within that centre will have free access to their own pseudo-anonymised data. Should individual data contributing members wish to access the joint database for their own research, they may do so provided:

Database Access/Analysis Criteria

Guidelines for External Research Organisations or Individuals

Members not from data contributing centres, and external research organisations, may access the database but will require supervised access; they may gain access through collaborating with a BrainIT centre PI provided:

In such a collaboration, the centre PI remains responsible for supervising access to the database and for tracking any analyses resulting from the collaboration with non-profit-making external research organisations or individuals not from data contributing centres.

The project PI remains responsible for ensuring that any publication resulting from the analysis follows the BrainIT publication criteria.

Joint Authorship Guidelines

If any data from the joint database was used in analyses which subsequently formed part of a published abstract or manuscript, reference to the "BrainIT Group" must be given in the author citation, eg: A. Author1, A. Author2... "on behalf of the BrainIT Group", or alternatively "In collaboration with the BrainIT Group", depending on the publication. Alternative wording, as a rare exception to this rule, can be used following discussion with the Steering Group. An appendix to the manuscript should list the centre PIs who contributed data to the relevant analysis; a list can be provided by the Steering Group.

As part of the normal BrainIT review process, all data contributors will be invited both to review and to contribute towards any abstract or manuscript produced, prior to submission to a meeting or for publication. Those data contributors who made a significant contribution to the design, analysis or writing of the abstract/manuscript will also be named as co-authors on the abstract or manuscript. The Vancouver publication guidelines should be adhered to. Where there is uncertainty over whether a significant contribution was made by a given data contributor, a final decision will be made by majority vote of the Steering Group.

Attempts to publish analyses of data from the database without adhering to all of the above criteria will result in the BrainIT Steering Group sending a letter to the editor of the journal, and the PI's database access will be revoked.

Data Validation Levels

Unvalidated Data

ALL RAW DATA (fully anonymised) transferred from centres to the BrainIT coordinating centre, irrespective of version, is kept on the BrainIT group server, external to the database, and is never modified. Once a patient data collection session is closed and any missing data has been found or coded as missing, the data record is entered into the BrainIT database. This data is considered "Unvalidated" and is coded accordingly as Validation Level 0.

Validated Data

Each project-specific database can have two levels of validation applied and coded. The minimum level of validation required for data to be used in analyses intended for publication is Validation Level I (Software Validation). Validation Level II is the more rigorous approach, but requires hiring data validation staff and so is funding dependent.
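The three validation levels used to code records (Level 0 unvalidated, Level I software-validated, Level II human-validated) could be represented in analysis code as a small enumeration. This is an illustrative sketch only; the names below are assumptions, not identifiers from the BrainIT schema.

```python
from enum import IntEnum

class ValidationLevel(IntEnum):
    """Illustrative coding of the BrainIT data validation levels."""
    UNVALIDATED = 0  # Level 0: raw record entered into the database, no checks
    SOFTWARE = 1     # Level I: checked against the XML schema plus sanity checks
    HUMAN = 2        # Level II: sample checked against original archived sources

# Records at or above Level I may be used in analyses intended for publication.
def publishable(level: ValidationLevel) -> bool:
    return level >= ValidationLevel.SOFTWARE
```

An integer enumeration keeps the ordering explicit, so "minimum level for publication" becomes a simple comparison.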

Validation Level I (Software Validation)

The BrainIT group has developed an XML schema for its core data set. The current version of this schema can be provided to registered BrainIT members; contact Ian Piper (ian.piper@brainit.org) for access to the XML schema documentation. Raw data should be validated against the BrainIT schema by software. For each data element, the schema defines features such as: the data value format and precision (eg: numeric, alpha-numeric, significant decimal places), the allowed values for categorical fields, and the upper and lower limits for numerical fields. Level I validation is also expected to perform various "sanity" checks, for example: a) the discharge date is after the admission date; b) the date of admission to the ICU is after admission to the hospital with neurosurgery; c) for monitoring channels, the start and end times of monitoring for that channel (eg: invasive arterial blood pressure) are appropriate.
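The sanity checks listed above might be implemented along the following lines. This is a sketch only: the field names and timestamps are hypothetical illustrations, not element names from the BrainIT core data set.

```python
from datetime import datetime

# Hypothetical patient record; field names are illustrative only.
record = {
    "hospital_admission": datetime(2023, 1, 10, 9, 30),
    "icu_admission": datetime(2023, 1, 10, 11, 0),
    "discharge": datetime(2023, 1, 25, 14, 0),
    "abp_start": datetime(2023, 1, 10, 12, 0),   # arterial BP monitoring start
    "abp_end": datetime(2023, 1, 20, 8, 0),      # arterial BP monitoring end
}

def sanity_check(rec):
    """Return a list of human-readable failures (empty list = record passes)."""
    failures = []
    # a) Discharge date must be after hospital admission.
    if rec["discharge"] <= rec["hospital_admission"]:
        failures.append("Discharge is not after hospital admission")
    # b) ICU admission must be after admission to the hospital.
    if rec["icu_admission"] <= rec["hospital_admission"]:
        failures.append("ICU admission is not after hospital admission")
    # c) Monitoring channel start/end must fall within the admission episode.
    if not (rec["hospital_admission"] <= rec["abp_start"]
            < rec["abp_end"] <= rec["discharge"]):
        failures.append("Arterial BP monitoring interval is inconsistent")
    return failures

print(sanity_check(record))  # prints [] when all checks pass
```

Returning a list of failures, rather than raising on the first error, lets all problems with a record be reported at once, which is more useful when feeding validation results back to a contributing centre.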

Validation Level II (Human Validation)

The purpose of this level of validation is to confirm that a transcribing error was not made and that the data for that patient actually exists, ie: that a mistake was not made and data from another patient transferred by accident. (This last situation is not an uncommon event for monitoring data, where a new patient can be admitted to a bed-space and connected to monitoring before the last patient's data file was closed on the local data collection system.) This level of validation requires human resources and so is the most costly form of validation to implement. Data elements transferred are checked against the local unit's original archived source (paper or electronic). For example, a patient's daily arterial blood pH value (reported at 08:15 hours) is checked against the recorded value on the local unit's original archived data source (eg: a lab result report stored with the patient's medical notes). Both the data value and the time stamp must be correct. Note: it is not required that ALL data elements are validated, only that a representative random sample of the data elements collected for that patient is validated.
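The "representative random sample" could be drawn, for instance, with a seeded random draw so the selection is reproducible and auditable. A sketch under stated assumptions: the element identifiers and the 10% sampling fraction below are illustrative, not a BrainIT requirement.

```python
import random

# Hypothetical identifiers for the data elements collected for one patient;
# real identifiers would come from the BrainIT core data set.
collected_elements = [f"element_{i:03d}" for i in range(200)]

def sample_for_validation(elements, fraction=0.10, seed=None):
    """Select a random sample of data elements to check against the
    local unit's original archived source (paper or electronic)."""
    rng = random.Random(seed)          # seeded for a reproducible audit trail
    n = max(1, round(len(elements) * fraction))
    return rng.sample(elements, n)

sample = sample_for_validation(collected_elements, fraction=0.10, seed=42)
print(len(sample))  # prints 20 (10% of 200 elements)
```

Recording the seed alongside the validation report means the same sample can be regenerated later if the Level II check is ever queried.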