Rosebud Brain-Trauma Monitoring

by Bryan Jordan, December 7th, 2018

Guidelines for a visual-audio monitoring system using residual neural networks to generate objective-based reports on brain-trauma patients at point-of-care.

I like to split my time between working on my own software/hardware projects (https://medium.com/@bryanjordan/designing-streamplate-baeea2220f94) and building up my understanding of neuro-trauma — a tangent from perhaps one of my longest-standing interests: trauma surgery.

Over the past few weeks I’ve been trying to spend a few hours a day reading papers about the neural correlates of consciousness. A subset of these authors have been trying to improve the metrics used to monitor patients with severe brain trauma at the point of care. The Glasgow Coma Scale is the most widely adopted, but it relies on a highly subjective assessment that can easily overlook subtle and minor improvements.

Across these papers a consistent theme emerged: the need for objective metrics in tracking patient progress. Patients in a deep coma, or those who retain only non-verbal abilities, can become further victims of misdiagnosis or mistreatment when those remaining abilities go undetected.

It appears that constant, empirically grounded recording is needed, albeit without intruding on the patient’s state (so as not to confound their existing condition), and with an ease of use that makes it both affordable and non-distracting to practitioners.

As of early December, I’m fairly sure this type of service isn’t available, or at least isn’t in widespread use.

I’ve thought of a general blueprint for a simple implementation but haven’t had time to properly think through the technical specifics. I’m hoping that by releasing these guidelines, either someone can develop this independently or, with the right person or people, an open-source implementation can be made available.

Why an open-source approach is preferred will become more evident at the end of the guidelines. Here are my general thoughts:

  1. Patients with severe brain trauma are generally intubated, physically stationary and sectioned away from other patients. As such, they are already in a controlled environment that allows for close visual recording.
  2. Proximate visual-audio recording equipment (video cameras, microphones, speakers) can be mounted above the patient’s bed in a four-camera configuration: a close-up shot of the patient’s head, a medium shot of the torso, a wide-angle shot of the entire patient and a 360-degree shot of the patient’s surroundings.
  3. Persistent recording equipment allows for controlled audio-based tests rather than purely observational studies. These tests can be administered automatically on a timer, or proposed when behavioural changes are detected in the patient’s state (see the first sketch after this list).
  4. Visual-audio recordings can be uploaded to cloud-based infrastructure and assessed off-site. This service can run automatically (e.g. every 24 hours) or be operated manually, with an operator feeding the footage through a residual neural network to detect discriminating features in the patient’s state. The resulting insights could then be fed into a secondary model that compares the patient’s state with those of other patients (see the second sketch after this list).
  5. The key idea is to develop a network of patient states whose relative differences are assessed through empirical behavioural recordings rather than subjective reports. As these recordings are compiled and collectively evaluated, a databank of real-time patient states can be generated; as those states progress over time, and as latent features are uncovered, future patients can benefit from such automated investigations.
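
Point 3 can be made concrete with a minimal timer-based sketch. This assumes Python on the bedside device; play_prompt and record_response are hypothetical placeholders for whatever audio playback and capture stack is actually used, and the interval and prompt names are arbitrary.

```python
# Sketch of a timer-driven controlled audio test (assumptions noted in the lead-in).
import time
from datetime import datetime

TEST_INTERVAL_S = 6 * 60 * 60   # run a controlled test every 6 hours (arbitrary choice)
RESPONSE_WINDOW_S = 30          # how long to record after each prompt

def play_prompt(prompt_id: str) -> None:
    # Placeholder: would trigger a pre-recorded audio stimulus through the bedside speaker.
    print(f"[{datetime.utcnow().isoformat()}] playing prompt {prompt_id}")

def record_response(duration_s: int) -> str:
    # Placeholder: would capture the microphone and camera streams for the response
    # window and return a path to the saved clip.
    time.sleep(duration_s)
    return f"/recordings/{int(time.time())}.mp4"

def run_test_cycle(prompts: list[str]) -> list[str]:
    # Administer each prompt in turn and collect the recorded responses.
    clips = []
    for prompt_id in prompts:
        play_prompt(prompt_id)
        clips.append(record_response(RESPONSE_WINDOW_S))
    return clips

if __name__ == "__main__":
    while True:
        saved = run_test_cycle(["name_call", "simple_command", "familiar_voice"])
        print("saved clips:", saved)
        time.sleep(TEST_INTERVAL_S)
```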

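To make points 4 and 5 more concrete, here is a minimal sketch of the off-site processing step: frames from a recorded clip are embedded with a pretrained residual network, and the resulting state vector is compared against previously stored patient states. The specific backbone (ResNet-50), the averaging over frames and the cosine-similarity comparison are illustrative assumptions on my part, not a validated clinical pipeline.

```python
# Sketch: embed bedside footage with a pretrained ResNet and compare patient states.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Pretrained ResNet-50 with the classification head removed, used as a feature extractor.
# (The `weights` argument is the newer torchvision API; older versions use `pretrained=True`.)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed_clip(frames) -> torch.Tensor:
    # `frames` is an iterable of PIL images sampled from a recorded clip; the per-frame
    # embeddings are averaged into a single "patient state" vector.
    feats = torch.stack([backbone(preprocess(f).unsqueeze(0)).squeeze(0) for f in frames])
    return feats.mean(dim=0)

def compare_states(current: torch.Tensor, databank: dict) -> dict:
    # Cosine similarity between the current state vector and previously stored states,
    # keyed by an anonymised patient/session identifier.
    return {pid: F.cosine_similarity(current, vec, dim=0).item()
            for pid, vec in databank.items()}
```
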
I can’t think of any critical technical limitations with respect to implementation. The visual-audio recording equipment can be as rudimentary as GoPros, giving developing nations or under-resourced hospitals and medical centres an entry point into this database. One assumption is internet access. Even if access is sporadic, recordings can be uploaded intermittently (see the sketch below), and the same applies to downloads. As Wi-Fi becomes more widespread across the world (only 44% of the global population is currently connected to Wi-Fi), I assume this will become less of a problem.
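
For the sporadic-connectivity case, one simple approach is to queue finished recordings locally and retry pushing them to cloud storage whenever a connection attempt succeeds. A rough sketch using the google-cloud-storage client follows; the bucket name, directory layout and retry interval are hypothetical.

```python
# Sketch: retry-until-successful uploads for sites with unreliable connectivity.
import time
from pathlib import Path
from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "rosebud-recordings"   # hypothetical bucket
QUEUE_DIR = Path("/recordings/pending")
RETRY_DELAY_S = 15 * 60              # wait 15 minutes between attempts

def upload_pending() -> None:
    # Push every queued clip to cloud storage, deleting the local copy only on success.
    client = storage.Client()
    bucket = client.bucket(BUCKET_NAME)
    for path in sorted(QUEUE_DIR.glob("*.mp4")):
        blob = bucket.blob(f"raw/{path.name}")
        blob.upload_from_filename(str(path))
        path.unlink()

if __name__ == "__main__":
    while True:
        try:
            upload_pending()
        except Exception as exc:     # connectivity is assumed to be unreliable
            print("upload attempt failed, will retry:", exc)
        time.sleep(RETRY_DELAY_S)
```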

Programming implementations are relatively straightforward thanks to the widespread interest in machine learning, the accomplishments of residual neural networks, and the ever-growing capabilities, and hence increasing return on investment, of cloud computing (I personally recommend Google Cloud Platform but have yet to use AWS or Azure).

I might begin developing this in the coming weeks depending on whether I come across other superior research options. Otherwise, feel free to use this guideline for whatever purpose you deem fit.


If it works, success. If it doesn’t, it’s better to actively fail than inactively succeed.