
MicroMed Bot & User Task Analysis

On a micro level, the task analysis can be broken into several tasks. The first task is mapping the user's body and attaching guidance bots at each major junction of blood vessels. During the first week of use, machine learning will be based on these junction guidance bots counting the number of bots passing by, as well as guiding bots onto an optimal path to ensure proper dispersion. Because a blood cell takes about a minute to leave the heart, circulate through the body, and return, one generation of machine-learning experimentation can happen every minute, quickly reaching the number of generations required for the model to stabilize and strengthen. Next, at night, the nanobots will conduct the cellular equivalent of a fire drill, testing the efficiency and integrity of the maps made earlier. To test the map, they will all rush to the brain, simulating an oxygen-deprivation response. After rushing to the brain, but without dispensing their emergency oxygen, they will all flow past the wrist, delivering their data to the smart watch worn by the user. Many of these drills can be conducted every night, and the machine-learning model's estimates can be compared against the data gathered across experiments to determine whether changes need to be made.
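As a rough illustration, here is a minimal Python sketch of one such one-minute learning generation; the class, names, and expected-traffic figures are hypothetical stand-ins, not part of the actual MicroMed design:

CYCLE_SECONDS = 60  # approximate time for a blood cell to complete one circuit

class JunctionBot:
    """A guidance bot at a blood-vessel junction, tallying passing bots."""
    def __init__(self, junction_id, expected_share):
        self.junction_id = junction_id
        self.expected_share = expected_share  # model's predicted fraction of traffic
        self.count = 0

    def record_pass(self):
        self.count += 1

    def deviation(self, total_passes):
        """How far observed traffic strays from the model's prediction."""
        observed = self.count / total_passes if total_passes else 0.0
        return observed - self.expected_share

def run_generation(junctions, passes):
    """One one-minute generation: tally passes, return per-junction deviations."""
    for junction_id in passes:
        junctions[junction_id].record_pass()
    total = sum(j.count for j in junctions.values())
    return {jid: j.deviation(total) for jid, j in junctions.items()}

Junctions whose deviation stays large across many generations would be the ones the guidance bots reroute traffic around.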

 

Once a reliable map has been made and tested, the bots will need to maintain proper distribution to ensure maximum coverage and data gathering. They will monitor the blood for oxygen, carbon dioxide, glucose, lactic acid, white blood cell, and other chemical levels. On a very simplified level of task analysis, they will look for outliers or anomalies, report them to the watch, verify the reports, and then alert the user to conditions such as “low oxygen levels in both feet”, “low blood sugar”, “carcinogens detected in bloodstream”, or even “high concentration of white blood cells in left pinky”. The nanobots will passively observe and report, looking for anything that needs to be brought to the user's attention; once notified, the decision on how to act on that information rests entirely with the user.
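A simplified sketch of that observe-verify-alert loop might look like the following; the thresholds are illustrative placeholders, not medically validated values:

from collections import Counter

# Illustrative (low, high) bounds for a few monitored levels
THRESHOLDS = {
    "oxygen_saturation": (0.90, 1.00),
    "glucose_mg_dl": (70, 140),
    "lactic_acid_mmol_l": (0.5, 2.2),
}

def check_reading(metric, value, location):
    """Return an anomaly message if a reading falls outside its bounds."""
    low, high = THRESHOLDS[metric]
    if value < low:
        return f"low {metric} in {location}"
    if value > high:
        return f"high {metric} in {location}"
    return None

def verify_and_alert(reports, min_confirmations=3):
    """Only alert the user once several bots independently report the same anomaly."""
    counts = Counter(r for r in reports if r is not None)
    return [message for message, n in counts.items() if n >= min_confirmations]

Requiring several independent confirmations before alerting keeps a single faulty bot from spamming the user with false alarms.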

 

In the event of a catastrophic accident that causes loss of blood and even consciousness, the nanobots will sense low blood pressure and oxygen levels, as well as the loss of connected bots. They will respond by rushing to the wound to release a clotting agent and attempt to stop the bleeding, while others rush to the brain to watch for low oxygen levels that could cause brain damage. If oxygen levels fall below safe limits, the nanobots will dispense life-saving oxygen while alerting the watch and app of the situation. If the user does not intervene, the app will alert emergency first responders, on the assumption that the user is unable to. The nanobots will take triage action on their own, but the user will be given the option to opt out of medical attention: the app will initiate a 10-second countdown during which the user can opt out before it makes the decision for them.
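The triage sequence could be sketched in Python as below; the thresholds and action strings are hypothetical, and user_responded stands in for whatever opt-out input the watch provides:

import time

def emergency_protocol(oxygen_level, blood_pressure_mmhg, user_responded,
                       oxygen_floor=0.85, pressure_floor=80, countdown_seconds=10):
    """Sketch of the catastrophic-accident response described above."""
    actions = []
    if blood_pressure_mmhg < pressure_floor:
        actions.append("rush to wound and release clotting agent")
    if oxygen_level < oxygen_floor:
        actions.append("dispense emergency oxygen at the brain")
    actions.append("alert watch and app")
    # 10-second opt-out window before escalating to first responders
    for _ in range(countdown_seconds):
        if user_responded():
            return actions  # user opted out; no escalation
        time.sleep(1)
    actions.append("notify emergency first responders")
    return actions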

 

Lastly, if the nanobots detect high levels of swelling, lactic acid, and other signs of injury, they will identify the type of injury and report it to the app. Broken bones, sprains, large buildups of plaque, and other ailments that cannot be easily addressed by the nanobots themselves (which are mostly focused on gathering data and preventing brain damage) will be reported to the app and diagnosed there. Different conditions will be met with the suggestion to drink specific supplements containing nanobots that specialize in the appropriate repair, along with the building blocks required to carry it out. Broken bones could be met with carbon nanotubes that create an internal scaffolding to ensure proper alignment and healing after the bone is set. Heart-attack-threatening plaque could be met with drilling bots and solvents to dissolve the plaque. Endless secondary supplement products could be made to directly address medical problems!
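The condition-to-supplement matching could start as a simple lookup, sketched below with entirely hypothetical product descriptions:

# Hypothetical catalog mapping diagnoses to specialized supplement products
SUPPLEMENT_CATALOG = {
    "broken_bone": "carbon-nanotube scaffolding bots + mineral building blocks",
    "sprain": "anti-inflammatory support bots",
    "arterial_plaque": "drilling bots + plaque-dissolving solvent",
}

def suggest_supplement(diagnosis):
    """Map a diagnosis from the app to a suggested supplement, if one exists."""
    return SUPPLEMENT_CATALOG.get(
        diagnosis, "no matching supplement; consult a physician")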

 

While the nanobots and attached apps will occasionally make large-scale decisions, as in emergencies the user might not be conscious to address, the nanobots will mostly be making tiny decisions about where to go, what to look for, and when to report it. Once they have done these small tasks, the massive amount of data will be brought back to the smart watch, which will tell the app what to tell the user. The user will then be entirely in control of deciding how to respond to what the nanobots have found. Once informed that there is a dangerous level of plaque buildup in their arteries, the user can decide to take the appropriate supplement, then go out, purchase, and consume it. If the nanobots detect a moderate buildup of plaque, the app can suggest exercise and diet changes that will help, or the user can take the same supplement anyway! If the app detects an imminent emergency, the user will be informed and can then decide whether they want to go to the hospital or have an ambulance called. The user experience will be largely focused on analyzing, deciding on, and responding to the app's report of nanobot data.
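That routing of findings to the user could be sketched as a simple severity switch; the severity labels and messages here are illustrative only:

def route_finding(severity, finding):
    """Decide how the app presents a finding; the user still decides the response."""
    if severity == "emergency":
        return f"ALERT: {finding}. Go to the hospital or call an ambulance?"
    if severity == "moderate":
        return f"Suggestion: {finding}. Consider diet/exercise changes or the matching supplement."
    return f"FYI: {finding}. No action needed."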


Three Possible Bot & User Task Analysis Flow Charts

[Flow chart image]