Sensor-based focus


(back to Sensor-based_decisions_in_eHealth)

Focus of the thesis

We have the following alternatives:

  • (a) going for a scenario where we have infrastructure and get decisions from the infrastructure (e.g. Telenor Objects & VISMA monitoring)
  • (b) taking a set of sensors and, through the mobile phone, getting an idea of monitoring and feedback
  • (c) using the mobile phone plus a range of sensors to determine the accuracy of activity data

Evaluation

(a) Infrastructure scenario

Advantages
Nice infrastructure from Telenor; real-world scenario; relevant for industry.
Disadvantages
Little research content.


(b) Monitor and feedback through mobile

One of the inputs is an "activity report" from the user: he should tell us what he is doing. This is not going to happen; users are lazy, so we have to take this burden off them.

Advantages
Lots of existing applications and knowledge; able to monitor and suggest.
Disadvantages
Limited battery lifetime (exception: Bluetooth Low Energy with the Zephyr heart rate belt); continuous monitoring will cost battery. Limited research.
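A minimal sketch of why BLE helps with the battery argument, assuming the Zephyr belt exposes the standard Bluetooth Heart Rate profile: the belt pushes notifications instead of being polled, so the phone radio mostly sleeps. The class name is hypothetical; the UUIDs are the standard GATT Heart Rate ones; Android's BluetoothGatt API (API level 18 and later) is assumed.

  import java.util.UUID;
  import android.bluetooth.BluetoothDevice;
  import android.bluetooth.BluetoothGatt;
  import android.bluetooth.BluetoothGattCallback;
  import android.bluetooth.BluetoothGattCharacteristic;
  import android.bluetooth.BluetoothGattDescriptor;
  import android.bluetooth.BluetoothProfile;
  import android.content.Context;

  /** Subscribes to heart rate notifications so the phone does not poll the belt. */
  public class HeartRateListener extends BluetoothGattCallback {
      // Standard Bluetooth SIG UUIDs for the Heart Rate service and characteristic.
      static final UUID HR_SERVICE = UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb");
      static final UUID HR_MEASUREMENT = UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb");
      static final UUID CLIENT_CONFIG = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

      public void connect(Context context, BluetoothDevice beltDevice) {
          beltDevice.connectGatt(context, false, this);   // asynchronous; callbacks below
      }

      @Override
      public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
          if (newState == BluetoothProfile.STATE_CONNECTED) {
              gatt.discoverServices();
          }
      }

      @Override
      public void onServicesDiscovered(BluetoothGatt gatt, int status) {
          BluetoothGattCharacteristic hr =
                  gatt.getService(HR_SERVICE).getCharacteristic(HR_MEASUREMENT);
          gatt.setCharacteristicNotification(hr, true);
          BluetoothGattDescriptor cfg = hr.getDescriptor(CLIENT_CONFIG);
          cfg.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
          gatt.writeDescriptor(cfg);                      // the belt now pushes values to us
      }

      @Override
      public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic ch) {
          // The flags byte decides whether the heart rate value is 8 or 16 bit (GATT format).
          int flags = ch.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 0);
          int format = ((flags & 0x1) != 0)
                  ? BluetoothGattCharacteristic.FORMAT_UINT16
                  : BluetoothGattCharacteristic.FORMAT_UINT8;
          int bpm = ch.getIntValue(format, 1);
          // TODO: timestamp and log bpm alongside the other sensor streams.
      }
  }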

(c) Mobile only

In this scenario we will analyse the lifetime and accuracy of different sensor types and see which activity report we can create from the built-in sensors and external equipment.

Input

  • Bluetooth-based Zephyr equipment for heart rate, ECG, temperature, breathing, ...
  • mobile-phone-based gyro, accelerometer, GPS, etc. for positioning and movement
  • some "easy" time statistics, e.g. Bluetooth heart rate measured for two hours, GPS positioning (tracking), all sensors monitoring, ... (all done while the phone is in flight mode)
  • hand-written activity report
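A minimal sketch of the logging part of these inputs, assuming a plain Android service that writes one CSV line per sensor or GPS event (class name and file name are hypothetical; location permission and the manifest entry are omitted):

  import android.app.Service;
  import android.content.Context;
  import android.content.Intent;
  import android.hardware.Sensor;
  import android.hardware.SensorEvent;
  import android.hardware.SensorEventListener;
  import android.hardware.SensorManager;
  import android.location.Location;
  import android.location.LocationListener;
  import android.location.LocationManager;
  import android.os.Bundle;
  import android.os.IBinder;
  import java.io.FileWriter;
  import java.io.IOException;

  /** Logs accelerometer, gyro and GPS to a CSV file for later correlation with the activity list. */
  public class SensorLogService extends Service implements SensorEventListener, LocationListener {
      private FileWriter log;

      @Override
      public void onCreate() {
          try {
              // One line per event: wall-clock time, source, values.
              log = new FileWriter(getFileStreamPath("sensorlog.csv"), true);
          } catch (IOException e) {
              stopSelf();
              return;
          }
          SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
          sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                  SensorManager.SENSOR_DELAY_GAME);
          sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_GYROSCOPE),
                  SensorManager.SENSOR_DELAY_GAME);
          LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
          lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 0, this); // every 5 s
      }

      @Override
      public void onSensorChanged(SensorEvent event) {
          write(event.sensor.getName(),
                  event.values[0] + ";" + event.values[1] + ";" + event.values[2]);
      }

      @Override
      public void onLocationChanged(Location loc) {
          write("gps", loc.getLatitude() + ";" + loc.getLongitude() + ";" + loc.getSpeed());
      }

      private void write(String source, String values) {
          try {
              log.write(System.currentTimeMillis() + ";" + source + ";" + values + "\n");
          } catch (IOException ignored) { }
      }

      // onDestroy() should unregister the listeners and close the file; omitted for brevity.
      // Unused callbacks required by the interfaces:
      @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
      @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
      @Override public void onProviderEnabled(String provider) { }
      @Override public void onProviderDisabled(String provider) { }
      @Override public IBinder onBind(Intent intent) { return null; }
  }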

Output

  • correlation of the activity report with the sensor characteristics (see the correlation sketch at the end of this subsection)
  • "less is more": which sensors are needed for a movement profile
Advantages
Future-oriented scenario.
Disadvantages
Lots of new areas; highly research-relevant, and therefore riskier.
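For the "correlation of the activity report with the sensor characteristics" output, a first cut could be to window the sensor log, compute one feature per window (e.g. mean accelerometer magnitude) and correlate it with the hand-written report coded as an intensity per window. A minimal sketch with toy numbers (Pearson correlation; the intensity coding is an illustrative assumption, not a validated scale):

  /** Pearson correlation between a per-window sensor feature and hand-coded activity intensity. */
  public class ActivityCorrelation {

      static double pearson(double[] x, double[] y) {
          double meanX = mean(x), meanY = mean(y);
          double cov = 0, varX = 0, varY = 0;
          for (int i = 0; i < x.length; i++) {
              cov  += (x[i] - meanX) * (y[i] - meanY);
              varX += (x[i] - meanX) * (x[i] - meanX);
              varY += (y[i] - meanY) * (y[i] - meanY);
          }
          return cov / Math.sqrt(varX * varY);
      }

      static double mean(double[] v) {
          double s = 0;
          for (double d : v) s += d;
          return s / v.length;
      }

      public static void main(String[] args) {
          // One value per 1-minute window: mean accelerometer magnitude (m/s^2) ...
          double[] accMagnitude = {9.9, 10.4, 12.8, 13.1, 9.8, 15.2};
          // ... and the hand-written report coded as intensity (0 = sitting, 1 = walking, 2 = running).
          double[] reportedIntensity = {0, 0, 1, 1, 0, 2};
          System.out.println("correlation = " + pearson(accMagnitude, reportedIntensity));
      }
  }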

email discussion

Item
However, let me share something concerning the sensor device or smartphone/tablet that you are about to order. I believe it makes more sense for us to already have the information (e.g. the user's activity is "running") and use it for further operations, rather than dealing with the raw data (as we discussed in the meeting, getting the activity context from the smartphone would be done with the accelerometer and gyro sensors) and trying to turn it into meaningful data. After our meeting I came to the conclusion that starting from the point where we already have context data, and proceeding from there, would let us focus on other aspects instead of spending our time converting raw values (e.g. accelerometer value: 3.3) into significant data (e.g. user's state: sitting). In our case it is the heart rate sensor, though, which I believe we will be able to use efficiently. However, when it comes to using the raw data of the built-in sensors of the smartphone, I cannot see why we should bother with that sensor data if an application or a sensor can already give us the user's activity context seamlessly. This way, our focus can easily be on the mapping of the context data and the analysis of the results.


Answer
What is your suggestion for getting the activity data? Yes, we can get parts from the context (location), but can we get more? I want to avoid the user having to type in what he is doing (we'll never get anyone to do that; see the argumentation under (b)). That is why I would like to use the mobile phone sensor data. It might be that we don't need "sitting versus standing", but only "walking", "walking the stairs" and "running" to measure a total movement profile of the user.
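As an illustration of such a "walking / stairs / running" mapping on the phone side, a naive sketch that classifies each window by the variance of the accelerometer magnitude; the thresholds are invented placeholders and would have to be learned from the labelled data sets described under "Next steps":

  /** Naive activity guess per window from accelerometer magnitude samples (thresholds are placeholders). */
  public class NaiveActivityClassifier {

      static String classify(double[] magnitudes) {
          double mean = 0;
          for (double m : magnitudes) mean += m;
          mean /= magnitudes.length;

          double var = 0;
          for (double m : magnitudes) var += (m - mean) * (m - mean);
          var /= magnitudes.length;

          // Invented thresholds, only to show the shape of the mapping.
          if (var < 0.5)  return "still";
          if (var < 4.0)  return "walking";
          if (var < 10.0) return "walking stairs";
          return "running";
      }
  }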

In the medical world we currently argue about whether we can measure how much a person is moving during the day. Is it lack of movement, lack of high-intensity movement, wrong food, or something else that affects the health of the user? With our suggestion (c) I hope to be able to:

  • provide an estimate of that movement profile,
  • generate a fitness profile from knowing what the user is doing,
  • give recommendations on "what can be measured" and "what needs to be measured" to get a better understanding of "health".
Next steps (23 Nov 2012)
What about the discussion on "programming" the mobile phone sensing? If you can implement the measuring and logging of data, then I can run that app to capture activities. And when the Zephyr arrives, I will redo and add those activities. Thus we will get two sets of data:

  • activity list (hand-written) and mobile phone sensor data
  • activity list (hand-written), mobile phone sensor data and Zephyr monitoring data

If our focus is on determining the "level of activity", then we should search for literature on "activity level".

I would expect us to go along the following lines:

  • activity level - determination from mobile phone data - does it work?
  • accumulated activity report for a person (sum of daily activity) - can we achieve it with or without the Zephyr? (a small accumulation sketch follows below)
  • fitness level of a person - we need to find a "fitness formula".
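For the accumulated activity report, the per-window labels could simply be summed into minutes per activity per day; a rough score could then weight those minutes with MET-like factors. A sketch with illustrative weights only (the real "fitness formula" is exactly the open question above):

  import java.util.HashMap;
  import java.util.Map;

  /** Sums per-window activity labels into minutes per day plus a crude weighted activity score. */
  public class DailyActivityReport {
      private final Map<String, Integer> minutesPerActivity = new HashMap<String, Integer>();

      /** Call once per classified 1-minute window. */
      public void addWindow(String activity) {
          Integer current = minutesPerActivity.get(activity);
          minutesPerActivity.put(activity, current == null ? 1 : current + 1);
      }

      /** Illustrative, roughly MET-like weighting; not a validated fitness formula. */
      public double weightedScore() {
          Map<String, Double> weights = new HashMap<String, Double>();
          weights.put("still", 1.0);
          weights.put("walking", 3.5);
          weights.put("walking stairs", 6.0);
          weights.put("running", 8.0);
          double score = 0;
          for (Map.Entry<String, Integer> e : minutesPerActivity.entrySet()) {
              Double w = weights.get(e.getKey());
              score += (w == null ? 1.0 : w) * e.getValue();
          }
          return score;
      }

      public Map<String, Integer> minutes() {
          return minutesPerActivity;
      }
  }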

Just wondering if we have other options for measures, e.g. decision making or training suggestions ("Congratulations, you have performed 80% of your weekly training already"). Also just wondering: do pulse data tell us "how exhausted we are"?