
Evaluation

An evaluation of your project based on the requirements set out in the brief.

Basic 1
Requirement: Create a fully automated embedded system that utilises digital/analogue inputs and digital/analogue outputs to support the theme of wellbeing.
Improvement & Justification: The device was surprisingly comfortable and I did not need to charge its battery bank once. I found it very helpful and believe it is a worthy alternative to a white cane for aiding people with visual impairments.

Basic 2
Requirement: Validate and store the data gathered from the embedded system.
Improvement & Justification: The system stores the time and estimated speed of what it judges to be a collision with reasonable accuracy; an accelerometer could improve this further still. The device can also capture data reliably at the push of a button.

Basic 3
Requirement: Create an analysis component that can be used to calculate or predict certain information and inform future decisions relating to wellbeing.
Improvement & Justification: The device predicts an upcoming collision and warns the user before it happens. In my experience the warning is accurate as long as the sensor can see the obstacle at least half a second in advance (see the prediction sketch after this table).

Advanced 1
Requirement: Using Python and/or JavaScript, create a computer model based on your own personally created dataset of wellbeing data or one that you have sourced externally (suggestions included on the next page). Your personal dataset could be generated manually, programmatically or by the embedded system. The dataset should contain multiple descriptive features of wellbeing and the model should be capable of answering a minimum of two ‘what if’ type questions which you will need to devise yourself.
Improvement & Justification: The model that decides the audio response could be refined and modified quickly thanks to the custom data I gathered on false positives and false negatives. Having the predicted time until impact, the distance and the velocity was incredibly helpful in finding the sweet spot by graphing the data.

Advanced 2
Requirement: Each ‘what if’ question must use a minimum of three validated parameters (using at least two different data types) and, based on the information provided, offer the user insights in relation to some aspect of their wellbeing.
Improvement & Justification: The device answers all three ‘what if’ questions, although it sometimes over-reports collisions and misses collisions with thin obstacles outside its field of view (such as a railing). The device can also start generating sound while the user is resting their hand; the code for this had to be rushed because of time constraints. Both issues could be addressed by using an accelerometer in conjunction with the system, which, given more time, would eliminate most false positives. The emergency warning, by contrast, went through a great deal of development and several layers of validation, and it now almost always warns the user in due time.

Advanced 3
Requirement: Users can view data in a graphical format which displays information such as their progress using the system or the results of a ‘what if’ scenario.
Improvement & Justification: The Raspberry Pi can simply be plugged into a display, and the user's carer can launch one program that displays the simulated locations of obstacles and another that displays every recorded collision with its date, time and associated telemetry (see the plotting sketch after this table).
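As a rough illustration of the prediction and logging described in Basic 2 and Basic 3, the sketch below estimates time-to-impact from the measured distance and a closing velocity, and appends each detected collision to a CSV log. The names (estimate_time_to_impact, log_collision, WARNING_SECONDS, collisions.csv) and the one-second threshold are hypothetical placeholders, not the exact values used in the project code.

    import csv
    from datetime import datetime

    WARNING_SECONDS = 1.0  # hypothetical threshold: warn if impact is predicted within one second

    def estimate_time_to_impact(distance_m, closing_velocity_ms):
        """Predict seconds until impact; None if the obstacle is not closing."""
        if closing_velocity_ms <= 0:
            return None
        return distance_m / closing_velocity_ms

    def log_collision(path, distance_m, closing_velocity_ms):
        """Append one collision record (timestamp, distance, speed) to a CSV file."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), distance_m, closing_velocity_ms])

    # Example: an obstacle 0.8 m away closing at 1.2 m/s is ~0.67 s from impact,
    # so the device would raise the emergency warning and record the event.
    tti = estimate_time_to_impact(0.8, 1.2)
    if tti is not None and tti < WARNING_SECONDS:
        log_collision("collisions.csv", 0.8, 1.2)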
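The carer-facing collision viewer mentioned in Advanced 3 could, for example, read that same log and plot it. This is a minimal sketch assuming the CSV layout from the previous example and that matplotlib is installed on the Pi; it is not the project's actual viewer program.

    import csv
    from datetime import datetime
    import matplotlib.pyplot as plt

    # Read the collision log written by the embedded system (timestamp, distance, speed).
    times, speeds = [], []
    with open("collisions.csv") as f:
        for timestamp, _distance, speed in csv.reader(f):
            times.append(datetime.fromisoformat(timestamp))
            speeds.append(float(speed))

    # Plot the closing speed of each recorded collision against the time it happened.
    plt.plot(times, speeds, "o")
    plt.xlabel("Date / time of collision")
    plt.ylabel("Closing speed (m/s)")
    plt.title("Recorded collisions and their telemetry")
    plt.show()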

An evaluation of how your project has met the needs of the end user(s).

The device fulfils our basic criteria as set out in the Investigation section and helps a user navigate their environment. Although the extra camera functionality had to be left out, I am glad I could use the extra time to better tune the sound profile of the device. As it stands, the distance and emergency warnings convey the distance and angle of an obstacle accurately and quickly. I was surprised at how easy the device was to become accustomed to: after implementing interaural time delay, the classmates I tested it on could immediately understand their environment much better.
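For context, interaural time delay is the small difference in arrival time of a sound between the two ears that the brain uses to judge direction. Below is a minimal sketch of one common approximation, assuming an ear separation of about 0.18 m and a speed of sound of roughly 343 m/s; the function name and constants are illustrative, not the exact values used in the project.

    import math

    EAR_SEPARATION_M = 0.18    # approximate distance between the ears
    SPEED_OF_SOUND_MS = 343.0  # speed of sound in air at room temperature

    def interaural_time_delay(azimuth_deg):
        """Approximate delay (seconds) between the ears for a sound at the given angle.
        0 degrees is straight ahead; positive angles are to the right."""
        return EAR_SEPARATION_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND_MS

    # An obstacle 30 degrees to the right arrives ~0.26 ms earlier at the right ear,
    # so the left channel of the warning tone would be delayed by that amount.
    delay = interaural_time_delay(30)
    print(f"{delay * 1000:.2f} ms")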

Suggest how you would further improve/iterate this project

If not for the time constraints I would once again try to get the camera to recognise objects in the environment and read them out. I would also have liked to integrate OCR (image to text) so the user could hear things like large signs and posters that do not have braille. Using a wide field-of-view camera on the front (the one available to me was so zoomed in that it was useless to a blind person) would allow the device to identify what the user is reaching for with their hand and read it out to them. Integrating a braille display into the project would also be of great help, as reading text aloud to the user is not always ideal.

The device could also be better tuned, especially with regard to the resting mode: unless the user's hand is perfectly steady, the device drops in and out of resting mode, which annoys the user. This could be solved by keeping a second velocity variable with much heavier smoothing than the one used for the other calculations, and letting that variable alone decide when to enter and leave resting mode (sketched below).
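A minimal sketch of that idea follows, assuming an exponentially smoothed velocity and a simple hysteresis band so the mode cannot flicker; the class name, smoothing factor and thresholds are illustrative choices rather than values taken from the project. The main loop would keep using its existing, more responsive velocity estimate for the warnings and only consult this filter to mute or unmute the audio.

    class RestingModeFilter:
        """Heavily smoothed velocity estimate used only to decide resting mode."""

        def __init__(self, alpha=0.02, enter_below=0.05, exit_above=0.15):
            self.alpha = alpha              # small alpha = heavy smoothing
            self.enter_below = enter_below  # m/s: go to rest below this speed
            self.exit_above = exit_above    # m/s: leave rest only above this speed
            self.smoothed = 0.0
            self.resting = False

        def update(self, raw_velocity_ms):
            # Exponential moving average of the hand's speed.
            self.smoothed = (1 - self.alpha) * self.smoothed + self.alpha * abs(raw_velocity_ms)
            # Hysteresis: separate thresholds for entering and leaving resting mode,
            # so small jitters around a single threshold cannot toggle the mode.
            if self.resting and self.smoothed > self.exit_above:
                self.resting = False
            elif not self.resting and self.smoothed < self.enter_below:
                self.resting = True
            return self.resting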