After the completion of STEM I, students work together in small teams to engineer assistive technology devices to aid an individual in the community. They meet with clients, conduct market research and patent searches, design and build prototypes, and present their device to the community. I worked alongside Samhitha Bodangi, Derek Desrosiers, Abigail Figueroa, and Kruthi Gundu to create an assistive device for visually impaired people (VIPs).
Much of the information in the outside world is conveyed visually through text. Advertisements, signs, menus, and similar materials often have no Braille translation, which leaves visually impaired people (VIPs) reliant on audio or on sighted assistance. For visually impaired adolescents, Braille literacy has been shown to have major implications for future outcomes such as employment and socioeconomic status. However, there is a lack of interactive, affordable educational devices for younger VIPs.
The goal is to design an assistive device for VIPs that uses optical character recognition (OCR) to capture images of text in the environment. The device translates the text in each image into Braille configurations and presents them on an electromechanical refreshable Braille module, giving the VIP a tactile medium for reading. The key component of this design is a cam actuator, which consists of an eccentric cam with a magnet embedded in it. This assembly is rotated to either of two stable positions by a small electromagnetic coil that reverses its polarity, and the rotation of the cam raises or lowers a Braille dot. All electrical designs will be built on a PCB with an Arduino circuit.
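To make the text-to-Braille step concrete, the minimal sketch below (not the team's code) shows one way a character could be mapped to the six dot states that a cam-actuator driver would need for a single cell. The lookup table covers only a few letters, and all names here are illustrative assumptions; in practice a full table or a Braille library would supply the complete alphabet.

```python
# Sketch: map a character to the six raise/lower states of one Braille cell.
# Dot numbering follows the standard convention: 1-3 down the left column,
# 4-6 down the right column.

# Hypothetical partial lookup for illustration only.
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def char_to_pin_states(ch):
    """Return six booleans: True = raise the dot, False = keep it lowered."""
    dots = BRAILLE_DOTS.get(ch.lower(), set())
    return [dot in dots for dot in range(1, 7)]

if __name__ == "__main__":
    for ch in "bad":
        print(ch, char_to_pin_states(ch))
```

Each boolean in the returned list corresponds to one cam actuator, so a string of text becomes a sequence of six-element patterns sent to the display, one cell at a time.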
The final design prototype is an affordable electromechanical refreshable Braille display module that combines micromagnets with optical character recognition (OCR). It consists of a Raspberry Pi as the main computing unit, a main PCB that houses all of the Braille cells, and a separate smaller PCB for each Braille cell. For the text-to-Braille conversion, the Raspberry Pi captures camera images and performs OCR using Python's Tesseract library. After converting the images into text, it uses the pybraille library to convert the text into Unicode Braille, which is displayed on a live web server.

For the Braille cells and PCB configuration, we printed the Braille cells on an SLA 3D printer, scaling up the CAD file to attain a more precise resolution. The electromechanical component uses solenoids that generate an electromagnetic field to rotate the cam, raising the Braille pin. The main PCB is controlled by an Arduino Nano, which sends instructions to each smaller PCB holding a Braille cell. In the future, the two components will be combined to automate the process and connect the OCR output directly to the device. Our design studies (as seen in the design study document) show that the aspects of the Braille display developed so far function as intended. Future improvements require a more precise 3D printer that can print the small Braille cells at the correct resolution. Figure 2 details the final, fully integrated device, combining the optical and Braille display subsystems; this is the more complex device our group will work toward as we continue the project.
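The sketch below outlines the Raspberry Pi side of the pipeline described above, assuming the pytesseract wrapper for Tesseract, pybraille's convertText helper, and a Flask page standing in for the live web server. The camera capture step is replaced by loading an image file from a hypothetical path; it is an illustration of the flow, not the team's implementation.

```python
# Sketch of the OCR-to-Braille pipeline: image -> text -> Unicode Braille -> web page.
from flask import Flask
from PIL import Image
import pytesseract
from pybraille import convertText

app = Flask(__name__)
IMAGE_PATH = "capture.jpg"  # hypothetical path to the latest camera frame

@app.route("/")
def show_braille():
    # OCR the captured image into plain text
    text = pytesseract.image_to_string(Image.open(IMAGE_PATH))
    # Convert the recognized text into Unicode Braille for the live page
    braille = convertText(text)
    return f"<p>{text}</p><p>{braille}</p>"

if __name__ == "__main__":
    # On the Raspberry Pi this would run alongside the camera capture loop
    app.run(host="0.0.0.0", port=5000)
```

Once the two halves of the prototype are joined, the same Braille patterns produced here could be forwarded (for example, over a serial link) to the Arduino Nano that drives the individual Braille-cell PCBs.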