We have devised a novel "Point-and-Tap" interface that enables people who are blind or visually impaired (BVI) to easily acquire multiple levels of information about tactile graphics and 3D models. The interface uses an iPhone's depth and color cameras to track the user's hands while they interact with a model. When the user points to a feature of interest on the model with their index finger, the system reads aloud basic information about that feature.
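The abstract does not describe the implementation; the sketch below is only a rough illustration of the lookup step such an interface needs, with invented feature names, coordinates, and thresholds: map a tracked fingertip position to the nearest labeled feature, and return basic information on a point and more detail on a tap.

```python
import math

# Hypothetical hotspot table (invented): feature name -> 3D position on the model (meters)
# plus two levels of description, mirroring the point (basic) vs. tap (detailed) idea.
HOTSPOTS = {
    "slide": {"pos": (0.10, 0.05, 0.02), "basic": "Slide", "detail": "A curved slide, two meters tall."},
    "swing": {"pos": (0.30, 0.12, 0.02), "basic": "Swing", "detail": "A two-seat swing set."},
}

def nearest_hotspot(fingertip, max_dist=0.03):
    """Return the hotspot closest to the tracked fingertip, or None if nothing is nearby."""
    best_name, best_d = None, float("inf")
    for name, info in HOTSPOTS.items():
        d = math.dist(fingertip, info["pos"])
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= max_dist else None

def announce(fingertip, tapped=False):
    """Return basic info when the user points, detailed info when they tap."""
    name = nearest_hotspot(fingertip)
    if name is None:
        return None
    return HOTSPOTS[name]["detail" if tapped else "basic"]

print(announce((0.105, 0.052, 0.021)))        # -> "Slide"
print(announce((0.105, 0.052, 0.021), True))  # -> detailed description
```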
A highly sensitive tumor necrosis factor α (TNF-α) detection method based on a surface-enhanced Raman scattering (SERS) magnetic patch sensor is reported. Magnetic beads (MNPs) and Au@4-MBN@Ag core-shell nanoparticles served as the capture matrix and signaling probe, respectively. For this purpose, antibodies were immobilized on the surface of the magnetic beads, and Au@4-MBN@Ag core-shell structures coupled with aptamers and TNF-α antigen were then added sequentially to form a sandwich immune complex.
As widely used industrial additives in plastic products, phthalate ester (PAE) plasticizers can easily migrate into food, threatening human health. In this work, we proposed a rapid, precise, and reliable method to detect PAE plasticizers in edible oils using surface-enhanced Raman spectroscopy (SERS). A two-dimensional (2D) silver plate synergizing with a nanosilver sol was prepared as a SERS substrate to detect potassium hydrogen phthalate (PHP), a hydrolysate of a PAE plasticizer.
Maps are indispensable for helping people learn about unfamiliar environments and plan trips. While tactile (2D) and 3D maps offer non-visual map access to people who are blind or visually impaired (BVI), this access is greatly enhanced by adding interactivity to the maps: when the user points at a feature of interest on the map, the name and other information about the feature are read aloud in audio. We explore how the use of an interactive 3D map of a playground, containing over seventy play structures and other features, affects spatial learning and cognition.
To develop multifunctional water purification materials capable of degrading organic pollutants while simultaneously inactivating microorganisms in contaminated wastewater streams, we report here a facile, eco-friendly, one-step in-situ biosynthetic method to immobilize molybdenum disulfide in bacterial cellulose. The resultant nanocomposite, termed BC/MoS2, was shown to possess photocatalytic activity capable of generating •OH from H2O2, while also exhibiting photodynamic/photothermal mechanisms, a combination that acts synergistically to degrade pollutants and inactivate bacteria. In the presence of H2O2, the BC/MoS2 nanocomposite exhibited excellent antibacterial efficacy, upwards of 99%.
Owing to the rise in prevalence of multidrug-resistant pathogens attributed to the overuse of antibiotics, infectious diseases caused by the transmission of microbes from contaminated surfaces to new hosts are an ever-increasing threat to public health. Thus, novel materials that can stem this crisis, and that act via multiple antimicrobial mechanisms so that pathogens cannot develop resistance to them, are urgently needed. Toward this goal, in this work we developed in situ grown bacterial cellulose/MoS2-chitosan nanocomposite materials (termed BC/MoS2-CS) that combine membrane disruption with photodynamic and photothermal antibacterial activities to achieve more efficient, synergistic bactericidal activity.
View Article and Find Full Text PDFComput Help People Spec Needs
September 2020
We describe a new approach to audio labeling of 3D objects such as appliances, 3D models and maps that enables a visually impaired person to label objects with audio. Our approach is embodied in CamIO, a smartphone app that issues audio labels when the user points to a hotspot (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot by pointing at the location with a second stylus and recording a personalized audio label for the hotspot.
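CamIO's internals are not given here; as a hypothetical sketch of the hotspot bookkeeping such an app might use (the class name, file paths, and trigger radius are invented), the following stores each hotspot as a 3D location paired with a recorded audio label, looks up the nearest one when the user points, and persists the labels so they stay with the object.

```python
import json
import math

class HotspotStore:
    """Hypothetical sketch of CamIO-style hotspot bookkeeping: each hotspot pairs a 3D
    location (e.g., the stylus-tip position estimated from the camera) with the path of
    a user-recorded audio label. All names and thresholds here are invented."""

    def __init__(self, trigger_radius=0.02):
        self.trigger_radius = trigger_radius   # how close the pointing stylus must be (meters)
        self.hotspots = []                     # list of {"pos": [x, y, z], "audio": path}

    def create(self, position, audio_path):
        """Add a hotspot at the location indicated by the second stylus, with its audio label."""
        self.hotspots.append({"pos": list(position), "audio": audio_path})

    def query(self, position):
        """Return the audio label of the nearest hotspot within the trigger radius, else None."""
        best, best_d = None, float("inf")
        for h in self.hotspots:
            d = math.dist(position, h["pos"])
            if d < best_d:
                best, best_d = h["audio"], d
        return best if best_d <= self.trigger_radius else None

    def save(self, path):
        """Persist the labels so they remain available in later sessions."""
        with open(path, "w") as f:
            json.dump(self.hotspots, f)

store = HotspotStore()
store.create((0.12, 0.30, 0.05), "labels/power_button.m4a")   # created with the second stylus
print(store.query((0.121, 0.301, 0.049)))                      # -> "labels/power_button.m4a"
```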
We have successfully prepared a highly sensitive sandwich nanosensor combining Fe3O4 beads and Au@ATP@Ag nanorods for histamine detection based on surface-enhanced Raman spectroscopy (SERS). The Fe3O4 beads bearing -COOH groups served as the capture component to enrich histamine. The Au@ATP@Ag core-shell nanorods functionalized with Nα,Nα-bis(carboxymethyl)-L-lysine (AB-NTA) were then used to bind the imidazolyl group of histamine, while the internal standard 4-aminothiophenol (4-ATP) embedded in the core-shell structure provided the SERS signal.
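As a rough illustration of how an embedded internal standard is typically used for quantification (the numbers below are made up and this is not the paper's calibration), one can ratio the analyte-related peak intensity against the 4-ATP internal-standard peak and fit a log-linear calibration curve:

```python
import numpy as np

# Illustrative, made-up calibration data (not from the paper): histamine concentration (M)
# versus the ratio of a histamine-related SERS peak intensity to the 4-ATP internal-standard
# peak intensity. Ratioing against the internal standard compensates for variation in laser
# power, focus, and substrate from measurement to measurement.
conc  = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
ratio = np.array([0.21, 0.35, 0.52, 0.66, 0.83])

# Fit the usual log-linear calibration: ratio = a * log10(conc) + b
a, b = np.polyfit(np.log10(conc), ratio, 1)

def estimate_concentration(measured_ratio):
    """Invert the calibration curve to estimate an unknown concentration from its peak ratio."""
    return 10 ** ((measured_ratio - b) / a)

print(f"Estimated concentration: {estimate_concentration(0.45):.2e} M")
```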
View Article and Find Full Text PDFBiomed Opt Express
October 2018
Superhydrophobic silver films were fabricated by the silver-mirror reaction followed by surface functionalization with thiols. Thiol functionalization significantly improved the hydrophobicity of the Ag films (AFS), and their contact angles increased slightly as the thiol alkyl chain was extended, reaching about 160°. The surface-enhanced Raman scattering (SERS) detection capacity of these films was investigated, and AFS-Dodec proved the best substrate for R6G detection, with a concentration limit of 10 M.
Herein we utilized coordination interactions to prepare a novel core-shell plasmonic nanosensor for the detection of glucose. Specifically, Au nanoparticles (NPs) were strongly linked with Ag+ ions to form a sacrificial Ag shell, using 4-aminothiophenol (4-PATP) as a mediator that also served as an internal standard to reduce the influence of the surrounding environment on the detection. The resultant Au-PATP-Ag core-shell systems were characterized by UV-vis spectroscopy, transmission electron microscopy, and surface-enhanced Raman scattering (SERS) techniques.
View Article and Find Full Text PDFComput Help People Spec Needs
January 2014
This paper describes recent progress on Crosswatch, a smartphone-based computer vision system developed by the authors to provide guidance to blind and visually impaired pedestrians at traffic intersections. One of Crosswatch's key capabilities is determining the user's location (with precision much better than is obtainable from GPS) and orientation relative to the crosswalk markings at the intersection where the user is standing; this capability will be used to help the user find important features of the intersection, such as walk lights, pushbuttons and crosswalks, and to achieve proper alignment with them. We report two new contributions to Crosswatch: (a) experiments with a modified user interface, tested by blind volunteer participants, that makes it easier to acquire intersection images than with previous versions of Crosswatch; and (b) a demonstration of the system's ability to localize the user with precision better than that obtainable from GPS, along with an example of its ability to estimate the user's orientation.
There is growing interest among smartphone users in the ability to determine their precise location in their environment for a variety of applications related to wayfinding, travel and shopping. While GPS provides valuable self-localization estimates, its accuracy is limited to approximately 10 meters in most urban locations. This paper focuses on the self-localization needs of blind or visually impaired travelers, who are faced with the challenge of negotiating street intersections.
Purpose: This paper describes recent progress on the "Crosswatch" project, a smartphone-based system developed for providing guidance to blind and visually impaired travelers at traffic intersections. Building on past work on Crosswatch functionality to help the user achieve proper alignment with the crosswalk and read the status of walk lights to know when it is time to cross, we outline the directions Crosswatch is now taking to help realize its potential for becoming a practical system: namely, augmenting computer vision with other information sources, including geographic information systems (GIS) and sensor data, and inferring the user's location much more precisely than is possible through GPS alone, to provide a much larger range of information about traffic intersections to the pedestrian.
Design/methodology/approach: The paper summarizes past progress on Crosswatch and describes details about the development of new Crosswatch functionalities.
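The abstract sketches the idea of combining GPS, GIS data, and computer vision for finer localization without giving the method; the toy example below (made-up coordinates and a deliberately simplistic snap-and-offset scheme, not the Crosswatch algorithm) only illustrates the general notion of refining a coarse GPS fix with map knowledge and a vision-derived offset.

```python
import math

# Toy illustration (not the Crosswatch method): refine a coarse GPS fix by snapping it to the
# nearest intersection corner from a GIS database, then apply a small metric offset estimated
# by computer vision (e.g., the user's position relative to a detected crosswalk).
GIS_CORNERS = [(37.7740, -122.4194), (37.7741, -122.4192), (37.7739, -122.4191)]  # made-up corners

def snap_to_corner(gps_fix):
    """Pick the GIS corner closest to the (noisy) GPS estimate."""
    return min(GIS_CORNERS, key=lambda c: math.dist(c, gps_fix))

def refine(gps_fix, vision_offset_m=(0.0, 0.0)):
    """Combine the snapped corner with a vision-derived offset (meters converted to degrees)."""
    lat, lon = snap_to_corner(gps_fix)
    dlat = vision_offset_m[0] / 111_320.0                                  # meters -> deg latitude
    dlon = vision_offset_m[1] / (111_320.0 * math.cos(math.radians(lat)))  # meters -> deg longitude
    return lat + dlat, lon + dlon

print(refine((37.77405, -122.41935), vision_offset_m=(1.5, -0.8)))
```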
Proc IEEE Workshop Appl Comput Vis
February 2011
There is a growing body of work addressing the problem of localizing printed text regions occurring in natural scenes, all of it focused on images in which the text to be localized is resolved clearly enough to be read by OCR. This paper introduces an alternative approach to text localization based on the fact that it is often useful to localize text that is identifiable as text but too blurry or small to be read, for two reasons. First, an image can be decimated and processed at a coarser resolution than usual, resulting in faster localization before OCR is performed (at full resolution, if needed).
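As a minimal sketch of the coarse-to-fine idea described here (the gradient-and-morphology detector below is a generic stand-in, not the paper's method), one can decimate the image, find candidate text-like regions cheaply at the coarse scale, and map the boxes back to full resolution for a later OCR pass:

```python
import cv2
import numpy as np

def coarse_text_regions(image_path, scale=0.25):
    """Sketch of coarse-to-fine text localization: decimate the image, find candidate
    text-like regions cheaply, then rescale the boxes to full resolution for OCR.
    The simple gradient+morphology detector is only a stand-in heuristic."""
    full = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(full, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    # Text tends to produce dense gradient energy; close small gaps between characters.
    grad = cv2.morphologyEx(small, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    _, bw = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, np.ones((3, 9), np.uint8))

    boxes = []
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 2 * h and w * h > 100:                      # crude "looks like a text line" filter
            # Rescale the box to full-resolution coordinates for a later OCR pass.
            boxes.append(tuple(int(v / scale) for v in (x, y, w, h)))
    return boxes
```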
Crossing an urban traffic intersection is one of the most dangerous parts of travel for a blind or visually impaired person. Building on the authors' past work on the issue of proper alignment with the crosswalk, this paper addresses the complementary issue of knowing when it is time to cross. We describe a prototype portable system that alerts the user in real time once the Walk light is illuminated.
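The abstract does not say how the Walk light is detected; purely as a toy illustration of one naive approach (the region of interest and thresholds are invented, and this is not the paper's detector), one could count bright, low-saturation pixels in a patch believed to contain the pedestrian signal:

```python
import cv2

def walk_light_on(frame_bgr, roi):
    """Toy sketch (not the paper's detector): inside a region of interest assumed to contain
    the pedestrian signal, count bright white-ish pixels; if enough are lit, report that the
    Walk light appears to be on."""
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    # Low saturation + high value ~ a bright white light (the lit "walking person" symbol).
    mask = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))
    return cv2.countNonZero(mask) > 0.05 * w * h
```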
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e.
View Article and Find Full Text PDFActa Crystallogr Sect E Struct Rep Online
August 2010
The title dinuclear Fe(II) complex, [Fe2(SO4)2(C13H8N4)2(H2O)4]·2H2O, is centrosymmetric. Two sulfate anions bridge two Fe(II) cations to form the binuclear complex. Each Fe(II) cation is coordinated by two N atoms from a 1H-imidazo[4,5-f][1,10]phenanthroline (IP) ligand, two O atoms from two sulfate anions and two water molecules in a distorted octahedral geometry.
View Article and Find Full Text PDFTechnol Disabil
October 2008
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian's travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information.
View Article and Find Full Text PDFComput Help People Spec Needs
July 2008
Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia camera phone in real time, which automatically takes a few images per second, uses the cell phone's built-in computer to analyze each image in a fraction of a second and sounds an audio tone when it detects a crosswalk.
View Article and Find Full Text PDFProc IEEE Comput Soc Conf Comput Vis Pattern Recognit
January 2008
Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia N95 camera phone in real time, which automatically takes a few images per second, analyzes each image in a fraction of a second and sounds an audio tone when it detects a crosswalk.