Mobile Accommodation Solution for Workplace Accommodation

According to the U.S. Bureau of Labor Statistics, only 20.4 percent of people with disabilities were employed in March 2017, as opposed to 68.7 percent of people without disabilities. Therefore, creating better support for job applicants and employees is critical to creating a diverse pool of talent in the workplace, optimizing the productivity of every worker, and increasing job satisfaction.

The Mobile Accommodation Solution (MAS) app – the iOS version of which is now available in the App Store – is a first-of-its-kind tool that helps employers and others manage workplace accommodation requests throughout the employment lifecycle. Using the app, employers can track the status of requests; access fillable forms; and store, print and export records that can be imported into enterprise information systems. The app was developed by West Virginia University’s Center for Disability Inclusion in partnership with the Job Accommodation Network and IBM; funding came from the National Institute on Disability, Independent Living, and Rehabilitation Research.

STEM Professor Receives Award to Study Technologies for Disability Community

By Leslie King

The trichotillomania bracelet looks unassuming, just like any other smart technology worn around the wrist. But rather than counting steps or heartbeats, it serves another purpose.

The wristband vibrates when it detects the wearer subconsciously beginning to pull out strands of hair. For people with trichotillomania, the wireless device helps them notice the gesture and change the behavior instead of following the compulsion to yank out their hair.

This tool, along with other technologies for the disability community, intrigues Ashley Shew, an assistant professor in the Virginia Tech Department of Science, Technology, and Society. In July 2018, she received a National Science Foundation Faculty Early Career Development Award that will allow her to investigate the personal accounts of people with disabilities, as well as their opinions of the technologies designed for them.

The prestigious honor, given to junior faculty who exemplify the role of teacher-scholars through outstanding research and education, is better known as the CAREER award.

“I’m interested in the storylines that disabled people tell about their bodies and how their relationships with technology differ from popular and dominant narratives we have in our society,” said Shew, who herself identifies as disabled.

Her research focuses on discrepancies between how scientists and engineers understand and explain their work related to disability and the actual needs and wants of people with disabilities. Shew said there is a disconnect between media-based depictions and reality within the realm of science, technology, engineering, and mathematics (STEM) education and technology design.

“This means people aren’t always designing with real users in mind, but with ideas about what users want based on the entertainment media,” she said. “This is problematic because nondisabled people create and depict disabled people. There is little authentic disability representation in the media, so all these media-driven narratives about technology get fed into engineering.”

Shew cites several misleading media-supported tropes. Negative stereotypes encourage the public to view disabled people with pity, as sinners or fakers, or as resource burdens. And while the trichotillomania bracelet is small and unobtrusive, many technologies, such as wheelchairs or exoskeletons, are not. Some people who could benefit from viable supportive devices might shy away from them to avoid public skepticism or castigation.

And the reverse depictions are just as misrepresentative.

“There are also tropes about inspiration and courage,” Shew said. “The one people lean on, which I’ll be assessing through this grant, involves a focus on inspiration and courage, along the lines of, ‘You’re such an inspiration because you’re disabled in public.’ If you’re not inspiring, you’re courageous to overcome what you’re overcoming. If we believe you’re truly disabled, then if you’re out having a regular life, you’re considered heroic in ways that don’t map onto real life at all.”

Designers often create technologies with this trope in mind. An example of this is a surge of 3D-printed hands for young amputees. Marketed with terms such as “superhero” hands or arms, the branding presents these children as different from people without disabilities. Shew describes this phenomenon as techno-ableism, when technology makers try to empower others with helpful tools but use rhetoric that has the opposite effect. As part of her CAREER award, Shew will publish a book about this phenomenon.

Shew will also seek to counter unrealistic portrayals of people with disabilities by educating creators of disability technologies. Her research will incorporate interviews, memoirs, and the compilation of existing materials into classroom public outreach, an open-access website, and a textbook to complement existing STEM educational resources.

Shew is collaborating with Alexander Leonessa and Raffaella De Vita, associate professors in the College of Engineering, who have also received CAREER awards. In 2019, she will work with them through Virginia Tech’s STEMABILITY, a summer camp for students with disabilities.

A Virginia Tech faculty member since 2011, Shew received a Certificate of Teaching Excellence in 2017 and a Diversity Award in 2016, both from the College of Liberal Arts and Human Sciences. Also in 2016, she received the Sally Bohland Award for Excellence in Access and Inclusion from the Virginia Tech office of Services for Students with Disabilities.

Shew co-edited Spaces for the Future: A Companion to Philosophy of Technology with Joseph Pitt, a Virginia Tech professor of philosophy. She is also the author of Animal Constructions and Technological Knowledge, published by Lexington Books/Rowman & Littlefield.

Shew is the fourth faculty member in the College of Liberal Arts and Human Sciences to receive the prestigious National Science Foundation CAREER Award in the past several years.

Source: vtnews.vt.edu

Cox and Comcast Focusing on Accessibility

Cox Communications’ new Contour voice remote, powered by Comcast’s X1 platform, empowers customers who have limited mobility or dexterity, or a visual disability. With the push of a button, you can search, surf and record your favorite programs, all with the sound of your voice.

Plus, the new Contour features Voice Guidance, a “talking guide” developed by Comcast, that speaks what’s on the screen, including program descriptions and navigation options. Now individuals with accessibility needs can easily explore thousands of TV shows and movies.

This proactive step is not limited to its product offerings. Cox is also hiring individuals with disabilities to test its products.

Mona Lisa Faris, president and publisher of DIVERSEability Magazine, spoke with representatives from Cox and Comcast to discuss how their collaboration is helping both companies become more proactive.

Ilene Albert, Executive Director, Value Added Services and Diversity Products at Cox, began with some history behind this new focus at Cox.

“Last December we launched a center of excellence for accessibility, to focus on developing products, support and services for our customers who have disabilities and accessibility needs. We are very excited about this; we work with all of our peers across the product organization to make sure we are looking at the broad picture of accessibility,” Albert explained. “We partner well with Comcast, who has been the leader in helping develop products for the accessibility community.”

Jennifer Cobb is Director of Diversity Products at Cox. She told us, “Last year, we worked to set up the business processes so that, going forward, we were included in all new product development. One of the things we are working toward is integrating more research with persons with disabilities into our overall processes.”

Thomas Wlodkowski is the Vice President of Accessibility at Comcast. He was brought in to start up an accessibility office and, because he is visually impaired, he provides a unique perspective for Comcast, helping the company open products and services to the widest possible audience.

“I’ve been in the accessibility field before it was really considered a field—since the early 1990s,” Wlodkowski reports. “At Comcast, our program is founded on three pillars: customer experience, product capabilities and infrastructure. My team is in the product group, and we launched voice guidance, which enables people who are visually impaired to navigate onscreen menus. We have an accessibility lab in our Philadelphia corporate headquarters that we use to drive employee awareness, and we also bring external community members in to help with user testing. It’s a big piece of our effort.”

Wlodkowski went on to say, “There is a saying in the disability civil rights community: Nothing about us without us. We really need to bring people with disabilities into the development process to find out where the barriers are.”

“At Comcast, we are building a lot of the accessibility solutions that, essentially, Cox would have had to build on their own. They get accessibility as part of the relationship. Then the two accessibility teams can partner to share best practices.”

“X1 has been a great product for us,” Wlodkowski said. “It’s based in the cloud, so we don’t have to install additional software or hardware in the box. We can roll new features in—and as we do that, Cox can also pick them up as well.”

New features were recently added, just as Wlodkowski described: Cox released a statement earlier this month announcing that YouTube is now available to Cox customers via the Contour app.

As Tom Wlodkowski pointed out, “By building accessible products, it builds a better product overall for everyone.” Accessibility is a fairly new frontier, as more and more companies realize that dedicating teams to ensuring accessibility not only improves the products offered to those with disabilities but also provides a better experience for all customers.

Cox’s licensed version of Comcast’s X1 platform, Contour, is now its flagship video product. And fans of The Voice who have Comcast or Cox as their cable provider will be happy to know they can now use their remote to cast their votes on the popular live show. The Contour/X1 technology is truly changing the television viewing experience, offering something for everybody to love!

The Ability Hacks: The story of two hackathon teams embracing the transformative power of technology

This week is the Microsoft One Week Hackathon, where employees from around the company work tirelessly to “hack” solutions to some of the world’s biggest challenges. The opportunity to empower people through technology, particularly those with disabilities, has never been more important.

Back in 2014, we had 10 ability hack projects; last year we had 150 projects and 850 people; and this year – well, it’s going to be exciting to see. This is a wonderful testament to our employees and their passion for innovation and conviction in the importance of empowering every person and organization to achieve more.

Two Ability Hack projects that won the company hackathon in 2014 and 2015 inspired many, and this year we will be giving hackers copies of a new book covering the journeys of those hackathon teams. “The Ability Hacks” shares the behind-the-scenes stories of the hackers who pioneered two innovative hacks-turned-solutions used today by people with disabilities around the world – the Ability EyeGaze Hack team and the Learning Tools Hack team.

We hope this book, and the journeys these teams have been on, can help spark a conversation about the transformative power of technology, and encourage engineers and developers to build the next wave of inclusive technology. I encourage you to read, and as a teaser, here are a few highlights:

EyeGaze: Reinstating independence by revolutionizing mobility

“Until there is a cure for ALS, technology can be that cure.” – Steve Gleason, former NFL player

In 2014, former NFL player Steve Gleason, who has a neuromuscular disease called amyotrophic lateral sclerosis (ALS), sent an email to Microsoft challenging employees to develop a technology that could allow him to drive a wheelchair with his eyes. A group of software engineers, program managers, marketers and advocates formed the Ability Eye Gaze hack team and accepted this challenge ahead of the 2014 Microsoft hackathon.

Through hard work, determination and despite a few twists and turns, the team collaborated to build a solution complete with duct tape that allowed Steve to control his wheelchair with his eyes. This invention had impact, ultimately inspiring the formation of the Microsoft Research NExT Enable team, who have continued working on technology for people with ALS and other disabilities. This has already resulted in a new feature named Eye Control, which was developed in collaboration with the Windows team, and was included in Windows 10 last year.

Learning Tools: Transforming education and learning in the classroom

“If you design things for the greatest accessibility – Learning Tools is like that – it makes everything accessible to all, and why wouldn’t we want that?” – A fourth-grade teacher

While Learning Tools involved a different set of players in a different part of Microsoft, its story shares the same lessons, opportunities, passion and impact experienced by the Eye Gaze team. Winner of the 2015 Hackathon, Learning Tools helps students with dyslexia learn how to read and is now transforming education for teachers, students, administrators and parents.

What’s amazing about this story is the diversity of the team, which included developers, a reading team and a speech pathologist, working extensively with students and educators to create the product. While originally created for folks with dyslexia, the Learning Tools team is seeing benefits for folks with dysgraphia and ADHD, as well as for English language learners and emerging readers. Today, Learning Tools is incorporated into apps, Office, and Edge, reaching 13 million active users in more than 40 languages. Like the Eye Gaze team before it, the Learning Tools team evolved from a passionate hackathon project into a strategic business. You can even read “The Ability Hacks” using Learning Tools: just download the PDF and open it in Microsoft Edge.

‘It’s not about the technology. It’s about the people.’

As Peter Lee, corporate vice president, Microsoft Healthcare, shares in the book’s foreword, “A focus on inclusion helps a team become more empathetic with its users, which in turn affects deeply the design and development process of products.”

Personally, I go to work every day feeling humbled that I represent a company with an incredible mission to empower every person on the planet to achieve more. I’m grateful for the chance to share just a few of their stories in “The Ability Hacks.” Trust me, it’s two stories of many that have taken place over the last four years and there will be a lot more in our future.

While we’ve come a long way in incorporating accessibility and inclusivity in everything we do, the truth is that accessibility is a journey. There is more in front of us than behind us. Please read the book and join the conversation about inclusive technology design on Twitter via #abilityhacks. And if you want to create products for people with disabilities, do check out our AI for Accessibility program, which provides access to advanced Microsoft Azure cloud computing resources to individuals and organizations working to empower people with disabilities around the world, at https://www.microsoft.com/en-us/ai-for-accessibility.

The Ability Hacks

Aligned with the first day of its One Week Hackathon, Microsoft will launch a new book that shares the behind-the-scenes stories of two Microsoft Hackathon teams who embraced their passion and pioneered two innovative hacks-turned-solutions used today by people with disabilities around the world.

The book includes a foreword by Corporate Vice President Peter Lee and an afterword by Chief Accessibility Officer Jenny Lay-Flurrie, and is available in paperback and Kindle editions at Amazon.com and for download as a PDF and EPUB.

We hope this book, and the journeys these teams have been on, can help spark a conversation about the transformative power of technology, and encourage engineers and developers to build the next wave of inclusive technology. If you want to create products for people with disabilities, do check out our AI for Accessibility program, which provides access to advanced Microsoft Azure cloud computing resources and grants to individuals and organizations working to empower people with disabilities around the world, at https://www.microsoft.com/en-us/ai-for-accessibility.

Continue on to Microsoft’s newsroom to read the complete blog.

How Xbox Adaptive Controller Will Make Gaming More Accessible

On Wednesday night, Microsoft unveiled its new Xbox Adaptive Controller for the Xbox One console, aimed at making gaming more accessible for those with disabilities and mobility limitations as part of their Gaming for Everyone initiative.

The device allows for individual customization through a series of peripheral attachments that let gamers tailor the controls to their own specific comfort.

For many, the current Xbox controller design (and those of other consoles’ controllers, like Nintendo’s Switch and Sony’s PlayStation 4) is a challenge to use, as it was not designed for individuals with mobility impairments. The Adaptive Controller is a foot-long rectangular unit with a d-pad, menu and home buttons, the Xbox home icon button and two additional large black buttons that can be mapped to any function.

On its back are a series of jacks for input devices and various peripheral accessories, each of which can be mapped to a specific button, trigger or function on the Xbox controller.
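
To make the idea concrete, here is a minimal, purely hypothetical sketch of such a remapping layer in Python; the jack names, controller functions and code below are illustrative assumptions, not Microsoft’s actual firmware or API.

# Hypothetical sketch: each 3.5 mm jack on the back of the unit is bound to
# one standard controller function, and raw switch events from external
# buttons or pedals are translated into ordinary controller events.

JACK_MAPPING = {
    "jack_1": "A",
    "jack_2": "B",
    "jack_3": "left_trigger",
    "jack_4": "right_trigger",
}

def translate_event(jack_id: str, pressed: bool) -> dict:
    """Convert a switch press from an external device into a controller event."""
    function = JACK_MAPPING.get(jack_id)
    if function is None:
        raise KeyError(f"No mapping configured for {jack_id}")
    return {"button": function, "state": "down" if pressed else "up"}

# Example: a large external button wired into jack_3 acts as the left trigger.
print(translate_event("jack_3", pressed=True))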

“Everyone knew this was a product that Microsoft should make,” Bryce Johnson, inclusive lead for product research and accessibility for Xbox, told Heat Vision.

The original inspiration for the Adaptive Controller came during 2015’s Microsoft One-Week Hackathon, an event where employees develop new ideas and tackle issues with their products. Through a partnership with Warfighter Engaged, an all-volunteer non-profit that modifies gaming controllers for severely wounded veterans through personally adapted devices, a prototype was put together that would eventually become the Adaptive Controller.

“We had been doing our own stuff for a couple of years before that, making custom adaptive items for combat veterans, and it was kind of a challenge for even the most basic changes, requiring basically taking a controller apart,” Warfighter Engaged founder Ken Jones said. “Microsoft was thinking along the same lines. It was really just perfect timing.”

As development on the project went on, Microsoft began working with other foundations aimed at making gaming more accessible such as AbleGamers, SpecialEffect, the Cerebral Palsy Foundation and Craig Hospital, a Denver-area rehabilitation center for spinal cord and brain injuries.

While third-party manufacturers have created more accessible peripheral controllers in the past, Microsoft is the first of the major gaming publishers to make a first-party offering.

“I think we’re always open to exploring new things,” Johnson said of Microsoft developing their own peripherals for the Adaptive Controller. “Right now, I think the challenge is that there is a super large ecosystem of devices that we intentionally supported as part of the Xbox Adaptive Controller, and we want people to go out and find that vast array of toggles, buttons, etc. and have those work with that device.”

Continue onto The Hollywood Reporter to read the complete article.

Pinterest Just Redesigned Its App For Blind People

Here’s how the company confronted its own shortcomings on inclusive design–and systematically redesigned its app for everyone.

Last year, Long Cheng sat down with a group of engineers as they studied people using Pinterest. For Cheng, lead designer at the company, this sort of user testing was commonplace. But that day, something was different. The testers weren’t thirtysomething moms, or whatever stereotypical demographic pops into your head when you picture one of Pinterest’s 200 million users. They were people with a range of visual impairments, from macular degeneration to complete blindness. And Cheng wanted to see how well they could use the app.

To his dismay, many couldn’t even get past the sign-up screen. People literally could not create an account. While iOS and Android each have an accessibility feature–called VoiceOver and TalkBack, respectively–that reads aloud the buttons and options on the screen for visually impaired users to navigate, Pinterest had failed to properly label its own user interface for the feature to work. Similarly, when people did eventually get into the app, recipes read aloud would be missing steps or ingredients. People found themselves trapped inside pins, unsure how to escape. Even for partially sighted people, Pinterest’s design, with its minuscule type, was a challenge to discern.
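
The underlying issue is easy to illustrate: a screen reader can only announce controls the app has labeled. The following is a minimal sketch in Python (a hypothetical view tree, not Pinterest’s code) showing how an unlabeled sign-up button is simply invisible to VoiceOver or TalkBack.

from dataclasses import dataclass, field
from typing import List

# Hypothetical view tree; a screen reader announces a control only if the
# app supplies an accessibility label for it.
@dataclass
class View:
    role: str                      # e.g. "button", "image", "text_field"
    accessibility_label: str = ""  # what a screen reader would read aloud
    children: List["View"] = field(default_factory=list)

def find_unlabeled(view: View, path: str = "root") -> List[str]:
    """Return the paths of interactive views that have no accessibility label."""
    problems = []
    if view.role in {"button", "image", "text_field"} and not view.accessibility_label:
        problems.append(f"{path} ({view.role})")
    for i, child in enumerate(view.children):
        problems.extend(find_unlabeled(child, f"{path}/{i}"))
    return problems

# A sign-up screen whose button was never labeled fails the audit:
signup = View("screen", children=[
    View("text_field", accessibility_label="Email address"),
    View("button"),  # unlabeled -> a screen reader has nothing to announce
])
print(find_unlabeled(signup))  # ['root/1 (button)']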

“It was definitely personal for me, and me specifically. Because I’ve been a designer here for five years, and it’s a product I really love to work on, and I want everyone to be able to use it,” says Cheng. “For the group of engineers and designers sitting there, we felt like we weren’t doing enough. We wanted to do more.”

Blind people using Pinterest–the app for visual inspiration–may sound like an oxymoron. But in fact, Pinterest, like all mainstream apps, has a contingent of blind users (though the company admits to not tracking them). Many use Pinterest simply to bookmark stories on the web they’d like to read later. And those who don’t use the service might like to, if they were better welcomed.

“We asked one user, would you use Pinterest? You can’t see what’s on the screen!” Cheng recounts. “She said, ‘Of course I would.’” Visually impaired or not, we all want tasty recipes, better haircuts, and fashion advice. And Pinterest is loaded with billions of pins full of this stuff.

Over the past year, Pinterest has committed to practicing inclusive design and making its product more accessible to everyone. With a team of a dozen designers and engineers, Cheng developed a multi-part approach to redesigning Pinterest, leading to a fully redesigned app and desktop experience that has been slowly rolling out for months.

Continue onto Fast Company to read the complete article.

This New Prosthetic Limb Transmits Sensations Directly To The Nervous System

Even with the most advanced prosthetics, amputees cannot feel the ground when they walk on a synthetic leg, or know if someone is touching a mechanical arm. This new MIT tech hopes to change that.

In 1992, Hugh Herr, now head of the Biomechatronics Group at the MIT Media Lab, had both of his legs amputated below the knees after sustaining frostbite during a mountain climbing accident. “I’m basically a bunch of nuts and bolts from the knees down,” Herr says, demonstrating his prosthetic legs on the stage at TED 2018 in Vancouver, “but I can skip, dance, and run.”

Herr’s team at MIT focuses on building prosthetic limbs that respond to neural commands with the flexibility and speed of regular limbs. Around 24 sensors and six microprocessors pick up neural signals from Herr’s central nervous system when he thinks about moving his legs. They transmit those signals to the prosthetics, which move accordingly. But despite this remarkable connectivity between man and machine, it’s not a complete fusion. “When I touch my synthetic limbs, I don’t experience normal touch and movement sensations,” Herr says. In order to know his neural commands worked, he has to look and actually see his foot hit the ground–he can’t feel it.

Reproducing the sensations of having a real limb in prosthetics is, Herr believes, the last remaining hurdle to creating truly effective synthetic limbs. “If I were a cyborg and could feel my legs, they’d become a part of myself,” Herr says. But for now, they still feel separate.

His team, however, is working on a new type of limb that would receive not only commands, but sensations, from the central nervous system. This principle, which Herr calls neuro-embodied design, involves extending the human nervous system into synthetic body parts.

Since the Civil War, when limbs are amputated, doctors have generally truncated the tendons and nerve endings, which minimizes sensation and often leads to the “phantom limb” feeling experienced by many amputees. But in a new process Herr’s team pioneered at MIT, doctors leave the tendons and nerve endings intact so they can continue to feed sensations down past where the human leg ends and the prosthetic begins.

Last year, a fellow mountain climber and old friend of Herr’s, Jim Ewing, became the first patient to undergo the new amputation process and get fitted with a cyborg-like synthetic limb.

Continue onto Fast Company to read the complete article.

AI technology helps students who are deaf learn

As stragglers settle into their seats for general biology class, real-time captions of the professor’s banter about general and special senses – “Which receptor picks up pain? All of them.” – scroll across the bottom of a PowerPoint presentation displayed on wall-to-wall screens behind her. An interpreter stands a few feet away and interprets the professor’s spoken words into American Sign Language, the primary language used by the deaf in the US.

Except for the real-time captions on the screens in front of the room, this is a typical class at the Rochester Institute of Technology in upstate New York. About 1,500 students who are deaf and hard of hearing are an integral part of campus life at the sprawling university, which has 15,000 undergraduates. Nearly 700 of the students who are deaf and hard of hearing take courses with students who are hearing, including several dozen in Sandra Connelly’s general biology class of 250 students.

The captions on the screens behind Connelly, who wears a headset, are generated by Microsoft Translator, an AI-powered communication technology. The system uses an advanced form of automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text. The removal of disfluencies and addition of punctuation leads to higher-quality translations into the more than 60 languages that the translator technology supports. The community of people who are deaf and hard of hearing recognized this cleaned-up and punctuated text as an ideal tool to access spoken language in addition to ASL.
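
As a rough illustration of the kind of cleanup step described above (a toy sketch, not Microsoft’s actual models), filler words and stutters can be stripped from raw recognizer output before punctuation and translation are applied:

import re

# Toy disfluency cleanup: remove filler words and immediate word repetitions
# ("the the") from raw speech-recognition output. Illustrative only.
FILLERS = re.compile(r"\b(um+|uh+|erm*|you know|i mean)\b[, ]*", re.IGNORECASE)

def remove_disfluencies(raw: str) -> str:
    text = FILLERS.sub("", raw)                                        # drop fillers
    text = re.sub(r"\b(\w+) \1\b", r"\1", text, flags=re.IGNORECASE)   # drop stutters
    return re.sub(r"\s{2,}", " ", text).strip()                        # tidy whitespace

raw = "um so the the receptor that uh picks up pain is you know all of them"
print(remove_disfluencies(raw))
# -> "so the receptor that picks up pain is all of them"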

Microsoft is partnering with RIT’s National Technical Institute for the Deaf, one of the university’s nine colleges, to pilot the use of Microsoft’s AI-powered speech and language technology to support students in the classroom who are deaf or hard of hearing.

“The first time I saw it running, I was so excited; I thought, ‘Wow, I can get information at the same time as my hearing peers,’” said Joseph Adjei, a first-year student from Ghana who lost his hearing seven years ago. When he arrived at RIT, he struggled with ASL. The real-time captions displayed on the screens behind Connelly in biology class, he said, allowed him to keep up with the class and learn to spell the scientific terms correctly.

Now in the second semester of general biology, Adjei, who is continuing to learn ASL, takes a seat in the front of the class and regularly shifts his gaze between the interpreter, the captions on the screen and the transcripts on his mobile phone, which he props up on the desk. The combination, he explained, keeps him engaged with the lecture. When he doesn’t understand the ASL, he references the captions, which provide another source of information and the content he missed from the ASL interpreter.

The captions, he noted, occasionally miss crucial points for a biology class, such as the difference between “I” and “eye.” “But it is so much better than not having anything at all.” In fact, Adjei uses the Microsoft Translator app on his mobile phone to help communicate with peers who are hearing outside of class.

“Sometimes when we have conversations they speak too fast and I can’t lip read them. So, I just grab the phone and we do it that way so that I can get what is going on,” he said.

AI for captioning

Jenny Lay-Flurrie, Microsoft’s chief accessibility officer, who is deaf herself, said the pilot project with RIT shows the potential of AI to empower people with disabilities, especially those with deafness. The captions provided by Microsoft Translator provide another layer of communication that, in addition to sign language, could help people including herself achieve more, she noted.

The project is in the early stages of rollout to classrooms. Connelly’s general biology class is one of 10 equipped for the AI-powered real-time captioning service, which is an add-in to Microsoft PowerPoint called Presentation Translator. Students can use the Microsoft Translator app running on their laptop, phone or tablet to receive the captions in real time in the language of their choice.
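
The delivery model is straightforward to sketch. The toy example below is hypothetical code, not the actual Presentation Translator protocol: the lecturer’s machine publishes each recognized caption segment, and every subscribed device receives it in that student’s chosen language. The translate() helper is only a stand-in for a real translation call.

from typing import Callable, Dict, List

def translate(text: str, language: str) -> str:
    # Stand-in for a real translation service; here we only tag the language.
    return f"[{language}] {text}"

class CaptionSession:
    """Toy broadcast of caption segments to subscribed devices."""
    def __init__(self):
        self.subscribers: List[Dict] = []

    def join(self, device: str, language: str, deliver: Callable[[str], None]):
        self.subscribers.append({"device": device, "language": language, "deliver": deliver})

    def publish(self, caption_segment: str):
        for sub in self.subscribers:
            sub["deliver"](translate(caption_segment, sub["language"]))

session = CaptionSession()
session.join("front-row phone", "en", print)
session.join("exchange student's tablet", "es", print)
session.publish("Which receptor picks up pain? All of them.")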

“Language is the driving force of human evolution. It enhances collaboration, it enhances communication, it enhances learning. By having the subtitles in the RIT classroom, we are helping everyone learn better, to communicate better,” said Xuedong Huang, a technical fellow and head of the speech and language group for Microsoft AI and Research.

Huang started working on automatic speech recognition in the 1980s to help the 1.3 billion people in his native China avoid typing Chinese on keyboards designed for Western languages. The introduction of deep learning for speech recognition a few years ago, he noted, gave the speech technology human-like accuracy, leading to a machine translation system that translates sentences of news articles from Chinese to English and “the confidence to introduce the technology for everyday use by everyone.”

Continue onto Microsoft’s Blog Room to read the complete article.

Google Debuts Wheelchair Accessible Routes in Google Maps

Google Maps will now show wheelchair accessible routes in cities like Boston, New York, and London.

The search giant said Thursday that people can now use Google Maps to get directions that are catered specifically to people with mobility problems.

Although people can use Google Maps to get around using public transit, those routes may not be best suited for people with wheelchairs or who have other disabilities.

Google said that it teamed with transit agencies to help it catalogue the best wheelchair-accessible routes. To find those routes, Google Maps users enter where they want to go, tap on the “Directions” tab, and then choose “wheelchair accessible” as one of the options under the “Routes” section.

The company is debuting the new feature in major metropolitan areas worldwide. In addition to Boston, New York, and London, the option is available for Tokyo, Mexico City, and Sydney.

“We’re looking forward to working with additional transit agencies in the coming months to bring more wheelchair accessible routes to Google Maps,” Google product manager Rio Akasaka said in a blog post.

Continue onto Fortune to read the complete article.

This Smart Paint Talks To Canes To Help People Who Are Blind Navigate

The Ohio State School for the Blind is pioneering new technology that causes canes to vibrate when they touch lines of traffic paint.

The crosswalk on a road in front of the Ohio State School for the Blind looks like one that might be found at any intersection. But the white stripes at the edges are made with “smart paint”–and if a student who is visually impaired crosses while using a cane with a new smart tip, the cane will vibrate when it touches the lines.

The paint uses rare-earth nanocrystals that can emit a unique light signature, which a sensor added to the tip of a cane can activate and then read. “If you pulse a laser or LED into these materials, they’ll pulse back at you at a very specific frequency,” says Josh Collins, chief technology officer at Intelligent Materials, the company that manufactures the oxides that can be added to paint.
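
In practice, the cane tip would run a simple detect-and-vibrate loop. The sketch below is a hypothetical illustration of that logic; the hardware interfaces (pulse_emitter, optical_sensor, haptic_motor) and the frequency values are assumed stand-ins, not a real Intelligent Materials API.

import time

# Frequency signature (kHz) the paint's nanocrystals are assumed to return
# when excited by the cane's LED pulse, plus a matching tolerance.
PAINT_SIGNATURE_KHZ = 42.0
TOLERANCE_KHZ = 0.5

def touching_paint(pulse_emitter, optical_sensor) -> bool:
    """Pulse the LED and check whether the reflected light pulses back
    at the paint's characteristic frequency."""
    pulse_emitter.fire()
    response_khz = optical_sensor.read_response_frequency()
    return abs(response_khz - PAINT_SIGNATURE_KHZ) <= TOLERANCE_KHZ

def run(pulse_emitter, optical_sensor, haptic_motor, poll_hz=20):
    """Main loop: buzz the handle whenever the tip crosses a painted stripe."""
    while True:
        if touching_paint(pulse_emitter, optical_sensor):
            haptic_motor.buzz(duration_s=0.2)
        time.sleep(1.0 / poll_hz)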

As the company explored how the paint could be used with autonomous cars–the paint could, for example, help a car recognize an intersection or lane, or provide markers that make GPS much more accurate–it realized that the paint could also be useful for people who are blind.

A person who is blind usually relies on the sound of parallel traffic rushing by them on the side to help stay oriented while crossing the street and not veer out of a crosswalk (in some cities, beeping walk signals also help). But that doesn’t always work well, and it’s particularly challenging on streets with less traffic.

“It’s much easier to stay oriented when you can hear those traffic sounds,” says Mary Ball-Swartwout, an orientation and mobility specialist at the Ohio State School for the Blind, who helps teach students skills for navigating. “When we talk about lower-traffic areas, that’s where [smart paint and a smart cane] could really have a lot of use.”

Students at the state-run boarding school, which has a large, enclosed campus in Columbus, Ohio, will help researchers test several crossings with the new paint on the school’s internal streets. The paint, which can be clear or gray on a gray surface so it’s essentially invisible to sighted people, could also be used in other locations. “We’re also thinking about providing them with guidance as they move down a sidewalk or guidance about whether or not they’ve arrived at a bus stop or at a certain destination,” says John Lannuti, a professor of materials science and engineering at Ohio State University who connected Intelligent Materials with the School for the Blind.

GPS, which isn’t precise enough to distinguish between a street and a sidewalk–and occasionally doesn’t even recognize the right street–isn’t a foolproof system for navigation. But the paint could help someone identify, for example, whether they are standing on the northwest or southwest corner of an intersection, or the exact location of an entrance to a building. The paint could also be used with other navigation tools.

“What we’re envisioning is sort of a Google Maps for the blind, that says, okay, you want to go to the barbershop, and sets a path for you and tells you when you’ve arrived because the cane senses a stripe of paint associated with the barbershop,” Lannuti says. “There may be a point where a smartphone connected to the paint speaks to the user.”

Continue onto Fast Company to read the complete article.

SignAll is slowly but surely building a sign language translation platform

Translating is difficult work, the more so the further two languages are from one another. French to Spanish? Not a problem. Ancient Greek to Esperanto? Considerably harder. But sign language is a unique case, and translating it uniquely difficult, because it is fundamentally different from spoken and written languages. All the same, SignAll has been working hard for years to make accurate, real-time machine translation of ASL a reality.

One would think that with all the advances in AI and computer vision happening right now, a problem as interesting and beneficial to solve as this would be under siege by the best of the best. Even thinking about it from a cynical market-expansion point of view, an Echo or TV that understands sign language could attract millions of new (and very thankful) customers.

Unfortunately, that doesn’t seem to be the case — which leaves it to small companies like Budapest-based SignAll to do the hard work that benefits this underserved group. And it turns out that translating sign language in real time is even more complicated than it sounds.

CEO Zsolt Robotka and chief R&D officer Márton Kajtár were exhibiting this year at CES, where I talked with them about the company, the challenges they were taking on and how they expect the field to evolve. (I’m glad to see the company was also at Disrupt SF in 2016, though I missed them then.)

Perhaps the most interesting thing to me about the whole business is just how complex the problem they are attempting to solve really is.

“It’s multi-channel communication; it’s really not just about shapes or hand movements,” explained Robotka. “If you really want to translate sign language, you need to track the entire upper body and facial expressions — that makes the computer vision part very challenging.”

Right off the bat that’s a difficult ask, since that’s a huge volume in which to track subtle movement. The setup right now uses a Kinect 2 more or less at center and three RGB cameras positioned a foot or two out. The system must reconfigure itself for each new user, since just as everyone speaks a bit differently, all ASL users sign differently.

“We need this complex configuration because then we can work around the lack of resolution, both time and spatial (i.e. refresh rate and number of pixels), by having different points of view,” said Kajtár. “You can have quite complex finger configurations, and the traditional methods of skeletonizing the hand don’t work because they occlude each other. So we’re using the side cameras to resolve occlusion.”
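
One simple way to picture that occlusion handling (a toy sketch, not SignAll’s actual algorithm) is per-joint view selection: for each hand joint, keep the estimate from whichever camera reports it with the highest confidence.

from typing import Dict, Tuple

# Per-camera hand-joint estimates: joint name -> ((x, y, z), confidence).
CameraEstimate = Dict[str, Tuple[Tuple[float, float, float], float]]

def fuse_views(views: Dict[str, CameraEstimate]) -> Dict[str, Tuple[float, float, float]]:
    """Merge several camera views into one hand skeleton by keeping, for each
    joint, the position reported with the highest confidence."""
    best: Dict[str, Tuple[Tuple[float, float, float], float]] = {}
    for camera, estimate in views.items():
        for joint, (position, confidence) in estimate.items():
            if joint not in best or confidence > best[joint][1]:
                best[joint] = (position, confidence)
    return {joint: position for joint, (position, _) in best.items()}

views = {
    "center_depth": {"index_tip": ((0.12, 0.40, 0.80), 0.35)},  # finger occluded
    "left_rgb":     {"index_tip": ((0.11, 0.41, 0.79), 0.90)},  # clear side view
}
print(fuse_views(views))  # keeps the side camera's estimate for index_tip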

As if that wasn’t enough, facial expressions and slight variations in gestures also inform what is being said, for example adding emotion or indicating a direction. And then there’s the fact that sign language is fundamentally different from English or any other common spoken language. This isn’t transcription — it’s full-on translation.

“The nature of the language is continuous signing. That makes it hard to tell when one sign ends and another begins,” Robotka said. “But it’s also a very different language; you can’t translate word by word, recognizing them from a vocabulary.”

SignAll’s system works with complete sentences, not just individual words presented sequentially. A system that just takes down and translates one sign after another (limited versions of which exist) would be liable to create misinterpretations or overly simplistic representations of what was said. While that might be fine for simple things like asking directions, real meaningful communication has layers of complexity that must be detected and accurately reproduced.

Somewhere between those two options is what SignAll is targeting for its first public pilot of the system, at Gallaudet University. This Washington, D.C. school for the deaf is renovating its welcome center, and SignAll will be installing a translation booth there so that hearing visitors can interact with deaf staff.

Continue onto TechCrunch to read the complete article.