Deaf PhD student receives $25k research grant from Microsoft

[Transcript] Larwan Berke is one of 11 outstanding doctoral students to receive a 2019 Microsoft Research Dissertation Grant. Berke is a PhD student in computer science at Rochester Institute of Technology (RIT) and is currently doing an internship in Boston.

Each dissertation grant provides up to $25,000 in funding to doctoral students in the field of computing, helping them advance their thesis work.

Berke will be using this grant for his work on automatic speech recognition as a captioning tool to enable greater accessibility for deaf and hard-of-hearing users. The Daily Moth reached out to Berke for an interview.

RENCA DUNN: You’re interning in Boston right now. Could you explain a little more about the grant? What does it do, and what will you do with it?

LARWAN BERKE: Okay, sure. It’s a major award, and I’ll expand on that. Microsoft awarded me a PhD dissertation grant that will help me finish my work. I’m four years into my research now, and I’m making good progress. I have about one to three years to go; it depends. I need to study, have proper equipment, and have enough money. Time is also a factor when it comes to doing these things. The grant was $25,000, and it will help me buy equipment, a laptop, and other things that I need. I can also hire two research assistants now. They’re deaf, of course. Having them involved in the research helps me collect sufficient data to study. I had been working with a few people gathering a small amount of data, but now I can expand my team and finish my PhD, which I hope will take only one more year. That grant will really help me move things along. It’s a really nice boost, indeed.

RENCA: So, could you explain a little about your research?

LARWAN BERKE: Yeah, my field, if you remember, is Computer Science (CS), but my specialty is HCI, Human-Computer Interaction. That means I study the technology called ASR, Automatic Speech Recognition, and its use as a captioning tool. You might have seen it before on, you know, YouTube on your phone. You can turn the captions on.

But that technology was invented by hearing people. These inventions were meant to assist hearing people. Now think about it: why don’t we have a tool meant to accommodate deaf people? Sure, this would be a good idea, but who is going to do the studying, evaluating, and measuring? It shouldn’t be hearing people; rather, it should be people from the deaf community. That’s what my work is all about: measuring how deaf people use this technology. I’m not going to focus on the mathematical aspect or the artificial intelligence, and I’m not going to dig into social science or social psychology. There are overlapping factors from both fields, and that’s what HCI, Human-Computer Interaction, is. It truly is great work, and fun too, because I’m interacting with deaf people and with hearing people who are knowledgeable and specialize in AI. I can bring all I’ve learned into my work. This will improve the technology, benefiting us, because eventually deaf people will demand more access to information. I know interpreters can help us with this, but sometimes there are none available; sometimes they get sick or get into a car accident. Maybe money is a factor. This could result in me not getting the information I need, and I want that information. That captioning tool is one approach that could potentially help us all.
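To make the idea concrete, here is a minimal sketch of ASR used as a live captioning tool. It assumes the third-party Python package SpeechRecognition (pip install SpeechRecognition) and a cloud recognition engine; it is illustrative only, not Berke’s actual research setup.

```python
# Minimal sketch: live captioning from a microphone using an ASR engine.
# Assumes the SpeechRecognition package and a working microphone.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Calibrate for background noise, then caption one utterance at a time.
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    while True:
        audio = recognizer.listen(source)  # block until a phrase is captured
        try:
            # Send the audio to a cloud ASR engine and show the caption text.
            caption = recognizer.recognize_google(audio)
            print(f"[CAPTION] {caption}")
        except sr.UnknownValueError:
            print("[CAPTION] (inaudible)")  # the recognizer could not parse it
        except sr.RequestError as err:
            print(f"[CAPTION] (service error: {err})")
            break
```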

RENCA: Do you think closed captioning on television should be provided through an automated system, or should it be a person behind a keyboard doing the captioning for our television? I’m curious about your opinion on that.

LARWAN BERKE: It’s a big challenge, indeed. Of course, I would prefer perfect results, 100% access, 24/7 captioning, but in the real world, it’s not that simple. In the big cities, channels like CNN or FOX have the funds. They’re mandated by a federal law, the CVAA, the Twenty-First Century Communications and Video Accessibility Act, to provide suitable captioning. But with smaller cities, newer channels, and YouTube, there is no law requiring captions. That means I could get absolutely no captioning, which is not what I want. That’s why, when artificial intelligence, a robot, is utilized, I think it could be great in terms of helping us get more access to information. We have to find the right balance, because when AI is used, it could provide subpar captioning. I’ve had this discussion, and I would get frustrated. I would want to give up. I would get mad. Then I’m back to getting no captioning, and people are fighting companies that have money but don’t want to spend it. We need that balance, and that’s why I think it’s important that we study AI technology and how we can improve it.

We would be watching, and it’s important that deaf people are a part of this and that we approve the technology before it’s used. Imagine, maybe in sports, where the broadcasting process is relatively easier. Whether they’re scoring goals or making baskets, they’re using the same vocabulary. That might be where the AI would be effective. But suppose it’s a chemistry lecture where they’re using long terminology; then the AI might be no good in that scenario. That’s where deaf people need to step in and set the boundaries. They can determine the scenarios where it would be appropriate to use the AI and identify the scenarios the AI isn’t ready to handle.

You can imagine how the technology will have improved a year or two from now. It’s like with each blink of an eye, there’s improvement. They make newer, faster technology, and then I can analyze it and see if it has improved enough that it’s okay to use in scenarios outside of that boundary. Like in hospitals, for example: would you be okay with being provided captions there? I might not trust the technology. Even after 20 years, I probably would be resistant to it. On the other hand, the technology might be ready in 5 years. You can’t predict when that happens. It’s important that I’m able to observe and understand how to analyze and measure the data. Would it be acceptable and up to deaf people’s standards? If it were a human doing the captioning, they could still make mistakes sometimes. You can see these occasional captioning mistakes; you can see them backspacing and correcting themselves. It’s the same idea, and it’s important to have the right balance. If you can get the same results from the AI, then you can be happy with the service you’re getting. That’s our goal.
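One standard way to “analyze and measure” ASR output is word error rate (WER), which counts the substitutions, deletions, and insertions needed to turn an AI caption into a human reference transcript. Below is a minimal, self-contained sketch; the example sentences are made up, and WER is just one common metric, not necessarily the measure Berke’s study uses.

```python
# Minimal sketch of word error rate (WER), a common score for ASR
# caption quality against a human reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Levenshtein distance over words via dynamic programming:
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # match or substitution
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: an AI caption with two word errors out of six.
reference = "the player scores a goal tonight"
hypothesis = "the player scores the gold tonight"
print(f"WER: {wer(reference, hypothesis):.2f}")  # 2 errors / 6 words = 0.33
```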

RENCA: That’s pretty cool.

--

RENCA: Berke also encourages more deaf and hard-of-hearing people to be involved with STEM.

LARWAN BERKE: I’d love to promote the field of STEM here. It’s a really fun field to get into. Some people might not be into all that math, the coding, and all these complicated computations. I know the feeling, but as you progress and the more you learn, you will find it really enjoyable. You know, it’s like you’re “hacking”; you’re playing with the data. I don’t mean you’d break into the government’s firewall and steal information; that’s not what I mean at all. I mean you’d be playing with the technology. That’s what the research is about. You’d be hacking all day long. When I got the grant, we made adjustments, got deaf lab technicians together, and came up with ideas. It’s been really fun, and I want to encourage more deaf people to get into STEM so you can be the next researcher like me to win a grant like this one. Perhaps 10 years from now, I’ll be there to see the next deaf grant winner. That would be pretty great, and I do hope for this.

--

RENCA: Berke explained that he also wants to build more trust between deaf and hard-of-hearing people and the captioning tool, just like how we trust interpreters to relay accurate information. Building that trust requires more research in his work.

Thank you, Berke, for sharing. It is great to see how technology continues to advance and to see the push for more accessibility for our community.

http://bit.ly/2XFkoNK

———

Supported by:

Convo [https://convo.click/2mVhM8h]

Gallaudet University [gallaudet.edu]