Self-driving cars, also known as autonomous vehicles, are quickly becoming a reality in many parts of the world. As more take to the road, pedestrians and drivers face a new challenge.
How can a machine have a successful and safe interaction with a pedestrian crossing the street?
Because current AVs cannot communicate with pedestrians the way a human driver can, some pedestrians feel a disconnect with the vehicles.
Raiful Hasan, computer science assistant professor, and Hadi Rahmati, visual communication design assistant professor, are looking to improve that communication.
Drawing on Rahmati’s experience in visual communication design and Hasan’s background in developing a system for distracted pedestrians, the two have combined their expertise to create a solution.
The research, carried out at Kent State, started in the fall of 2023, and the team is now preparing to begin a user study with human participants.
In July 2025, they presented their findings after developing the framework and prototype over the previous year and a half through the John and Fonda Elliot Design Innovation Faculty Fellows Program, which gives faculty the time and space to pursue collaborative projects and research.
After reviewing around 60 studies of external human-machine interfaces, or eHMIs, they found that the studies did not apply communication theory, the study of how people interpret information, to interactions with pedestrians.
Additionally, the eHMIs were all static, or nonadaptive. The interfaces responded the same way to every pedestrian they encountered instead of adapting to each one.
For example, a static eHMI would use audio and light to signal when a pedestrian can cross the street. But the eHMI does not know when or how to communicate with that pedestrian.
This lack of communication between pedestrians and AVs, caused by a static eHMI, makes the vehicles harder to trust.
“Those were not successful because that was from a technical perspective,” Hasan said. “There was no study or evaluation of how people would accept [eHMIs].”
In the few studies that did apply some sort of communication theory, those researchers never developed a prototype to actually test it out.
“There was a big gap in not including the communication theory and the knowledge that we have from human-to-human communication being used in the field of human-machine interaction,” Rahmati said about previous eHMIs.
Rahmati compared interacting with a static eHMI to interacting with an automated voice on a phone call.
“We kind of adapt to the state of the other part of the communication when we communicate and when we speak,” Rahmati said. “[The machine] was predesigned for every situation.”
When a pedestrian interacts with a human driver, the driver adapts to the context of the situation. If a distracted pedestrian crosses the street, the driver may honk at them to get their attention, but the driver would not honk at every pedestrian every time.
An AV with a static eHMI would not adapt in such a way. It would communicate with a distracted pedestrian the same way it would with a pedestrian in a wheelchair, Hasan said.
“All the existing eHMI was related to a specific type of pedestrian,” Hasan said.
Hasan explained that in developing a solution to this problem, he and Rahmati had to combine visual communication theory with technical science into something people could trust.
The eHMI they designed would adapt its communication to three different levels of pedestrians.
The first level is non-distracted pedestrians, who are paying attention to the road and the vehicle when attempting to cross the street.
When it recognizes this type of pedestrian, the eHMI would flash a walking or wheelchair symbol on the AV’s windshield and project a green light, indicating that it is safe to cross.
The second level refers to distracted pedestrians, such as those who are using their phone or listening to music.
The eHMI would send a haptic signal, or a signal received via touch, to the pedestrian through their electronic devices and project a green light. In addition, a display on the AV’s windshield would show they are safe to cross.
The third level applies to pedestrians who may be visually impaired. For colorblind pedestrians, the eHMI would use contrast to show them it is safe to cross.
The eHMI would also use haptics to send a signal to the pedestrian’s electronic devices to let them know when it is safe to cross, and it would flash the universal blind symbol on the AV’s windshield, alerting other pedestrians to who is crossing the street.
If a pedestrian waves at the car as a “thank you,” the eHMI would then recognize that and, in return, give feedback on the windshield.
“The pedestrian will understand the car is actually communicating with them,” Hasan said. “So, they will trust the communications.”
To build this trust, the eHMI’s design relied on two communication theories: Roman Jakobson’s Code-Channel Model and Computer-Mediated Communication Affordance Theory.
Jakobson’s Code-Channel Model framed the eHMI as an adaptive system rather than one-way communication. This theory guided the eHMI in decisions involving what cues to give pedestrians depending on the situation or recognizing the context around crosswalk behavior.
The CMC Affordance Theory guided the analysis of the possible actions in an interaction with a pedestrian. The eHMI’s design incorporated four affordances: the visibility of cues, the persistence and safety of signals, the adaptability of cues and the association of signals with a situation’s context.
The design also incorporated trust building, ensuring the eHMI gave transparent and consistent feedback when responding to pedestrians, and helped determine the best location for the visual channel.
“When you start to see, ‘the machine identified who I am,’ you start to trust that machine more than a machine which is not dynamic,” Rahmati said.
With this combination of computer science and communication, Hasan hopes their technology will one day be implemented in real cars and succeed where previous eHMIs have not.
Rahmati hopes the future of their research shows people that they can make new technology like AVs more effective. He believes that instead of ignoring AVs or totally accepting them for what they are now, people should find a way to make them better.
“What I dream is that maybe seeing that moment of the pedestrian crossing a road, saying ‘thank you’ to an autonomous vehicle, and an autonomous vehicle returning ‘you’re welcome,’” Rahmati said.
Loreal Puleo is a reporter. Contact her at [email protected].
