Project overview
The problem:
Practitioners want to work at hospitals that offer specific privileges and procedures, but they do not know which hospitals offer them, or what they need to do to ensure they are qualified to practice there.
The proposed solution:
A dynamic rules engine that matches health practitioners with area hospitals based on practitioner credentials and hospital privileges.
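As a rough illustration of the concept (not the client's actual implementation), a rules-based match could compare the credentials a practitioner holds against the credentials each hospital privilege requires. All names and structures below are hypothetical:

```python
# Hypothetical sketch of the matching idea only; the data model and names
# below are illustrative, not the client's actual schema or rules engine.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Privilege:
    name: str                       # e.g. "Core procedures: cardiothoracic surgery"
    required_credentials: set[str]  # credentials a practitioner must hold to qualify


@dataclass
class Hospital:
    name: str
    privileges: list[Privilege] = field(default_factory=list)


@dataclass
class Practitioner:
    name: str
    credentials: set[str] = field(default_factory=set)


def eligible_privileges(practitioner: Practitioner, hospital: Hospital) -> list[Privilege]:
    """Privileges at this hospital whose requirements the practitioner already meets."""
    return [
        p for p in hospital.privileges
        if p.required_credentials <= practitioner.credentials
    ]


def match_hospitals(practitioner: Practitioner, hospitals: list[Hospital]) -> list[tuple[str, int]]:
    """Rank hospitals by how many of their privileges the practitioner qualifies for."""
    ranked = [(h.name, len(eligible_privileges(practitioner, h))) for h in hospitals]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```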
Research
The first step was to learn how the process of requesting privileges at a hospital currently works, and what would motivate a healthcare practitioner to request specific privileges. After discussing this with our stakeholders, we diagrammed the current process in a flow chart to ensure all parties were on the same page, and then worked through the plausible task scenario that would lead a user to this software application.
A healthcare practitioner requesting privileges flow
While the flow chart delineated the current process, we were still missing one crucial piece: motivation. What would drive a practitioner to seek out this tool? And what tools already exist that either do or do not meet this need?
Without an upfront research engagement, I was left to conduct some guerrilla research.
Guerrilla user research
Thankfully, I am married to a doctor. I asked Dr. Sumarsono what he looks for in a hospital, and what type of privileges he would seek in his specialty. He told me that he does not choose hospitals based on privileging, and as a current resident, he rarely thinks about privileging at all. In terms of career decisions, he cares more about a mix of factors, including but not limited to:
- Location
- Research opportunities/output
- Teaching opportunities
- Electronic Medical Record software
The tool he used extensively to evaluate medical residency programs, and that he would most likely use to help determine a practice location, is Doximity.
I decided to take a look at the website and do a feature comparison.
Market research
The key differentiator between the client’s software and what existed in the market was the ability to determine one’s eligibility for privileges before applying. Despite the lack of preliminary research to support this offering, we decided to let real users do the talking in validation testing.
Design
Part of the challenge of creating an enjoyable user experience was translating what is currently a paper-, phone-, and fax-heavy process into a purely digital one. My goal was to condense the delineation of privileges form without losing any vital information. The other difficulty was introducing the new behavior of "connecting" with facilities prior to applying in a way that feels natural. Although this is not, strictly speaking, a professional network, I modeled the "connecting" behavior on existing platforms such as LinkedIn. Here are a few of the key screens.
Testing
We tested our designs with five healthcare practitioners. To set the scenario, we asked each participant to assume that they had just submitted their common application through the portal in front of them. They were then asked to apply to a specific hospital through that same portal.
These were the top insights we captured as qualitative feedback:
- “If I’m going to apply for a hospital I know what hospital I’m going to apply to”
- “The one thing I don’t understand is what 0% eligible means…does that mean they don’t need anybody or they do need somebody?”
- Did not notice the small print saying how eligibility is determined
- Once we explained that eligibility meant how much of their credentialing application aligned with the hospital’s privileges, he responded: “Oh ok, how would I be eligible for some hospitals and not for others? Must be something in my background”
- He would like to sort his hospital search by “location, city, number of procedures”
- “I would want to find out why I’m 0% eligible“
- “I like the map showing where everything is, if I live here I know I’d be interested in these 3 places”
- Core procedures, cardiothoracic surgery: “Confused why some are black and some are gray” (we updated the UI pattern in response)
- “I may ask for additional privileges that aren’t on the form”
- Once he saw the facility’s delineation of privileges, he said: “It’s a little clearer…but I still don’t know what 50% versus 60%, what 80% (eligible means)”
- “So eligibility is based not on my qualifications but what I’ve submitted”
- Once he submitted all his information and saw the progress screen: “All I need to know is that the verification is in process, the percentage doesn’t always mean as much to me...would probably call”
Revised prototype
Key learnings
The biggest design update I made was actually not to the UI, but to the copy. We changed the language from “N% eligible” to “N% ready to apply” midway through testing. This seemed to add some clarity around what a practitioner’s true status was. As one participant noted above, the percentage was really telling him how much information he had completed, not how "eligible" he was. The original language was too strong, potentially offensive to these highly qualified practitioners, and did not accurately communicate what stage they were at in the process. The main takeaway for me was that software needs to speak the language its users speak, not the language the business uses to describe them.
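To make the distinction concrete, here is a hypothetical sketch of what such a "ready to apply" percentage amounts to: a simple measure of how many required credentialing items have been submitted, which is why calling it "eligibility" overstated what it measured. The item names are invented for illustration.

```python
# Hypothetical illustration only: the "N% ready to apply" figure as the share of
# a hospital's required credentialing items the practitioner has submitted.
def readiness_percentage(required_items: set[str], submitted_items: set[str]) -> int:
    if not required_items:
        return 100
    completed = len(required_items & submitted_items)
    return round(100 * completed / len(required_items))


# Example: 3 of 4 required items submitted -> "75% ready to apply"
print(readiness_percentage(
    required_items={"state license", "board certification", "malpractice coverage", "case logs"},
    submitted_items={"state license", "board certification", "malpractice coverage"},
))
```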
The other big takeaway was that we were unable to clearly validate what market need this tool was fulfilling. Doctors in particular seemed to already know where they would apply, and did not question how "eligible" they were before doing so. They were used to customizing their careers by electing privileges that may not be on the form, and did not care for the minutiae of percentages. They wanted to know whether something was good or not.
In sum, upfront research is invaluable when trying to release a new product to an unknown customer base. A lot can be learned from just talking to potential users.