Artificial Intelligence (AI) will change a lot of what we do in the future, and its impact will definitely be felt in the physical therapy industry, but how? In this episode, we talk with Pedro Teixeira, MD, PhD, who has developed AI software to help PTs both become more compliant in their documentation AND bill properly for our services. This is just the tip of the iceberg as to what AI will do for our profession, starting with addressing a pain point for every PT practice.
—
I’m excited because we’ve got some new technology coming to the show, and I want to talk about how AI is influencing the physical therapy space at this new level, now that AI is such a buzzword. To do that, I’m going to have a discussion here with Pedro Teixeira. He is the Cofounder and CEO of PredictionHealth. Thanks for joining me, Pedro. I appreciate it.
Happy to be here. It should be fun.
Pedro, as I said, I’m excited to bring you folks on because AI being a hot topic, physical therapy could use some technological advances in terms of some of our pain points. You’re going to share with us what some of those are now. Before we get into it, tell us a little bit about how you got into the AI and specifically the physical therapy space of all spaces.
I’ll start with the AI piece. It’s a long story, but I’ll jump to the cool parts. I did Compsci and Biochem back in college. I thought that was cool, but I didn’t take my first computer science course until sophomore year. I was like, “I’ll take intro to computer science,” because it seemed cool and I like computers. I took it and I fell in love. I was like, “I have to do this. I’m going to go to med school. I’m going to somehow figure out a way to grab computer science and bring it with me.” I did that. I ended up at Vanderbilt, did the MD-PhD program there, and they have a great Biomedical Informatics department.
I was like, “Cool. I can do medicine,” because I thought healthcare and the human body were so important, but the computer science thing was super cool as well. AI was starting to kind of have a renaissance around then. I was like, “This is powerful,” because computers can do simple things, math things, quickly, but the AI component allows them to do a lot of the stuff you’d care about more in healthcare.
Does this person seem like they’re going to be sick or not? It’s not something you can answer with simple math. It’s usually a complicated pattern-matching problem. I went to Vanderbilt, met my cofounder there, Ravi Atreya, and we were processing all this data. It was so awesome to be able to teach the computer to find patterns of people with hypertension or without. They’re going to be sick or they’re not going to be sick.
Are you looking through lots of medical records, or are you looking at studies across different platforms? Where’s the data coming from?
Vanderbilt was cool to go to because they had their own EHR, which they had had for a long time. They had millions of records. They had the text data, the lab report descriptions and the PT reports. Everything was in there. The problem, though, was that a lot of it is written out in text. If you have discrete data, like numbers (this person has five problems, this person has this ICD-10 code), that’s usually what people are more used to processing. But the things that get typed in as free text are important too. Maybe you’re saying, “I’m worried about Mrs. Jones.”
The subjective parts and the interpretation, the stuff that isn’t captured in the discrete fields. That’s what you’re talking about.
That’s where the whole AI thing became this shining tool that I could use and apply, because that’s exactly the type of problem: things that are important are written in the text by people, and AI is a good tool to pull that back out and organize and structure it. Getting the opportunity to run something on millions of charts and get answers out of text that you’re never going to sit there and read, because nobody reads ten million charts. That felt like a superpower to me and I was totally hooked.
Was there a certain a-ha moment? Did you get a result at some point? You were asking a certain question or trying to solve a certain problem and you said, “That’s super powerful.” Was there any moment like that?
There are two. One’s super early and it was funny. It was one of the first assignments I did. It was cracking passwords, and you could basically ask the computer to do the simplest, silliest thing, which is try every single password of one, two, and three characters. I had it print to the screen, so it printed every single attempt.
The thing goes shooting across my screen, going so fast. I remember just going, “Whoa,” because it was doing hundreds of thousands of things per second. I was like, “Wow.” I had another similar moment when I was doing the PhD work because I would go and be like, “I have these two million patients. Let’s figure out who has hypertension.” We’d write the algorithm, do all this data processing, then you’d hit go. It would churn through millions of patients if we wanted and be like, “Here are the ones that most certainly have it. Here are the ones that might. Here are the ones that don’t seem like they have it.” I’m like, “Wow.” It’s like I’m Superman, but for reading electronic health records.
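For technically minded readers, the brute-force exercise Pedro describes can be sketched in a few lines of Python. This is a toy illustration, not his original assignment; the target password and the lowercase-only alphabet are assumptions for the example:

```python
import itertools
import string

def brute_force(target: str, max_len: int = 3):
    """Try every lowercase password of length 1..max_len.

    Returns the match (or None) and how many candidates were tried.
    """
    attempts = 0
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            attempts += 1
            candidate = "".join(combo)
            if candidate == target:
                return candidate, attempts
    return None, attempts

match, tried = brute_force("cat")
print(match, tried)  # a laptop burns through this whole space almost instantly
```

Even this naive search space is only 26 + 26² + 26³ = 18,278 candidates, which is why the attempts scrolled by faster than anyone could read them.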
Do you continue to go down that path with the AI?
My cofounder and I were both like, “This is so cool.” Part of the reason you use AI is that, if you train it right, it’s very robust. People don’t always put the right thing in the right field. They don’t talk about things the same way and they don’t use the same acronyms. Sometimes, they make up acronyms. The AI can be made to pattern match, the same way that if you’re looking at a picture of a dog, it could be a puppy or an older dog, or the dog could be hidden behind a tree, and you can still pick it out.
That’s one of the superpowers for AI. My cofounder and I were like, “We’ve got to apply this. This is so powerful. This is so useful. If we could help it improve care for patients and help organizations be more efficient, that would be huge.” We knew it was ridiculous and risky, but we’re like, “We’re going to go start a startup and we’re going to try to be helpful with AI. Clean up this EHR data and try to provide people with useful insights using this AI technology that seems like it’s well suited to take all the stuff hidden in the notes and make it easy to digest and take action on.”
What year was this?
That was the very beginning of 2017.
You’ve been at this for a while. We’re talking about AI now like it’s brand new, but you’ve been dealing with it for a long time.
We’ve been cracking at it for a while. It’s been cool to see, as the models have gotten bigger and people have refined the techniques, that you’ve been able to do new, better, and cooler things. It feels like this gigantic wave is picking up. We’re surfing on something that we thought was going to be a cool wave, but now it’s this ridiculous thing. It’s been exciting and very interesting, and not boring.
How did you get into the physical therapy space of all places?
We started off figuring we’d do documentation, like, “The notes are the thing that we think have useful data in them. Let’s start off with a thing that people don’t like to write and we’ll see if we can be useful there.” We initially had a documentation assistance product where we listened to the conversation and then tried to summarize it.
The bar for quality on that is high, which is appropriate, as you would totally expect. We had humans in the loop to review the data and carefully look it over. We had a couple of specialties. We started in family medicine, we added urology, and then we got to physical therapy. When we were going through it with one site, they thought it was interesting as an idea.
They told us the notes also have to be compliant. We’re like, “That’s interesting. Tell me more.” You all have to hit all of these little check boxes, include this many goals, written this way. That’s not uncommon in other specialties. I remember sitting there in the office of Kelly Brown, who has since joined us as a PT and Clinical Director. She’s awesome.
I was like, “This seems like a good problem for our models because they’re good at reading text and finding and categorizing things in useful ways. This sounds like a good opportunity for us. We’ll learn from the documentation. We’ll pursue it.” It has ended up being a great specialty that we’ve decided to focus on, because you all have the compliance problem. You have a lot of optimization. You have the CPT code selection problem: you don’t want to be inappropriate. You want to make sure it’s accurate, but you also don’t want to undercode.
There are a bunch of good problems for the types of models that we’ve been building. PT has been such a welcoming specialty. It’s great. The vibe is awesome. People are positive and appreciative that we’re trying to be helpful. It seems like that’s appreciated and it’s been very welcoming. We’ve been like, “Let’s go, let’s dive down and help people in PT. Maybe someday we can get to the point where we can extract value and information for lots of other specialties too, but if you’re going to expand somewhere, it fits so well in PT that it’s been a great spot so far.”
It’s great to have you because, for those who are reading, if you haven’t gathered it so far, PredictionHealth’s focus, and correct me if I’m off on this at all, Pedro, is on reviewing documentation for compliance and optimizing the CPT codes billed, essentially. It also takes it a step further to not only assess for compliance but show the documenter where they are falling short and train them on what could be said or done differently to improve, not only their documentation according to compliance benchmarks and expectations, but also their billing. Am I saying it right?
Yes, that’s good. I almost think of it like diving up and down, from the 30,000-foot view down to the five-foot view. We can have the models read every single sentence of every single note. We can scale these clusters up so they can process millions of notes, but you’re not going to read through all of that, and we don’t want you to read through every single one of those things. We want to make it easy for you to get the overview from a compliance or CPT or whatever perspective.
“How are we doing? There might be a problem here,” then make it easy to dive down and be like, “Cool. Why is this note not compliant? The goals. Your goals aren’t measurable. Cool. Tell Joe he’s doing great; this is the one spot where he could maybe improve.” Or maybe you’re wasting time writing stuff, too. That’s another thing that we’ve seen. Some people, especially coming out of school, document way too much. We also help you figure out, “This is way over-documented. You could save five minutes here per note,” because you only need this much to be compliant and accurate. You don’t need quite as much, let’s say, and you can save some time. We’re trying to be helpful.
I’m getting shaky-excited because if you’re reading and you’re a PT owner and you care about compliance, you do not want to do chart auditing. You have to sit there and read every word, then make your notes like an elementary school teacher with the red pen saying, “Don’t do this, do this. This could be better.” At times, it can be subjectively interpreted, like, “I could have done this. Why isn’t this right?” What you’re saying is your software, with the use of AI, can do the chart audits for you. Am I saying that right? It can tell you where your compliance is off and how you can bill better. That in and of itself is something PT owners should be doing. Some are doing it, but that group is relatively small.
The percentage of them that are doing it and doing it correctly is even smaller. What you’re bringing to the table is something that needs to be done, is required to be done if you’re a Medicare provider and hopefully, you are getting it done. If you aren’t getting it done, I’m sure you’re in the majority. Now you’re saying, here’s something that you can offer that takes care of that for you. Am I right? My brain’s exploding and I’m excited for the owners that are out there that are reading.
We’re trying to be helpful. It’s a fortunate circumstance that the models are good at this type of thing. I would be terrible at this if I had to sit down and review, say, six episodes of care per year per person and remember: did you document signs and symptoms? Did you document enough comorbidities? Are the goals measurable? For every single one of them, with all the stuff you have to remember at the same time. Computers are good at remembering little facts and checklist-y things, as long as they can read the sentences. With AI, you can now read the sentences, so you can take that part away, and it’s the same model run on every single sentence. I feel like it’s fair. It’s consistent. It’s not going to be like, “I like Joe, but I don’t like Sam.”
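To make the “checklist-y” idea concrete, here is a deliberately tiny rule-based sketch in Python. PredictionHealth’s real system uses trained ML models rather than keyword rules, and the check names and keywords below are invented for illustration:

```python
import re

# Invented checklist items; real compliance checks are ML models, not regexes.
CHECKS = {
    "measurable_goal": re.compile(r"\b\d+\s*(degrees|reps|feet|minutes)\b", re.I),
    "signs_symptoms": re.compile(r"\b(pain|weakness|stiffness|numbness)\b", re.I),
    "comorbidities": re.compile(r"\b(diabetes|hypertension|obesity)\b", re.I),
}

def audit_note(note: str) -> dict:
    """Report which checklist items a note satisfies, sentence by sentence."""
    sentences = note.split(".")
    return {
        name: any(pattern.search(s) for s in sentences)
        for name, pattern in CHECKS.items()
    }

note = ("Patient reports knee pain. "
        "Goal: increase flexion to 120 degrees. "
        "History of hypertension.")
print(audit_note(note))  # every checklist item is found in this note
```

The same function runs identically on every note, which is the consistency point Pedro makes: the checker has no opinion about Joe or Sam.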
It’s fully objective. For peace of mind, you’re able to then teach the program the compliance expectations that are out there, like Medicare expects these things. You’re able to teach it and program that accordingly so that if a compliance expectation changes or is added, it’s not hard to upgrade, I’m assuming.
One of the things that we go through with a few folks is, if we’re looking for goals, there’s a thing in the policy somewhere that describes what you have to include. We have mappings to links so that we can show people, “We look for these things. Here’s how we weight stuff.” Some things are not as important as others, and we can reference you back. Medicare is usually the standard because it’s the most common one, like the default. So we can say, “We’re looking at this, and it’s coming from over here.”
We try to be transparent with things. The ML thing isn’t going to be super perfect all the time, but you can flag stuff. You can say, “I don’t know about this,” or “I don’t know if I agree.” We’ll either explain things or we can always check the model and be like, “Why are you doing this?” and retrain it. We’re always retraining models and doing new things. If there is a change, to your point, I don’t have to communicate out to a bunch of different compliance officers and every single therapist and remind them all the time. We retrain the one model and we can run a full pass on however many millions of things we want to.
That’s amazing because now I’m understanding I don’t have to learn all the compliance metrics. I don’t have to study and know every single bit of required documentation that has to be in every patient note. I can leave it to a model that has it all built in. It’s a huge time saver in the documentation and it keeps us compliant. It’s interesting because I did a quick webinar with Andrew in your company. He shared with me how it will also show you how you might want to use your documentation to bill a different CPT code. That’s where it’s hugely valuable. If you were like most PTs back in the day, there was a lot of therapeutic exercise and a lot of manual therapy billed.
Three units of one and one of the other, and that was very common. Honestly, according to the exercises that you’re doing, maybe a better code is therapeutic activity. Maybe a better code is neuromuscular reeducation. By the way, those do reimburse better. Or maybe you could change your documentation a little bit, because you’re doing that exercise, to justify those codes; you just need to add this blurb. Correct me if I’m wrong, but it was able to do that for you as well and train you on that.
That’s a brand-new model that I’ve spent a lot of time working on and reviewing data for. I tell folks that it’s a Goldilocks problem. If you go conservatively and you bill two units of TherEx and that’s all you put, but you did a bunch more than that, then you’re being inefficient with the resources that you put in, as you did work you’re not going to get paid for.
Can I add to that? There can be a little bit of a red flag there. If every visit in a patient’s chart is two-and-one or three-and-one, two TherEx and one manual therapy, three TherEx and one manual therapy, all the time, then that can be a flag from what I’ve heard, because you’re not making any changes and there’s not a lot of skilled thought in that. You may want to consider that, so you do have a point there.
We do check for copy forward. If the pattern is the exact same for too long, that factors into the compliance work, because then you’re not necessarily progressing the care as much. You get super dinged if you do over 3 or 4 visits in a row of the same thing, and the same exact text is a red flag that you can get in trouble for. That’s huge. If you’re underdoing it, or if it’s always the same and you’re not progressing, that’s a problem.
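A copy-forward check of this kind is simple to sketch. The Python below flags a chart when the exact same note text repeats for more than a set number of consecutive visits; the threshold of 3 mirrors the “3 or 4 visits in a row” above, but the function is illustrative, not PredictionHealth’s actual logic:

```python
def flag_copy_forward(notes, max_repeats: int = 3) -> bool:
    """Return True if identical note text runs longer than max_repeats visits."""
    streak = 1
    for prev, cur in zip(notes, notes[1:]):
        # Extend the streak on an exact repeat; otherwise reset it.
        streak = streak + 1 if cur == prev else 1
        if streak > max_repeats:
            return True
    return False

visits = ["TherEx x3, manual therapy x1"] * 5  # five identical visit notes
print(flag_copy_forward(visits))  # True
```

A real system would compare note similarity rather than exact equality, but the streak-counting shape is the same.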
On the other side, if you overdo it, or if you’re saying it’s TherAct but you didn’t do something that merits that code, that’s now a compliance problem. You want to be accurate. The best way to determine if it’s accurate, at least that we’ve been able to figure out, is that these models can read what you’re saying, what activities you did, and how you’re justifying it, and then predict the code that makes sense based on what was done. Then you can see: do you have any risk where you’ve maybe overdone it, which is bad, or do you have some inefficiency because you’ve underbilled the time and effort you put in? That’s been a cool one, too. Folks have been excited about that, which I understand.
It’s huge. I’m imagining, not knowing the details, that a program like yours seamlessly integrates into most EMRs at this point. Am I wrong?
The overall thing for that product is this dashboard tool. We connect into WebPT, Prompt, and Clinicient, and we’re working with some other folks to add more.
I know you’re working with MWTherapy as well now.
We started working there, too, and there are some more folks that we’re going to be announcing, which is exciting. For WebPT, it’s SOAP 1 and SOAP 2. You tell us which EHR you have, we get a login, then we handle the rest. We pull it all in, we process it, and you get dashboards that will highlight, “Here are all your therapists, and here are clinics X, Y, and Z,” and give you all the metrics in that system.
To get a little bit in the weeds here, does each provider then have access to their own AI compliance metrics, or is that something that just goes to the owner?
What we’ve seen so far is that the most common is owners, for sure, and leadership team folks and compliance officers will get access. There’s some variability for the next set of things. Some groups have a bunch of clinic directors, and their clinic directors also get access so that they can go in and manage their clinics with all that numerical data.
The next thing, which some people do send to individuals, is the snapshot. They don’t necessarily want to go through the entire dashboard, but it would be relevant for them to know, “How am I doing?” We summarize it all down into a snapshot that shows things like the compliance score, the top three things to work on, the top three things you’re good at, and some things around charge capture, diversity and other operational metrics, all on one little snapshot that the clinic directors will then use as a nice quantitative tracking metric over time that they can review with the therapists they’re overseeing.
What I love about what you’ve done as well is that you’ve given grades, percentage grades, to how you’re doing with each compliance metric. Are you at 80% or are you at 5%? You can see not just that you’re off a little bit but how far off you are on your compliance. You can see that someone might be compliant in 10 out of the 16 metrics; I don’t know how many there are, but what are those six they need to work on? You can say, specifically focus on those things and increase those grades, if you will.
We roll it up because, again, it’s that 30,000-foot view back down to the 5-foot view. We’ll do a full summary of the grade for the person for each month so they can see how they’re progressing. Behind the scenes, there’s a total of 60 current labels behind the compliance score. Those break into about sixteen sub-items across initials, dailies, discharges, etc. We can tell you, “It’s your initials, and the functional exam part is the issue,” because otherwise it would be hard to improve over time if you’re like, “You told me it’s bad, but why?” We want to be able to go straight down and say, “It’s just this section and that one. Otherwise, you’re great.”
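As a rough sketch of how such a roll-up might work, the Python below aggregates per-note pass/fail labels into sub-item scores and one overall grade. The label names, groupings, and equal weighting are all assumptions for illustration; the real scoring uses 60 labels with its own weighting scheme:

```python
from collections import defaultdict

# Hypothetical (sub_item, label, passed) outputs for one therapist's notes.
labels = [
    ("goals", "goal_measurable", True),
    ("goals", "goal_has_timeframe", False),
    ("functional_exam", "exam_documented", True),
    ("functional_exam", "objective_measures", True),
]

def roll_up(labels):
    """Aggregate pass/fail labels into sub-item scores and an overall grade."""
    per_item = defaultdict(list)
    for sub_item, _label, passed in labels:
        per_item[sub_item].append(passed)
    # Fraction of passing labels per sub-item, then an unweighted mean overall.
    scores = {item: sum(flags) / len(flags) for item, flags in per_item.items()}
    overall = sum(scores.values()) / len(scores)
    return scores, overall

scores, overall = roll_up(labels)
print(scores, overall)  # goals 0.5, functional_exam 1.0, overall 0.75
```

The point of the structure is exactly what Pedro describes: the overall grade is traceable back down to the sub-item, and the sub-item back to the individual labels.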
This is a specific question. I’m wondering if you can help because it could be commonly a question for providers and that is, could you be able, through the documentation, to differentiate if it’s appropriate to bill for a reevaluation or if the documentation that was used to justify a reevaluation was compliant or not? Can you get that specific?
The way we look at the predictions and labels does take into account which note it is. You expect different things, with different weightings, for an initial versus a daily. The scoring tool does look into that. I believe we also have things you can filter on, a bunch of different stuff, if you want to hone in on, “Did I miss a re-eval, or did an initial get done but not get billed with the right code?” A lot of those things that have a lot of rules, we do have some filters and ways to look at, because it comes up as a problem. For example, you don’t want to go too many visits before you do a re-eval.
You’re trying to play that game. You do enough work to justify a re-eval, but then the insurance company might not accept it. You feel like you’re justified and you want to make sure your documentation supports it. It’s nice to have that guidance and support, especially from a non-person, so you don’t have to bug somebody about it.
What I also liked in the demo that you shared with me is that you can use the AI bot to ask for examples of how certain phrasing could be better. You don’t have to go to your clinic director. The providers themselves, and even the clinic directors and owners, could go to it and say, “How could this be worded better and be compliant?” It can do some of that training and interact with the providers.
That’s been a fun new feature too. We’ve been thinking, “Do we want to have like a ChatGPT, but for PT?” We have great folks, like Kelly. She’s a PT and she’s been working with us. We were thinking, “We basically want to make a roboKelly,” that you could ask questions, so you can interpret things better and ask, “Who’s doing great? Who’s doing worse? How do I phrase this better?” and build that interaction, so you can get it whenever you want through that interface. There’s some cool stuff in the future related to next steps. You want to close the gap between the evaluation and the documentation. You want to get to the point where it’s real-time, helping you actively write. This is the first couple of steps toward getting there.
Speak to that a little bit. That’s a great segue. Where do you see this technology going? Specifically in the physical therapy space.
Telling somebody, “You did great here, and here are some spots to improve,” is helpful because otherwise it’s again like a black box. Long-term, “Why don’t you help me write it well the first time?” is basically the thing we want to solve for, working toward real-time and integrating. I know what you wrote before, I know how things have gone. Can you give me a focused set of things that have changed and been updated, and now I can help you? At least give you suggestions, like, “Here are some compliant ways you could do this.”
Predictive text.
The technology behind that AI bot very much overlaps with what you would want for predictive text. To help me finish my note faster, it can actively tell me as I’m picking stuff, “It’s compliant. We checked it,” like a spell checker. A compliance spell checker is built in. It’s also typing ahead for you; if you’re in an email, Gmail now has that predictive text thing. It writes it for you, and it doesn’t make a lot of spelling errors.
Do you see a situation in the future, and I’m sure it may be decades down the road, where the guidelines for billing certain codes for certain insurances might also help you document appropriately? Not every note and every charge is Medicare-related. Say you pick an insurance, Cigna. They don’t allow certain codes, but they do allow others. There are multiple procedure payment reductions, MPPRs, that might come into play if you’re billing accordingly. Don’t bill this code more than one unit. Do you see some of that coming into play in the near future?
That’s an exciting opportunity because if I’m seeing patients, I can’t possibly remember different rules for each person based on what insurance they have and what condition they have.
It sucks because we’re getting screwed; we have to remember those things, because each insurance company has different ways, and if I don’t bill it right, then I’m going to be the guy that loses the money on that care.
It doesn’t seem to be an accident. They make it complicated, but computers are incredibly good at remembering and doing all these little lookup things. That’s going to be something. The default now is Medicare because it’s so common. There are obvious cases we’re already seeing in the data, like, “They only did this many codes or this many units because there’s a cap for this payer,” or that for another. We want to make that easy for people and expand to customization per payer or whatever else ends up being relevant. There are some good opportunities there.
There are some amazing things that are happening now and we can foresee, as you said, what can happen in the future. Any cons that you’ve come across over the past few years or things that you’ve had to watch out for?
The AI tools are super powerful, but what you do get are the challenges that come along with scale. If you get it to predict some word wrong for some reason, it’s running across millions of notes. You have to keep that in mind. You have to have monitoring in place because of how many things it can process. If it makes a mistake 0.1% of the time and you’re doing millions of charts, that adds up fast.
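The arithmetic behind that point is worth making explicit. The note volume here is a made-up number, but the shape of the calculation holds:

```python
# Even a tiny per-note error rate produces a large absolute number at scale.
error_rate = 0.001            # 0.1% of predictions wrong
notes_processed = 5_000_000   # hypothetical annual volume
expected_errors = round(error_rate * notes_processed)
print(expected_errors)  # 5000 mistakes you need monitoring to catch
```

That is why the spot checks and historical comparisons Pedro describes next matter so much: at this scale, "rare" errors are a steady stream.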
People have mentioned a similar thing for ChatGPT: it’s great when it works well, but if it hallucinates or does something off and millions of people are using it, then it’s hallucinating and telling them, say, that George Washington was born in whatever random country. You have to watch that scale side. We try to be very thoughtful about doing spot checks. We have compliance folks do manual checks against the chart, we see how closely they agree, and we keep doing that over time. It includes monitoring: if it suddenly starts getting some things differently, we can compare historically to make sure the results are stable.
Always try to make it visible, like, “We predicted this and here’s what it is.” You can spot-check it and see if you agree if it makes sense to you or not. If you don’t like it, please give it a little thumbs down or thumbs up sign so you can be like, “Sounds good. This one’s odd.” We track every single one of those because, again, you don’t want it to make mistakes, but nothing’s perfect. You want to be transparent and track that so that you can keep improving over time.
You’ve done this a number of times with a number of clinics across the country already. Can I ask how many you’re working with at this point in 2023?
We’ve added a lot of folks.
That’s what I hear. You folks have ramped up.
We count by organizations, like logos, and we’re well over 50, 60, or 70 now. A lot of those have been added over the course of 2023, and that’s organizations. I don’t remember the clinic count offhand.
That could be multiplied by a certain number as to the number of clinics that are implementing it.
Some have hundreds of therapists, and others are smaller groups that don’t have a compliance officer. There’s very wide variability on that, too.
That was another question. Is it something that you would recommend even for the smaller practice that might have 1 or 2 providers, as well as someone with a hundred providers in their company? Is there some minimum level at which it works best?
What we’ve seen is that people get very different things based on their size. If you’re small, you can’t afford a compliance officer. If you’re that small, a couple of PTs, you probably are the “compliance officer.” In that case, if you’re worried about compliance, or if you’re worried about, “Are we being efficient with how we code or how we do these other things?”, it’s a good alternative that’s more cost-effective than hiring a compliance officer for some portion of their time or paying some billing optimization person to go through everything. That’s that side, and the metrics are helpful for them, too.
If you’re on the other side, a big organization, you might already have some KPIs and dashboards for a lot of this stuff. What you probably don’t have is the AI models to read through the actual text for either the CPT code appropriateness predictions or the compliance predictions. In that case, because you have the resources to do metrics and you want to optimize, this gives your team new tools that they otherwise wouldn’t have, so you can have them focus on things like compliance hotspots or CPT coding misunderstandings that could otherwise fly under the radar because they’re hidden in the millions of notes your organization writes per year.
That’s why I love where you’ve progressed to at this point. Compliance in and of itself is an expense. It’s not a revenue generator. What you add, in terms of helping you bill better, catching where you might be missing codes, and recommending different codes based on the documentation and vice versa, is where it can be a return on investment. It’s not just a sunk cost.
What we were hoping, basically, with the new billing efficiency-related modules is to make it a positive ROI, so you basically get compliance for free. The ROI on those new features we’re launching is different for different groups. Some people get a lot, but on average, I believe we’re somewhere in the range of 3X to 5X ROI at least. You’re basically getting compliance for free, assuming you look at the results for the billing efficiency or coding efficiency piece and incorporate some small percentage of the learning. You can get a reasonable return.
I know you’re much more on the technical side of things. What have you seen with implementation? Is it difficult? Does it take a lot of time? Is it a heavy lift? What’s involved with implementation?
For the customer, if you’re on one of those EHRs already, you sign and say, “We want to use it,” plus the BAA, because you have to be HIPAA compliant and everything. Then we handle the rest. We’ve got all these converters and different kinds of robots that pull the data out. We run it through all the ML models, and you get your dashboards. We can usually do it in under two weeks, even for a super large site. For some people, it’s like a day and we can turn it around.
A friend of mine who’s using your technology, and he has 4 or 5 clinics at this time, signed up. He turned things over to you guys, you had access to the data, and he didn’t hear anything back. Then 3 or 4 days later, “Here’s your report. Here are all the things that you can work on,” showed up in his email. He was completely surprised that it happened that quickly. It wasn’t a painstaking process like implementing a new EMR or something like that. It was quick and easy. It sifted through all the data and gave him a nice, clean report after a few days.
That’s good to hear. We try to make things easy for you. You folks have so much to do already. We’ve heard some intense stories, and I’ve gone to a lot of events and conferences where people are like, “I’ve got to hire, I’ve got to find people, I’ve got to cover shifts, I’m seeing patients, and I’ve got to worry about compliance, reviewing things, the billing, and then the denials.” We don’t need to give you guys any more to do. Let’s make this as easy as possible, because I get it. You’re in a clinic and it’s already crazy enough keeping up with your patients once you own a practice or have to manage it on top of other stuff. It gets nuts.
The reason you provide so much value in this situation is that documentation, and I don’t think I’m going out on much of a limb here, is the largest pain point in most clinics: the documentation itself, maintaining compliance, and the billing associated with it. It’s a huge headache for any provider, especially if you are the provider. The owner has other things to worry about, but if you’re a provider, you want to treat patients, and the documentation is such a downer. That’s the beauty and the value that your technology brings.
It’s something. I remember the worst part. I didn’t end up practicing, but I’d finish a shift and still have to write these notes, and I know what I did. The patient’s gone, they’re good, we discharged them, and I still have to write the story of stuff that’s already done. That’s not how I want to spend 2:00 AM.
This is why I was excited to have you on. You’re doing some awesome work, so I was excited to highlight that. If people wanted to learn more about PredictionHealth, how do they get in touch with you folks?
The website is PredictionHealth.com, and if you want to send us an email or ask a question, you can always email Sales@PredictionHealth.com. That’s preferred. We also have a presence on LinkedIn, if folks want to search for PredictionHealth there as well. There are some videos on there for folks. You can always message me too, at Pedro@PredictionHealth.com. I love being helpful.
I know you can sign up for a demo on your website. I recommend everyone reading do that, at least to see if it would work for you and be a benefit to you. Is there anything else you want to share in terms of AI in the physical therapy space and what PredictionHealth is doing? Anything we didn’t cover that comes to mind?
AI has been super cool classically, and the latest stuff even more so. For anybody that’s more on the tech side, if they’re on the fence: healthcare, and taking advantage of tools like this that are becoming available and easier to use, is going to be huge for people who want an easier time completing their work or looking up information.
If it seems remotely relevant to you and you have feedback, we always love getting feedback from folks so we can keep improving. We try to launch things consistently and quickly. For folks more on the clinical side, there are a lot of good opportunities for technology to help, especially AI, because it’s getting so powerful and easy to use. I encourage folks to try it.
If you’re doing the same thing again and again and you’re like, “I hate doing X,” it’s a good opportunity to take a step back and ask, “What am I doing in my workflow that’s making things difficult? Are there any tools that could help?” There are already a lot of providers and owners I’ve seen doing cool things with ChatGPT, like, “Help me write an announcement inviting local folks to my PT practice,” or, “Explain the following five exercises.” I’ve seen a lot of cool stuff. Whether you’re technical or not as technical, the tools are getting so easy to use. Let us know. We’d love to find good use cases that people would be excited about, or try stuff out yourself, because you can use these new technologies to make your day-to-day easier.
Thank you for coming on, Pedro. I appreciate the time that you took with us.
I had a blast. It was great to meet you and great to have this chat.
Thank you.
Pedro Teixeira, MD, PhD received his degrees from Vanderbilt University and completed his PhD in the laboratory of Dr. Josh Denny in the Department of Biomedical Informatics. There he applied machine learning and natural language processing (NLP) methods to extract phenotypic information from electronic health records (EHR) and identify associations with genetic variants.
Pedro and Ravi Atreya launched PredictionHealth in January 2017, a startup dedicated to helping clinicians and healthcare organizations understand and quantify the unstructured data in their EHR with machine learning, so that every patient gets the best care every time. In physical therapy, their analytics and actionable feedback help organizations improve compliance and practice efficiency by analyzing every sentence of every single note.
He has also presented his work broadly and won several awards, including first place in IBM’s The Great Mind Challenge (Watson Technical Edition 2013).
Love the show? Subscribe, rate, review, and share! https://ptoclub.com/
All Rights Reserved | Private Practice Owners Club