Yesterday we briefly covered what healthcare integration and interoperability are and what they mean to the healthcare industry. In today’s segment, we will discuss some of the file protocols that are used in conjunction with continuity of care and interoperability.
Much like blood cells in the human body, HL7 messages are the lifeblood of healthcare data exchange. Established in 1987, Health Level Seven (HL7) is a non-profit organization whose mission is to “[provide] standards for interoperability that improve care delivery, optimize work flow, reduce ambiguity and enhance knowledge transfer among all of our stakeholders, including healthcare providers, government agencies, the vendor community, fellow SDOs and patients.”
In simpler terms, HL7 is a messaging standard through which care providers share patient data in a common format. HL7 messages are broken into specific types, each relating to a specific event within a patient record, also known as a trigger event:
Each one of these trigger events is created by a hospital system and will need to be shared not just across internal systems, but also with hospitals, HIEs, physician groups, clinical labs, and other parties that may reside outside of a healthcare provider’s network. Not every message type is relevant to all applications, and many hospitals that maintain dozens of systems leverage HL7 routing engines to deliver messages to the appropriate destinations.
While the HL7 message protocol is a standard widely adopted by healthcare providers, it is sometimes seen, as Stephane Vigot of Caristix puts it, as a “non-standard standard”. What Mr. Vigot means is that even though the protocol specifies syntax and message headers for identifying pertinent information, different systems may use different templates. Take patient “sex” for example: one hospital may register a patient as either male or female, while another may have up to six attributes relating to the patient’s sex. As a result, when systems are integrated, HL7 messages need to be normalized so that each system knows where to look for the information.
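As a rough sketch of what this normalization can look like in practice, the snippet below maps one system's sex codes into another's smaller code set. The code values and the mapping itself are hypothetical examples for illustration, not any official HL7 table:

```python
# Hypothetical mapping from a source system's patient-sex codes to a
# destination system's smaller code set; all codes here are illustrative.
SEX_CODE_MAP = {
    "M": "M",   # male
    "F": "F",   # female
    "U": "U",   # unknown
    "A": "U",   # ambiguous  -> folded into unknown
    "N": "U",   # not applicable -> folded into unknown
    "O": "U",   # other -> folded into unknown
}

def normalize_sex(code: str) -> str:
    """Translate a source system's sex code into the destination's code set."""
    return SEX_CODE_MAP.get(code.upper(), "U")

print(normalize_sex("a"))  # "U"
print(normalize_sex("F"))  # "F"
```

An interface engine would typically apply dozens of such field-level translations as messages pass between systems.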
Probably the most important thing to know about HL7 version 2.x vs. version 3 is that the latter has not yet been embraced by the healthcare industry. Version 2.x is a textual, non-XML file format that uses delimiters to separate information, while version 3 is an XML-based format.
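To make the difference concrete, the sketch below parses a synthetic v2.x pipe-delimited message and renders the patient name in an illustrative XML structure. The message content and the XML element names are made up for demonstration; real HL7 v3 schemas are far richer:

```python
import xml.etree.ElementTree as ET

# A minimal, synthetic HL7 v2.x message: segments separated by carriage
# returns, fields by "|", components by "^".
v2_message = (
    "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
    "202301011200||ADT^A01|MSG00001|P|2.3\r"
    "PID|1||123456||DOE^JOHN||19700101|M\r"
)

def parse_v2(message):
    """Split a v2.x message into {segment_id: [fields]} (last segment wins)."""
    segments = {}
    for segment in message.strip("\r").split("\r"):
        fields = segment.split("|")
        segments[fields[0]] = fields
    return segments

segments = parse_v2(v2_message)
patient_name = segments["PID"][5]          # "DOE^JOHN"
family, given = patient_name.split("^")

# The same name rendered in a v3-style XML structure (illustrative only):
name = ET.Element("name")
ET.SubElement(name, "family").text = family
ET.SubElement(name, "given").text = given
print(ET.tostring(name, encoding="unicode"))
# <name><family>DOE</family><given>JOHN</given></name>
```

The terse delimited format is one reason v2.x remains entrenched: it is compact and every interface engine already speaks it.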
DICOM stands for Digital Imaging and Communications in Medicine. Like HL7, DICOM is a standard for exchanging patient data, but it is used in conjunction with systems that exchange medical images. DICOM is the protocol of choice for PACS (Picture Archiving and Communication Systems). Each data element in a DICOM message carries a Value Representation (VR), which tells the receiving system how to interpret that element’s value.
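As a minimal sketch of what that encoding looks like, the snippet below parses a single explicit-VR little-endian data element from an in-memory byte string. It handles only the short-form VR header and a synthetic element; real DICOM files have a preamble, meta header, and long-form VRs that this sketch ignores:

```python
import struct

def parse_element(buf, offset=0):
    """Parse one explicit-VR little-endian DICOM data element.

    Only handles short-form VRs (2-byte length field); long-form VRs
    such as OB/OW use a 12-byte header and are not covered here.
    """
    group, elem = struct.unpack_from("<HH", buf, offset)
    vr = buf[offset + 4:offset + 6].decode("ascii")
    (length,) = struct.unpack_from("<H", buf, offset + 6)
    value = buf[offset + 8:offset + 8 + length]
    return (group, elem), vr, value

# Synthetic (0010,0010) PatientName element with VR "PN":
raw = struct.pack("<HH", 0x0010, 0x0010) + b"PN" + struct.pack("<H", 8) + b"DOE^JOHN"
tag, vr, value = parse_element(raw)
print(f"({tag[0]:04x},{tag[1]:04x}) {vr} {value.decode()}")
# (0010,0010) PN DOE^JOHN
```

In practice a library such as pydicom handles all of this, but seeing the raw tag/VR/length/value layout makes clear why DICOM viewers and PACS systems can interoperate on the same files.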
These two documents perform very similar functions and are considered summary documents. Both CCD and CCR are XML-based documents and provide a summary of a patient’s healthcare history. Included in a CCD or CCR document is a human-readable section that covers the patient’s care history as well as pertinent patient information such as demographics, insurance information, and administrative data.
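Because both formats are XML, pulling a summary field out of either is straightforward. The sketch below extracts demographics from a drastically simplified, made-up CCR-like document; the real CCD and CCR schemas are namespace-qualified and far richer:

```python
import xml.etree.ElementTree as ET

# A highly simplified, invented summary document for illustration only;
# real CCD/CCR documents follow formal, namespace-qualified schemas.
summary_xml = """
<ContinuityOfCareRecord>
  <Patient>
    <Name>DOE, JOHN</Name>
    <DateOfBirth>1970-01-01</DateOfBirth>
  </Patient>
  <Insurance>
    <Payer>EXAMPLE HEALTH PLAN</Payer>
  </Insurance>
</ContinuityOfCareRecord>
"""

root = ET.fromstring(summary_xml)
demographics = {
    "name": root.findtext("Patient/Name"),
    "dob": root.findtext("Patient/DateOfBirth"),
    "payer": root.findtext("Insurance/Payer"),
}
print(demographics)
```

The point of both standards is exactly this: a receiving system can locate demographics, insurance, and care history at predictable paths instead of screen-scraping a proprietary export.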
The major difference between the two revolves around how closely each is tied to HL7 standards and how easily each fits into the current workflow of a particular health IT system. While some see CCD and CCR as competing standards, Vince Kuraitus of e-CareManagement argues that “the CCD and CCR standards are more complementary than competitive.” The basis of his opinion is a “right tool for the job” argument: the fact that HIEs have leaned toward CCD does not, by itself, settle the question.
Integration and interoperability need file protocol standards, and as the healthcare IT industry keeps evolving, many of the ambiguities of the current standards will eventually (hopefully) be normalized and conformity will prevail. In the meantime, HL7 2.x, DICOM, and CCD/CCR are here to stay and will continue to be the lifeblood of integration and connectivity.
Completely inspired by my trip to HIMSS last week, I thought it made sense to talk about healthcare interoperability, connectivity, and the component pieces that make this happen. This mini-series is broken up into several parts that will cover:
According to HIMSS, healthcare integration “is the arrangement of an organization’s information systems in a way that allows them to communicate efficiently and effectively and brings together related parts into a single system.” †
The 2006 White House executive order defines Interoperability as (section 2 paragraph c):
”Interoperability” means the ability to communicate and exchange data accurately, effectively, securely, and consistently with different information technology systems, software applications, and networks in various settings, and exchange data such that clinical or operational purpose and meaning of the data are preserved and unaltered.
These are great standard definitions and allow you to understand the difference between the two. Integration relates to how systems can work or collaborate for a common purpose, e.g. a patient management system working with a scheduling system. Interoperability speaks to how these systems are connected, in order to provide a continuous flow of information that improves care for the patient.
In order to achieve Interoperability, systems must be connected in a secure way, authenticating all users and allowing one healthcare application to share data with another anywhere in the country, without compromising a patient’s privacy.
In real-world terms, what this means is that all systems must be integrated in order to achieve interoperability, i.e. a physician’s patient management system must be able to authenticate and securely connect to a hospital’s EMR; ambulatory centers to pharmacies; hosted EMRs to wound treatment centers. Patient information can no longer live in just one place.
Connecting all of these systems is no small task and is as much of an organizational challenge as it is a technological one. People and healthcare systems no longer exist within a vacuum, and teams need to collaborate to make integration projects happen. These same people will need to agree on the best way to solve the connectivity problem and rely on the guidance of Health Information Service Providers to come up with solutions that meet the needs of all while adhering to the mission of improving patient care. As we continue to move forward in achieving interoperability, the scope and magnitude of what needs to happen cannot be overstated, and careful planning must take place.
Throughout the mini-series, we will discuss the component pieces that are involved in achieving interoperability including application interfaces, file protocols, transport protocols, security & authentication, and compliance.
Integration and Interoperability are significant pieces of the Meaningful Use objectives, and the mission is to improve the care of individuals while providing them with secure, ubiquitous access to their health information. While there is no one way to solve the challenge of interoperability, understanding the mission and the various parts of the goal can help make connectivity, as prescribed by the ONC and Meaningful Use, achievable.
I think Ascendian’s CEO Shawn McKenzie’s interview is a great summary of HIMSS 11 and what is happening in Healthcare IT:
If you don’t have time to watch videos at work, I will try to sum it up the best I can:
Widgets, lots of them. Mostly unimportant.
Shawn makes a great point that there is no real plan for Healthcare IT and interoperability. Instead (as we have commented before) there is a focus on EHRs and building “widgets” for healthcare professionals, which is essentially creating healthcare “silos”. While there is a ton of innovation happening on the practice side, very little is going into interoperability, and the traditional medieval VPN solution for connectivity still reigns.
After walking the floor of HIMSS for days, we learned on our own how true this was. Most EMR and EHR vendors didn’t care about interoperability and were content to tell us it was the customer’s problem. This seemed odd to us in two ways: first, the idea is to solve customer problems, not ignore them; second, as a business, they are leaving opportunity on the table.
The Direct Project also had a showcase that demonstrated interoperability, but it was not clear who should be interested and why.
Once people realize that connectivity and interoperability are a big issue, they will also realize that the old way of doing things will not be sufficient. Real investment in new technologies that utilize the Cloud and provide real solutions to the connectivity and interoperability problem are needed. To borrow from Mr. McKenzie again, what we have now is the coal but not the train or the tracks.
With new regulation comes new opportunity. New healthcare requirements around the digitization of health information have caused a wide variety of start-ups and services to surface. Innovation is great, but there are very few standards being adhered to, causing a lot of headaches for ISVs who are working with new customers to implement their systems.
If a hospital, physician, or clinical lab would like to start using a new product or service, that application needs to be able to communicate with older systems that may not be ready for retirement. Who will be responsible for ensuring that the two systems can interface with each other? How much will this cost, and what impact will it have on deployment schedules? This typically falls on the vendor, and a solutions specialist needs to be brought in.
Take for example a PMS at a physician practice that now needs to communicate with a scheduling system residing in an off-site data center. The physician’s PMS will need to exchange HL7 SIU messages with the scheduling system securely, meeting HIPAA requirements for health information exchange.
In order for this to happen, a secure connection between the two endpoints needs to be established, application interfaces need to be built, ports to the firewall need to be opened, and eventually a mechanism for ensuring each endpoint is authenticated must be implemented (See Wikipedia Article under Security Rule). What seemed to be a simple roll-out of a new system now requires professional services, network changes, and protocol conversion if there is a different transport protocol in use.
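One concrete piece of the interface work described above is the transport framing itself. HL7 v2.x messages are commonly carried over TCP using MLLP (Minimal Lower Layer Protocol), with TLS wrapped around the socket for the encryption HIPAA expects in transit. The sketch below shows only the framing step; the SIU message content is synthetic:

```python
# MLLP wraps each HL7 message in a start-block byte and an
# end-block/carriage-return pair so the receiver can find message
# boundaries on a TCP stream.
START_BLOCK = b"\x0b"
END_BLOCK = b"\x1c\x0d"

def mllp_frame(message: str) -> bytes:
    """Wrap an HL7 message in MLLP framing bytes for transmission."""
    return START_BLOCK + message.encode("ascii") + END_BLOCK

def mllp_unframe(frame: bytes) -> str:
    """Strip MLLP framing; raises ValueError if the frame is malformed."""
    if not (frame.startswith(START_BLOCK) and frame.endswith(END_BLOCK)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(START_BLOCK):-len(END_BLOCK)].decode("ascii")

# Synthetic SIU^S12 (new appointment) message header:
siu = "MSH|^~\\&|PMS|CLINIC|SCHED|DATACENTER|202301011200||SIU^S12|1|P|2.3\r"
frame = mllp_frame(siu)
assert mllp_unframe(frame) == siu
```

Framing is the easy part; the professional-services effort tends to go into the firewall changes, certificate management, and message-template normalization around it.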
These integrations and roadblocks can increase sales cycles and implementation times, making it harder to sell while decreasing margins for the ISV, not to mention the burden this may place on the customer.
Once an integration occurs, it is also necessary to monitor and maintain the network, which requires IT resources that may not have previously existed or may not have the bandwidth to support an increasing number of integration points.
As part of your integration strategy, it is important to evaluate build vs. buy:
– What will be the cost impact of rolling a VPN and application interface for each endpoint?
– What will be the cost of managing and maintaining that network?
– Who will bear the cost?
– What impact will this have on implementation times and sales cycles?
– As compliance regulations change, how will this impact your solution and margins?
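The first two questions above lend themselves to a back-of-the-envelope comparison. Every number in the sketch below is a hypothetical placeholder, not real pricing; the point is only the shape of the calculation:

```python
# Back-of-the-envelope build-vs-buy comparison. All dollar figures are
# invented placeholders for illustration.
def vpn_total_cost(endpoints, setup_per_endpoint, annual_support_per_endpoint, years):
    """Rough cost of rolling and maintaining your own VPN per endpoint."""
    return endpoints * (setup_per_endpoint + annual_support_per_endpoint * years)

def service_total_cost(endpoints, annual_fee_per_endpoint, years):
    """Rough cost of a hosted integration service billed per endpoint."""
    return endpoints * annual_fee_per_endpoint * years

build = vpn_total_cost(endpoints=20, setup_per_endpoint=5000,
                       annual_support_per_endpoint=2000, years=3)
buy = service_total_cost(endpoints=20, annual_fee_per_endpoint=3000, years=3)
print(build, buy)  # 220000 180000
```

Even a crude model like this makes the hidden variables visible: the per-endpoint setup cost and the ongoing support burden are what tip the scales, and both grow with the number of trading partners.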
Healthcare interoperability is an extremely important part of HIPAA regulations, and a lot of health IT professionals will be focused on it. As an ISV, however, connectivity may not be part of your core offering, making it a distraction instead of an opportunity. If the numbers do not add up, it may make sense to use an application integration service as part of your value proposition to the customer, making implementation smoother and decreasing network costs.
You have a new application, vendor, or hospital that you need to interface with. Everyone in the meeting grumbles about how the application interface will be built, where the resources will come from, and whose budget will take the hit for adding the new partner.
You start thinking about everything that will need to happen:
1. A new VPN connection will need to be created to bring the new trading partner onto the network… paperwork with the hosting company or telco, network configuration changes, firewall ports opened, etc.
2. Depending on the application or partner you are working with, you need to understand what interfaces you will need to support or build, e.g. does the application use a transport protocol you are not familiar with? Is there a specific message protocol that you will need to convert?
3. Specialists who have worked with these types of interfaces will need to be selected and contracted
4. Depending on how many connections you are creating, you may need to bring on additional staff to manage and support these connections
5. If any value added services such as guaranteed delivery or file tracking need to be implemented, this will increase the scope of contract work
6. Each connection will need to be tested thoroughly
VPNs provide the basic necessity of secure connectivity, but they are an unwieldy solution for IT organizations that are faced with deploying many connections and are limited in technical resources, time, and money.
When working with new trading partners, healthcare application interfaces, or vendors, VPNs may not make the most sense for your needs. Think about some of the problems you may face when adding new VPNs and how you can mitigate those pain points:
1. Are there other secure, application connectivity solutions available? If so, do they offer the most basic needs for interoperability?
2. Does the VPN solution offer file-level tracking, encryption, guaranteed delivery and web portals to view message and data traffic?
3. Are there solutions available that do not require changes to the firewall and/or network?
4. Is there a solution that will require minimal IT support, reducing the total cost of ownership for maintaining secure outbound connections?
5. Will these connections continue to meet changing government standards for connectivity or will additional work need to be done to keep them compliant?
6. Is there a solution available that does not take weeks or months to implement?
There are many more questions to be answered, but the basic question is “can you find a better way?”. As technology advances and the Cloud becomes a more trusted platform for offering services, it may be time to start seriously evaluating alternatives to VPNs.
2011 is going to see a dramatic increase in the adoption of EHR software and digital patient information exchange will become an even greater priority in order to meet Stage 1 meaningful use requirements.
If you are an IT Manager, this looks like it will require all hands on deck and a huge shift in how things have been run throughout your organization. Since all patient data will need to be exchanged digitally in a safe and reliable way, you will be tasked with:
Some things to think about in 2011 as you prepare to meet these new requirements are:
1. Meaningful Use Incentives: Registration for the EHR Incentive program started on January 3rd: http://www.healthcareitnews.com/news/government-ehr-incentive-program-ready-go
2. New Infrastructure: New processes will need to be learned as you begin interfacing with all the EHRs, PMSs, HIEs, physician groups, clinical labs, etc. being brought onto the network.
3. Security: All patient health information will need to be encrypted and transported securely in order to meet HIPAA compliance.
4. Training: Staff will need to be trained and allocated to manage these networks. As your network continues to grow, so will the resources required to support and manage it. Changes in your firewall will need to happen and application interfaces will need to be built.
5. Solution Providers: HISPs (Health Information Service Providers) will need to be selected. Not everything can/should be done in-house, so you will need to determine how to minimize the total impact of these new application interoperability requirements. Your EMR may already provide application interfaces, but it is possible that many of your systems do not support outbound connectivity.
2011 will bring a lot of change for the healthcare industry as a whole, and with that change, progress. Despite the huge burden these new regulations will have on IT departments large and small, the end game will produce a cohesive, secure and reliable patient information exchange that improves the quality of care for all Americans.
Looking through some of the recent announcements on the Direct Project, it is not completely clear what NHIN is trying to represent if you are an EMR vendor, health care professional, or Health Information Service Provider (HISP).
NHIN’s Direct Project is providing a specification and guidance on how interfaces can be built to exchange information in a secure, encrypted way using “email like” addresses on a network.
This is great! This is not an implementation though.
Health care professionals or ISVs who are looking at the Direct Project to solve their connectivity problems will need to understand that an interface will still need to be built, and encryption and key management will still need to be licensed in order to ensure all data is securely sent and in compliance.
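To illustrate the "email like" addressing the specification describes, the sketch below assembles a Direct-style message with Python's standard library. The addresses are invented, and a real Direct exchange would additionally require S/MIME signing and encryption with X.509 certificates, which is exactly the part that is not shown here and still has to be built or licensed:

```python
from email.message import EmailMessage

# Sketch of a Direct-style message. Addresses are hypothetical, and the
# S/MIME signing/encryption a real Direct exchange requires is omitted.
msg = EmailMessage()
msg["From"] = "drsmith@direct.exampleclinic.org"
msg["To"] = "records@direct.examplehospital.org"
msg["Subject"] = "Referral summary"
msg.set_content("See attached CCD for patient referral.")

print(msg["To"])  # records@direct.examplehospital.org
```

Composing the message is the trivial part; certificate discovery, key management, and trust-anchor policy are where the remaining implementation work lives.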
This is a great step forward, and standards help drive adoption and innovation, so we will be excited to see how the trials turn out.
NHIN and NHIN Direct (now called “The Direct Project”) are frameworks for creating a standard system through which health care applications can communicate, share information, and connect with one another. And, according to Shahid Shah, in his article titled An Overview of NHIN and NHIN Direct for Software Developers*, “NHIN is far from settled and is not a foregone conclusion for data exchange, so you shouldn’t rest your complete integration strategy on it. In fact, make sure you have other options available to you.”
I don’t want to be too negative on NHIN or those who are striving to make it a production reality, but if you are considering connecting systems leveraging NHIN, you should understand where NHIN stands as a health information exchange solution.
With the HITECH Act, meaningful use (MU), and incentive payments on the line, a lot of organizations are trying to find their health information integration solutions now, not in the future. As such, below are some weaknesses of NHIN that might be an issue when trying to implement a health care application integration solution:
Given that there are a lot of new compliance regulations to meet in order to receive incentives, IT resources will be stretched, and anywhere you can implement a solution that does not require a large amount of overhead and technical expertise is going to be attractive. Even if NHIN and The Direct Project were ready for prime time, integrating your systems (which could be dozens per location), will require a lot of resources, time, and money.
In short, NHIN is not the silver bullet for health information exchange, nor is it the solution that companies, health care professionals, and application integrators can count on now.
Connecting applications and systems takes time, involves a lot of people and requires training and deployment costs. What if it wasn’t that bad? What if you had an application messaging solution “in a box”? What if that box was a Cloud that allowed you to scale quickly and reduce costs?
Sounds great and it is!
That is what we have been working on here at Cloak Labs, and we are proud to offer a service that allows you to connect disparate software applications on and off your network in an easy-to-install, scalable, and cost-effective way.
The Cloak Labs service provides IT Managers and Administrators with a pre-built application network that resides in the Cloud and enables a network to be established in hours in some cases. Traditionally, if someone wanted to establish a connection between two systems, an IT administrator or manager would have to build a network between the two applications and implement an interface engine in the middle in order to allow the two endpoints to communicate.
Let’s review some of the challenges here:
Let’s see how Cloak Labs provides an easy-to-deploy solution to solve this problem:
Contact us to learn more about how your organization can build meaningful application interfaces with CloudPrime.
As a company that leverages Cloud infrastructure to provide cost-effective, scalable and secure application messaging services, I get a lot of questions about how we make the Cloud secure. Before I address this question, I figured it would be interesting to first take a look at the history of Cloud computing. I ask forgiveness in advance for all my gross-oversimplifications.
Cloud computing first started being described in the 1960s, when the pioneers of ARPANET envisioned that people all over the world could connect and access data from each other over a network. Having an interconnected “web” would provide the foundation for distributed computing. Further, John McCarthy, a noted computer scientist, proposed the idea of “computation being delivered as a public utility” (ComputerWeekly.com, March 2009), much like it is used today.
Through the 1980s, the concept of a client-server model for operating applications and platforms within an enterprise began to take root and laid a foundation for what we recognize as “the Cloud” today. In a client-server system, one computing appliance, ideally with a great amount of computing power and capacity, serves multiple clients (PCs, terminals, etc.). One famous example of this in the 1980s was BITNET, which connected IBM mainframes in order to send electronic mail to academic institutions around the world. (A brief history of the internet, Internet Society, 2010)
Although the idea of a “Cloud” infrastructure was seeded in the 1960s, it was not really until the 1990s that we saw any semblance of Cloud computing the way we know it today. In the late 1990s, SalesForce pioneered one of the first SaaS (software as a service) CRM applications and boldly labeled its innovative business model as “The end of software”, since you did not have to purchase and install an application locally.
Although it appeared that SaaS based software models would be the future of how we used and interfaced with applications, these applications were still hosted in server farms or locally by the companies that published the software. In 2006, Amazon.com launched a new service that would change how we thought about hosted computing and helped catapult Cloud computing into the spotlight.
Amazon’s EC2 environment gives developers and software publishers a way to access what seems like unlimited resources in a “pay for what you use” model. This combination of low cost and scalable server resources made it possible for developers with very little money to develop applications and publish them (very quickly!) for the community to use. While this was a great milestone for developers and just about anyone who has ever used the internet, many people, businesses, and experts did not believe that the Cloud could provide the security and reliability needed to run enterprise grade applications.
While Amazon provides a paid public service (much like the one anticipated by John McCarthy), many users of the Cloud leverage what is called a “Private Cloud”. This generally means that the host of the distributed computing center has created a cloud environment but its resources are not made publicly available. Bringing the Cloud internally allows managers to have more control over security and maintenance, instead of relying on a provider. Private Clouds help satisfy many of the concerns IT Managers and CIOs have around security while allowing them to take advantage of the benefits of Cloud computing. Eric Knorr of InfoWorld has a great article here discussing “Private Clouds”.
Today, there is a wide range of options for developers and publishers of software when evaluating which Cloud provider will host their applications. Companies like Microsoft and IBM have started offering “elastic cloud” environments that promise services that are scalable, easy to access, and inexpensive to use. Seeing more and more large players, as well as small and medium-size boutique cloud providers, enter the market is a signal that more and more companies are adopting the Cloud as an acceptable infrastructure for hosting their data and applications.
Cloak Labs is a service that leverages* the Cloud for many of the same reasons any other business might. The Cloud provides a scalable, cost-effective, and on-demand environment through which we can provide our application messaging services. When people ask how it is that we can leverage the Cloud when it is not secure, the answer is twofold:
Having an infrastructure that can scale as you grow allows our business to provide a rich and robust service without you incurring large up-front costs or expensive service fees for transferring data between local and hosted applications. You can learn more about Cloak Labs and our services by visiting http://www.cloudprime.net