Healthcare Integration & Interoperability — Part 2

Yesterday we briefly covered what healthcare integration and interoperability are and what they mean to the healthcare industry. In today’s segment, we will discuss some of the file protocols used in conjunction with continuity of care and interoperability.

The file protocols that we will focus on today are some of the more popular formats: HL7, DICOM, CCD & CCR.


HL7 File Protocol

Much like blood cells in the human body, HL7 messages are the lifeblood of healthcare data exchange. Established in 1987, Health Level 7 (HL7) is a non-profit organization whose mission is to “[provide] standards for interoperability that improve care delivery, optimize work flow, reduce ambiguity and enhance knowledge transfer among all of our stakeholders, including healthcare providers, government agencies, the vendor community, fellow SDOs and patients.”

In simpler terms, HL7 is a file protocol that gives care providers a standard for sharing patient data. HL7 messages are broken into specific types, each tied to an event within a patient record, known as a trigger event (a minimal example follows the list below):

  • ACK — General acknowledgment
  • ADT — Admit discharge transfer
  • BAR — Add/change billing account
  • DFT — Detailed financial transaction
  • MDM — Medical document management
  • MFN — Master files notification
  • ORM — Order (pharmacy/treatment)
  • ORU — Observation result (Unsolicited)
  • QRY — Query, original mode
  • RAS — Pharmacy/treatment administration
  • RDE — Pharmacy/treatment encoded order
  • RGV — Pharmacy/treatment give
  • SIU — Scheduling information unsolicited

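To make the structure concrete, here is a minimal sketch of an ADT (admit) message and how its trigger event can be read out of the message header. The segment contents, application names, and the Python code here are illustrative, not taken from any particular system.

```python
# Illustrative HL7 v2.x ADT^A01 (patient admit) message.
# Field values, application names, and IDs are made up for this sketch.
raw_message = "\r".join([
    "MSH|^~\\&|REG_APP|GENERAL_HOSPITAL|EMR_APP|GENERAL_HOSPITAL|"
    "20110301120000||ADT^A01|MSG00001|P|2.3",
    "EVN|A01|20110301120000",
    "PID|1||123456^^^GENERAL_HOSPITAL||DOE^JOHN||19800101|M",
    "PV1|1|I|2000^2012^01||||004777^SMITH^JANE",
])

# HL7 v2.x segments are separated by carriage returns, and fields
# within a segment by the pipe character.
segments = raw_message.split("\r")
msh_fields = segments[0].split("|")

# MSH-9 carries the message type and trigger event (e.g. ADT^A01),
# which routing engines use to decide where a message should go.
message_type, trigger_event = msh_fields[8].split("^")
print(message_type, trigger_event)  # -> ADT A01
```
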
Each one of these trigger events is created by a hospital system and needs to be shared not just across internal systems, but also with hospitals, HIEs, physician groups, clinical labs, etc. that may reside outside of a healthcare provider’s network. Not every message type is relevant to every application, and many hospitals that maintain dozens of systems leverage HL7 routing engines to deliver messages to the appropriate destination.

While the HL7 message protocol is a standard widely adopted by healthcare providers, it is sometimes seen, as Stephane Vigot of Caristix puts it, as a “non-standard standard.” What Mr. Vigot means is that even though the protocol specifies syntax and message headers for identifying pertinent information, different systems may use different templates. Take patient “sex” for example: one hospital may register a patient as either male or female, while another may use as many as six values for the patient’s sex. As a result, when systems are integrated, HL7 messages need to be normalized so that each system knows where to look for the information.
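
As a rough illustration of that normalization step, the sketch below maps one system’s local sex codes onto the coded values another system expects (HL7’s Administrative Sex table, for instance, allows values such as F, M, O, U, A and N). The code table and field position shown are assumptions for the example, not a specification.

```python
# Hypothetical mapping between a local value set and the codes the
# target system expects; real interfaces define these per connection.
SEX_CODE_MAP = {
    "MALE": "M",
    "FEMALE": "F",
    "OTHER": "O",
    "UNKNOWN": "U",
}

def normalize_pid_sex(pid_segment: str) -> str:
    """Return a PID segment with PID-8 (administrative sex) normalized."""
    fields = pid_segment.split("|")
    # PID-8 sits at index 8 when splitting on '|' (index 0 is 'PID').
    fields[8] = SEX_CODE_MAP.get(fields[8].upper(), "U")
    return "|".join(fields)

# A feed that spells the value out is rewritten to the coded form.
print(normalize_pid_sex("PID|1||123456||DOE^JOHN||19800101|FEMALE"))
# -> PID|1||123456||DOE^JOHN||19800101|F
```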

Version 2.x vs Version 3

Probably the most important thing to know about HL7 version 2.x vs. version 3 is that the latter has not yet been embraced by the healthcare industry. Version 2.x is a textual, non-XML file format that uses delimiters to separate information. Version 3, on the other hand, is an XML-based file format.

DICOM

DICOM stands for Digital Imaging and Communications in Medicine. Like HL7, DICOM is a standard for exchanging patient data, but it is used with systems that exchange medical images. DICOM is the file protocol of choice for PACS (Picture Archiving and Communication Systems), and each data element in a DICOM file is encoded with a Value Representation (VR) that defines its data type.
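
As a rough sketch of what reading DICOM data looks like in practice, the example below opens a DICOM file with the third-party pydicom library (an assumption; it is not mentioned in this post) and pulls out a few common attributes. The file path is hypothetical.

```python
# Requires the third-party pydicom package (pip install pydicom).
import pydicom

# Hypothetical path to an image exported from a PACS.
ds = pydicom.dcmread("study/ct_slice_001.dcm")

# Each DICOM data element is identified by a tag and encoded with a
# Value Representation (VR) that defines its data type.
print(ds.PatientName)  # person name (PN) element
print(ds.Modality)     # e.g. "CT" or "MR"
print(ds.StudyDate)    # date (DA) element
```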

Continuity of Care Document (CCD) & Continuity of Care Record (CCR)

These two documents perform very similar functions and are considered summary documents. Both the CCD and the CCR are XML-based documents that provide a summary of a patient’s healthcare history. Included in a CCD or CCR document is a human-readable section that covers the patient’s care history as well as pertinent patient information such as demographics, insurance information, and administrative data.

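As a hedged sketch of how a receiving system might pull demographics out of one of these summaries, the example below walks a CCD-style (HL7 CDA) XML document with Python’s standard library. The element paths and file name are assumptions for illustration; a production parser would follow the full CDA or CCR schema.

```python
import xml.etree.ElementTree as ET

# HL7 CDA documents (the basis of the CCD) use this namespace;
# the file name here is hypothetical.
NS = {"cda": "urn:hl7-org:v3"}

tree = ET.parse("patient_summary_ccd.xml")
root = tree.getroot()

# A typical CDA path to patient demographics:
# ClinicalDocument/recordTarget/patientRole/patient
patient = root.find("cda:recordTarget/cda:patientRole/cda:patient", NS)
if patient is not None:
    given = patient.findtext("cda:name/cda:given", default="", namespaces=NS)
    family = patient.findtext("cda:name/cda:family", default="", namespaces=NS)
    gender = patient.find("cda:administrativeGenderCode", NS)
    print(given, family, gender.get("code") if gender is not None else "?")
```
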
The major difference between the two comes down to how closely each is tied to HL7 standards (the CCD is built on HL7’s Clinical Document Architecture, while the CCR originated with ASTM) and how easily each fits into the existing workflow of a particular health IT system. While some see CCD and CCR as competing standards, Vince Kuraitus of e-CareManagement argues that “the CCD and CCR standards are more complementary than competitive.” The basis of his opinion is a “right tool for the job” argument, and he notes that HIEs’ adoption of the CCD, by itself, doesn’t say much.

Summary

Integration and interoperability need file protocol standards, and as the healthcare IT industry keeps evolving, many of the ambiguities in the current standards will eventually (hopefully) be normalized and conformity will prevail. In the meantime, HL7 2.x, DICOM, and CCD/CCR are here to stay and will continue to be the lifeblood of integration and connectivity.



Healthcare Integration & Interoperability — A Mini Series

Completely inspired by my trip to HIMSS last week, I thought it made sense to talk about healthcare interoperability, connectivity, and the component pieces that make this happen. This mini-series is broken up into several parts that will cover:

  1. What is connectivity and interoperability?
  2. File protocols, formats and requirements, i.e. HL7 (including discussions on version 2 vs. 3), DICOM, EHI, and CCD;
  3. Transport protocols and interfaces: MLLP, TCP/IP, FTP, etc.;

Part 1: What is Healthcare Integration and Interoperability?

According to HIMSS, healthcare integration “is the arrangement of an organization’s information systems in a way that allows them to communicate efficiently and effectively and brings together related parts into a single system.”

The 2006 White House executive order defines Interoperability as (section 2 paragraph c):

“Interoperability” means the ability to communicate and exchange data accurately, effectively, securely, and consistently with different information technology systems, software applications, and networks in various settings, and exchange data such that clinical or operational purpose and meaning of the data are preserved and unaltered.


These are great standard definitions and allow you to understand the difference between the two. Integration relates to how systems can work or collaborate for a common purpose, e.g. a patient management system working with a scheduling system. Interoperability speaks to how these systems are connected, in order to provide a continuous flow of information that improves care for the patient.

In order to achieve Interoperability, systems must be connected in a secure way, authenticating all users and allowing one healthcare application to share data with another anywhere in the country, without compromising a patient’s privacy.

In a real-world scenario, what this means is that all systems must be integrated in order to achieve interoperability, i.e. a physician’s patient management system must be able to authenticate and securely connect to a hospital’s EMR; ambulatory centers to pharmacies; hosted EMRs to wound treatment centers. Patient information can no longer live in just one place.

Interoperability Dimensions

(As defined by HIMSS)

  • Uniform movement of healthcare data
  • Uniform presentation of data
  • Uniform user controls
  • Uniform safeguarding data security and integrity
  • Uniform protection of patient confidentiality
  • Uniform assurance of a common degree of system service quality

No Small Task

Connecting all of these systems is no small task and is as much an organizational challenge as it is a technological one. People and healthcare systems no longer exist within a vacuum, and teams need to collaborate to make integration projects happen. These same people will need to agree on the best way to solve the connectivity problem and rely on the guidance of Health Information Service Providers to come up with solutions that meet the needs of all while adhering to the mission of improving patient care. As we continue to move forward in achieving interoperability, the scope and magnitude of what needs to happen should not be underestimated, and careful planning must take place.

Throughout the mini-series, we will discuss the component pieces that are involved in achieving interoperability including application interfaces, file protocols, transport protocols, security & authentication, and compliance.

The Goal

Integration and Interoperability are significant pieces of the Meaningful Use objectives, and the mission is to improve the care of individuals while providing them with secure, ubiquitous access to their health information. While there is no one way to solve the challenge of interoperability, understanding the mission and the various parts of the goal can help make connectivity, as prescribed by the ONC and Meaningful Use, achievable.




HIMSS — What You Would Have Learned If You Went


I think Ascendian’s CEO Shawn McKenzie’s interview is a great summary of HIMSS 11 and what is happening in Healthcare IT:

If you don’t have time to watch videos at work, I will try to sum it up the best I can:

Widgets, lots of them. Mostly unimportant.

Shawn makes a great point that there is no real plan for Healthcare IT and interoperability. Instead (as we have commented before) there is a focus on EHRs and building “widgets” for healthcare professionals, which is essentially creating healthcare “silos”. While there is a ton of innovation happening on the practice side, very little is going into interoperability, and the traditional medieval VPN solution for connectivity still reigns.

After walking the floor of HIMSS for days, we learned on our own how true this was. Most EMRs and EHRs didn’t care about interoperability and were content to tell us it was the customer’s problem. This seemed odd to us in two ways: first, the idea is to solve customer problems, not ignore them; and second, as a business, they are leaving opportunity on the table.

The Direct Project also had a showcase that demonstrated interoperability, but it was not clear who should be interested and why.

Once people realize that connectivity and interoperability are a big issue, they will also realize that the old way of doing things will not be sufficient. Real investment in new technologies that utilize the Cloud and provide real solutions to the connectivity and interoperability problem is needed. To borrow from Mr. McKenzie again, what we have now is the coal but not the train or the tracks.


New Healthcare Integration Challenges for ISVs

With new regulation comes new opportunity. New healthcare requirements around the digitization of health information have caused a wide variety of start-ups and services to surface. Innovation is great, but there are very few standards being adhered to, causing a lot of headaches for ISVs who are working with new customers to implement their systems.

If a hospital, physician, or clinical lab would like to start using a new product or service, that application needs to be able to communicate with older systems that may not be ready for retirement. Who will be responsible for ensuring that the two systems can interface with each other? How much will this cost, and what impact will it have on deployment schedules? This typically falls on the vendor, and a solutions specialist needs to be brought in.

Take, for example, a practice management system (PMS) at a physician practice that now needs to communicate with a scheduling system that resides in an off-site data center. The physician’s PMS will need to exchange HL7 SIU messages with the scheduling system securely, meeting HIPAA requirements for health information exchange.

In order for this to happen, a secure connection between the two endpoints needs to be established, application interfaces need to be built, ports on the firewall need to be opened, and a mechanism for ensuring each endpoint is authenticated must be implemented (see the HIPAA Security Rule). What seemed to be a simple roll-out of a new system now requires professional services, network changes, and protocol conversion if a different transport protocol is in use.
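
As a rough sketch of the connection work involved, the example below wraps an HL7 SIU message in MLLP framing (the minimal lower layer protocol commonly used for HL7 over TCP: a vertical-tab start byte and a file-separator/carriage-return trailer) and sends it over a TLS-wrapped socket. The host name, port, and message contents are hypothetical, and a real deployment would also handle acknowledgments, retries, and certificate management.

```python
import socket
import ssl

# Hypothetical endpoint for the off-site scheduling system.
HOST, PORT = "scheduling.example-datacenter.net", 6661

# Illustrative SIU^S12 (new appointment) message; segment contents
# are placeholders, not a complete scheduling message.
hl7_message = "\r".join([
    "MSH|^~\\&|PMS_APP|PHYSICIAN_PRACTICE|SCHED_APP|DATA_CENTER|"
    "20110301120000||SIU^S12|MSG00002|P|2.3",
    "SCH|1234|5678|||||CHECKUP",
    "PID|1||123456||DOE^JOHN||19800101|M",
])

# MLLP framing: <VT> message <FS><CR>
framed = b"\x0b" + hl7_message.encode("ascii") + b"\x1c\x0d"

context = ssl.create_default_context()  # verifies the server certificate
with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        tls.sendall(framed)
        ack = tls.recv(4096)  # the receiver normally returns an HL7 ACK
        print(ack)
```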

These integrations and roadblocks can increase sales cycles and implementation times, making it harder to sell while decreasing margins for the ISV, not to mention the burden this may place on the customer.

Once an integration occurs, it is also necessary to monitor and maintain the network, which requires IT resources that may not have previously existed or may not have the bandwidth to support an increasing number of integration points.

As part of your integration strategy, it is important to evaluate a build vs. buy strategy:

– What will be the cost impact of rolling a VPN and application interface for each endpoint?

– What will be the cost of managing and maintaining that network?

– Who will bear the cost?

– What impact will this have on implementation times and sales cycles?

– As compliance regulations change, how will this impact your solution and margins?

Healthcare interoperability is an extremely important part of HIPAA regulations and a lot of health IT professionals will be focused on it, but as an ISV, connectivity may not be a part of your core offering, making it a distraction instead of an opportunity. If the numbers do not add up, it may make sense to use an application integration service as part of your value proposition to the customer, making implementation smoother, and decreasing network costs.


Application Interfaces: VPNs are not the answer

You have a new application, vendor, or hospital that you need to interface with. Everyone in the meeting grumbles about how the application interface will be built, where the resources will come from, and whose budget will take the hit for adding the new partner.

To get started, you think through everything that will need to happen:

1. A new VPN connection will need to be created to bring the new trading partner onto the network… paperwork with the hosting company or telco, network configuration changes, firewall ports opened, etc.

2. Depending on the application or partner you are working with, you need to understand which interfaces you will need to support or build, e.g. does the application use a transport protocol you are not familiar with, or a message protocol that you will need to convert?

3. Specialists who have worked with these types of interfaces will need to be selected and contracted

4. Depending on how many connections you are creating, you may need to bring on additional staff to manage and support these connections

5. If any value added services such as guaranteed delivery or file tracking need to be implemented, this will increase the scope of contract work

6. Each connection will need to be tested thoroughly

VPNs provide the basic necessity of secure connectivity, but they are an unwieldy solution for IT organizations that are faced with deploying many connections and are limited on technical resources, time, and money.

When working with new trading partners, healthcare application interfaces, or vendors, VPNs may not make the most sense for your needs. Think about some of the problems you may face when adding new VPNs and how you can mitigate those pain points:

1. Are there other secure application connectivity solutions available? If so, do they meet the most basic needs for interoperability?

2. Does the VPN solution offer file-level tracking, encryption, guaranteed delivery and web portals to view message and data traffic?

3. Are there solutions available that do not require changes to the firewall and/or network?

4. Is there a solution that will require minimal IT support, reducing the total cost of ownership for maintaining secure outbound connections?

5. Will these connections continue to meet changing government standards for connectivity or will additional work need to be done to keep them compliant?

6. Is there a solution available that does not take weeks or months to implement?

There are many more questions to be answered, but the basic question is “can you find a better way?”. As technology advances and the Cloud becomes a more trusted platform for offering services, it may be time to start seriously evaluating alternatives to VPNs.


Preparing for Health Application Interoperability

2011 is going to see a dramatic increase in the adoption of EHR software, and digital patient information exchange will become an even greater priority in order to meet Stage 1 meaningful use requirements.

If you are an IT Manager, this looks like it will require an all-hands-on-deck effort and a huge shift in how things have been run throughout your organization. Since all patient data will need to be exchanged digitally in a safe and reliable way, you will be tasked with:

  • Ensuring application interfaces can connect internally as well as make connections outbound through your firewall
  • Making sure your IT ecosystems are documented carefully to determine where the holes are in internal and outbound connectivity
  • Allocating resources for managing all new connections and configuring your firewall to accept new connections
  • Dedicating staff to managing the new network; either adding to overhead or detracting from other initiatives within the organization

Some things to think about in 2011 as you prepare to meet these new requirements are:

1. Meaningful Use Incentives: Registration for the EHR Incentive program started on January 3rd: http://www.healthcareitnews.com/news/government-ehr-incentive-program-ready-go

2. New Infrastructure: New processes will need to be learned as you begin interfacing with all the EHRs, PMSs, HIEs, physician groups, clinical labs, etc. being brought onto the network.

3. Security: All patient health information will need to be encrypted and transported securely in order to meet HIPAA compliance.

4. Training: Staff will need to be trained and allocated to manage these networks. As your network continues to grow, so will the resources required to support and manage it. Changes in your firewall will need to happen and application interfaces will need to be built.

5. Solution Providers: HISPs (Health Information Service Providers) will need to be selected. Not everything can/should be done in-house, so you will need to determine how to minimize the total impact of these new application interoperability requirements. Your EMR may already provide application interfaces, but it is possible that many of your systems do not support outbound connectivity.

2011 will bring a lot of change for the healthcare industry as a whole, and with that change, progress. Despite the huge burden these new regulations will have on IT departments large and small, the end game will produce a cohesive, secure and reliable patient information exchange that improves the quality of care for all Americans.


Understanding the Direct Project

Looking through some of the recent announcements on the Direct Project, it is not completely clear what NHIN is trying to represent if you are an EMR, health care professional, or Health Information Service Provider (HISP).

NHIN’s Direct Project is providing a specification and guidance on how interfaces can be built to exchange information in a secure, encrypted way using “email like” addresses on a network.

This is great! This is not an implementation though.

Health care professionals or ISVs who are looking at the Direct Project to solve their connectivity problems will need to understand that an interface will still need to be built, and encryption and key management will still need to be licensed in order to ensure all data is securely sent and in compliance.

This is a great step forward, and standards help drive adoption and innovation, so we will be excited to see how the trials turn out.



NHIN (The Direct Project): Ready for Prime Time?

NHIN and NHIN Direct (now called “The Direct Project”) are frameworks for creating a standard system through which health care applications can communicate, share information, and connect with one another. And, according to Shahid Shah, in his article titled An Overview of NHIN and NHIN Direct for Software Developers, “NHIN is far from settled and is not a forgone conclusion for data exchange, so you shouldn’t rest your complete integration strategy on it. In fact, make sure you have other options available to you.”

I don’t want to be too negative on NHIN or those who are striving to make it a production reality, but if you are considering connecting systems leveraging NHIN, you should understand where NHIN stands as a health information exchange solution.

With the HITECH Act, meaningful use (MU), and incentive payments on the line, a lot of organizations are trying to find their health information integration solutions now, not in the future. As such, below are some weaknesses of NHIN that might be an issue when trying to implement a health care application integration solution:

  1. Still in early stages of testing with users
  2. Many security policies still need to be defined and implemented by the user
  3. Limited documentation for implementation
  4. May require application development for exchange integration, making it a resource intense solution to roll out

Given that there are a lot of new compliance regulations to meet in order to receive incentives, IT resources will be stretched, and anywhere you can implement a solution that does not require a large amount of overhead and technical expertise is going to be attractive. Even if NHIN and The Direct Project were ready for prime time, integrating your systems (which could be dozens per location) will require a lot of resources, time, and money.

In short, NHIN is not the silver bullet for health information exchange, nor is it the solution that companies, health care professionals, and application integrators can count on now.


Messaging in a Box… err, Cloud?

Connecting applications and systems takes time, involves a lot of people and requires training and deployment costs. What if it wasn’t that bad? What if you had an application messaging solution “in a box”? What if that box was a Cloud that allowed you to scale quickly and reduce costs?

Sounds great and it is!

That is what we have been working on here at Cloak Labs, and we are proud to offer a service that allows you to connect disparate software applications on and off your network in an easy-to-install, scalable, and cost-effective way.

The Cloak Labs service provides IT Managers and Administrators with a pre-built application network that resides in the Cloud and enables a network to be established in hours in some cases. Traditionally, if someone wanted to establish a connection between two systems, an IT administrator or manager would have to build a network between the two applications and implement an interface engine in the middle in order to allow the two endpoints to communicate.

Let’s review some of the challenges here:

  1. In order to connect two applications on or off the network, personnel needs to be allocated to build those connections,
  2. Hardware will need to be deployed to support the connection(s), also requiring more person hours to deploy, maintain and update,
  3. Connections are rigid, allowing for only point-to-point communication,
  4. In order to implement any value added services, either a 3rd party application provider will need to be brought in, or a RYO (Roll Your Own) solution will need to be built. Either way, this will require more support and more costs,
  5. If the systems do not reside on the same network, it will be difficult to get trading partners to accept connections coming into their network for all the reasons above.

Let’s see how Cloak Labs provides an easy-to-deploy solution to solve this problem:

  1. Cloak Labs is a service, so no person-hours will need to be contributed to support the network,
  2. As a service, there is no hardware for the end-user to purchase or support,
  3. Cloak Labs provides a small software client that interfaces to applications, encrypts messages and routes them through the appropriate channels,
  4. Cloak Labs’ service allows disparate applications to communicate without the need for an interface engine,
  5. Cloak Labs provides file-level tracking, guaranteed delivery, and a variety of other built-in value added services, enabling IT managers to meet compliance and avoid customization costs,
  6. Cloak Labs provides reporting tools to increase management’s visibility into application network performance and costs,
  7. Since Cloak Labs is easy to install, bringing up trading partners on and off your network is easy and can be done in as little as 20 minutes,
  8. Cloak Labs leverages the Cloud to scale with usage and provides users a cost-effective, pay-as-you-grow pricing model.

Contact us to learn more about how your organization can build meaningful application interfaces with CloudPrime.


Cloud Computing: A History and Perspective

As a company that leverages Cloud infrastructure to provide cost-effective, scalable and secure application messaging services, I get a lot of questions about how we make the Cloud secure. Before I address this question, I figured it would be interesting to first take a look at the history of Cloud computing. I ask forgiveness in advance for all my gross oversimplifications.


The Early Years

Cloud computing was first described in the ’60s, when the pioneers of ARPANET envisioned that people all over the world could connect and access data from each other over a network. Having an interconnected “web” would provide the foundation for distributed computing. Further, John McCarthy, a noted computer scientist, proposed the idea of “computation being delivered as a public utility” (ComputerWeekly.com, March 2009), much like it is used today.

Through the ’80s, the concept of a client-server model for operating applications and platforms within an enterprise began to take root and lay a foundation for what we recognize as “the Cloud” today. In a client-server system, one computing appliance, ideally with a great amount of computing power and capacity, serves multiple clients (PCs, terminals, etc.) around the world. One famous example from the ’80s was BITNET, which connected IBM mainframes in order to send electronic mail to academic institutions around the world. (A brief history of the internet, Internet Society, 2010)

Emergence of Cloud Computing

Although the idea of a “Cloud” infrastructure was seeded in the 1960s, it was not really until the 1990s that we saw any semblance of Cloud computing as we know it today. In the late ’90s, SalesForce pioneered one of the first SaaS (software as a service) CRM applications and boldly labeled its innovative business model “The end of software,” since you did not have to purchase and install an application locally.

Although it appeared that SaaS based software models would be the future of how we used and interfaced with applications, these applications were still hosted in server farms or locally by the companies that published the software. In 2006, Amazon.com launched a new service that would change how we thought about hosted computing and helped catapult Cloud computing into the spotlight.

Cloud Computing Evolved

Amazon’s EC2 environment gives developers and software publishers a way to access what seems like unlimited resources in a “pay for what you use” model. This combination of low cost and scalable server resources made it possible for developers with very little money to develop applications and publish them (very quickly!) for the community to use. While this was a great milestone for developers and just about anyone who has ever used the internet, many people, businesses, and experts did not believe that the Cloud could provide the security and reliability needed to run enterprise grade applications.

While Amazon provides a paid public service (much like the one anticipated by John McCarthy), many users of the Cloud leverage what is called a “Private Cloud”. This generally means that the host of the distributed computing center has created a cloud environment but its resources are not made publicly available. Bringing the Cloud internally allows managers to have more control over security and maintenance, instead of relying on a provider. Private Clouds help satisfy many of the concerns IT Managers and CIOs have around security while allowing them to take advantage of the benefits of Cloud computing. Eric Knorr of InfoWorld has a great article here discussing “Private Clouds”.

Cloud Computing Today

Today, there is a wide range of options for developers and publishers of software when evaluating which Cloud provider they will use to host their applications. Companies like Microsoft and IBM have started offering services providing their customers with “elastic cloud” environments that promise services that are scalable, easy to access, and inexpensive to use. Seeing more and more large players, as well as small and medium size boutique cloud providers enter the market is a signal that more and more companies are adopting the Cloud as an acceptable infrastructure for hosting their data and applications.

CloudPrime Leverages the Cloud

Cloak Labs is a service that leverages* the Cloud for many of the same reasons any other business might. The Cloud provides a scalable, cost-effective and on-demand environment through which we can provide our application messaging services. When people ask how it is that we can leverage the Cloud when it is not secure, the answer is twofold:

  1. We only work with Cloud providers that can pass SAS 70 type II compliance, and
  2. Cloak Labs encrypts all messages over the network, making all data traveling through and stored on the Cloud completely secure (a schematic illustration follows).
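
To illustrate what message-level encryption means in practice, here is a generic sketch using the third-party cryptography package; it is not a description of Cloak Labs’ actual implementation. The point is that a payload can be encrypted before it ever leaves the sender, so whatever transits or rests on Cloud infrastructure is ciphertext.

```python
# Generic illustration of message-level (end-to-end) encryption using
# the third-party 'cryptography' package; not Cloak Labs' implementation.
from cryptography.fernet import Fernet

# In a real deployment the key is provisioned to the two endpoints
# out of band and is never stored alongside the message in transit.
key = Fernet.generate_key()
sender = Fernet(key)

plaintext = b"MSH|^~\\&|PMS_APP|...|ADT^A01|MSG00003|P|2.3"
ciphertext = sender.encrypt(plaintext)  # this is what crosses the Cloud

receiver = Fernet(key)
assert receiver.decrypt(ciphertext) == plaintext
```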

Having an infrastructure that can scale as you grow allows our business to provide a rich and robust service without you incurring large up-front costs or expensive service fees for transferring data between local and hosted applications. You can learn more about Cloak Labs and our services by visiting http://www.cloudprime.net

  • Cloak Labs runs in the Cloud. An overview of the CP Messaging Topology can be seen here.