What’s in a Name?

Well, for starters, sometimes everything. Names are highly important. They are the mechanism we use to remember our friends and family by, as well as our adversaries. Names aid us in navigating the world, providing direction as to how to understand what’s underneath the surface. Names can also lead us astray, as in “don’t judge a book by its cover.” I feel it’s time to reconsider the name of this blog in the interest of giving the world a better understanding of what’s happening here.

Welcome to Interop Futures! Why Interop Futures? It’s what I do. I have the good fortune to spend much of my time thinking about the future of interoperability, specifically in healthcare. I apply this thinking to ideas at my full-time job. I apply this thinking in my IHE participation. I apply this thinking to other, non-career areas of my life as well.

I have a few specific reasons for changing the name:

Better Industry Recognition

My old blog name – {ts_blog} – was rather poetic (as a respected colleague recently mentioned to me – thanks Gila!), but it was quite vague as well. Unless you read the tagline or the titles of some of the posts, it was hard to understand what this web presence was all about. Now we have more clarity about what’s going on here.

Alignment with Career and Industry Involvement

I have been trading in interop futures for more than a decade now, going back to 2007 when I first got involved in IHE. I joined a company, wrote a profile on antepartum care, and then implemented that profile in our EHR solution. I left that company about a year later because funding was cut for “IHE work.” And so began my career in interoperability futures. I have worked for three other companies since, with a handful of independent contracts in between, and almost all of my work has involved thinking about and implementing solutions around the future of healthcare IT: future-proofing systems, developing ideas on how to bridge the old to the new, and so on.

Retains a Certain Amount of Flexibility

“Interop Futures” does not have “healthcare” in the name. It allows for a certain amount of flexibility in terms of what topics I choose to write about. Many of the standards underlying healthcare IT solutions are domain-agnostic. SOAP and REST by themselves are not healthcare-specific, but when used in XDS and FHIR they certainly become so.

It’s a Cool Name!

What’s not to like about the name “Interop Futures”? It sounds like a cool movie, or a crystal ball that allows you to see what’s coming. In reality there is no crystal ball, and I certainly have no movie contracts. Even still, thinking about where health IT is going, and how we might get there, is intriguing! And since we must not forget the past lest we be condemned to repeat it, the writing you will find here will at times focus on the past and the present, building toward thinking about what is coming in the future!

IHE on FHIR: History, Development, Implementation

The health IT industry is awash in FHIR discussions and opportunities. It’s on everyone’s topic boards, it’s being pitched at all of the health IT conferences, it’s being discussed and used time and again in SDOs, apps are being developed, initiatives are born. And it’s possibly near a tipping point of success.

HL7/IHE History around FHIR

IHE and HL7 have a long history, going back to the beginning of IHE in 1998 (HL7 was already in existence). There have always been collaborators across and between the two organizations. This is, effectively, how IHE began: a bunch of health IT standards geeks were seeking a new way to provide interoperability guidance to the world, and thus IHE was born. So it’s not surprising that this pattern has continued into the era of FHIR. It started with ad-hoc liaisons between the organizations, taking a set of FHIR Resources into an IHE Profile, or taking requirements from an IHE Profile back to HL7 to create a new FHIR Resource or modify an existing one. The value of FHIR was quickly recognized as a market disruptor, and as such IHE and HL7 began to explore the idea of formal collaboration more seriously. These organizations are big ships, and they turn slowly, but over the past 6 years they seem to be turning in the right direction.

In 2013 HL7 and IHE drafted and signed a Statement of Understanding to identify many areas of collaboration between the two organizations. While this SOU did not make specific mention of FHIR, I strongly suspect FHIR was a driving factor in the agreement.

In 2014 the IHE-HL7 Coordination Committee and the Healthcare Standards Integration (HSI) Workgroup were both created. The former in IHE, the latter in HL7. These were intended to be “sister groups” to work with each other helping to improve collaboration for both organizations, leading to greater efficiencies for all involved. These groups languished a bit and never really got enough traction to continue in the way they were originally envisioned.

A few years later, in 2017, IHE created an IHE FHIR Workgroup that continues to meet today. This workgroup is focused on how to include FHIR in IHE Profiles and has documented very detailed guidance on this on the IHE wiki. It also tracks IHE Profiles using FHIR, cross-referencing across IHE Domains. This workgroup has produced materials and guidance that are very helpful in bringing IHE and FHIR together.

In 2018 Project Gemini was launched, named after the space program of years ago. Its goal is to identify and bring to market pilot project opportunities based on FHIR. It will leverage and potentially create specifications, participate in testing activities, and seek demonstration opportunities. Basically, its job is to tee up FHIR-based projects so they can be launched into the outer space of the health IT ecosystem. Interoperability is often big, expensive, and scary to implementers and stakeholders – similar to the challenges that NASA’s Project Gemini faced.

We are on the cusp of a new era in health IT with the arrival of FHIR. While FHIR will not be a silver bullet, it does provide a great opportunity to be disruptive, in a good way.

IHE PCC and QRPH – Profiles on FHIR

The PCC and QRPH domains have been working on FHIR-based IHE Profiles since 2015. PCC has a total of 9 Profiles that include FHIR, and 1 National Extension, and is working on updating 1 of those Profiles this development cycle to include additional FHIR Resources. QRPH has a total of 4 Profiles leveraging FHIR, with 1 new FHIR-based Profile in the works for this development cycle.

One observation that we have made within PCC, and that is also being applied in other domains, is the importance of retaining backward compatibility for our implementers by adding FHIR as an option on the menu. It is not a wholesale “delete the old, bring in the new” situation. In fact, if we followed that approach then standards would likely never be implemented en masse, as they would always be changing. So an IHE Profile that uses CDA today and is under consideration for FHIR will be assessed by the IHE committee to determine whether FHIR should be added as another menu item, or whether a more drastic measure should be taken to deprecate the “old” technology.

This will obviously vary based on a number of factors, and that’s a topic for another post, but the point is that the default goal for improving existing IHE Profiles with FHIR is not to replace everything in that Profile with FHIR. Rather, it is to assess each situation and make a wise choice based on what’s best for all involved: vendor implementers, stakeholders (patients and providers), testing bodies, governments, and standards bodies. This does not mean that everyone is happy all the time, but all angles must be considered and consensus is desired.

Implementation of IHE and FHIR

FHIR is being implemented in various ways across the industry. There are two very significant initiatives happening right now that are well-positioned to launch FHIR into the outer space of health IT: CommonWell Health Alliance and Carequality. Both initiatives have been around for roughly the same amount of time (CommonWell since 2013, Carequality since 2015), and both focus on the same general mission of improving data flow in support of better patient health outcomes, but they take different approaches to get there. CommonWell provides a service that members leverage to query and retrieve data, whereas Carequality provides a framework, including a governance model, to do this.

These are fundamentally different approaches, but both are achieving great success. CommonWell touts upwards of 11,000 healthcare provider sites connected to its network. Carequality touts 1,700 hospitals and 40,000 clinics leveraging its governance model to exchange data. These are big numbers, and both organizations are on a trajectory to continue increasing their connectivity. CommonWell already has FHIR fully embedded as an option in its platform, with the ability for a member to leverage only REST-based connectivity (most, if not all, of which is based on FHIR) to fully participate in the Alliance’s network. Carequality currently has an open call for participation in newly forming FHIR Technical and Policy Workgroups to include FHIR as a main-line offering in its specifications.

Given that both of these initiatives have included IHE as part of their original implementation requirements, that both are now including FHIR, and that both have significant implementation numbers, we have an exceptional opportunity to advance interoperability in ways that we have not been able to previously.

Summary

The world of interoperability is alive and well, despite constant setbacks (due mostly to non-technical things), and thanks in part to IHE and FHIR. Convergence is happening, both on the SDO front as well as in the implementation world. And I fully expect that convergence to continue.

IHE PCC Domain Meetings – Fall 2018

The fall PCC, QRPH, and ITI technical committee meetings were held in Oak Brook in mid-November, as usual. Unfortunately I was not able to attend the October planning committee meetings – neither in person nor remotely – due to other commitments; however, I was able to catch up on what is going on by attending the November meetings in person. I attended mostly PCC, and sat in on a few QRPH sessions. Here is a quick update on PCC activities.

Profile Work

In PCC we are going to have a quiet year, with only one profile work item, and that work item is to update an existing profile, Dynamic Care Team Management (DCTM), to include some additional FHIR-based guidance. Tune in for the calls being scheduled now if you want to learn more.

As I understand it, there has also been some discussion on CDA harmonization, but there are no formal CDA harmonization work efforts on PCC’s plate for this upcoming cycle. This is a topic that has been discussed in previous years, but only modest progress has been made. Perhaps with efforts like Project Gemini there is hope of re-igniting some of this work.

Change Proposal Work

PCC received a sizeable number of CPs last year (2017) and has been slowly working through them. This work will continue, with the goal for this next work cycle of closing out all of these CPs.

Based on a quick count, here is our CP submission history by year:

Year      CPs Submitted
2018      6
2017      37
2016      9
2015      12
2014      18
2013      43
2012      20
2011      30
2010      15
2009      5
<= 2008   77
TOTAL     272

While 2017 was not our largest CP submission year, it is still significant. We have 13 CPs that remain in “Assigned” status, which means that someone is reviewing and finalizing them for inclusion in a ballot. If those CPs make it to ballot and pass, they will be incorporated into the appropriate published document (e.g., Technical Framework, Profile, National Extension, Appendix).

PCC Committee Structure

The PCC Planning and PCC Technical committees have decided to combine into a single planning/technical committee, at least for the next work cycle. This is to streamline the work of PCC given the lower number of participants. Fortunately we do have 3 co-chairs (one of whom is acting more as a “co-chair emeritus,” providing expert guidance to our two new co-chairs). This is completely within the bounds of IHE Governance, so no issues on that front. In future years we may split back into separate planning and technical committees if appropriate.

PCC Publication Cycle

We also discussed, and are interested in, the idea of supporting a year-round profile publication process. This is something that has been discussed in previous years, but due to the high volume of profile publication (and other reasons) it has not yet been possible to achieve. ITI is also interested in this idea and has started a wiki page with a great deal of detail. I encourage you to read it and comment on the wiki page if you have additional ideas to include. As part of this effort IHE may also need to look at its back-end publication processes and explore opportunities to move away from PDF-based publication.

Summary

PCC has its work items outlined for this upcoming cycle, and an opportunity to explore what a new/different publication cycle might look like. While there are not a great number of profiles being published this year (only one, in fact, and it’s “just an update” at that), this doesn’t necessarily signify distress for PCC; rather, it could indicate that the market is still catching up with the many profiles that PCC has published over the past several years.

The Pillars of Health IT Standards – Part 1

Health IT standards can be broken down into what I call the pillars of interoperability. These pillars are Content, Transport, and Workflow. Content standards aid in clearly communicating the content shared between various applications. They give disparate health IT systems a common language to speak, and they provide clues, sometimes very specific, sometimes vague, as to what data lives inside of various content structures. Transport standards describe ways that content will be sent to, received from, or otherwise made available for consumption between two or more applications. Workflow standards stitch together the web of interaction points across the health IT ecosystem, leveraging Content and Transport standards to describe what an “end-to-end” flow of healthcare data looks like for any given use case.

This will be a 3-part blog post series breaking down each of these concepts. This post will focus on Content, followed by Transport, and finally Workflow.

Content

Content is very important when one system needs to communicate relevant information to another system so that it can do something. Understanding the content is vital for the system or end user to be able to take some action. If the information is improperly understood, then catastrophe could follow, and rather easily. Let’s take the example of units of measure on a medication. Milligrams is a common unit of measure; micrograms is perhaps not (at least in my non-clinical experience). 200 milligrams is a common dosage of ibuprofen, but 200,000 micrograms is not. The astute reader will note these are equivalent values. Suppose that a health IT system creating content to be shared with another system uses the unit of measure of micrograms for ibuprofen and documents a dosage of 200,000 (this could be entered as 200 milligrams by the end user, but perhaps stored as micrograms in the database). A health IT system consuming content created by this source system could accidentally misinterpret the value to be 200,000 milligrams, potentially resulting in a fatal situation for the patient.

While the above example may seem far-fetched, this is a reality that happens all too often and there has been much analysis and research done in the area of accidental medication overdose. The proper creation and consumption of content is vitally important (quite literally!) to a positive health outcome for a patient. Content creators must ensure they properly represent the data to be shared, and content consumers must ensure they properly understand the data to be consumed.
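
To make the hazard concrete, here is a minimal sketch of why a consumer must normalize to the sender’s declared unit before interpreting a dose. The MassUnit enum and helper are hypothetical illustrations, not drawn from any particular standard:

```python
from enum import Enum

class MassUnit(Enum):
    """Hypothetical unit enum; each value is the factor for converting to micrograms."""
    MICROGRAM = 1
    MILLIGRAM = 1_000
    GRAM = 1_000_000

def to_micrograms(value: float, unit: MassUnit) -> float:
    """Normalize a dose to a single base unit before comparing or acting on it."""
    return value * unit.value

# The source system stores a 200 mg ibuprofen dose as 200,000 micrograms.
sent = to_micrograms(200_000, MassUnit.MICROGRAM)
assert sent == to_micrograms(200, MassUnit.MILLIGRAM)  # equivalent values

# A consumer that ignores the declared unit and assumes milligrams is off by 1000x.
misread = to_micrograms(200_000, MassUnit.MILLIGRAM)
print(f"intended: {sent:,.0f} mcg; misread: {misread:,.0f} mcg ({misread / sent:,.0f}x overdose)")
```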

Content can be broken down into several different components: structures, state, reference data, and data mapping. Let’s take a look at each of these areas.

Structures

The structures used in content interoperability vary from base-level standards to implementations of those standard structures to meet a specific use case. The base-level standards are at the “schema” level, which defines the valid data types, how many times a particular element may be repeated (or not), and so on. The implementation of those standards to meet a given use case is at the “schematron” level. ISO Schematron is a standard that has been in use for the past several years to validate the conformance of an XML implementation against a given specification.

This idea of a base structure versus what goes inside of that structure is important, as it allows for multiple levels of standards development and enables profiling standards to create specific implementation guidance for specific use cases. Through this approach, health IT information can be exchanged effectively in a changing market of available standards and systems.
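
As a rough illustration of the schema-versus-schematron split, here is a sketch using Python’s lxml library. The rule and the XML snippets are made up for the example; the point is the pattern of layering use-case-specific assertions on top of structurally valid XML:

```python
# Requires: pip install lxml
from lxml import etree
from lxml.isoschematron import Schematron

# A made-up business rule layered on top of structurally valid XML:
# every medication element must declare its unit of measure.
RULES = b"""<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="medication">
      <assert test="@unit">A medication dose must declare a unit of measure.</assert>
    </rule>
  </pattern>
</schema>"""

validator = Schematron(etree.XML(RULES))

good = etree.XML(b'<medication dose="200" unit="mg"/>')
bad = etree.XML(b'<medication dose="200"/>')

print(validator.validate(good))  # True: meets the use-case-specific assertion
print(validator.validate(bad))   # False: schema-valid XML can still fail schematron
```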

State

Content may exist as transient or as persistent. Sometimes the lines are blurred here, where transient data may later be repurposed as persistent, or vice versa! Workflow (discussed in a forthcoming post) helps to address this issue. State, in this context, is not quite the same as status, although they share some characteristics. State is more distinct: a set of content is “in” a state, whereas it “has” a status. So content that is in a document state may be in an approved status. The difference is subtle, but very relevant. Health IT standards rely on the use of states to provide some sense of stability around what to expect of the content standardized within.
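
A small, purely illustrative sketch of the distinction (the enum values are mine, not drawn from any standard): content is in a state and has a status, and the two vary independently:

```python
from dataclasses import dataclass
from enum import Enum

class ContentState(Enum):
    """What form the content is IN (illustrative values, not from any standard)."""
    MESSAGE = "transient message"
    DOCUMENT = "persistent document"

class ContentStatus(Enum):
    """What lifecycle status the content HAS (illustrative values)."""
    DRAFT = "draft"
    APPROVED = "approved"
    DEPRECATED = "deprecated"

@dataclass
class Content:
    state: ContentState    # content is "in" a state
    status: ContentStatus  # content "has" a status

# Content in the document state may carry an approved status.
note = Content(state=ContentState.DOCUMENT, status=ContentStatus.APPROVED)
print(note.state.value, "/", note.status.value)
```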

Reference Data

Reference data is a special kind of data used to classify other data. It helps to describe the purpose and meaning of the data. Reference data is found in the form of standardized vocabularies such as SNOMED and LOINC. Reference data is commonly required in master data management solutions to link data across the spectrum, whether that data be patients, providers, clinical concepts, financial transactions, or any number of other master data concepts, all of which can be leveraged to tell a story about the business and inform the decisions the organization needs to make. Reference data can also be used in inferencing solutions, where probable conclusions are developed based on the presence of certain other specific data. Reference data is an extension of schematron: if schematron defines what general type of data shall exist in a given content structure, then reference data allows for flexibility in terms of what the options are for specific content within those assertions.
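
A minimal sketch of reference data in action, assuming a tiny illustrative sliver of a LOINC-based value set (real code systems contain many thousands of concepts):

```python
# An illustrative sliver of a value set keyed by LOINC body-weight codes;
# real reference data sets (LOINC, SNOMED CT) are vastly larger.
BODY_WEIGHT_CODES = {"29463-7", "3141-9"}

def classify(code_system: str, code: str) -> str:
    """Use reference data to attach meaning to an otherwise opaque code."""
    if code_system == "http://loinc.org" and code in BODY_WEIGHT_CODES:
        return "body-weight observation"
    return "unclassified"

print(classify("http://loinc.org", "29463-7"))  # body-weight observation
print(classify("http://loinc.org", "8480-6"))   # unclassified (not in our small set)
```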

Data Mapping

Data mapping is the act of identifying the likeness of one data concept to another, and providing a persistent and reusable link between them. This is the glue that enables systems to exchange data. Data mapping leverages standard structures and reference data to figure out what needs to be mapped where. A particular set of inbound source content may be represented by one industry standard and need to be mapped to an internal data model. If the source data is already linked to an industry-standard reference data set (i.e., a vocabulary), then both the structure and the specific codification of the data elements within that structure can be mapped into the internal data model with relative ease, provided the internal system has tooling in place to support such content and reference data standards. That is a long-winded way of saying that content standards and terminology standards go a long way toward solving interoperability problems when implemented properly.
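
A minimal sketch of the mapping step, assuming a hypothetical concept map from a local lab’s codes to an internal model (the systems and codes are invented for illustration):

```python
# A hypothetical, persisted concept map: (source system, source code) -> internal concept.
# In practice this link is stored and reused, not rebuilt per message.
CONCEPT_MAP = {
    ("local-lab", "GLU"): "internal:glucose-serum",
    ("local-lab", "HGB"): "internal:hemoglobin",
}

def map_concept(system: str, code: str) -> str:
    """Resolve an inbound source code to the internal data model, failing loudly on gaps."""
    try:
        return CONCEPT_MAP[(system, code)]
    except KeyError:
        raise ValueError(f"unmapped concept: {system}|{code}") from None

print(map_concept("local-lab", "GLU"))  # internal:glucose-serum
```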

Conclusion

Content is vitally important – I would argue the most important aspect of health IT interoperability. If the content is not understood by any system in a health data exchange, a number of different problems present themselves. And sometimes ill-communication is worse than no communication, if it is misunderstood in a way that will bring harm to the patient. As the Hippocratic Oath states: “First do no harm.” A content creator must take extreme care to ensure content is properly represented, and a content consumer must take equal care that the content is consumed in the way the creator intended.

Internal and External Content Model Standards

A conversation came up recently with someone on the use of, and adherence to, healthcare industry standards inside of a particular robust healthcare solution. I have seen many an organization model their internal standard data model after an industry-wide data model such as HL7 CDA or HL7 FHIR. What is often missed is that an organization will always need its own “flavor” of that standard for internal use.

The idea of following the industry standard is very well-intentioned, but it is also extremely difficult to implement and maintain. It is also not always the best choice, because industry-wide standards are intended to handle use cases across many different organizations (hence the name “industry-wide”), and while they may meet the specific needs of the organization desiring to implement the standard, they may also include additional “baggage” that is not helpful to the organization. Conversely, they may require extensions or adaptations to the model to fully support the organization’s specific use cases. The effort required to implement the content model with either of these considerations can become burdensome.

We must realize that industry standards are quite important to drive health IT applications toward common ways to exchange and communicate data, but they must be at the guidance level, and not the end-all-be-all way to represent data that needs to be shared. This is now being realized in the data warehousing market, as ‘schema-on-read’ is becoming a more popular approach to dealing with analytics on large data sets, as opposed to ‘schema-on-write.’ The optimism about ‘one data model to rule them all’ is shrinking. A good example of this would be a solution that leverages metadata for the analytical data points rather than the concrete source data structures. This allows an application to focus on writing good queries, and lets the metadata model deal with the underlying differences in the source data model. It provides an effective layer of abstraction on the source data model, and as long as that abstraction layer properly maps to the source data model, we have an effective ‘schema-on-read’ solution. This sort of approach is becoming more and more necessary as the rate of change in technology and in healthcare IT continues to increase.
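
A toy sketch of the metadata-driven, schema-on-read idea; the source systems (ehr_a, ehr_b) and field names are hypothetical:

```python
# Hypothetical metadata layer: one logical field name mapped to each source's physical field.
FIELD_MAP = {
    "patient_id": {"ehr_a": "pid", "ehr_b": "patientIdentifier"},
    "weight_kg":  {"ehr_a": "wt",  "ehr_b": "bodyWeightKg"},
}

def read_field(record: dict, source: str, logical_name: str):
    """Resolve a logical field against the source's schema at read time."""
    return record[FIELD_MAP[logical_name][source]]

row_a = {"pid": "123", "wt": 70.5}
row_b = {"patientIdentifier": "456", "bodyWeightKg": 82.0}

# The query is written once against logical names; the metadata absorbs the differences.
for source, row in (("ehr_a", row_a), ("ehr_b", row_b)):
    print(read_field(row, source, "patient_id"), read_field(row, source, "weight_kg"))
```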

Internal standards are more manageable. Organizations can design and implement a content model for a set of use cases in a reasonable time frame with a reasonable amount of resources. This model may even be based on an industry-standard model, but it must not BE the industry-standard model! What I mean by that is that expectations must be set clearly from the outset that the model WILL change over time as the organization changes, as the business opportunities change, as laws change, and so on. As the decision is made as to what the internal model is to be, it must be understood that the model is for that organization only, and that mappings to and from external models shall be provided as needed, while looking for opportunities for reuse across specific data elements or data element groups.
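
As a rough illustration (the internal model and mapping function here are hypothetical), an internal model may borrow from FHIR’s Patient resource while remaining deliberately its own, with an explicit mapping at the boundary:

```python
from dataclasses import dataclass

@dataclass
class InternalPatient:
    """The organization's own model: informed by FHIR Patient, but deliberately not FHIR."""
    mrn: str
    family_name: str
    given_name: str

def from_fhir_patient(resource: dict) -> InternalPatient:
    """Map an inbound FHIR Patient into the internal model. A sketch only: a real
    mapping must handle repeating elements, extensions, and absent data."""
    name = resource["name"][0]
    return InternalPatient(
        mrn=resource["identifier"][0]["value"],
        family_name=name["family"],
        given_name=name["given"][0],
    )

fhir_patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:mrn", "value": "12345"}],
    "name": [{"family": "Smith", "given": ["Jane"]}],
}
print(from_fhir_patient(fhir_patient))
```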

What this all drives toward is having interoperability as a first-class citizen in an organization’s solution set. The content model is important, but the content model is designed for internal usage, with mappings to external systems’ content models. In addition to the content model, an organization must also include its implementation approach in its overall strategy to ensure that external systems can be mapped to the internal content model effectively (on time, under budget, and meeting the business needs). A great strategy without an execution plan will die on the vine.

In summary, the intent of this post is to clarify the difference between the idea of an external data standard and an internal data standard, and the overlap between these ideas. Interoperability is not a clear-cut landscape. Interoperability is hard. We must realize and accept that fact and look for ways to work effectively within it, driving toward higher quality communication between health IT systems and, ultimately, improved patient health outcomes.