Category Archives: HIT

IHE Connectathon – a Look Back and a Look to the Future

I spent last week at the 20th IHE North America Connectathon. This is my 14th year involved in IHE, and my 13th year attending the Connectathon. This year my role was to serve as an IHE Ambassador, helping others understand what happens at the event and welcoming newcomers.

I was asked to share a few thoughts with the video crew about my experience at past IHE Connectathons, what has changed over the years, and what I think some of the successes and challenges in interoperability have been over the past year, so it seems a good opportunity to write about it here.

My first Connectathon was in 2008, when I worked for digiChart, a software company focused on obstetric care. As a still fairly green software developer, I spent the prior year writing the Antepartum Summary (APS) IHE Profile in the IHE PCC Domain and implementing that profile, along with XDS, CT, ATNA, and a few others, in the digiChart product. I took that implementation to Connectathon and tested with other software vendors. This was my first foray into IHE, and I was hooked. I have stayed engaged in IHE in one form or another ever since, through six employers and supporting at least seven organizations. My roles have included software engineer, development lead, HIMSS Showcase Technical Project Manager, strategic advisor, and IHE Ambassador.

Over the years I have watched the Connectathon tooling improve greatly. The first tool used was Laverne Palmer and a giant pad of paper, but this is legend I’ve only heard others speak of, as it was prior to my involvement. Sometime after that came the Kudu tool; those of us participating learned to work with its quirks, and at the end of the day it did its job well enough (which is not simple!). Kudu’s successor was Gazelle, which is still in use today. Gazelle was a redesign of Kudu with a much improved architecture, allowing greater opportunity for continued improvement over time. Automation increased year after year, enabling vendor testers to execute their tests and receive feedback and results faster and more accurately. The interaction with test monitors through the Gazelle tool also improved greatly over time, providing further efficiencies.

Another change I have seen over the years is the increase in the number of middleware vendors bringing their products to the Connectathon. In earlier Connectathons these vendors were not encouraged, or even not allowed. An end-user experience with every product was a requirement of participation; in other words, a user interface of some sort was required to demonstrate that interoperability was happening. As the market changed and middleware, or integrator systems, became more prevalent, they were allowed to attend. This was a natural and expected progression as the market began to specialize in specific interoperability areas.

Some things have not changed at Connectathon over the years, and one of those is the culture of extreme collaboration and cooperation among the participants. Participants who compete in the open market can be found working together on the Connectathon floor to solve interoperability problems. This culture was established early on, in the recognition that such an approach would result in higher levels of innovation and creativity. By breaking down the competitive walls, ideas could be shared more freely (and let’s face it, given the craftsmanship nature of coding, a copy of another company’s source code very likely wouldn’t do you any good unless you got it in totality, which would never happen at an event like this). This culture still exists today. Many of the same faces appear each year at the IHE Connectathon, but new faces also show up to take advantage of the collaboration and interoperability testing that happens there.

In terms of successes and challenges of interoperability over the past year, the answer is a little more difficult. There isn’t any blockbuster news like The Most Important Interoperability Story of 2016; however, an expected ONC rule, in the works over the course of 2019, will soon be released in support of much greater levels of patient data access. This is significant! This is how we will continue to innovate and create. We must provide patients with the opportunity to work with their data and gain greater control of their healthcare data. Dave DeBronkart (“ePatient Dave”) discusses this when he speaks on “paternalism” and how it prevents patients from having a say and a positive influence in their own health outcomes. So while this is not a specific success marker in 2019, a lot of groundwork has been laid in support of the regulation expected to be released within a week or so.

A challenge in 2019 is the lack of organic adoption of interoperability. The driver still seems to be federal incentives. Perhaps that’s how it will always be, but I don’t think that’s ok. We have to find a way to tip the scales so that software companies will innovate to provide value-add services to patients such as those that exist in other markets. We do need continued help from our federal government (here in the US), but once we have the initial push (via upcoming regulation) I suspect that creativity and opportunity will eventually take over and we’ll see interesting and effective solutions on the market that allow patients to engage with and have better control over their data, and thus better understanding of what their healthcare options are for their specific situation.

All in all I think healthcare IT and interoperability continues to move in the right direction. I am thankful to be a part of this industry despite its challenges and sluggishness compared to other industries. We ARE moving forward. We ARE getting better. We WILL succeed. Let’s keep pushing for access to our data!

New Beginnings!

I started a new adventure earlier this week. I have finished my time at IQVIA and have joined OneRecord to lead technology efforts. I had a good run at IQVIA over the past 2.5 years, where I led teams developing various aspects of the clinical data registry business. I am deeply appreciative of the opportunity I was provided and the experience gained. I am also thankful for the relationships created and for the people I had the good fortune to interact with, both inside the company and among clients.

Going forward, my focus will change from building and implementing clinical registry solutions leveraged by providers to figuring out how best to help patients get access to their healthcare data. As patients, we all should have access to our data; unfortunately, there are still many barriers to this. So let’s see what we can do to overcome those barriers through technology solutions.

In my new role I will lead all technology aspects of the company, and I will continue to stay engaged in healthcare standards work through IHE, participating in the PCC and QRPH domains. I will also track HL7 and may get more engaged in some activities there. While my more immediate focus will be on pulling in real data via FHIR and through national interoperability initiatives, I will also have an eye on what’s going on in the mobile health apps space. There is considerable opportunity as device manufacturers get better at building products that communicate quality data to device gateways, and as app developers build in the ability to read from those gateways and continue to refine the quality of the user interface views on top of patient data.

So here’s to the future, to change, to new beginnings, and to figuring out how to continue to drive forward the release of data to and for patients. We are all patients, and we all have a right to our data. So let’s march out into the world and free our healthcare data!

What Standards Are in Your Toolbag?

It is important to never forget that a standard is a tool used to attain a certain end. It is easy, and sometimes selfishly rewarding, to get caught up in situations where developing new standards-based solutions seems like the right idea but is in fact putting the ladder up against the wrong wall for certain business cases. One must carefully choose between a path that explores the effectiveness of a new standard and a path that meets the customer’s business need. Sometimes these paths converge, but they must be taken with care, and each situation has a different tolerance level. The truly golden moments are those where the standard actually helps solve the business problem better than a non-standard approach can. That’s what we call a win-win!

Standards exist along a very lengthy spectrum and in varying degrees of maturity and usefulness. They have very different meanings and provide different levels of value to different stakeholders. Some standards we take for granted today, such as TCP or XML, were once bleeding edge, believe it or not, and some developer somewhere was exploring what it might be like to leverage TCP to move data between systems. Tim Berners-Lee was one such developer and used TCP/IP to invent the World Wide Web. This was quite revolutionary at the time, but something we certainly take for granted today.

There are most certainly appropriate times to use new standards to solve business problems, but as developers we must view them as tools in our toolbag, not the end-all-be-all way to solve a problem. I have often seen customers become hyper-focused on a particular standard being the single best solution to their problem; then, once the layers of reality are peeled back and the complexity of the situation becomes evident, all manner of doubt ensues about standards being good for solving real-world problems. There is a pattern here, perhaps Gartner’s Hype Cycle, and it is the Trough of Disillusionment that these customers find themselves in. It is our job then, as developers and as evangelists of standards, to guide customers in and out of the issues to achieve a successful end for all parties involved.

So how do we prevent this? We educate, educate, educate. We must educate at the right level, in the right time, and in the right way.

Right Level

I don’t do plumbing, but I know a bit about it. If I need to hire a plumber, I know enough to talk to one. This is important because it ensures the contract I have with the plumber is understood by both me and the plumber. Expectations of the work being performed and the expected results are understood well enough for the project to be judged a success (or not!). This ensures I do not pay for something I do not understand at least to a minimal degree, and it also holds the plumber accountable for doing a good job. What is important from the plumber’s viewpoint is to set the right expectations with the recipient of the services (me) as to the benefit of the service being provided. If the recipient expects too much, the plumber will come out on the losing end of the arrangement.

Right Time

People are receptive to information in different ways, at different times. We must think about when it is appropriate to deliver information so that people are receptive to it and understanding persists. The beginning of a project is usually a good time: ideas are being shared, requirements are being clarified, and it is usually not too late to make changes to scope. Bad news should be delivered early and often, along with a mitigation plan, and we should assume the customer probably does not understand the complexities of successfully delivering interoperability solutions (just as I may not understand the complexities of a complex plumbing project).

Right Way

The right people must provide education about the effective use of standards for a particular project, in the right way. The right demeanor (not insulting or disrespectful, and not too presumptuous), the right amount of patience, and a passion for learning are all good characteristics to have in such individuals, and they are generally found in people enthusiastic about their role as an educator of standards. It’s not just about the content being delivered but how that content is delivered. It’s about building rapport with the “students,” about gaining their respect so that instruction is well received. This becomes quite an art when the time available to build such trust is so often limited.

Standards are created to bring order to a chaotic world. They must be implemented appropriately and in situations that bring value to most or all parties involved. If there is a cost to bear as an early adopter, that cost must be understood and planned for. The big challenge is that cost must not be allowed to prevent innovation that serves the longer-term success of the customer and organization alike, and this requires gaining buy-in from the sponsors of the work.

Standards implementation in health IT is often about casting a vision, and about educating the middle ground between that exciting vision and the ground-level developer work that happens. It’s that in-between ground that helps customers see the benefits of the vision without having to get too deep into the mess (and fun!) of writing code to leverage those standards.

So what is your experience using health IT standards to solve real-world business problems and how have you worked with your customers to overcome issues around early adoption?

What’s in a Name?

Well, for starters, sometimes everything. Names are highly important. They are the mechanism we use to remember our friends and family, as well as our adversaries. Names aid us in navigating the world, providing direction as to how to understand what’s underneath the surface. Names can also lead us astray, as in “don’t judge a book by its cover.” I feel it’s time to reconsider the name of this blog in the interest of giving the world a better understanding of what’s happening here.

Welcome to Interop Futures! Why Interop Futures? It’s what I do. I have the good fortune to spend a good amount of my time thinking about the future of interoperability, specifically in healthcare. I apply this thinking to ideas at my full time job. I apply this thinking in my IHE participation. I apply this thinking to other non-career related areas of my life as well.

I have a few specific reasons for changing the name:

Better Industry Recognition

My old blog name – {ts_blog} – was rather poetic (as a respected colleague recently mentioned to me – thanks Gila!), but it was quite vague as well. Unless you read the tag line or titles of some of the posts it was hard to really understand what this web presence was all about. Now we have more clarity about what’s going on here.

Alignment with Career and Industry Involvement

I have been trading in interop futures for the better part of a decade now, going back to 2007 when I first got involved in IHE. I joined a company and wrote a profile on antepartum care, and then implemented that profile in our EHR solution. I left that company about a year later because funding was cut for “IHE work.” And so began my career in interoperability futures. I have worked for three other companies since, with a handful of independent contracts in between and almost all of my work involves thinking about and implementing solutions around the future of healthcare IT, about future-proofing those systems, about developing ideas on how to bridge the old to the new, and so on.

Retains a Certain Amount of Flexibility

“Interop Futures” does not have “healthcare” in the name. That allows a certain amount of flexibility in what topics I choose to write about. Many of the standards underlying healthcare IT solutions are domain-agnostic: SOAP and REST by themselves are not healthcare specific, but when used in XDS and FHIR they certainly become so.

It’s a Cool Name!

What’s not to like about the name “Interop Futures”? It sounds like a cool movie, or a crystal ball that allows you to see what’s coming. In reality there is no crystal ball, and I certainly have no movie contracts. Even still, thinking about where health IT is going, and how we might get there, is intriguing! And as we must not forget the past lest we be condemned to repeat it, the writing you find here will at times focus on the past and the present to build toward thinking about what is coming in the future!

IHE on FHIR: History, Development, Implementation

The health IT industry is awash in FHIR discussions and opportunities. It’s on everyone’s topic boards, it’s being pitched at all of the health IT conferences, it’s being discussed and used time and again in SDOs, apps are being developed, and initiatives are being born. And it’s possibly near a tipping point of success.

HL7/IHE History around FHIR

IHE and HL7 have a long history, going back to the beginning of IHE in 1998 (HL7 was already in existence). There have always been collaborators across and between the two organizations; this is, effectively, how IHE began. A bunch of health IT standards geeks were seeking a new way to provide interoperability guidance to the world, and thus IHE was born. So it’s not surprising that the pattern has continued into the era of FHIR. It started with ad-hoc liaisons between the organizations, taking a set of FHIR Resources into an IHE Profile, or taking requirements from an IHE Profile back to HL7 to create a FHIR Resource or modify an existing one. The value of FHIR as a market disruptor was quickly recognized, and as such IHE and HL7 began to explore the idea of formal collaboration more seriously. These organizations are big ships, and they turn slowly, but over the past six years they seem to be turning in the right direction.

In 2013 HL7 and IHE drafted and signed a Statement of Understanding to identify many areas of collaboration between the two organizations. While this SOU did not make specific mention of FHIR, I strongly suspect FHIR was a driving factor in the agreement.

In 2014 the IHE-HL7 Coordination Committee and the Healthcare Standards Integration (HSI) Workgroup were both created. The former in IHE, the latter in HL7. These were intended to be “sister groups” to work with each other helping to improve collaboration for both organizations, leading to greater efficiencies for all involved. These groups languished a bit and never really got enough traction to continue in the way they were originally envisioned.

A few years later, in 2017, IHE created an IHE FHIR Workgroup that continues to meet today. This workgroup is focused on how to include FHIR in IHE Profiles and has documented very detailed guidance on the IHE wiki. It also tracks IHE Profiles using FHIR, cross-referencing across IHE Domains. The workgroup has produced materials and guidance that are very helpful in bringing IHE and FHIR together.

In 2018 Project Gemini was launched, named after the space program of years ago. Its goal is to identify and bring to market pilot project opportunities based on FHIR. It will leverage and potentially create specifications, participate in testing activities, and seek demonstration opportunities. Basically, its job is to tee up FHIR-based projects so they can be launched into the outer space of the health IT ecosystem. Interoperability is often big, expensive, and scary to implementers and stakeholders – similar to the challenges that NASA’s Project Gemini faced.

We are on the cusp of a new era in health IT with the advent of FHIR. While FHIR will not be a silver bullet, it does provide a great opportunity to be disruptive, in a good way.

IHE PCC and QRPH – Profiles on FHIR

The PCC and QRPH domains have been working on FHIR-based IHE Profiles since 2015. PCC has a total of 9 Profiles that include FHIR, and 1 National Extension, and is working on updating 1 of those Profiles this development cycle to include additional FHIR Resources. QRPH has a total of 4 Profiles leveraging FHIR, with 1 new FHIR-based Profile in the works for this development cycle.

One observation we have made within PCC, and which is also being applied in other domains, is the importance of retaining backwards compatibility for our implementers by adding FHIR as an option on the menu. It is not a wholesale delete-the-old, bring-in-the-new situation. In fact, if we followed that approach, standards would likely never be implemented en masse, as they would always be changing. So an IHE Profile that uses CDA today and is under consideration for FHIR will be assessed by the IHE committee to determine whether it should add FHIR as another menu item, or whether a more drastic measure should be taken to deprecate the “old” technology.

This will obviously vary based on a number of factors, and that’s a topic for another post, but the point is that the default goal when improving existing IHE Profiles with FHIR is not to replace everything in that Profile with FHIR. Rather, it is to assess each situation and make a wise choice based on what’s best for all involved: vendor implementers, stakeholders (patients and providers), testing bodies, governments, and standards bodies. This does not mean that everyone is happy all the time, but all angles must be considered and consensus is desired.

Implementation of IHE and FHIR

FHIR is being implemented in various ways across the industry. There are two very significant initiatives happening right now that are well positioned to launch FHIR into the outer space of health IT: CommonWell Health Alliance and Carequality. Both initiatives have been around for roughly the same amount of time (CommonWell since 2013, Carequality since 2015) and focus on the same general mission of improving data flow in support of better patient health outcomes, but they take different approaches to get there. CommonWell provides a service that members leverage to query and retrieve data, whereas Carequality provides a framework, including a governance model, to do this.

These are fundamentally different approaches, but both are achieving great success. CommonWell touts upwards of 11,000 healthcare provider sites connected to its network. Carequality touts 1,700 hospitals and 40,000 clinics leveraging its governance model to exchange data. These are big numbers, and both organizations are on a trajectory of increasing connectivity. CommonWell already has FHIR fully embedded as an option in its platform, with the ability for a member to leverage only REST-based connectivity (most, if not all, of which is based on FHIR) to fully participate in the Alliance’s network. Carequality currently has an open call for participation in newly forming FHIR Technical and Policy Workgroups to include FHIR as a main-line offering in its specifications.
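To make “REST-based connectivity” a bit more concrete: a FHIR search is essentially a parameterized HTTP GET against a server’s base URL. The sketch below builds such a request using only the Python standard library; the server base URL is hypothetical, and no network call is made.

```python
# Sketch of a FHIR RESTful search request of the kind described above.
# The base URL is hypothetical; we only construct the URL a client
# would issue with GET, rather than calling a real server.
from urllib.parse import urlencode

BASE = "https://fhir.example.org/R4"  # hypothetical FHIR server base

def patient_search_url(family: str, birthdate: str) -> str:
    """Build a FHIR Patient search URL: GET [base]/Patient?param=value&..."""
    params = urlencode({"family": family, "birthdate": birthdate})
    return f"{BASE}/Patient?{params}"

print(patient_search_url("Smith", "1970-01-01"))
```

A client would issue this URL as a GET and receive back a FHIR Bundle of matching Patient resources.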

Given that both of these initiatives included IHE as part of their original implementation requirements, that both are now including FHIR, and that both have significant implementation numbers, we have an exceptional opportunity to advance interoperability in ways that we have not been able to previously.


The world of interoperability is alive and well, despite constant setbacks (due mostly to non-technical things), and thanks in part to IHE and FHIR. Convergence is happening, both on the SDO front as well as in the implementation world. And I fully expect that convergence to continue.

IHE PCC Domain Meetings – Fall 2018

The fall PCC, QRPH, and ITI technical committee meetings were held in Oak Brook in mid-November, as usual. Unfortunately I was not able to attend the October planning committee meetings – neither in person nor remotely – due to other commitments; however, I was able to catch up on what is going on by attending the November meetings in person. I attended mostly PCC, and sat in on a few QRPH sessions. Here is a quick update on PCC activities.

Profile Work

In PCC we are going to have a quiet year, with only one profile work item, and that item is an update to an existing profile: Dynamic Care Team Management (DCTM), to include some additional FHIR-based guidance. Tune in for the calls being scheduled now if you want to learn more.

As I understand it, there has also been some discussion of CDA harmonization, but there are no formal CDA harmonization work efforts on PCC’s plate for this upcoming cycle. This is a topic that has been discussed in previous years, but only mediocre progress has been made. Perhaps with efforts like Project Gemini there is hope of re-igniting some of this work.

Change Proposal Work

PCC received a sizeable number of CPs last year (2017), and has been slowly working through processing them. This work will continue, with the goal of closing out all of these CPs in the next work cycle.

Based on a quick count, here is our CP submission by year:

Year Number of CPs Submitted
2018 6
2017 37
2016 9
2015 12
2014 18
2013 43
2012 20
2011 30
2010 15
2009 5
<= 2008 77

While 2017 was not our largest CP submission year, it is still significant. We have 13 CPs that remain in “Assigned” status, which means someone is reviewing and finalizing them for inclusion in a ballot. If those CPs make it to ballot and pass, they will be incorporated into the appropriate published document (e.g., Technical Framework, Profile, National Extension, Appendix).

PCC Committee Structure

The PCC Planning and PCC Technical committees have decided to combine into a single planning/technical committee, at least for the next work cycle. This is to streamline the work of PCC given the lower number of participants. Fortunately we do have three co-chairs (one of whom is acting more as a “co-chair emeritus,” providing expert guidance to our two new co-chairs). This is completely within the bounds of IHE Governance, so no issues on that front. In future years we may split back into separate planning and technical committees if appropriate.

PCC Publication Cycle

We also discussed, and are interested in, the idea of supporting a year-round profile publication process. This has been discussed in previous years, but due to the high volume of profile publication (and other reasons) it has not yet been achievable. ITI is also interested in this idea and has started a wiki page with a great deal of detail. I encourage you to read it and comment on the wiki page if you have additional ideas to include. As part of this effort IHE may also need to look at its back-end publication processes and explore opportunities to move away from PDF-based publication.


PCC has its work items outlined for this upcoming cycle, and an opportunity to explore what a new or different publication cycle might look like. While there are not a great number of profiles being published this year (only one, in fact, and it’s “just an update” at that), this doesn’t necessarily signify distress for PCC; rather, it could indicate the market is still catching up with the many profiles PCC has published over the past several years.

The Pillars of Health IT Standards – Part 1

Health IT standards can be broken down into what I call pillars of interoperability. These pillars are Content, Transport, and Workflow. Content standards aid in clearly communicating content shared between various applications. They provide a medium between disparate health IT systems to speak the same language. They provide clues, sometimes very specific, sometimes vague, as to what data lives inside of various content structures. Transport standards describe ways that content will be sent to, received from, or otherwise made available for consumption between two or more applications. Workflow standards stitch together the web of interaction points across the health IT ecosystem, leveraging Content and Transport standards to describe what an “end-to-end” flow of healthcare data looks like for any given use case.

This will be a three-part blog post series breaking down each of these concepts. This post will focus on Content, followed by posts on Transport and finally Workflow.


Content

Content is very important when one system needs to communicate relevant information to another system so it can do something. Understanding the content is vital for the system or end user to be able to take action. If the information is improperly understood, catastrophe could follow, and rather easily. Let’s take the example of units of measure on a medication. Milligrams is a common unit of measure; micrograms is perhaps not (at least in my non-clinical experience). 200 milligrams is a common dosage of ibuprofen, but 200,000 micrograms is not. The astute reader will note these are equivalent values. Suppose that a health IT system creating content to be shared with another system uses the unit of measure of micrograms for ibuprofen and documents a dosage of 200,000 (this could be entered as 200 milligrams by the end user, but perhaps stored as micrograms in the database). A health IT system consuming that content could accidentally misinterpret the value as 200,000 milligrams, potentially resulting in a fatal situation for the patient.

While the above example may seem far-fetched, this is a reality that happens all too often and there has been much analysis and research done in the area of accidental medication overdose. The proper creation and consumption of content is vitally important (quite literally!) to a positive health outcome for a patient. Content creators must ensure they properly represent the data to be shared, and content consumers must ensure they properly understand the data to be consumed.
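The unit-of-measure pitfall above can be sketched in code. This is a minimal illustration only; the conversion table and function names are invented, and a real system should use a vetted, UCUM-based units library rather than a hand-rolled map.

```python
# Minimal sketch of normalizing a dosage to a canonical unit before
# interpreting it. The unit table and function are illustrative only.

# Micrograms per one unit of each supported unit of measure.
_MICROGRAMS_PER_UNIT = {
    "ug": 1,         # micrograms
    "mg": 1_000,     # milligrams
    "g": 1_000_000,  # grams
}

def to_micrograms(value: float, unit: str) -> float:
    """Convert a dosage to micrograms, failing loudly on unknown units."""
    try:
        return value * _MICROGRAMS_PER_UNIT[unit]
    except KeyError:
        raise ValueError(f"Unrecognized unit: {unit!r}")

# 200 mg and 200,000 ug are the same dose: comparing raw numbers is
# unsafe, but comparing normalized values is not.
assert to_micrograms(200, "mg") == to_micrograms(200_000, "ug")
```

The point is that a consumer should never interpret the number without also interpreting (and normalizing) its unit.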

Content can be broken down into several different components: structures, state, reference data, and data mapping. Let’s take a look a each of these areas.


Structures

The structures used in content interoperability vary from base-level standards to implementations of those standard structures that meet a specific use case. The base-level standards are at the “schema” level, which defines the valid data types, how many times a particular element may be repeated (or not), and so on. The implementation of those standards to meet a given use case is at the “schematron” level. ISO Schematron is an ISO standard that has been in use for several years to validate the conformance of an XML implementation against a given specification.

This idea of base structure versus what goes inside of that structure is important as it allows for multiple levels of standards development, and enables profiling standards to create specific implementation guidance for specific use cases. Through this approach the exchange of health IT information is able to be effectively exchanged in a changing market of available standards and systems.
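The two levels of validation described above can be illustrated with a toy example. Real implementations would use an XML Schema for the structural level and ISO Schematron (typically XSLT-based tooling) for the rule level; the standard-library sketch below, with its made-up document and rules, only demonstrates the distinction between the two.

```python
# Sketch of the two validation levels: a structural ("schema"-style)
# check versus a use-case-specific ("schematron"-style) rule. The toy
# document and the rules are illustrative only.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<medication>"
    "<dose value='200' unit='mg'/>"
    "</medication>"
)

def structure_ok(root: ET.Element) -> bool:
    """'Schema' level: the expected elements exist in the expected shape."""
    return root.tag == "medication" and root.find("dose") is not None

def rule_ok(root: ET.Element) -> bool:
    """'Schematron' level: a content assertion specific to our use case."""
    dose = root.find("dose")
    return dose is not None and dose.get("unit") in {"mg", "ug"}

assert structure_ok(doc) and rule_ok(doc)
```

A document can pass the structural check while failing the rule check (e.g., a dose in an unexpected unit), which is exactly why both levels exist.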


State

Content may exist as transient or as persistent. Sometimes the lines are blurred, where transient data may later be repurposed as persistent, or vice versa! Workflow (discussed in a forthcoming post) helps to address this issue. State, in this context, is not quite the same as status, although they share some characteristics. State is more distinct: a set of content is “in” a state, whereas it “has” a status. So content that is in a document state may be in an approved status. The difference is subtle, but very relevant. Health IT standards rely on the use of states to provide some sense of stability around what to expect of the content standardized within.

Reference Data

Reference data is a special kind of data used to classify other data; it helps to describe the purpose and meaning of the data. Reference data is found in the form of standardized vocabularies such as SNOMED and LOINC. It is commonly required in master data management solutions to link data across the spectrum, whether that data be patients, providers, clinical concepts, financial transactions, or any number of other master data concepts that can be leveraged to tell a story about the business and inform the decisions the organization needs to make. Reference data can also be used in inferencing solutions, where probable conclusions are developed based on the presence of certain other specific data. Reference data is an extension of schematron – if schematron defines what general type of data shall exist in a given content structure, then reference data defines the options for specific content within those assertions.
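As a minimal sketch of reference data in use, a value set can constrain which codes a coded element may carry. The codes below are illustrative only, not taken from a published SNOMED or LOINC release; a real system would load them from a terminology service.

```python
# A tiny hypothetical value set: schematron asserts *that* a coded element
# exists, while reference data defines *which* codes are acceptable in it.
SMOKING_STATUS_VALUE_SET = {
    "449868002": "Current every day smoker",   # SNOMED-style codes,
    "8517006": "Former smoker",                # shown here for
    "266919005": "Never smoked tobacco",       # illustration only
}

def code_is_valid(code: str) -> bool:
    """Return True when the code is a member of the value set."""
    return code in SMOKING_STATUS_VALUE_SET

print(code_is_valid("8517006"), code_is_valid("12345"))  # True False
```

Membership lookups like this are what let a consuming system reject a structurally valid document whose codes carry no agreed-upon meaning.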

Data Mapping

Data mapping is the act of identifying the likeness of one data concept to another, and providing a persistent and reusable link between them. This is the glue that enables systems to exchange data. Data mapping leverages standard structures and reference data to figure out what needs to be mapped where. A particular set of inbound source content may be represented in one industry standard and need to be mapped to an internal data model. If the source data is already linked to an industry-standard reference data set (i.e., vocabulary), then both the structure and the specific implementation and codification of the data elements within that structure can be mapped into the internal data model with relative ease, provided the internal system has tooling in place to support such content and reference data standards. That is a long-winded way of saying that content standards and terminology standards go a long way toward solving interoperability problems when implemented properly.
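The persistent, reusable link at the heart of data mapping can be sketched as a simple lookup table from (code system, code) pairs to internal concepts. The internal identifiers below are invented for illustration; the LOINC-style codes are shown only as examples of an industry-standard source vocabulary.

```python
# A minimal data-mapping sketch: a reusable table links inbound coded
# elements to a hypothetical internal model. Internal names are invented.
SOURCE_TO_INTERNAL = {
    ("LOINC", "8480-6"): "internal.bp.systolic",
    ("LOINC", "8462-4"): "internal.bp.diastolic",
}

def map_element(code_system, code):
    """Return the internal concept for an inbound coded element,
    or None when no mapping exists and human review is needed."""
    return SOURCE_TO_INTERNAL.get((code_system, code))

print(map_element("LOINC", "8480-6"))  # internal.bp.systolic
print(map_element("LOINC", "9999-9"))  # None
```

Returning None for unmapped codes, rather than guessing, is the design choice that keeps mapping errors visible instead of silently propagating them into the internal model.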


Content is vitally important – I would argue the most important aspect of health IT interoperability. If the content is not understood by any system in a health data exchange, a number of different problems present themselves. And sometimes miscommunication is worse than no communication at all if the content is misunderstood in a way that will bring harm to the patient. As the principle commonly associated with the Hippocratic Oath states: “First, do no harm.” A content creator must take extreme care to ensure content is properly represented, and a content consumer must equally take care that the content is consumed in the way the creator intended.

Internal and External Content Model Standards

A conversation came up recently with someone on the use of and adherence to healthcare industry standards inside of a particular robust solution in healthcare. I have seen many an organization model its internal standard data model after an industry-wide data model such as HL7 CDA or HL7 FHIR. What is often missed is how important it is for an organization to realize that it will always need its own “flavor” of that standard for internal use.

The idea of following the industry standard is very well-intentioned, but it is also extremely difficult to implement and maintain. It is also not always the best choice: industry-wide standards are intended to handle use cases across many different organizations (hence the name “industry-wide”), and while they may meet the specific needs of the organization desiring to implement the standard, they may also include additional “baggage” that is not helpful to the organization. Conversely, they may require extensions or adaptations to the model to fully support the organization’s specific use cases. The effort required to implement the content model under either of these considerations can become burdensome.

We must realize that industry standards are quite important for driving health IT applications toward common ways to exchange and communicate data, but they must operate at the guidance level, and not be the end-all-be-all way to represent data that needs to be shared. This is now being realized in the data warehousing market, as ‘schema-on-read’ is becoming a more popular approach to analytics on large data sets than ‘schema-on-write.’ Optimism about ‘one data model to rule them all’ is shrinking. A good example of this would be a solution that leverages metadata for the analytical data points rather than the concrete source data structures. This allows an application to focus on writing good queries, and lets the metadata model deal with the underlying differences in the source data model. It provides an effective layer of abstraction over the source data model, and as long as that abstraction layer properly maps to the source data model, we have an effective ‘schema-on-read’ solution. This sort of approach is becoming more and more necessary as the rate of change in both technology and healthcare IT continues to increase.
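The metadata-abstraction idea can be sketched concretely: one analytical question is resolved at read time against two differently shaped source records, so the query never touches the concrete source structures. All of the field names below are invented for the sake of the example.

```python
# A 'schema-on-read' sketch: a metadata layer maps logical field names
# onto two differently shaped source records. Field names are invented.
source_a = {"pt_dob": "1980-04-02", "sys_bp": 128}        # source system A
source_b = {"birthDate": "1975-11-30", "systolic": 141}   # source system B

# Metadata: where each logical field lives in each source model.
METADATA = {
    "source_a": {"birth_date": "pt_dob", "systolic_bp": "sys_bp"},
    "source_b": {"birth_date": "birthDate", "systolic_bp": "systolic"},
}

def read(record, source_name, logical_field):
    """Resolve a logical field through the metadata at read time."""
    return record[METADATA[source_name][logical_field]]

print(read(source_a, "source_a", "systolic_bp"))  # 128
print(read(source_b, "source_b", "systolic_bp"))  # 141
```

Adding a new source system means adding one metadata entry, not rewriting every query, which is the practical payoff of schema-on-read over a single rigid model.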

Internal standards are more manageable. Organizations can design and implement a content model for a set of use cases in a reasonable time frame with a reasonable amount of resources. This model may even be based on an industry-standard model, but it must not BE the industry-standard model! What I mean is that expectations must be set clearly from the outset that the model WILL change over time as the organization changes, as business opportunities change, as laws change, and so on. When deciding what the internal model is to be, it must be understood that the model is for that organization only and that mappings to and from external models shall be provided as needed, while looking for opportunities for reuse across specific data elements or data element groups.

What this all drives toward is treating interoperability as a first-class citizen in an organization’s solution set. The content model is important, but the content model is designed for internal usage, with mappings to external systems’ content models. In addition to the content model, an organization must also include its implementation approach in its overall strategy to ensure that external systems can be mapped to the internal content model effectively (on time, under budget, and meeting the business needs). A great strategy without an execution plan will die on the vine.

In summary, this post attempts to clarify the difference between the idea of an external data standard and that of an internal data standard, and the overlap between the two. Interoperability is not a clear-cut landscape. Interoperability is hard. We must realize and accept that fact and look for ways to work effectively within it to drive toward higher-quality communication between health IT systems, leading to improved patient health outcomes.

A Changing Landscape in Healthcare IT Software

For many of the years that I have been involved in Healthcare IT, I have had the good fortune of working for EHR software vendor organizations. It was very exciting to develop bleeding-edge technology supporting interoperability standards and to see the benefits realized in real-world use. We broke through many barriers in the healthcare field, although we certainly did not reach the ceiling. However, as in many market sectors, times change, and with such change innovation and new ideas take the front row ahead of what has become solid and stable. EHR solutions are now mostly considered to be the foundation for the industry, providing a solid framework upon which more complete solutions are being delivered in the interest of bettering patient health outcomes.

In the past few years, many niche applications have arisen to address very specific problem areas. This is due in part to a natural industry shift requiring fresh perspectives on older problems. It is also due in part (in the US) to government incentive programs driving forward a need to improve patient health outcomes, such as ACO, PCMH, and MACRA (QPP).

As technology solutions mature, they naturally become more difficult to adapt to changing conditions. The more components a solution has, the more effort it takes to update those components while retaining backwards compatibility, and the harder it is to craft deployment packages that do not disrupt live production systems in significant ways. Managing a technology solution over the long term definitely comes with its challenges as well as its benefits. Newcomers to the market are able to adapt quickly to changing requirements, freely building innovative and creative solutions that are not so dependent on existing components and on choices made earlier in development by the more established solutions. These newer solutions can be plugged into the older ones to address gaps in functionality that might not otherwise be addressed. This is the nature of software engineering and development as it has progressed since its inception in the middle of the last century.

The other driver of new technology solutions in healthcare is federal incentive programs placing a strong emphasis on improving patient health outcomes. Research is beginning to show that improving health outcomes for patients across a population takes more than just using a certified EHR system or following the medical practice guidelines from the appropriate medical college. Rather, it involves a much more complete and holistic approach to medical practice that looks not only at the clinical facts of a patient, but engages the patient in a relational way, understanding the impact that social determinants have on that particular patient. Understanding the mental health of the patient is also quite important, and it is often very difficult to properly diagnose and treat for a number of good reasons. To address these sorts of issues, doctors must find ways to meet patients where they are today, using tools and approaches that are natural to patients. This means making use of apps developed against popular platforms that interact with social media, are intuitive to use, and integrate with existing systems of all kinds to provide seamless access to and management of the patient’s health data and care from both the provider and patient perspectives.

This IS the new face of healthcare: the combination of the foundational EHR vendors’ existing solutions with newer solutions that focus on very specific problems that need solving. And really, this is not much different from how technology has always been managed. As solutions age, various components of those solutions are wrapped with interfaces that abstract them away, providing a way for newer, more efficient, and often more socially accepted solutions to interact with the older technologies. By the same token, wisdom is valued more than mere intelligence or trendiness. That wisdom is grounded in the foundational participants in healthcare IT who continue to drive forward standards-based development efforts, focusing on the goal shared between patients, providers, and payers of improved patient health outcomes for all.