What Standards Are in Your Toolbag?

It is important never to forget that a standard is a tool used to attain a certain end. It is easy, and sometimes selfishly rewarding, to get caught up in situations where developing new standards-based solutions seems like the right idea but is, in fact, putting the ladder up against the wrong wall for the business case at hand. One must carefully choose between a path that explores the effectiveness of a new standard and a path that meets the customer’s business need. Sometimes these paths converge, but they must be taken with care, and each situation has a different tolerance level. The truly golden moments are those where the standard actually helps to solve the business problem better than a non-standard approach can. That’s what we call a win-win!

Standards exist along a very lengthy spectrum and in varying degrees of maturity and usefulness. They have very different meanings and provide different levels of value to different stakeholders. We take some standards for granted today, such as TCP or XML, but those standards were once bleeding edge, believe it or not, and some developer somewhere was exploring what it might be like to leverage TCP to move data between systems. Tim Berners-Lee was one such developer, and he used TCP/IP to invent the World Wide Web. That was quite revolutionary at the time, but it is something we certainly take for granted today.

There are most certainly appropriate times to use new standards to solve business problems, but as developers, we must view them as tools in our toolbag, not the end-all-be-all way to solve a problem. I have often seen customers become hyper-focused on a particular standard as the single best solution to their problem; then, once the layers of reality are peeled back and the complexity of the situation becomes evident, all manner of doubt ensues about whether standards are good for solving real-world problems at all. There is a pattern here, and it looks a lot like Gartner’s Hype Cycle: it is the Trough of Disillusionment that these customers find themselves in. It is our job, then, as developers and as evangelists of standards, to guide customers through these issues to a successful end for all parties involved.

So how do we prevent this? We educate, educate, educate. We must educate at the right level, at the right time, and in the right way.

Right Level

I don’t do plumbing, but I know a bit about plumbing. If I need to hire a plumber, I know enough about plumbing to talk to one. This is important because it ensures the contract I have with the plumber is understood both by me and by the plumber. The expectations of the work being performed and of the results are understood well enough for the project to be judged a success (or not!). This ensures I do not pay for something I understand only minimally, and it also holds the plumber accountable for doing a good job. What is important from the plumber’s viewpoint is to set the right expectations with the recipient of the services (me) as to the benefit of the service being provided. If the recipient of the service expects too much, the plumber will come out on the losing end of the arrangement.

Right Time

People are receptive to information in different ways and at different times. We must think about when it is appropriate to deliver information to people so that they are receptive to it and so that their understanding will persist. The beginning of a project is usually a good time: ideas are being shared, requirements are being clarified, and it is usually not too late to make changes to scope. Bad news should be delivered early and often, along with a mitigation plan, and we should assume that the customer probably does not understand the complexities of successfully delivering interoperability solutions (just as I may not understand the complexities of a complex plumbing project).

Right Way

The right people must provide education about the effective use of standards for a particular project in the right way. The right demeanor (not insulting or disrespectful, and not presumptuous), the right amount of patience, and a passion for learning are all good characteristics in such individuals, and they are generally found in people who are enthusiastic about their role as educators of standards. It’s not just about the content being delivered but how that content is delivered. It’s about building a rapport with the “students,” about gaining their respect so that any instruction is well received. This becomes quite an art when the amount of time available to build such trust is so often limited.

Standards are created to bring order to a chaotic world. They must be implemented appropriately and in situations that bring value to most or all parties involved. If there is a cost to bear as an early adopter, that cost must be understood and planned for. The big challenge is that cost must also not be allowed to prevent innovation that serves the longer-term success of the customer and organization alike, and this requires gaining buy-in from the sponsors of the work.

Standards implementation in health IT is often about vision casting, and about educating in the middle ground between the exciting vision and the ground-level developer work that happens. It’s that in-between ground that helps customers see the benefits of the vision without having to get too deep into the mess (and fun!) of writing code to leverage those standards.

So what is your experience using health IT standards to solve real-world business problems and how have you worked with your customers to overcome issues around early adoption?

What’s in a Name?

Well, for starters, sometimes everything. Names are highly important. They are the mechanism we use to remember our friends and family, as well as our adversaries. Names aid us in navigating the world and provide direction as to how to understand what’s underneath the surface. Names can also lead us astray, as in “don’t judge a book by its cover.” I feel it’s time to reconsider the name of this blog in the interest of giving the world a better understanding of what’s happening here.

Welcome to Interop Futures! Why Interop Futures? It’s what I do. I have the good fortune to spend a good amount of my time thinking about the future of interoperability, specifically in healthcare. I apply this thinking to ideas at my full time job. I apply this thinking in my IHE participation. I apply this thinking to other non-career related areas of my life as well.

I have a few specific reasons for changing the name:

Better Industry Recognition

My old blog name – {ts_blog} – was rather poetic (as a respected colleague recently mentioned to me – thanks Gila!), but it was quite vague as well. Unless you read the tag line or titles of some of the posts it was hard to really understand what this web presence was all about. Now we have more clarity about what’s going on here.

Alignment with Career and Industry Involvement

I have been trading in interop futures for the better part of a decade now, going back to 2007 when I first got involved in IHE. I joined a company and wrote a profile on antepartum care, and then implemented that profile in our EHR solution. I left that company about a year later because funding was cut for “IHE work.” And so began my career in interoperability futures. I have worked for three other companies since, with a handful of independent contracts in between, and almost all of my work involves thinking about and implementing solutions around the future of healthcare IT: future-proofing those systems, developing ideas on how to bridge the old to the new, and so on.

Retains a Certain Amount of Flexibility

“Interop Futures” does not have “healthcare” in the name. It allows for a certain amount of flexibility in terms of what topics I choose to write about. Many of the underlying standards used in healthcare IT solutions are domain-agnostic. SOAP and REST by themselves are not healthcare-specific, but when used in XDS and FHIR they certainly become so.

It’s a Cool Name!

What’s not to like about the name “Interop Futures”? It sounds like a cool movie, or a crystal ball that allows you to see what’s coming. In reality there is no crystal ball, and I certainly have no movie contracts. Even still, thinking about where health IT is going, and how we might get there, is intriguing! And as we must not forget the past lest we be condemned to repeat it, the writing you will find here will at times focus on the past and the present to build toward thinking about what is coming in the future!

IHE on FHIR: History, Development, Implementation

Plentiful is the health IT industry with FHIR discussions and opportunities. It’s on everyone’s topic boards, it’s being pitched at all of the health IT conferences, it’s being discussed and used time and again in SDOs, apps are being developed, initiatives are born. And it’s possibly near a tipping point of success.

HL7/IHE History around FHIR

IHE and HL7 have a long history, going back to the beginning of IHE in 1998 (HL7 was already in existence). There have always been collaborators across and between the two organizations; this is, effectively, how IHE began. A bunch of health IT standards geeks were seeking a new way to provide interoperability guidance to the world, and thus IHE was born. So it’s not surprising that pattern has continued into the era of FHIR. It started with ad-hoc liaisons between the organizations, taking a set of FHIR Resources into an IHE Profile, or taking requirements from an IHE Profile back to HL7 to create a new FHIR Resource or modify an existing one. The value of FHIR was quickly recognized as a market disruptor, and as such IHE and HL7 began to explore the idea of formal collaboration more seriously. These organizations are big ships, and they turn slowly, but over the past six years they seem to be turning in the right direction.

In 2013 HL7 and IHE drafted and signed a Statement of Understanding to identify many areas of collaboration between the two organizations. While this SOU did not make specific mention of FHIR, I strongly suspect FHIR was a driving factor in the agreement.

In 2014 the IHE-HL7 Coordination Committee and the Healthcare Standards Integration (HSI) Workgroup were both created. The former in IHE, the latter in HL7. These were intended to be “sister groups” to work with each other helping to improve collaboration for both organizations, leading to greater efficiencies for all involved. These groups languished a bit and never really got enough traction to continue in the way they were originally envisioned.

A few years later, in 2017, IHE created an IHE FHIR Workgroup that continues to meet today. This workgroup is focused on how to include FHIR in IHE Profiles and has documented very detailed guidance on this on the IHE wiki. It also tracks IHE Profiles using FHIR, cross-referencing across IHE Domains. This workgroup has produced materials and guidance that are very helpful in bringing together IHE and FHIR.

In 2018 Project Gemini was launched, named after the space program of years ago. Its goal is to identify and bring to market pilot project opportunities based on FHIR. It will leverage and potentially create specifications, participate in testing activities, and seek demonstration opportunities. Basically, its job is to tee up FHIR-based projects so they can be launched into the outer space of the health IT ecosystem. Interoperability is often big, expensive, and scary to implementers and stakeholders – similar to the challenges that NASA’s Project Gemini faced.

We are on the cusp of a new era in health IT with the arrival of FHIR. While FHIR will not be a silver bullet, it does provide a great opportunity to be disruptive, in a good way.

IHE PCC and QRPH – Profiles on FHIR

The PCC and QRPH domains have been working on FHIR-based IHE Profiles since 2015. PCC has a total of 9 Profiles that include FHIR, and 1 National Extension, and is working on updating 1 of those Profiles this development cycle to include additional FHIR Resources. QRPH has a total of 4 Profiles leveraging FHIR, with 1 new FHIR-based Profile in the works for this development cycle.

One observation that we have made within PCC, and that is also being applied in other domains, is the importance of retaining backwards compatibility for our implementers by adding FHIR as an option to the menu. It is not a wholesale “delete the old and bring in the new” situation. In fact, if we followed that approach then standards would likely never be implemented en masse, as they would always be changing. So an IHE Profile that uses CDA today and is under consideration for FHIR will be assessed by the IHE committee to determine whether FHIR should be added as another menu item, or whether a more drastic measure should be taken to deprecate the “old” technology.

This will obviously vary based on a number of factors, and that’s a topic for another post, but the point is that the default goal for improving existing IHE Profiles with FHIR is not to replace everything in the Profile with FHIR. Rather, it is to assess each situation and make a wise choice based on what’s best for all involved: vendor implementers, stakeholders (patients and providers), testing bodies, governments, and standards bodies. This does not mean that everyone is happy all the time, but all angles must be considered and consensus is desired.

Implementation of IHE and FHIR

FHIR is being implemented in various ways across the industry. There are two very significant initiatives happening right now that are well positioned to launch FHIR into the outer space of health IT: CommonWell Health Alliance and Carequality. Both initiatives have been around for roughly the same amount of time (CommonWell since 2013, Carequality since 2015), and both focus on the same general mission of improving data flow in support of better patient health outcomes, but they take different approaches to get there. CommonWell provides a service that members leverage to query and retrieve data, whereas Carequality provides a framework, including a governance model, to do this.

These are fundamentally different approaches, but both are achieving great success. CommonWell touts upwards of 11,000 healthcare provider sites connected to its network. Carequality touts 1,700 hospitals and 40,000 clinics leveraging its governance model to exchange data. These are big numbers, and both organizations are on a trajectory to continue increasing their connectivity. CommonWell already has FHIR fully embedded as an option in its platform, with the ability for a member to leverage only REST-based connectivity (most, if not all, of which is based on FHIR) to fully participate in the Alliance’s network. Carequality currently has an open call for participation in newly forming FHIR Technical and Policy Workgroups to include FHIR as a main-line offering in its specifications.

Given that both of these initiatives included IHE as part of their original implementation requirements, that both are now including FHIR, and that both have significant implementation numbers, we have an exceptional opportunity to advance interoperability in ways that we have not been able to previously.


The world of interoperability is alive and well, despite constant setbacks (due mostly to non-technical things), and thanks in part to IHE and FHIR. Convergence is happening, both on the SDO front as well as in the implementation world. And I fully expect that convergence to continue.

IHE PCC Domain Meetings – Fall 2018

The fall PCC, QRPH, and ITI technical committee meetings were held in Oak Brook in mid-November, as usual. Unfortunately I was not able to attend the October planning committee meetings – neither in person nor via telecommute – due to other commitments; however, I was able to catch up on what is going on by attending the November meetings in person. I attended mostly PCC, and sat in on a few QRPH sessions. Here is a quick update on PCC activities.

Profile Work

In PCC we are going to have a quiet year, with only one profile work item, and that item is to update an existing profile, Dynamic Care Team Management (DCTM), to include some additional FHIR-based guidance. Tune in for the calls being scheduled now if you want to learn more.

As I understand it there has also been some discussion on CDA harmonization, but there are no formal CDA harmonization work efforts on PCC’s plate for this upcoming cycle. This is a topic that has been discussed in previous years, but only mediocre progress has been made. Perhaps with efforts like Project Gemini there is hope to re-ignite some of this work.

Change Proposal Work

PCC received a sizeable number of CPs last year (2017) and has been slowly working through processing them. This work will continue, with the goal for this next work cycle being to have all of these CPs closed out.

Based on a quick count here is our CP submission by year:

Year Number of CPs Submitted
2018 6
2017 37
2016 9
2015 12
2014 18
2013 43
2012 20
2011 30
2010 15
2009 5
<= 2008 77

While 2017 was not our largest CP submission year, it is still significant. We have 13 CPs that remain in “Assigned” status, which means that someone is reviewing and finalizing each for inclusion in a ballot. If those CPs make it to ballot and pass, they will be incorporated into the appropriate published document (e.g., Technical Framework, Profile, National Extension, Appendix).

PCC Committee Structure

The PCC Planning and PCC Technical committees have made a decision to combine into a single planning/technical committee, at least for the next work cycle. This is to streamline the work of PCC given the lower number of participants. Fortunately we do have 3 co-chairs (one of whom is acting more as a “co-chair emeritus”, providing expert guidance to our two new co-chairs.) This is completely within the bounds of the IHE Governance, so no issues on that front. In future years we may split back into separate planning and technical committees if appropriate.

PCC Publication Cycle

We also discussed, and are interested in, the idea of supporting a year-round profile publication process. This is something that has been discussed in previous years, but due to the high volume of profile publication (and other reasons) it has not yet been possible to achieve. ITI is also interested in this idea and has started a wiki page with a great deal of detail. I encourage you to read it and comment on the wiki page if you have additional ideas to include. As part of this effort IHE may also need to look at its back-end publication processes and explore opportunities to move away from PDF-based publication.


PCC has its work items outlined for this upcoming cycle, and an opportunity to explore what a new or different publication cycle might look like. While there are not a great number of profiles being published this year (only one, in fact, and it’s “just an update” at that), this doesn’t necessarily signify distress for PCC; rather, it could indicate the market is still catching up with the many profiles that PCC has published over the past several years.

The Pillars of Health IT Standards – Part 1

Health IT standards can be broken down into what I call pillars of interoperability. These pillars are Content, Transport, and Workflow. Content standards aid in clearly communicating content shared between various applications. They provide a medium between disparate health IT systems to speak the same language. They provide clues, sometimes very specific, sometimes vague, as to what data lives inside of various content structures. Transport standards describe ways that content will be sent to, received from, or otherwise made available for consumption between two or more applications. Workflow standards stitch together the web of interaction points across the health IT ecosystem, leveraging Content and Transport standards to describe what an “end-to-end” flow of healthcare data looks like for any given use case.

This will be a 3-part blog post series breaking down each of these concepts. This post focuses on Content, followed by posts on Transport and finally Workflow.


Content

Content is very important when it comes to one system needing to communicate relevant information to another system so that it can do something. Understanding the content is vital for the system or end user to be able to take some action. If the information is improperly understood then catastrophe could follow, and rather easily. Let’s take the example of units of measure on a medication. Milligrams is a common unit of measure; micrograms is perhaps not (at least in my non-clinical experience). 200 milligrams is a common dosage of ibuprofen, but 200,000 micrograms is not. The astute reader will note these are equivalent values. Suppose that a health IT system creating content to be shared with another system uses micrograms as the unit of measure for ibuprofen and documents a dosage of 200,000 (this could be entered as 200 milligrams by the end user, but perhaps stored as micrograms in the database). A health IT system consuming content created by this source system could accidentally misinterpret the value as 200,000 milligrams, potentially resulting in a fatal situation for the patient.
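This risk is exactly why consuming systems should normalize units before interpreting a value rather than assuming the sender’s unit. Here is a minimal sketch in Python; the unit table and function name are my own illustrations, not drawn from UCUM or any standard:

```python
# Normalize incoming dosage values to a single canonical unit before
# display or decision support. Canonical unit is micrograms so the
# arithmetic stays in integers.
TO_MICROGRAMS = {
    "ug": 1,         # micrograms
    "mg": 1_000,     # milligrams
    "g": 1_000_000,  # grams
}

def dose_in_micrograms(value, unit):
    """Convert a (value, unit) dose to micrograms; fail loudly on unknown units."""
    if unit not in TO_MICROGRAMS:
        raise ValueError(f"Unrecognized dose unit: {unit!r}")
    return value * TO_MICROGRAMS[unit]

# 200 mg and 200,000 ug are the same dose once normalized.
print(dose_in_micrograms(200, "mg") == dose_in_micrograms(200_000, "ug"))  # True
```

The key design point is failing loudly on an unrecognized unit: silently passing the raw number through is precisely the misinterpretation described above.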

While the above example may seem far-fetched, this is a reality that happens all too often and there has been much analysis and research done in the area of accidental medication overdose. The proper creation and consumption of content is vitally important (quite literally!) to a positive health outcome for a patient. Content creators must ensure they properly represent the data to be shared, and content consumers must ensure they properly understand the data to be consumed.

Content can be broken down into several different components: structures, state, reference data, and data mapping. Let’s take a look at each of these areas.


Structures

The structures used in content interoperability vary from base-level standards to implementations of those standard structures that meet a specific use case. The base-level standards are at the “schema” level, which defines the valid data types, how many times a particular element may be repeated (or not), and so on. The implementation of those standards to meet a given use case is at the “schematron” level. Schematron is an ISO standard that has been in use for several years to validate the conformance of an XML implementation against a given specification.
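The two levels can be sketched in a few lines: a schema-level check validates that the structure has the right shape, while a profile-level (schematron-style) assertion constrains the content for a specific use case. The element names and the allowed-unit value set below are hypothetical, not taken from any real profile:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<medication>
  <dose value="200" unit="mg"/>
</medication>
""")

# Schema-level check: the required structure exists with the expected attributes.
def schema_valid(root):
    dose = root.find("dose")
    return dose is not None and "value" in dose.attrib and "unit" in dose.attrib

# Profile-level ("schematron-style") assertion: for this use case,
# the unit must come from a constrained value set.
ALLOWED_UNITS = {"mg", "ug"}

def profile_valid(root):
    return all(d.get("unit") in ALLOWED_UNITS for d in root.findall("dose"))

print(schema_valid(doc), profile_valid(doc))  # True True
```

A document can pass the schema check while failing the profile check (e.g., a structurally valid dose in an unexpected unit), which is exactly the gap that profiling standards fill.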

This idea of a base structure versus what goes inside of that structure is important because it allows for multiple levels of standards development, and it enables profiling standards to create specific implementation guidance for specific use cases. Through this approach, health IT information can be exchanged effectively in a changing market of available standards and systems.


State

Content may exist as transient or as persistent. Sometimes the lines are blurred here, where transient data may later be repurposed as persistent, or vice versa! Workflow (discussed in a forthcoming post) helps to address this issue. State, in this context, is not quite the same as status, although they share some characteristics. State is more distinct: a set of content is “in” a state, whereas it “has” a status. So content that is in a document state may have an approved status. The difference is subtle, but very relevant. Health IT standards rely on the use of states to provide some sense of stability around what to expect of the content standardized within.
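A tiny illustration of the in-a-state versus has-a-status distinction, with made-up values (real standards define their own vocabularies for both):

```python
from enum import Enum

class State(Enum):
    # what a set of content is "in"
    TRANSIENT = "transient"
    DOCUMENT = "document"

class Status(Enum):
    # what a set of content "has"
    PRELIMINARY = "preliminary"
    APPROVED = "approved"

# Content in the document state, carrying an approved status.
content = {"state": State.DOCUMENT, "status": Status.APPROVED}
print(content["state"].value, content["status"].value)  # document approved
```

Modeling them as two independent axes, rather than one combined field, is what keeps the subtle difference from collapsing in implementation.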

Reference Data

Reference data is a special kind of data used to classify other data; it helps to describe the purpose and meaning of the data. Reference data is found in the form of standardized vocabularies such as SNOMED and LOINC. It is commonly required in master data management solutions to link data across the spectrum, whether that data describes patients, providers, clinical concepts, financial transactions, or any number of other master data concepts that can be leveraged to tell a story about the business and inform the decisions the organization needs to make. Reference data can also be used in inferencing solutions, where probable conclusions are developed based on the presence of certain other specific data. Reference data is an extension of schematron: if schematron defines what general type of data shall exist in a given content structure, then reference data allows for flexibility in terms of what the options are for specific content within those assertions.

Data Mapping

Data mapping is the act of identifying the likeness of one data concept to another, and providing a persistent and reusable link to that data. This is the glue that enables systems to exchange data. Data mapping leverages standard structures and reference data to figure out what needs to be mapped where. A particular set of inbound source content may be represented by one industry standard and need to be mapped to an internal data model. If the source data is already linked to an industry-standard reference data set (i.e., a vocabulary), then both the structure and the specific codification of the data elements within that structure can be mapped into the internal data model with relative ease, given that the internal system has tooling in place to support such content and reference data standards. That is a long-winded way of saying that content standards and terminology standards go a long way toward solving interoperability problems when implemented properly.
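As a concrete sketch, consider an inbound observation that carries a LOINC code. A persistent code map links it to an internal concept identifier; the internal names and record shape below are illustrative, not from any real system:

```python
# Persistent, reusable map from an industry reference code (LOINC) to a
# hypothetical internal concept identifier.
LOINC_TO_INTERNAL = {
    "8867-4": "vital.heart_rate",    # LOINC: Heart rate
    "8480-6": "vital.systolic_bp",   # LOINC: Systolic blood pressure
}

def map_observation(source):
    """Map an inbound observation to the internal model, preserving provenance."""
    code = source["code"]
    if code not in LOINC_TO_INTERNAL:
        raise KeyError(f"No internal mapping for LOINC code {code}")
    return {
        "concept": LOINC_TO_INTERNAL[code],
        "value": source["value"],
        "source_code": code,  # keep the original code for traceability
    }

inbound = {"code": "8867-4", "value": 72}
print(map_observation(inbound)["concept"])  # vital.heart_rate
```

Because the source data already uses a standard vocabulary, the map is a simple lookup; without that shared reference data, each source system would need its own bespoke translation logic.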


Content is vitally important – I would argue the most important aspect of health IT interoperability. If the content is not understood by every system in an exchange of health data, a number of different problems present themselves. And sometimes miscommunication is worse than no communication, if the content is misunderstood in a way that will bring harm to the patient. As the old medical maxim goes: “First, do no harm.” A content creator must take extreme care to ensure content is properly represented, and a content consumer must take equal care that the content is consumed in the way the creator intended.

Internal and External Content Model Standards

A conversation came up recently with someone about the use of, and adherence to, healthcare industry standards inside a particular robust healthcare solution. I have seen many an organization model its internal standard data model after an industry-wide data model such as HL7 CDA or HL7 FHIR. What is often missed is that the organization will always need its own “flavor” of that standard for internal use.

The idea to follow the industry standard is one that is very well-intentioned, but it is also extremely difficult to implement and maintain. It is also not always the best choice because industry-wide standards are intended to handle use cases across many different organizations (hence the name “industry-wide”) and while they may meet the specific needs of the organization desiring to implement the standard, they may also include additional “baggage” that is not helpful to the organization. Conversely, they may require extensions or adaptations to the model to fully support the organization’s specific use cases. The effort required to implement the content model with either of these considerations can become burdensome.

We must realize that industry standards are quite important for driving health IT applications toward common ways to exchange and communicate data, but they must sit at the guidance level, and not be the end-all-be-all way to represent data that needs to be shared. This is now being realized in the data warehousing market, as “schema-on-read” becomes a more popular approach than “schema-on-write” for analytics on large data sets. The optimism around “one data model to rule them all” is shrinking. A good example of this would be a solution that leverages metadata for the analytical data points rather than the concrete source data structures. This allows an application to focus on writing good queries, and lets the metadata model deal with the underlying differences in the source data model. It provides an effective layer of abstraction over the source data model, and as long as that abstraction layer properly maps to the source data model, we have an effective schema-on-read solution. This sort of approach is becoming more and more necessary as the rate of change in technology and in healthcare IT keeps increasing.
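The metadata-layer idea can be sketched as follows: each source system stores the same analytical data point under a different path, and a metadata model resolves the query at read time instead of forcing one shared schema. Source names and field paths here are hypothetical:

```python
# Metadata model: for each logical concept, where it lives in each source.
METADATA = {
    "patient_age": {
        "source_a": ["demographics", "age"],
        "source_b": ["pt", "age_years"],
    },
}

def read(record, source, concept):
    """Resolve a logical concept against a source-specific record at read time."""
    value = record
    for key in METADATA[concept][source]:
        value = value[key]
    return value

# Two differently shaped source records carrying the same fact.
rec_a = {"demographics": {"age": 54}}
rec_b = {"pt": {"age_years": 54}}
print(read(rec_a, "source_a", "patient_age"), read(rec_b, "source_b", "patient_age"))
```

The query layer asks for `patient_age` and never sees the source-specific structure; adding a new source means adding a metadata entry, not rewriting queries.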

Internal standards are more manageable. Organizations can design and implement a content model for a set of use cases in a reasonable time frame with a reasonable amount of resources. This model may even be based on an industry-standard model, but it must not BE the industry-standard model! What I mean is that expectations must be set clearly from the outset that the model WILL change over time as the organization changes, as business opportunities change, as laws change, and so on. As the decision is made about what the internal model is to be, it must be understood that the model is for that organization only and that mapping shall be provided to and from it as needed, while looking for opportunities for reuse across specific data elements or data element groups.

What this all drives toward is making interoperability a first-class citizen in an organization’s solution set. The content model is important, but the content model is designed for internal usage, with mappings to external systems’ content models. In addition to the content model, an organization must also include its implementation approach in its overall strategy to ensure that external systems can be mapped to the internal content model effectively (on time, under budget, and meeting the business needs). A great strategy without an execution plan will die on the vine.

In summary, the intent of this post is an attempt to clarify the difference between this idea of an external data standard and an internal data standard, and the overlap between these ideas. Interoperability is not a clear cut landscape. Interoperability is hard. We must realize and accept that fact and look for ways to work effectively within it to drive toward higher quality communication between health IT systems, leading to improved patient health outcomes.

A Changing Landscape in Healthcare IT Software

For many of the years that I have been involved in healthcare IT, I have had the good fortune of working for EHR software vendor organizations. It was very exciting to develop bleeding-edge technology supporting interoperability standards, and to see the benefits realized in real-world use. We broke through many barriers in the healthcare field, although we certainly did not reach the ceiling. However, as in many market sectors, times change, and with that change innovation and new ideas take a front-row seat to what has become solid and stable. EHR solutions are now mostly considered the foundation for the industry, providing a solid framework upon which more complete solutions are being delivered in the interest of bettering patient health outcomes.

In the past few years many niche applications have arisen to address very specific problem areas. This is partly due to a natural industry shift requiring fresh perspectives on older problems, and partly (in the US) due to government incentive programs driving forward a need to improve patient health outcomes, such as ACO, PCMH, and MACRA (QPP).

As technology solutions mature, they naturally become more difficult to adapt to changing conditions. The more components a solution has, the more effort it takes to update those components while retaining backwards compatibility, and the harder it is to craft deployment packages that do not interrupt live production systems in significantly negative ways. Managing a technology solution over the long term definitely comes with its challenges as well as its benefits. Newcomers to the market are able to quickly adapt to changing requirements, freely building innovative and creative solutions that are not so dependent on existing components and on choices made earlier in development by the more established solutions. These newer solutions can be plugged into the older solutions to address gaps in functionality that might not otherwise be addressed. This is the nature of software engineering and development as it has progressed since its inception in the middle of the last century.

The other driver of new technology solutions in healthcare is federal incentive programs that place a strong emphasis on improving patient health outcomes. Research is beginning to show that improving health outcomes for patients across a population takes more than just using a certified EHR system or following the medical practice guidelines from the appropriate medical college. Rather, it involves a much more complete and holistic approach to medical practice that looks not only at the clinical facts of a patient, but engages the patient relationally, understanding the impact that social determinants have on that particular patient. Understanding the mental health of the patient is also quite important, and it is often very difficult to properly diagnose and treat for a number of good reasons. To address these sorts of issues, doctors must find ways to meet patients where they are today, using tools and approaches that are natural to patients. This means making use of apps developed against popular platforms: apps that interact with social media, are intuitive to use, and integrate with existing systems of all kinds to provide seamless access to and management of the patient's health data and care from both the provider and patient perspectives.

This IS the new face of healthcare: the combination of the foundational EHR vendors' existing solutions with newer solutions that focus on very specific problems that need solving. And really, this is not much different from how technology has always been managed. As solutions age, various components of those solutions are wrapped with interfaces that abstract them away, providing a way for the newer, more efficient, and often more socially accepted solutions to interact with the older technologies. In the same vein, wisdom is valued more than mere intelligence or trend. That wisdom is grounded in the foundational participants in healthcare IT who continue to drive forward standards-based development efforts, focusing on the goal shared by patients, providers, and payers: improved patient health outcomes for all.

Workflows in Healthcare Standards

Workflows are a big challenge in healthcare interoperability. Workflows exist everywhere, and in all forms. There are complex workflows, simple and straightforward workflows, confusing workflows, cross-domain workflows, single-domain workflows, static workflows, dynamic workflows, defined workflows, and undefined workflows. Workflows are at the core of everything that happens in healthcare. We, as an industry, are just beginning to understand how to define workflows from an electronic perspective. For years, IHE and other standards development organizations have been creating implementation guidance components that ultimately make up these larger workflows. These workflows derive from the various medical colleges that spend much time studying clinical workflows and producing guidance on implementing them in healthcare practices.

This blog post will not go into detail on all the different types and aspects of workflows mentioned above; however, it will introduce a couple of approaches that IHE has been using: static workflows and dynamic workflows.

Static workflows are those that are very well defined, and rarely, if ever, need to change from their original guidance. I admit that I am no expert in radiology, but I do know that the radiology domain profiles workflows that are fairly well understood and able to be controlled, at least to some degree. For example, a patient needs to have a certain imaging procedure performed, so they will first be registered in the system, an order will then be placed for the imaging procedure, and the procedure will be scheduled and placed onto a DICOM Modality Worklist. The procedure is managed on the worklist according to the specifics involved (which differ for various imaging procedures). All in all, the process is fairly well understood. What happens outside the bounds of that particular procedure is of course variable, but the imaging procedure itself fits nicely into a well-defined workflow, and is finished within a single outpatient visit.
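To make the "static" part concrete, a workflow like the one above can be modeled as a fixed, linear sequence of states with no detours. This is just an illustrative sketch (the step names and the class are my own simplification, not anything defined by DICOM or an IHE profile):

```python
from enum import Enum, auto

class ImagingStep(Enum):
    """Hypothetical, simplified steps in a static radiology workflow."""
    REGISTERED = auto()    # patient registered in the system
    ORDERED = auto()       # imaging procedure ordered
    SCHEDULED = auto()     # procedure scheduled
    ON_WORKLIST = auto()   # placed onto the DICOM Modality Worklist
    PERFORMED = auto()     # procedure performed
    COMPLETED = auto()     # results finalized

# The defining trait of a static workflow: one fixed, ordered path.
STATIC_SEQUENCE = list(ImagingStep)

class ImagingProcedure:
    def __init__(self, patient_id: str):
        self.patient_id = patient_id
        self._index = 0  # every procedure starts at REGISTERED

    @property
    def step(self) -> ImagingStep:
        return STATIC_SEQUENCE[self._index]

    def advance(self) -> ImagingStep:
        """Move to the next step; a static workflow allows no branching."""
        if self._index + 1 >= len(STATIC_SEQUENCE):
            raise ValueError("workflow already complete")
        self._index += 1
        return self.step

proc = ImagingProcedure("patient-001")
while proc.step is not ImagingStep.COMPLETED:
    proc.advance()
print(proc.step.name)  # COMPLETED
```

The point of the sketch is that the transitions themselves never change; only the data flowing through them does, which is what makes this style of workflow so amenable to standardization.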

Dynamic workflows are much less predictable and must provide appropriate levels of adaptability in order to be successfully implemented. These workflows are very open ended and depend on many varying factors. Did the patient actually take their medication? Did the patient schedule that follow-up appointment? It turns out that it is hard to ensure that patients follow the care plans prescribed by their providers. It is often equally hard for the patients themselves to follow their own care plans amid busy schedules, with families to manage, work, and so on. Sometimes tasks are forgotten or deemed less important than the tasks competing for immediate attention. There is also a level of importance placed upon any given task in a patient's care plan, based on the amount of benefit that would be received from the task.

For example, a care provider may prescribe some sort of physical therapy, and if the patient completes the physical therapy then she will show signs of improvement (less pain, more mobility, etc.). Maybe that is enough to satisfy the patient, but it would not satisfy her doctor. The patient decides to stop going to the physical therapy sessions because she thinks she is "better enough." However, her condition worsens over the next few months, and she must schedule a follow-up visit with her doctor to determine how to reengage with her physical therapy. The same scenario could be applied to many other situations, one common example being medication adherence for anemia.
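A dynamic workflow, by contrast, has to react to what the patient actually did (or didn't do). A minimal, purely hypothetical sketch might model the care plan as a set of tasks and turn missed tasks into escalating follow-up actions, rather than assuming the plan proceeds on rails:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CarePlanTask:
    """A hypothetical care-plan task whose completion is patient-driven."""
    name: str
    due: date
    completed: bool = False

def follow_up_actions(tasks: list[CarePlanTask], today: date) -> list[str]:
    """A dynamic workflow adapts: overdue tasks trigger outreach
    (reminder first, then a follow-up visit) instead of simply failing."""
    actions = []
    for t in tasks:
        if t.completed:
            continue
        days_overdue = (today - t.due).days
        if days_overdue > 14:
            actions.append(f"schedule follow-up visit: {t.name}")
        elif days_overdue > 0:
            actions.append(f"send reminder: {t.name}")
    return actions

today = date(2017, 6, 1)
tasks = [
    CarePlanTask("physical therapy session", due=date(2017, 5, 10)),
    CarePlanTask("take prescribed medication", due=date(2017, 5, 30)),
    CarePlanTask("schedule lab work", due=date(2017, 6, 15)),
]
for action in follow_up_actions(tasks, today):
    print(action)
# schedule follow-up visit: physical therapy session
# send reminder: take prescribed medication
```

The thresholds and actions here are invented for illustration; the real point is that the workflow's next step is computed from observed patient behavior rather than fixed in advance.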

Another aspect of primary care is dealing with patients who have comorbidities. Multiple chronic conditions can greatly complicate caring effectively for a patient. Combined with the fact that different people metabolize medications at different rates, in ways not predictable from consistent factors such as body weight, height, and vital signs, the doctor needs a system that is flexible and adaptable in order to do their job. This is one reason that adoption of EHRs has been so sluggish in the past few years: they tend to constrain care providers to a certain workflow that gets in the way of the doctor caring for their patient.

Workflows are addressed in IHE domains in different ways depending on the clinical use cases involved. There are a handful of different underlying standards that are profiled, and in many cases those standards are profiled in slightly different ways for the varying clinical use cases. Workflows exist everywhere, in every system, both healthcare-specific and generic. What makes implementation guidance effective is a combination of factors. Key among them are how well the clinical workflow is understood by the specification author (or committee), and how well the underlying standard is matched to the clinical use case in a way that an implementer (i.e., a health IT software vendor) can understand and write code against, providing a usable product to the clinician.


High-quality listening skills are so important to many roles across various types of organizations. I am reminded of this constantly. Recently, that reminder came through watching a documentary about the assassination of US President James Garfield in 1881. Dr. Willard Bliss, the physician treating President Garfield, was guilty of not listening. He was perhaps blinded by his own ambition, or maybe his ego. Dr. Bliss was not an uneducated man; he understood the importance of listening, as he had to in order to achieve the status he had. When Alexander Graham Bell visited the injured president to offer the use of his newly developed metal-detection technology in an attempt to find the bullet lodged in his body, Dr. Bliss would not allow Bell to test the instrument on the president's left side, claiming that the metal springs in the bed were interfering with the device. Why Garfield could not be moved to another bed without metal springs, or why Bliss did not simply allow Bell to at least try the opposite side of Garfield, will likely never be fully known. Regardless, the hindsight wisdom is of course that he should have been more open to alternatives, putting aside his ego, his ambition, or whatever else was in the way of his ability to listen, in the best interest of his President and patient.

In a software company, listening to customers is also quite important. There are two parts to listening: the actual act of listening, and the interpretation of what is being communicated. The latter is often achieved by asking the right questions, ensuring the sharer of information provides all of the context and information necessary to make a good decision. In a product management role, the act of listening to the customer is often driven more by the organizational structure than by the individual with the opportunity to listen. What I mean by that is that some companies promote as much communication as possible between product analyst teams and their end users (the customers of the product), understanding that the benefit of that communication is, more often than not, positive improvements to the product feature sets. Other companies stifle this communication with a process in which some teams build the product while other teams manage the customer relations. The idea is to isolate or protect the builder teams from too much customer chatter to keep their productivity levels high. But the problem is the same one found in the game of Telephone (Chinese Whispers): the message that reaches the builders from the client engagement folks is not the intended message! Building software is a complex task, and there is much room for error when taking this approach.

As the software industry matures, one of the leading ideas that has introduced an incredible paradigm shift in this regard is the Agile Manifesto, an idea that many of you are likely very familiar with. Following the Agile Manifesto, and the associated methodologies built on its principles, ensures that you and your team are outstanding listeners. In fact, the third value in the manifesto is "Customer collaboration over contract negotiation." Agile also supports a very strong idea of accountability, ensuring that you and your teammates are all good listeners, among other things. So how is your listening? Are you asking the right questions by deeply processing what your customers are saying and applying critical thinking?

Prescribing Behavior

At the InterSystems Global Summit a few weeks ago I heard about a new approach to solving medical problems in lieu of medications. It is the idea of prescribing behavior to a patient, just the same as a medication would be prescribed. The idea is basically centered around an app store for clinicians. A doctor will issue a prescription for an app, and use the medical app store software to ensure adherence to proper use of the app. The clinician will be notified when the patient installs the app, and will be able to track whether the patient is using the app as it was intended to be used. If the patient is not using the app, then appropriate follow-up actions can be taken (e.g. text message, phone call). It could be essentially the same workflow that happens today with medication adherence.
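To make the parallel with medication adherence concrete, here is a small, purely illustrative sketch. The record fields, app name, and the one-week inactivity threshold are all my own assumptions, not any real app store's API:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AppPrescription:
    """Hypothetical record for a prescribed app, mirroring a medication Rx."""
    patient_id: str
    app_name: str
    prescribed_on: date
    installed_on: Optional[date] = None   # set when install is reported
    last_used_on: Optional[date] = None   # updated by app usage telemetry

def adherence_status(rx: AppPrescription, today: date) -> str:
    """Classify adherence so the clinician knows when to follow up
    (e.g. text message, phone call)."""
    if rx.installed_on is None:
        return "not installed - contact patient"
    if rx.last_used_on is None or (today - rx.last_used_on).days > 7:
        return "inactive - send reminder"
    return "adherent"

today = date(2017, 5, 15)
rx = AppPrescription("patient-001", "GlucoseCoach", prescribed_on=date(2017, 5, 1))
print(adherence_status(rx, today))  # not installed - contact patient

rx.installed_on = date(2017, 5, 2)
rx.last_used_on = date(2017, 5, 14)
print(adherence_status(rx, today))  # adherent
```

Note how little is actually new here: it is the same check-and-follow-up loop clinicians already run for medications, just pointed at app usage data instead of pharmacy refills.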

This brings up a couple of interesting ideas. One is that our day-to-day behaviors sometimes lead to improved outcomes (and perhaps more often than not!). If a patient has Type 2 Diabetes, then diet and exercise can have tremendous benefit. This particular example is, of course, not a new discovery. The less understood, or perhaps less quantifiable, aspect here is the impact of improved emotional health. Does happiness truly help a patient to combat certain diseases? Some researchers think so. In this line of thinking, an app that somehow tracks and improves happiness levels could also be used to collect data correlating with disease outcomes.

Another interesting idea is how this might impact the existing pharma industry. Surely there are, and will remain, very valid reasons for prescribing real medications, and effective outcomes resulting from those prescriptions. I am in no way advocating that this is a silver bullet and that we do not need medications. However, I do know from personal experience how technology can influence a change in habits, for better or for worse. And I have also seen how our culture today readily embraces convenience: people want to have their cake and eat it too. If doctors start prescribing apps and medication sales decline, pharma might need to rethink its strategy; perhaps it gets in on the app store business.

Although I have heard there are clinicians using this now, from the overall industry perspective I believe this is still a medium- to long-term plan. If I were to guess why, I would weigh heavily the fact that there is still a generation gap in technology adoption. All in all, the clinician app store approach is about encouraging the patient by providing motivation to adopt and maintain behaviors that result in better outcomes. And things that improve outcomes are things that I am very much a fan of.