Category Archives: Technology

IHE Connectathon – a Look Back and a Look to the Future

I spent last week at the 20th IHE North America Connectathon. This is my 14th year involved in IHE, and my 13th year attending the Connectathon. This year my role was to serve as an IHE Ambassador, assisting others in understanding what happens at the event and acting as part of the welcoming party for newcomers.

I was asked to share a few thoughts with the video crew about my experience with past IHE Connectathons, what has changed over the years, and what I think some of the successes and challenges in interoperability have been over the past year, so it seems a good opportunity to write about it here.

My first Connectathon was in 2008 when I worked for digiChart, a software company focused on obstetric care. As a still fairly green software developer I spent the year prior writing the Antepartum Summary (APS) IHE Profile in the IHE PCC Domain and implementing that profile along with XDS, CT, ATNA, and a few others in the digiChart product. I took that implementation to Connectathon and tested with other software vendors. This was my first foray into IHE and I was hooked. I have continued to stay engaged in IHE in one form or another since that time, through six employers supporting at least seven organizations. My roles have included: software engineer, development lead, HIMSS Showcase Technical Project Manager, strategic guidance, and IHE Ambassador.

Over the years I have watched the Connectathon tooling improve greatly. The first tool used was Laverne Palmer and a giant pad of paper, but this is legend I've only heard others speak of, as it was prior to my involvement. Sometime after that came the Kudu tool, which those of us participating learned to use, quirks and all; at the end of the day it did its job well enough (which is not simple!). The successor to Kudu was Gazelle, which is still in use today. Gazelle was a redesign of Kudu with a much improved architecture, allowing for greater opportunities for continued improvement over time. Automation improved year after year, enabling vendor testers to execute their tests and receive feedback and results faster and more accurately. The interaction with test monitors through the Gazelle tool also improved greatly over time, providing further efficiencies.

Another change I have seen over the years is the increase in the number of middleware vendors bringing their products to the Connectathon. In earlier Connectathons these vendors were not encouraged, or were even not allowed. An end-user experience with every product was a requirement of participation. In other words, a user interface of some sort was required to demonstrate that interoperability was happening. As the market changed and middleware, or integrator systems, became more prevalent, they were allowed to attend. This was a natural and expected progression as the market began to specialize more in specific interoperability areas.

Some things have not changed at Connectathon over the years, and one of those is the culture of extreme collaboration and cooperation that exists among the participants. Participants who compete out in the open market can be found working together on the Connectathon floor to solve interoperability problems. This culture was established early on, as it was recognized that such an approach would result in higher levels of innovation and creativity. By breaking down the competitive walls, ideas could be shared more freely (and let's face it, due to the craftsmanship nature of coding, even if you got a copy of another company's source code it very likely wouldn't do you any good unless you got it in totality, which would never happen at an event like this). This culture still exists today. Many of the same faces appear each year at the IHE Connectathon, but new faces also show up to take advantage of the collaboration and interoperability testing that happens there.

In terms of successes and challenges of interoperability over the past year, the answer is a little more difficult. There really isn't any blockbuster news like The Most Important Interoperability Story of 2016; however, there is an expected ONC rule that will soon be released supporting much greater levels of patient data access, and this is something that has been in the works over the course of 2019. This is significant! This is how we will continue to innovate, to create. We must provide patients with the opportunity to work with their data and gain greater control over their healthcare data. Dave DeBronkart ("ePatient Dave") discusses this when he speaks on "paternalism" and how it prevents patients from having a say and a positive influence in their own health outcomes. So while this is not a specific success marker in 2019, a lot of groundwork has been laid in support of the regulation expected to be released within a week or so.

A challenge in 2019 is the lack of organic adoption of interoperability. The driver still seems to be federal incentives. Perhaps that’s how it will always be, but I don’t think that’s ok. We have to find a way to tip the scales so that software companies will innovate to provide value-add services to patients such as those that exist in other markets. We do need continued help from our federal government (here in the US), but once we have the initial push (via upcoming regulation) I suspect that creativity and opportunity will eventually take over and we’ll see interesting and effective solutions on the market that allow patients to engage with and have better control over their data, and thus better understanding of what their healthcare options are for their specific situation.

All in all I think healthcare IT and interoperability continues to move in the right direction. I am thankful to be a part of this industry despite its challenges and sluggishness as compared to other industries. We ARE moving forward. We ARE getting better. We WILL succeed. Let's keep pushing for access to our data!

IHE on FHIR: History, Development, Implementation

Plentiful is the health IT industry with FHIR discussions and opportunities. It’s on everyone’s topic boards, it’s being pitched at all of the health IT conferences, it’s being discussed and used time and again in SDOs, apps are being developed, initiatives are born. And it’s possibly near a tipping point of success.

HL7/IHE History around FHIR

IHE and HL7 have a long history, going back to the beginning of IHE in 1998 (HL7 was already in existence). There have always been collaborators across and between the two organizations. This is, effectively, how IHE began. A bunch of health IT standards geeks were seeking a new way to provide interoperability guidance to the world, and thus IHE was born. So it's not surprising that pattern has continued into the era of FHIR. It started with ad-hoc liaisons between the organizations, taking a set of FHIR resources into an IHE Profile, or taking requirements from an IHE Profile back to HL7 to create a new or modify an existing FHIR Resource. The value of FHIR was quickly recognized as a market disruptor, and as such IHE and HL7 began to explore the idea of formal collaboration more seriously. These organizations are big ships, and they turn slowly, but over the past 6 years, they seem to be turning in the right direction.

In 2013 HL7 and IHE drafted and signed a Statement of Understanding to identify many areas of collaboration between the two organizations. While this SOU did not make specific mention of FHIR, I strongly suspect FHIR was a driving factor in the agreement.

In 2014 the IHE-HL7 Coordination Committee and the Healthcare Standards Integration (HSI) Workgroup were both created. The former in IHE, the latter in HL7. These were intended to be “sister groups” to work with each other helping to improve collaboration for both organizations, leading to greater efficiencies for all involved. These groups languished a bit and never really got enough traction to continue in the way they were originally envisioned.

A few years later, in 2017, IHE created an IHE FHIR Workgroup that continues to meet today. This workgroup is focused on how to include FHIR in IHE Profiles and has documented very detailed guidance on this on the IHE wiki. It also tracks IHE Profiles using FHIR, cross-referencing across IHE Domains. This workgroup has produced materials and guidance that are very helpful in bringing together IHE and FHIR.

In 2018 Project Gemini was launched, named after the space program of years ago. Its goal is to identify and bring to market pilot project opportunities based on FHIR. It will leverage and potentially create specifications, participate in testing activities, and seek demonstration opportunities. Basically, its job is to tee up FHIR-based projects so they can be launched into the outer space of the health IT ecosystem. Interoperability is often big, expensive, and scary to implementers and stakeholders, similar to the challenges that NASA's Project Gemini faced.

We are on the cusp of a new era in health IT with the arrival of FHIR. While FHIR will not be a silver bullet, it does provide a great opportunity to be disruptive, in a good way.

IHE PCC and QRPH – Profiles on FHIR

The PCC and QRPH domains have been working on FHIR-based IHE Profiles since 2015. PCC has a total of 9 Profiles that include FHIR, and 1 National Extension, and is working on updating 1 of those Profiles this development cycle to include additional FHIR Resources. QRPH has a total of 4 Profiles leveraging FHIR, with 1 new FHIR-based Profile in the works for this development cycle.

One observation that we have made within PCC, and that is also being applied in other domains, is the importance of retaining backwards compatibility for our implementers by adding FHIR as an option on the menu. It is not a wholesale delete-the-old, bring-in-the-new situation. In fact, if we followed that approach, standards would likely never be implemented en masse as they would always be changing. So an IHE Profile that uses CDA today and is under consideration for FHIR will be assessed by the IHE committee to determine whether it should add FHIR as another menu item, or whether a more drastic measure should be taken to deprecate the "old" technology.

This will obviously vary based on a number of factors, and that's a topic for another post, but the point is that the default goal for improving existing IHE Profiles with FHIR is not to replace everything in that Profile with FHIR. Rather, it is to assess each situation and make a wise choice based on what's best for all involved: vendor implementers, stakeholders (patients and providers), testing bodies, governments, and standards bodies. This does not mean that everyone is happy all the time, but all angles must be considered and consensus is desired.

Implementation of IHE and FHIR

FHIR is being implemented in various ways across the industry. There are two very significant initiatives happening right now that are well positioned to launch FHIR into the outer space of health IT: CommonWell Health Alliance and Carequality. Both initiatives have been around for roughly the same amount of time (CommonWell since 2013, Carequality since 2015), and both focus on the same general mission of improving data flow in support of better patient health outcomes, but they take different approaches to get there. CommonWell provides a service that members leverage to query and retrieve data, whereas Carequality provides a framework, including a governance model, to do this.

These are fundamentally different approaches but both are achieving great success. CommonWell touts upwards of 11,000 healthcare provider sites that are connected to their network. Carequality touts 1,700 hospitals and 40,000 clinics leveraging their governance model to exchange data. These are big numbers, and both organizations are on a trajectory to continue increasing their connectivity. CommonWell already has FHIR fully embedded as an option in their platform, with the ability for a member to leverage only REST-based connectivity (most, if not all, of which is based on FHIR) to fully participate in the Alliance's network. Carequality currently has an open call for participation in newly forming FHIR Technical and Policy Workgroups to include FHIR as a main-line offering in their specifications.
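To make the idea of REST-based FHIR connectivity concrete, here is a minimal sketch in Python. The server base URL is hypothetical, and this is an illustration only, not any network's actual API; the Patient resource follows the standard FHIR R4 JSON shape.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR server base URL -- for illustration only.
FHIR_BASE = "https://fhir.example.org/r4"

def patient_search_url(family: str, birthdate: str) -> str:
    """Build a RESTful FHIR Patient search URL (an HTTP GET with query parameters)."""
    return f"{FHIR_BASE}/Patient?" + urlencode({"family": family, "birthdate": birthdate})

# A minimal FHIR R4 Patient resource in JSON, like one returned in a query response.
patient_json = """{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}"""

patient = json.loads(patient_json)
name = patient["name"][0]
display = f'{name["given"][0]} {name["family"]}'  # "Peter Chalmers"
```

The point is how low the barrier is: a plain HTTP GET and ordinary JSON parsing, with no SOAP stack or custom tooling required.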

Given that both of these initiatives have included IHE as part of their original implementation requirements, that both are now including FHIR, and that both have significant implementation numbers, we have an exceptional opportunity to advance interoperability in ways that we have not been able to previously.


The world of interoperability is alive and well, despite constant setbacks (due mostly to non-technical things), and thanks in part to IHE and FHIR. Convergence is happening, both on the SDO front as well as in the implementation world. And I fully expect that convergence to continue.

The Pillars of Health IT Standards – Part 1


Health IT standards can be broken down into what I call pillars of interoperability. These pillars are Content, Transport, and Workflow. Content standards aid in clearly communicating content shared between various applications. They provide a medium between disparate health IT systems to speak the same language. They provide clues, sometimes very specific, sometimes vague, as to what data lives inside of various content structures. Transport standards describe ways that content will be sent to, received from, or otherwise made available for consumption between two or more applications. Workflow standards stitch together the web of interaction points across the health IT ecosystem, leveraging Content and Transport standards to describe what an “end-to-end” flow of healthcare data looks like for any given use case.

This will be a 3-part blog post series breaking down each of these concepts. This post will focus on Content, followed by Transport, and finally Workflow.


Content

Content is very important when one system needs to communicate relevant information to another system so that it can do something. Understanding the content is vital for the system or end user to be able to take some action. If the information is improperly understood then catastrophe could follow, and rather easily. Let's take the example of units of measure on a medication. Milligrams is a common unit of measure; micrograms is perhaps not (at least in my non-clinical experience). 200 milligrams is a common dosage of ibuprofen, but 200,000 micrograms is not. The astute reader will note these are equivalent values. Suppose that a health IT system creating content to be shared with another system uses micrograms as the unit of measure for ibuprofen and documents a dosage of 200,000 (this could be entered as 200 milligrams by the end user, but perhaps stored as micrograms in the database). A health IT system consuming content from this source system could accidentally misinterpret the value as 200,000 milligrams, potentially resulting in a fatal situation for the patient.

While the above example may seem far-fetched, this is a reality that happens all too often and there has been much analysis and research done in the area of accidental medication overdose. The proper creation and consumption of content is vitally important (quite literally!) to a positive health outcome for a patient. Content creators must ensure they properly represent the data to be shared, and content consumers must ensure they properly understand the data to be consumed.
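The guard against this kind of misinterpretation can be sketched in a few lines of code: normalize every dosage to a single base unit before storing or comparing it, and refuse to guess at unknown units. This is an illustrative sketch only, not a clinical implementation, and the tiny unit table is deliberately incomplete.

```python
# Illustrative sketch: normalize dosages to one base unit (micrograms) so a
# stored value is never ambiguous between systems. Not a clinical implementation.
TO_MICROGRAMS = {
    "mg": 1_000,      # milligrams
    "ug": 1,          # micrograms
    "g": 1_000_000,   # grams
}

def to_micrograms(value: float, unit: str) -> float:
    """Convert a dosage to micrograms; fail loudly on an unknown unit."""
    if unit not in TO_MICROGRAMS:
        raise ValueError(f"Unknown unit {unit!r}; refusing to guess")
    return value * TO_MICROGRAMS[unit]

# 200 mg of ibuprofen and 200,000 ug are the same dose...
assert to_micrograms(200, "mg") == to_micrograms(200_000, "ug")
# ...but misreading 200,000 as milligrams would be a thousand-fold overdose.
assert to_micrograms(200_000, "mg") == 1_000 * to_micrograms(200, "mg")
```

The "fail loudly" choice is the important part: an unmapped unit should stop processing rather than silently pass a number through with the wrong magnitude.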

Content can be broken down into several different components: structures, state, reference data, and data mapping. Let's take a look at each of these areas.


Structures

The structures used in content interoperability vary from base-level standards to implementations of those standard structures that meet a specific use case. The base-level standards are at the "schema" level, which defines the valid data types, how many times a particular element may be repeated (or not), and so on. The implementation of those standards to meet a given use case is at the "schematron" level. Schematron is an ISO standard that has been in use for the past several years to validate the conformance of XML implementations against a given specification.

This idea of base structure versus what goes inside of that structure is important as it allows for multiple levels of standards development, and enables profiling standards to create specific implementation guidance for specific use cases. Through this approach the exchange of health IT information is able to be effectively exchanged in a changing market of available standards and systems.
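These two levels can be illustrated with a small sketch. The field names and the profile rule here are hypothetical, and real schema/schematron validation operates on XML, but the layering is the same: a schema-level check validates structure and data types, while a schematron-style check layers use-case rules on top of an already structurally valid document.

```python
# Hypothetical two-level validation sketch: "schema" checks structure and
# types; a "schematron"-style profile check adds use-case rules on top.

def schema_valid(record: dict) -> bool:
    """Schema level: the right elements exist with the right data types."""
    return (isinstance(record.get("dose"), (int, float))
            and isinstance(record.get("unit"), str))

def profile_valid(record: dict) -> bool:
    """Schematron level: a use-case rule layered on a valid structure."""
    return schema_valid(record) and record["unit"] in {"mg", "ug"}

rec = {"dose": 200, "unit": "mg"}
assert schema_valid(rec)    # structurally valid
assert profile_valid(rec)   # and conformant to the profile's rule
# Structurally fine, but fails the profile's use-case rule:
assert not profile_valid({"dose": 200, "unit": "stones"})
```

This is exactly why profiling works: the base schema can stay stable for years while profiles tighten the rules for each use case.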


State

Content may exist as transient or as persistent. Sometimes the lines are blurred here, where transient data may later be repurposed as persistent, or vice versa! Workflow (discussed in a forthcoming post) helps to address this issue. State, in this context, is not quite the same as status, although they share some characteristics. State is more distinct. A set of content is "in" a state, whereas it "has" a status. So content that is in a document state may have an approved status. The difference is subtle, but very relevant. Health IT standards rely on the use of states to provide some sense of stability around what to expect of the content standardized within.
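A small sketch may make the distinction concrete. The names here are illustrative, not drawn from any particular standard: content is "in" exactly one state, and separately "has" a status within that state.

```python
from enum import Enum

# Illustrative sketch of state vs. status: content IS IN a state, HAS a status.
class ContentState(Enum):
    TRANSIENT = "transient"   # e.g., a message in flight
    DOCUMENT = "document"     # persistent, stable content

class ContentStatus(Enum):
    DRAFT = "draft"
    APPROVED = "approved"

class Content:
    def __init__(self, state: ContentState, status: ContentStatus):
        self.state = state      # the state the content is IN
        self.status = status    # the status the content HAS

summary = Content(ContentState.DOCUMENT, ContentStatus.APPROVED)
assert summary.state is ContentState.DOCUMENT
assert summary.status is ContentStatus.APPROVED
```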

Reference Data

Reference data is a special kind of data used to classify other data. It helps to describe the purpose and meaning of the data. Reference data is found in the form of standardized vocabularies such as SNOMED and LOINC. It is commonly required in master data management solutions to link data across the spectrum, whether that data be patients, providers, clinical concepts, financial transactions, or any number of other master data concepts, all of which can be leveraged to tell a story about the business and bring value to the organization in terms of the decisions it needs to make. Reference data can also be used in inferencing solutions, where probable conclusions are developed based on the presence of certain other specific data. Reference data is an extension of schematron: if schematron defines what general type of data shall exist in a given content structure, then reference data provides flexibility in terms of the options for specific content within those assertions.

Data Mapping

Data mapping is the act of identifying the likeness of one data concept to another and providing a persistent and reusable link between them. This is the glue that enables systems to exchange data. Data mapping leverages standard structures and reference data to figure out what needs to be mapped where. A particular set of inbound source content may be represented by one industry standard and need to be mapped to an internal data model. If the source data is already linked to an industry-standard reference data set (i.e., a vocabulary), then both the structure and the specific implementation and codification of the data elements within that structure can be mapped into the internal data model with relative ease, given that the internal system has tooling in place to support such content and reference data standards. That is a long-winded way of saying that content standards and terminology standards go a long way toward solving interoperability problems when implemented properly.
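Here is a minimal sketch of such a mapping table. The internal concept ids are hypothetical; the LOINC codes shown are the well-known blood pressure codes (8480-6 systolic, 8462-4 diastolic).

```python
# Hypothetical sketch: a persistent, reusable link from industry vocabulary
# codes (LOINC, here) to internal concept ids. Internal ids are made up.
LOINC_TO_INTERNAL = {
    "8480-6": "bp_systolic",   # LOINC: systolic blood pressure
    "8462-4": "bp_diastolic",  # LOINC: diastolic blood pressure
}

def map_observation(source: dict) -> dict:
    """Map an inbound coded observation into the internal model."""
    internal_id = LOINC_TO_INTERNAL.get(source["code"])
    if internal_id is None:
        raise KeyError(f"No mapping for code {source['code']!r}")
    return {"concept": internal_id, "value": source["value"]}

mapped = map_observation({"code": "8480-6", "value": 120})
assert mapped == {"concept": "bp_systolic", "value": 120}
```

Because the source side is a standard vocabulary code rather than free text, the mapping is written once and reused for every sender that codes to LOINC, which is exactly the "relative ease" described above.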


Content is vitally important; I would argue it is the most important aspect of health IT interoperability. If the content is not understood by any system in a health data exchange, a number of different problems present themselves. And sometimes ill communication is worse than no communication, if it is misunderstood in a way that brings harm to the patient. As the Hippocratic Oath states: "First do no harm." A content creator must take extreme care to ensure content is properly represented, and a content consumer must take equal care that the content is consumed in the way the creator intended.

Internal and External Content Model Standards

A conversation came up recently with someone about the use of and adherence to healthcare industry standards inside a particular robust healthcare solution. I have seen many an organization model its internal standard data model after an industry-wide data model such as HL7 CDA or HL7 FHIR. What is often missed is the importance of the organization realizing that it will always need its own "flavor" of that standard for internal use.

The idea to follow the industry standard is one that is very well-intentioned, but it is also extremely difficult to implement and maintain. It is also not always the best choice because industry-wide standards are intended to handle use cases across many different organizations (hence the name “industry-wide”) and while they may meet the specific needs of the organization desiring to implement the standard, they may also include additional “baggage” that is not helpful to the organization. Conversely, they may require extensions or adaptations to the model to fully support the organization’s specific use cases. The effort required to implement the content model with either of these considerations can become burdensome.

We must realize that industry standards are quite important for driving health IT applications toward common ways to exchange and communicate data, but they must remain at the guidance level, and not the end-all-be-all way to represent data that needs to be shared. This is now being realized in the data warehousing market, where 'schema-on-read' is becoming a more popular approach to analytics on large data sets than 'schema-on-write.' The optimism about 'one data model to rule them all' is shrinking. A good example of this would be a solution that leverages metadata for the analytical data points rather than the concrete source data structures. This allows an application to focus on writing good queries and lets the metadata model deal with the underlying differences in the source data model. It provides an effective layer of abstraction over the source data model, and as long as that abstraction layer properly maps to the source data model, we have an effective 'schema-on-read' solution. This sort of approach is becoming more and more necessary as the rate of change in technology and in healthcare IT keeps increasing.
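The metadata-driven abstraction described above can be sketched as follows. All of the source systems, field names, and concept names here are hypothetical: each source keeps its own shape, and a metadata map tells the query layer where each analytical data point lives at read time.

```python
# Hypothetical 'schema-on-read' sketch: per-source metadata maps an analytical
# concept to wherever it lives in that source's own record shape.
METADATA = {
    "ehr_a": {"patient_id": "pid", "systolic": "sys_bp"},
    "ehr_b": {"patient_id": "patientIdentifier", "systolic": "bp.systolic"},
}

def read_field(source: str, record: dict, concept: str):
    """Resolve an analytical concept against a source record at read time."""
    path = METADATA[source][concept]
    value = record
    for part in path.split("."):   # supports nested fields like 'bp.systolic'
        value = value[part]
    return value

a = {"pid": "p1", "sys_bp": 120}
b = {"patientIdentifier": "p1", "bp": {"systolic": 121}}
assert read_field("ehr_a", a, "systolic") == 120
assert read_field("ehr_b", b, "systolic") == 121
```

The query layer asks for "systolic" and never knows (or cares) that the two sources disagree about structure; only the metadata map has to change when a source's schema does.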

Internal standards are more manageable. Organizations can design and implement a content model for a set of use cases in a reasonable time frame with a reasonable amount of resources. This model may even be based on an industry-standard model, but it must not BE the industry-standard model! What I mean by that is that expectations must be set clearly from the outset that the model WILL change over time as the organization changes, as the business opportunities change, as laws change, and so on. As the decision is made about what the internal model is to be, it must be understood that it is for that organization only, and mapping shall be provided to and from it as needed, while looking for opportunities for reuse across specific data elements or data element groups.

What this all drives toward is making interoperability a first-class citizen in an organization's solution set. The content model is important, but it is designed for internal usage, with mappings to external systems' content models. In addition to the content model, an organization must also include its implementation approach in its overall strategy to ensure that external systems can be mapped to the internal content model effectively (on time, under budget, and meeting the business needs). A great strategy without an execution plan will die on the vine.

In summary, the intent of this post is to clarify the difference between the idea of an external data standard and an internal data standard, and the overlap between them. Interoperability is not a clear-cut landscape. Interoperability is hard. We must realize and accept that fact and look for ways to work effectively within it, driving toward higher-quality communication between health IT systems and improved patient health outcomes.

A Changing Landscape in Healthcare IT Software

For many of the years that I have been involved in Healthcare IT I have had the good fortune of working for EHR software vendor organizations. It was very exciting to develop bleeding-edge technology supporting interoperability standards and to see the benefits realized in real-world use. We broke through many barriers in the healthcare field, although we certainly did not reach the ceiling. However, as in many market sectors, times change, and with such change innovation and new ideas take the front row ahead of what has become solid and stable. EHR solutions are now mostly considered the foundation for the industry, providing a solid framework upon which more complete solutions are being delivered in the interest of bettering patient health outcomes.

In the past few years so many niche applications have arisen to address very specific problem areas. This is partly due to a natural industry shift requiring fresh perspectives on older problems, and partly (in the US) due to government incentive programs driving a need to improve patient health outcomes, such as ACO, PCMH, and MACRA (QPP).

As technology solutions mature, they naturally become more difficult to adapt to changing conditions. The more components a solution has, the more effort it requires to update those components while retaining backwards compatibility, and the harder it is to craft deployment packages that do not interrupt existing live production systems in significant negative ways. Managing a technology solution over the long term definitely comes with its challenges as well as its benefits. Newcomers into the market are able to quickly adapt to changing requirements, freely building innovative and creative solutions that are not so dependent on existing components and on choices made earlier in development by the more established solutions. These newer solutions can be plugged into the older solutions to address gaps in functionality that might not otherwise be addressed. This is the nature of software engineering and development as it has progressed since its inception in the middle of the last century.

The other driver of new technology solutions in healthcare is that of federal incentive programs placing a strong emphasis on improving patient health outcomes. Research is beginning to show that improving health outcomes for patients across a population takes more than just using a certified EHR system or following the medical practice guidelines from the appropriate medical college. Rather, it involves a much more complete and holistic approach to medical practice that looks not only at the clinical facts of a patient, but engages the patient relationally, understanding the impact that social determinants have on that particular patient. Understanding the mental health of the patient is also quite important, and it is oftentimes very difficult to properly diagnose and treat for a number of good reasons. To address these sorts of issues, doctors must find ways to meet patients where they are today, using tools and approaches that are natural to patients. This means making use of apps that are developed against popular platforms, interact with social media, are intuitive to use, and integrate with existing systems of all kinds to provide seamless access to and management of the patient's health data and care from both the provider and patient perspectives.

This IS the new face of healthcare: the combination of the foundational EHR vendors' existing solutions with newer solutions that focus in on very specific problems that need solving. And really, this is not much different from how technology has always been managed. As solutions age, various components of those solutions are wrapped with interfaces that abstract them away, providing a way for the newer, more efficient, and many times more socially accepted solutions to interact with the older technologies. By the same token, wisdom is valued more than mere intelligence or trendiness. That wisdom is grounded in the foundational participants in healthcare IT who continue to drive forward standards-based development efforts, focusing on the goal shared between patients, providers, and payers: improved patient health outcomes for all.


High-quality listening skills are so important to many roles across various types of organizations. I am reminded of this constantly. Recently, that reminder came through watching a documentary about the assassination of US President James Garfield in 1881. Dr. Willard Bliss, the physician treating President Garfield, was guilty of not listening. He was perhaps blinded by his own ambition, or maybe his ego. Dr. Bliss was not an uneducated man; he understood the importance of listening, and he had to in order to achieve the status he had. When Alexander Graham Bell visited the injured president to offer the use of his newly developed metal-detection technology in an attempt to find the bullet lodged in his body, Dr. Bliss would not allow Bell to test the instrument on the president's left side, claiming that the metal springs in the bed were interfering with the device. Why Garfield could not be moved to another bed without metal springs, or why Bliss did not simply allow Bell at least to try the opposite side of Garfield, will likely never be fully known. Regardless, the hindsight wisdom is of course that he should have been more open to alternatives, putting aside his ego, ambition, or whatever was in the way of his ability to listen, in the best interest of his President and patient.

In a software company, listening to customers is also quite important. There are two parts to listening: the actual act of listening, and the interpretation of what is being communicated. The latter is often achieved by asking the right questions to ensure the sharer of information provides all of the context and information necessary to make a good decision. In a product management role, the act of listening to the customer is often driven more by the organizational structure than by the individual with the opportunity to listen. What I mean by that is that some companies promote as much communication as possible between product analyst teams and their end users (customers of the product), understanding that the benefit of that communication is, far more often than not, positive improvements to the product's feature sets. Other companies stifle this communication with a process in which some teams build the product and other teams manage the customer relations. The idea is to isolate or protect the builder teams from too much customer chatter to keep their productivity levels high. But the problem is the same one found in the game of Telephone (Chinese Whispers): the resulting message communicated to the builders, having originated with the client engagement folks, is not the intended message! Building software is a complex task, and there is much room for error when taking this approach.

As the software industry matures, one of the leading ideas that has introduced an incredible paradigm shift in this regard is the Agile Manifesto, an idea that many of you are likely very familiar with. Following the Agile Manifesto, and the associated methodologies built on its principles, ensures that you and your team are outstanding listeners. In fact, the third value in the manifesto is "Customer collaboration over contract negotiation." Agile also supports a very strong idea of accountability, ensuring that you and your teammates are all good listeners, among other things. So how is your listening? Are you asking the right questions by deeply processing what your customers are saying and applying critical thinking?

Use Cases – A Path to Clarity

Some call them user stories, others call them use cases, but they serve much the same purpose: to align the business and technical interests in a project. Use cases have many benefits when used correctly, and I have seen them used effectively in several different areas of life, including healthcare standards, software development, and family decisions.

Use cases are beneficial in many ways. At the most basic level they bring needed clarity to a situation by stepping back to consider what is really being achieved. I've seen so many software developers jump straight into designing and coding without vetting the business requirements first. And I know this especially well because I used to be one of those developers (it is an impulse problem, and as developers gain experience they should learn to control it). Use cases act as a bridge between the business and technical interests by providing a communication path between these different groups. In earlier software design and development models, the generally accepted approach was for the business analyst/product team to thoroughly vet the requirements, including the use cases. Once everything was vetted properly, which could take months, the documentation was handed off to the development team to execute on and deliver software. The problem with this model is that there is little to no communication between those two groups. As we have learned over the past decade or so, requirements often change before software is delivered to the field, and methods have been created to combat this challenge. Use cases and user stories often serve as a central discussion point in support of such methods. Use cases also confirm the goals of the project by illustrating what the software (or whatever the use case is being applied to) is supposed to achieve. It is a way to "check your answer," much as one would do to validate a math problem.
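One lightweight way to keep that bridge intact is to capture the use case as a shared artifact that both the business and the developers review. Here is a minimal sketch in Python; the scenario, actor, and steps are entirely hypothetical and stand in for whatever your project actually needs:

```python
# A minimal sketch (all names hypothetical) of capturing a use case as a
# structure that can be rendered for business review and kept next to
# the code that implements it.

from dataclasses import dataclass, field


@dataclass
class UseCase:
    actor: str
    goal: str
    steps: list = field(default_factory=list)


# The business-facing narrative, written before any code:
submit_refill = UseCase(
    actor="Clinic nurse",
    goal="Submit a prescription refill request",
    steps=[
        "Nurse selects the patient record",
        "Nurse chooses the medication to refill",
        "System sends the refill request to the pharmacy",
        "System confirms the request to the nurse",
    ],
)


def acceptance_summary(uc: UseCase) -> str:
    """Render the use case as a checklist the whole team can review."""
    lines = [f"As a {uc.actor}, I want to {uc.goal.lower()}."]
    lines += [f"  {i}. {step}" for i, step in enumerate(uc.steps, 1)]
    return "\n".join(lines)


print(acceptance_summary(submit_refill))
```

Because the same object feeds both the readable summary and, potentially, acceptance tests, the narrative and the implementation are less likely to drift apart as requirements change.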

I have much experience with use cases in the realm of healthcare information technology standards profiling through my work within Integrating the Healthcare Enterprise (IHE). A profile in IHE provides implementation guidance for systems in the healthcare domain, and one of its base-level requirements is a use case, or set of use cases. I have been involved in the development of over 15 IHE profiles over the past several years, and every time I discuss a profile with someone, be they architects, engineers, product managers, or VPs, I always come back to the use case at some point. It is a very effective way to center the conversation on one central goal and to carry a consistent message across these different groups.

Use cases can often be applied effectively to planning family excursions as well. My wife and I are very fortunate to have four children, which means we must thoroughly think through any vacations, day trips, or other traveling we choose, or are required, to do as a family. Although we do not use the same terms I would in my professional career, we use the concept of use cases constantly by playing out what a trip will look like. For example, if we are planning a camping trip, we need to plan what to bring (because no one wants to be stuck in the woods without the supplies needed to care for a two-year-old). So we play it out in conversation. My wife may say to me, "Honey, we will be changing a few diapers while we are in the woods, so we will need to bring plenty of supplies." And I may say, "Ah, of course! Let's be sure to pack those items so we do not end up in an undesirable situation." This is a bit of a stretch, because we would not actually need such an explicit conversation; at this point in our child-rearing experience we inherently know that diapers, wipes, and a sealable trash bag are a must on our packing list. The point I am making is that all of us do this in our heads on a daily basis, whether for a camping trip, a grocery store run, or anything else that needs to be planned. It is a natural part of our psyche, so it makes sense that it can be applied effectively in a work situation as well.

When use cases are not used for a project, or not used properly, many negative consequences can follow. The worst-case scenario is a complete mismatch between what the stakeholders are expecting and what the development team delivers. In the camping-trip example, this would be one spouse failing to deliver what the other spouse is expecting. Projects that fail to keep communication lines open between stakeholders and the development team run a very high risk of landing in this situation. It is easier than you might think to accidentally "fool" the stakeholders about what is being developed. Without consistent hands-on functionality demonstrations and reviews, it is hard to get a real pulse on how closely the development work aligns with the project requirements.

Use cases help mitigate this risk by bringing all parties to agreement on, and a shared understanding of, the story the software should tell: the functionality the user will interact with, the problem being solved, and how it is solved from the end user's perspective. Skipping use cases usually results in developers writing code as directed but without fully understanding why, which carries a high risk of delivering a solution far off the original target as understood by the stakeholders. In the camping example above, this would mean a messy situation in the middle of nowhere, in which case I would at least hope there is a water source and the temperature is not too cold! In a work environment it means money lost, unhappy customers, or worse: patients not properly cared for because the system the clinician is using does not provide the appropriate functionality.

Use cases need not be complicated. If you are in doubt about how to start, my suggestion is simply to write a short narrative of what you want to accomplish. If you find your narrative becoming overly complex, split it into more manageable pieces. Most importantly, remember that use cases are meant to clarify the business requirements, improve communication, and ultimately produce better results in whatever situation they are applied to.
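To make the "split it apart" advice concrete, here is a toy illustration (the clinical scenario is hypothetical): one sprawling narrative broken into two focused use cases, each small enough to discuss in a single conversation:

```python
# Toy illustration (hypothetical scenario): an overly broad narrative
# split into focused use cases, each covering one coherent goal.

too_broad = (
    "A nurse looks up a patient, refills a medication, prints the "
    "visit summary, and schedules the follow-up appointment."
)

# The same scope, split into more manageable pieces:
split_use_cases = [
    "A nurse looks up a patient and refills a medication.",
    "A nurse prints the visit summary and schedules the follow-up appointment.",
]

for i, uc in enumerate(split_use_cases, 1):
    print(f"Use case {i}: {uc}")
```

Nothing about the scope changed; each piece is simply now small enough to be vetted, estimated, and demonstrated on its own.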