Electronic Data Capture Wishlist

Electronic Data Capture: Dreams, Desires, and Demands when evaluating different EDC systems

Building a Wishlist

There were no kittens in the room. However, this is definitely what a brainstorm looks like.

This week my team at work met to brainstorm what our ideal Electronic Data Capture (EDC) system would look like and how it would perform. The brainstorm process was really fun and we covered a lot of territory. Now we have to review the list to determine which items are requirements and needs and which are wants and preferences. The list we come up with will be helpful as we flesh out our standard Request For Proposal (RFP) template and bid grids for new vendors and review our current vendors.

EDC Nice-to-Haves

For today’s post, I thought I would share some of the items we discussed that were interesting, unusual, humorous, or truly pie-in-the-sky wishful thinking. Some of these are firmly “nice-to-haves” rather than must-haves.

Alerts & Communication

We love the idea of customizable alerts and notifications. It would be great if a user could opt-in to emails or texts regarding pending data entry, outstanding queries, important study announcements, etc. We also like the idea of a study communications center or portal available from within the system or even in an optional newsfeed in the sidebar.
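As a rough sketch of the kind of opt-in I’m picturing (the channel names, event types, and field names below are invented for illustration, not from any real EDC system), user-level notification preferences might look something like this:

```python
# Hypothetical sketch: per-user, opt-in notification preferences.
# Channel names, event types, and keys are invented for illustration.
notification_preferences = {
    "user": "site_coordinator_01",
    "channels": ["email"],            # could also include "sms"
    "subscribed_events": [
        "pending_data_entry",
        "outstanding_queries",
        "study_announcements",
    ],
    "digest": "daily",                # immediate alerts vs. a daily summary
}
```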

Incentives & Positive Feedback

It turns out a lot of people like earning achievements, badges, check-ins, and accolades when it comes to fitness, dining, or social activities. Why can’t this be applied to EDC users? Maybe they should get a “That was Easy!” button to press at the end of daunting tasks. Perhaps they could earn currency for using the system and exchange it for small incentive items or gift cards. What if the system could dispense a piece of chocolate every 30 minutes as a reward (the idea of system-administered electric shocks was also briefly discussed but then shelved)? As an industry, we always have more opportunity to appreciate individual contributors and reward people for effort; I think we can do more here.

Bird’s Eye View Status Summaries

When anyone logs into EDC, there should be a clear path forward as to what needs attention. If someone logs in to do Data Entry, they should be able to see a tasklist (preferably color-coded) and be able to get right to work. As a program manager, I’d actually like to see the sites all lined up by different performance metrics (like a horse race so I can compare them), but this type of data review is still more of an art than a science; descriptive statistics are just one of many criteria when evaluating operations in a trial, and we have to exercise caution when drawing conclusions without considering all the data.
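As a minimal sketch of the side-by-side view I have in mind (the site names, metrics, and values are entirely made up), lining sites up by an operational metric could be as simple as:

```python
# Hypothetical sketch: rank sites side by side on a few operational metrics.
# Site names, metric names, and values are invented for illustration only.
sites = [
    {"site": "Site 101", "open_queries": 12, "overdue_crfs": 3},
    {"site": "Site 205", "open_queries": 4,  "overdue_crfs": 9},
    {"site": "Site 310", "open_queries": 7,  "overdue_crfs": 1},
]

# Sort by whichever metric the program manager wants to "race" on today.
for row in sorted(sites, key=lambda s: (s["overdue_crfs"], s["open_queries"])):
    print(f'{row["site"]}: {row["overdue_crfs"]} overdue CRFs, {row["open_queries"]} open queries')
```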

Multiple Navigation Paths

I like alternative paths for navigation. My ideal EDC system would have sidebar navigation trees, tabs, links, bookmarks, breadcrumbs, and any number of other methods to perform the same action. I realize this type of architecture is more complex, but different users will want to navigate the system in different ways and I appreciate a system that can accommodate this.

Intelligent Errors

We don’t expect the process of data entry or review to go perfectly, but when there are errors, we are looking for them to be informative enough that the path to the solution is clear. For example, a data entry pop-up that says “non-conformant data” or “unexpected error: type 102” is not nearly as clear as a message like “You have entered a ‘$’ dollar sign character; this character cannot be accepted by the system. Please revise your entry.”
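To make that concrete, here is a minimal sketch of the same edit check surfaced two ways (the field, the rule, and the function name are invented for illustration, not taken from any particular EDC system):

```python
# Hypothetical sketch: an informative validation message instead of a cryptic code.
# The field, the rule, and the function name are invented for illustration.
def check_free_text(value: str) -> str | None:
    """Return None if the value is acceptable, otherwise a human-readable message."""
    if "$" in value:
        # The unhelpful version would be: return "unexpected error: type 102"
        return ("You have entered a '$' dollar sign character; this character "
                "cannot be accepted by the system. Please revise your entry.")
    return None

print(check_free_text("Cost was $40"))      # informative message
print(check_free_text("Headache, mild"))    # None, entry accepted
```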

Predictive Text

If a CRF field collects free text, why can’t that text check its own spelling or, better yet, try to type itself in? When I go to Google and type “What is” I get lots of predictive text entries and I can go right on typing or just grab one of the suggestions from the list. This is so handy when filling in forms over and over again or when worried about spelling. I appreciate that there are some reasons we may not want to “guess” what a user is trying to enter and that there are risks to this approach, but it seems like such an easy time-saver in many ways that would result in cleaner listings and easier analysis of qualitative data.
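A very simple version of this could be prefix matching against terms already entered in the study. Here is a minimal sketch of that idea (the term list and function name are made up):

```python
# Hypothetical sketch: suggest completions for a free-text CRF field based on
# terms already entered in the study. The term list is invented for illustration.
previously_entered = ["headache", "hypertension", "hyperglycemia", "nausea"]

def suggest(prefix: str, terms: list[str], limit: int = 3) -> list[str]:
    """Return up to `limit` previously entered terms that start with the prefix."""
    prefix = prefix.strip().lower()
    return [t for t in terms if t.startswith(prefix)][:limit]

print(suggest("hy", previously_entered))  # ['hypertension', 'hyperglycemia']
```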

Whodunnit? Audit Trails

I appreciate having easy access to form-level and field-level audit trails. I’m completely spoiled because I have worked in several EDC systems that do this elegantly and intuitively. Unfortunately, I have also worked in other systems where the audit database is completely separate from the view used to record or review data, requiring me to run queries and reports in an entirely different part of the system. Reviewing the history of changes for a data field in an entirely separate part of the system is not acceptable.
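For clarity on what I mean by field-level history, here is a minimal sketch of the kind of audit record I’d want surfaced right next to the data point itself (the form, field, values, and structure are invented for illustration):

```python
# Hypothetical sketch: a field-level audit trail entry, the kind of record
# I'd want to see next to the data point itself. All values are invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditEntry:
    form: str
    field: str
    old_value: str
    new_value: str
    changed_by: str
    changed_at: datetime
    reason: str

entry = AuditEntry(
    form="Vital Signs",
    field="Systolic BP",
    old_value="135",
    new_value="153",
    changed_by="site_coordinator_01",
    changed_at=datetime(2013, 6, 1, 14, 32),
    reason="Transcription error corrected against source",
)
print(entry)
```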

Queries that Resolve Themselves

I have been yelled at by countless study coordinators who were frustrated with data entry, in particular query resolution and how long it takes. In most EDC systems, answering queries is entirely too burdensome. In the last system I used, one had to 1) read and understand the query, 2) navigate to the issue, 3) revise the data, 4) confirm the entry, 5) re-confirm the entry, 6) re-mark the CRF complete, 7) re-open the query, 8) respond in the query, and 9) save the query as complete. That is totally ridiculous. So maybe queries can’t totally resolve themselves since they need to be reviewed and addressed, but certainly the steps to address a query can be simpler. In that particular system, if the edit check re-fired when the form was saved and saw that the query no longer applied, it should have closed itself with an auto-closure entry in the audit trail.
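Here is a minimal sketch of the auto-close behavior I have in mind (the edit check, query structure, field values, and function names are all invented), where re-running the check on save closes any query that no longer applies:

```python
# Hypothetical sketch: auto-close a query when its edit check no longer fires on save.
# The check, the query structure, and the field values are invented for illustration.
def systolic_in_range(form: dict) -> bool:
    """Edit check: systolic BP must be between 60 and 250 mmHg."""
    return 60 <= form["systolic_bp"] <= 250

open_queries = [{"check": systolic_in_range, "text": "Systolic BP out of range", "status": "open"}]

def on_form_save(form: dict) -> None:
    for query in open_queries:
        if query["status"] == "open" and query["check"](form):
            # The check passes now, so the query no longer applies: close it and
            # leave an audit entry instead of asking the coordinator to walk
            # through another eight-step resolution workflow.
            query["status"] = "auto-closed"
            print(f'Auto-closed query "{query["text"]}" with audit entry.')

on_form_save({"systolic_bp": 128})
```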

Partial Monitoring

We’ve received feedback from our monitoring staff that the ability to partially monitor a form is helpful to them while they perform Source Data Review. Take, for example, the scenario where an Adverse Event reconciles totally with the source but the “relatedness” field in the source document is blank and can’t be verified. The monitor could mark the rest of the record monitored, leave that one field open for review with the Investigator, and then monitor it later once the source was located.
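A minimal sketch of what field-level monitoring status might look like on that Adverse Event record (the form, field names, and status labels are invented for illustration):

```python
# Hypothetical sketch: field-level Source Data Verification status on one Adverse
# Event record, so a monitor can mark most fields verified and leave one pending.
# Form, field names, and status labels are invented for illustration.
adverse_event_sdv = {
    "term": "verified",
    "start_date": "verified",
    "severity": "verified",
    "relatedness": "pending",  # source was blank; review with the Investigator
}

form_status = "partially monitored" if "pending" in adverse_event_sdv.values() else "monitored"
print(form_status)  # partially monitored
```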

Nothing Too Glamorous – 8 Must-Haves for EDC

If you are dying to know what system requirements and features are on our must-have list (or if you erroneously suspect that we wasted our brainstorming session being silly and non-specific), I assure you there is nothing extraordinarily interesting or groundbreaking, but read on if you are curious. I was surprised that, as a group, we did not come up with many must-haves in terms of aesthetics. At the end of the day, our team doesn’t care if the system is purple, orange, blue, etc. (although we all agreed the ability to customize the data entry view by trial would be helpful for branding individual studies and ensuring our site users don’t enter data for one trial into another trial’s database by mistake). In no particular order, these were our eight current must-have requirements:

  1. We insist on a fast and responsive system that has a short list of technical system requirements (many academic institutions we work with can’t adjust their browser and firewall options so the system needs to be robust enough to operate in a variety of operating environments with very little user interface manipulation).
  2. We need to be able to deploy the system globally and have adequate help-desk support & reporting for international sites.
  3. Data entry personnel need to be able to do partial entry and save Case Report Forms in an “Incomplete” state (in case they want to grab a coffee or some additional source documents prior to completing entry in a long or difficult form, while still being able to secure the system without fear of the dreaded “timeout”).
  4. We have a lot of requirements around Standard Administrative Reports (and how they are refreshed, accessed, and used), and although some are specific to our trial management processes, others are more traditional, bare-minimum requirements such as Screen Fail Reasons & Enrollment Reports, CRF Status (overdue/missing, complete, monitored, locked, etc.), and Query Reports. It really all boils down to a requirement that we can compare sites and operational performance metrics with minimal manipulation of data.
  5. An Ad Hoc Trial Data reporting function is mandatory, and we expect to have filterable default data views that join forms (see Adverse Events linked to Concomitant Medications, for example) and allow extraction of individual data fields and derivation of new variables or, at a minimum, the ability to extract production/live data for use in other trial intelligence and safety oversight systems.
  6. We have some hard and fast rules about how the Query workflow and management tools work. Specifically, we want to be able to modify issued queries (with appropriate audits and permissions), and we also want User A (with the correct permissions) to be able to close a query generated by User B. There are some other specifications we have about queries but, for non-Data Managers, the details are just too boring to elaborate on in this post.
  7. It is essential that the system plays nice with all of our external data sources. We want seamless integration with ePRO, IVRS, site payments, core labs, etc. Ideally, the integration is a two-way flow and data can move in both directions with audit trails (think of the nightmare that is header reconciliation and acquisition number mismatches as an example of what I mean).
  8. We want a good user experience that is compliant with 21 CFR Part 11 and not unnecessarily over-complicated. We defined some requirements here to measure how complicated it is to use the system but as a completely made up example, sign-off in one click is not a requirement but sign-off in 20 clicks is less than ideal.

Your thoughts?

If you’d like to continue this topic discussion, please reach out via email, in the comments, or through my contact form.

I’d also appreciate it if you would follow me on the Clinical Operations Toolkit Facebook page.

About The Author

Nadia

Nadia Bracken, lead contributor to the Lead CRA blog and the ClinOps Toolkit blog, is a Clinical Program Manager in the San Francisco Bay Area.

2 Comments

  • Kate

    June 11, 2013

    Hi Nadia, I am a student doing a report on the clinical trial marketplace for a summer project. I am curious if you have a feeling whether sponsors mandate the type of EDC being used or are program managers able to select their own? I appreciate your thoughts!

    • Nadia

      June 11, 2013

      Hi Kate, sponsors are on the hook for analyzing and publishing the clinical study results, so they absolutely do (or should) influence how that data is collected. The answer to your question actually depends on the roles and responsibilities of the program manager. If they are working at a sponsor company, they may very well lead the functional team that chooses the EDC platform. The choice of platform could also be a responsibility of individual internal trial teams. A sponsor could use several different EDC platforms (or paper CRFs) across various trials or clinical programs. At my company, for example, we don’t have a formal Data Management department and, as a consequence, the clinical group has historically chosen the EDC system(s), with or without input from our CRO or other partners. For some trials, the study site may actually influence which system is selected (take, for instance, a small or single-center Phase I trial). I hope that helps; thank you for following along on the ClinOpsToolkit blog!
