Inside the Bubble 2019

Watchcat

Trusted Information Resource
Inside the Bubble 2019

In 2017, I spent an unhealthy amount of time inside the bubble, knowing I would not have any time for it in 2018. After I staggered back out again, gasping for air, I posted a multi-part series on LinkedIn, titled “Inside the Bubble.” This year, I went back inside, but I didn’t overdo it like I did in 2017.

Since I spent less time there, you might think I wouldn't have nearly as much to say, but these days I'm watching with better-informed eyes than in 2017. This doesn't mean that now I know What's Really Going On, but I think I might be getting warmer. In any case, this is just a heads-up that I'm about to start posting some thoughts and observations from inside the bubble, again....
 

Watchcat

Trusted Information Resource
Inside the Bubble - Failure

My main takeaway from The Bubble this year was that Jeff Shuren has become quite taken with the idea of CDRH failing. Apparently CDRH needs permission, and/or the ability, to fail at "modernizing." Since I avoided The Bubble in 2018, I'm not sure how long this has been going on, but I for one hope it doesn't go on much longer.

For my part, I seem to have become obsessed with the whole thing, probably the psych major in me as much as the RA professional. I'm still thinking about it, but if I wait to post until I reach a conclusion ("the point at which you stopped thinking"), I'll never post anything. Plus I never stop thinking. ;) So I'll go with where my thinking is now.

I've thought a good bit about who started this anyway, and to what purpose. I don't mean the rhetoric about CDRH failing, I mean the whole meme about needing permission to fail, or the ability to fail, whichever.

This meme is frequently found on LinkedIn, where it typically refers to management giving their employees permission to fail. In this context, both "permission to" and "able to" tend to mean the same thing, because employees often feel incapable of acting without permission, given indirectly in the form of policies, procedures, work instructions, or directly from their supervisor. Getting permission to fail is seen as a way to avoid unpleasant consequences of failure. While it can work out that way, I wouldn't count on it.

Personally, I've always been more inclined to go with "forgiveness over permission." Or maybe that should be "forgiveness over risk of permission denied." If what I'm doing is important to me, and I have a reasonable level of confidence in what I might fail at, I'm going to do it regardless. When it comes to unpleasant consequences, doing something after being told not to do it is obviously the least desirable scenario. I think maybe permission is often asked when the asker doesn't much care about, and/or lacks confidence in, what they are asking permission to do.

My MBA side tends to think of failure more in the context of risk and reward. I see the relationship between risk and reward as something of a natural law. Many have tried to subvert it, but still it stands firm.

"Permission to fail" creates an illusion of risk where there is none. If permission will protect you from unpleasant consequences of failure, then, if you have permission to fail, you are really risking nothing at all. If anything characterizes "innovation" as I know it, it is a desire to reap reward without exposure to risk.

"Need to be able to fail" creates the illusion of being unable to fail. Since there is very little in this life you are not fully capable of failing at if you want to, that phrasing strikes me as disingenuous, or maybe it's denial. Or maybe both. Creating an illusion of being unable to fail, that also fits with "innovation" as I know it.

Shuren seems to always speak of failure in the context of "modernizing" and "fostering innovation," so I'm inclined to think his interest in failing is more likely to be found in this context than that of LinkedIn.

When I first posted about this, Shuren had said that CDRH needs permission to fail, and a number of people offered suggestions about what he really meant. Later in the year I got the rewrite, or maybe just a different version, tailored to a different audience, which was that CDRH needs to be able to fail, but not in a way that endangers patient safety. Perhaps this answers the question of what he really meant to say the first time. Or perhaps not.

What I understand least about the whole thing is the saying of it. Even more, the repeating of it. If you need permission, get it. If you need to be able to fail, just do it. Don't talk it to death. Or could this be the whole point?

Speaking of not talking something to death, I think I'm well past that point with this topic, and hopefully will not be taking it up again. This was something of an attempt at exorcism.
 

Watchcat

Trusted Information Resource
Inside the Bubble - Pooling

I learned about the history of the Medical Device Innovation Consortium (MDIC) at its town hall meeting this year, never forgetting for a moment that history is written by the winners. Or at least by whoever has won control of the microphone, which may be the same thing.

MDIC was started in 2012. Guess where? Yep, Minnesota. Guess whose idea/fault it was? Jeff Shuren's.

I've been following MDIC since 2014, by which time I'm sure its history was already being rewritten, as this tends to be an ongoing process with organizations. This year's history didn't seem consistent with some earlier impressions, so I went back to the 2018 meeting. A different person in control of the microphone, and the credit/blame apportioned a bit differently then.

No matter. I'm writing about my trip inside the bubble this year, not what happened while I was otherwise occupied last year, so I'll stick with the 2019 version. I find it more entertaining, anyway. In this version, Shuren was motivated by his conclusion (there's that word again) that CDRH wasn't going to get any more money out of Congress. The solution was for CDRH and…someone…to pool their resources.

This brought to mind the line about how suicide is a permanent solution to a temporary problem, closely followed by the hope that this thought didn't prove to be prescient. At least, not the suicide part. It seems the temporary part is already a done deal. As best I can tell (it's the federal budget, after all), Congressional appropriations for FDA took a bit of a tumble circa 2012 (along with pretty much everything else in the US economy), bounced back in 2014, and have been on the rise ever since, with 2019 being something of a banner year.

In the meantime, the solution to this temporary problem looks to go on, if not permanently, at least indefinitely. Only now it isn't about a lack of Congressional funding, it's about "building trust" and…wait for it…"a sense of community." Oh Lord, Kumbaya. Whether this new focus is also temporary, one can only hope. On the bright side, at least now I have a much deeper appreciation of the phrase, "a solution in search of a problem."

As for the pooling of resources, I know I'm not the only one who would be interested to know how much of whose resources are being "pooled" for what these days. On the CDRH side, I learned that 100 FDA staff are now involved with MDIC. (Is it just me, or does this statistic not scream Best Lightbulb Joke Ever? Maybe I'll run a contest.)

Kinda makes me wonder who is really "pooling" whom here...
 

Watchcat

Trusted Information Resource
Inside the Bubble - The Cardio Club

The Cardio Club was out in full force this year, sigh. It seemed to be totally in control of MDIC’s early feasibility study project, which has been struggling for four years now to address a national emergency, the wholesale departure of early feasibility studies from the US. Whether they fled by plane, train, boat, car, or by scaling a wall in the dead of night, no one has said.

Everyone on MDIC’s panel on early feasibility studies was a member of the Cardio Club, leading me to ponder possible reasons why:

• Was no one from any other therapeutic area invited to participate? Or did they just not want to?

• Did someone think it would be appropriate for the Cardio Club to come up with a one-size-fits-all approach to early feasibility studies for the entire industry? Or is this project intended to “foster innovation” only by the Cardio Club?

• Is it only the Cardio Club that needs “fostering”? Is it only the Cardio Club that can’t figure out how to do an early feasibility study all by themselves?

I encountered the Cardio Club again at the Patient Engagement Advisory Committee (PEAC) meeting, where the scenario for the roundtable discussion was the potential hacking of…drum roll…a cardio implant.

Having sort of lost it during the roundtable discussion at the last PEAC meeting, I was hoping to be able to keep calm and carry on at this one. Now it looked like the discussion could be wading into waters where I might feel compelled to lose it again. But no! Someone from one of the big medical device companies piped right up and advised the FDAer who was moderating our discussion that CDRH really needed to get over this, leaving me free to sit back, relax, and enjoy. Which I did.

Although I wasn’t able to figure out What Is Really Going On with the Cardio Club this year, when I got home and sorted through (and followed up on) some of the bits and pieces I’d brought back from The Bubble, I think I found some clues. Should they lead anywhere interesting, you’ll be the first to know.
 

Watchcat

Trusted Information Resource
Inside the Bubble – Insecurity

If you reveal your secrets to the wind, you should not blame the wind for revealing them to the trees.
-- Kahlil Gibran


The topic at this year’s Patient Engagement Advisory Committee (PEAC) meeting was cybersecurity. This has become my favorite domestic regulatory meeting, and I was especially looking forward to this one, because I thought the topic might bring out more experts and fewer wide-eyeds. I found that, when it comes to cybersecurity, these are not mutually exclusive groups, but the cybersecurity wide-eyeds tend to run a bit deeper (geekier?) than some others.

A CDRH presentation described patients as an “asset.” Merriam-Webster defines an asset as “an item of value owned.” The presenter defined assets as “things that we care about” and “things that are worth protecting,” apparently “things” being the operative word here. The other things cited as assets were medical devices and patient data. I wasn’t especially surprised by this perspective, but I was a little surprised to hear it said out loud.

The same presenter advised the Committee that “Providing probabilistic estimates of a threat exploiting a vulnerability is unknowable.” This struck me as more of a position firmly taken than a conclusion thoughtfully arrived at. It was a departure from the material in the slides, and the syntax made no sense. At first I thought he meant to say (or that someone else meant for him to say) that the probability that a threat will exploit a vulnerability is unknowable. But that didn’t seem to mesh with whatever he was trying to say about providing probabilistic estimates, which was…I think…that, given the unknowable, you couldn’t provide them. But isn’t this what estimates are for? If you don’t know the probability that something will happen, you estimate it. If you do know the probability, then you don’t need to estimate it, right?
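Just to illustrate the difference between “unknowable” and “not estimable,” here’s a minimal sketch (with entirely hypothetical counts, since nobody has handed me real exploitation data) of estimating an unknown probability while being honest about the uncertainty around it:

```python
# Minimal sketch: estimating an unknown probability, with uncertainty.
# The counts below are hypothetical, purely for illustration.
from statistics import NormalDist

exploited = 14   # hypothetical: vulnerabilities observed exploited in the wild
tracked = 200    # hypothetical: similar disclosed vulnerabilities tracked

# Point estimate of the probability that a vulnerability gets exploited.
p_hat = exploited / tracked

# A rough 95% confidence interval (normal approximation) expresses how
# uncertain the estimate is, instead of pretending to certainty.
z = NormalDist().inv_cdf(0.975)
half_width = z * (p_hat * (1 - p_hat) / tracked) ** 0.5

print(f"Estimated exploitation probability: {p_hat:.3f}")
print(f"Approximate 95% interval: "
      f"({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")
```

The point isn’t the numbers, which I made up; it’s that not knowing a probability is exactly the situation estimation exists for, and the interval is where the “unknowable” part gets acknowledged.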

I subscribe to the long-standing and variously attributed wisdom that “it’s not what you don’t know that gets you into trouble, but what you think you know that you don’t know.” My impression was that CDRH thinks it knows something about the probabilities associated with other types of risks, and, further, that it thinks this knowledge can’t be appropriately applied to cybersecurity risks, because of that whole unknowable thing. I wondered if the problem might lie more with what CDRH thinks it knows about other types of risks, than with what it thinks it doesn’t know about cybersecurity risks. Or maybe CDRH is painfully aware that it doesn’t know much about other types of risks, either, but would find it rather awkward to say so after all these years. Maybe, with cybersecurity risks, it’s trying to curb expectations from the get-go.

One of the Committee members pressed CDRH on patient access to their data, noting that one of CDRH’s slides showed a lot of players circled around the patient, communicating and coordinating, but the patient had been left out of the loop. The CDRH response agreed that patients should have access, but this access seemed limited to data as it pertains to cybersecurity breaches, not to all the data their device might have to offer, which is what the patients seemed to want.

There was only one presentation by a medical device company. It was a nice presentation about the company’s response to reports of cybersecurity vulnerabilities involving its devices. It seemed reminiscent of the CDRH slide, with a lot of collaborating, but no reference to patients being engaged in that process.

AdvaMed offered a list of cybersecurity principles it had adopted. I thought they were okay principles, but somehow I had been expecting something more substantive from the industry’s largest trade association. One of the Committee members asked what light these principles might shed on how AdvaMed would recommend that companies include patients in decision making and think about their communications to patients. The answer was that it would recommend following FDA’s guidance. So I guess its members really got their dues’ worth there.

Most of the public speakers were either cyber MDs or cyber patients, which is to say, they knew far more about cybersecurity than most MDs and patients. Several had hacked their own devices. They seemed to be pretty uniform in their priorities, which were more transparency, more communication, and more control. In one patient’s words, “my device, my body, my life, my choice.”

A cyber MD currently associated with a university medical center gave a nicely thought-out presentation on informed consent and cybersecurity. I paid close attention, and saw no indication that he viewed this as an academic exercise. He really seemed to think that informed consent was part of medical practice. This puzzled me until he made a reference to doctors with “an informed consent document that they're going to do two dozen times that day in the OR.” Ah. One of the Committee members picked up on it too and asked if this wasn’t a little late to be seeking a patient’s consent. I wondered if the Committee member was thinking what I was thinking, which was that, call it what you will, a form that a patient signs in the OR is not a consent form. It’s a release form.

Based on the public presentations and the roundtable discussions, the non-cyber patients struck me as more pragmatic than the experts. Perhaps that’s because for patients this is only a matter of life and death, while for other players it’s a matter of financial, legal, political, and professional liability. These patients wanted their doctors to be their first point of contact in the event of any issue with their device, but they did not expect their doctors to be cybersecurity experts, nor did most of them want to become cybersecurity experts themselves. They simply wanted timely notification about any cybersecurity threat, practical information about what options were open to them in the event of a threat, and to be advised of sources for further information. Most were not amenable to the notion that anyone should sit on the information until everything got reviewed, analyzed, figured out, and maybe also resolved. (In other words, they weren’t interested in being kept in the dark until everyone else involved had had ample opportunity to C their A’s.)

Most thought manufacturers should be the best sources of information about their devices, and the actions patients might take in the event that a device they manufactured was breached. They thought doctors should be the best sources of information regarding the clinical ramifications of these options for individual patients. They did not think that either manufacturers or doctors were where they needed to be, but generally seemed to accept that this was still a brave new world for everybody.

Where there was dissatisfaction, it seemed to be directed primarily at healthcare providers, and secondarily at manufacturers, which I thought was appropriate. The dissatisfaction with healthcare providers seemed a bit more intense, probably because patients expect their healthcare providers to act in their best interests, but they had no such expectations of manufacturers. From manufacturers, they seemed to want competence. Don’t we all. No one seemed to be unhappy with CDRH, which I found refreshing. Perhaps this was because, unlike almost everyone else involved, the patients themselves didn’t seem to have a lot of expectations of CDRH, which I thought was also appropriate.

A number of the presenters were affiliated with one organized group of hackers or another. This took me back to my grassroots days, when I learned how easily such groups are taken over and manipulated to address agendas other than the ones they think they are pursuing. Who might want to bend a hacker group to their own agendas, and what those agendas might be…quite the intriguing question, no?

There was an undertone of concern emanating from CDRH and other players related to cyberattacks, national security, and nation-states, which not many patients seemed to share. My takeaway on this topic was that, inside the bubble, some “assets” are more worthy of protection than others. Perhaps rightly so, but maybe not nearly as much as they might like to think.
 

Watchcat

Trusted Information Resource
You are very welcome, but..."Info" might be overstating it. Mostly musings on my part, based on my perspective, which I think is at least as limited as anyone else's.
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
Our CE Mark NB recently asked us if we were compliant with the latest FDA cybersecurity requirements. I thought it was odd (EU regulators invoking USA regulations), but whatever.

We are now attempting to navigate GDPR compliance for EU healthcare. Some of the work is valuable, but a lot of it seems like "this feels good" stuff. As if a Code of Conduct stating we obey laws means "We obey laws." It's more of a useless platitude. Obeying the law should be the baseline, not an elevation.

My personal policy is that if you put personal information on the internet, there is a non-zero chance it will be exploited eventually. Just assume it has been exposed, but that your data isn't valuable enough to exploit compared to your neighbor's. Criminals have priority lists as well, ya know.
 

Watchcat

Trusted Information Resource
That may be an indication of how overwhelmed the NBs are right now.

I agree. I think obeying the law is a compliance mindset, not a regulatory mindset.

They say the same thing about home security: you can't realistically expect to make your home impenetrable to criminals, but you don't need to. You just need to make it look less penetrable than your neighbors' homes.
 