With a rich background in both academia and industry, Michelle Wiest provided fascinating insight into her current role assisting with the development and marketing of early blood-based cancer diagnostics. The conversation covered a lot of ground, from a whistle-stop tour of her career to how she views the process of working with regulatory authorities to move diagnostic product development projects forward.
Michelle brings 18 years of research experience in both academia and industry to the podcast. She first entered industry after graduating from UC Davis, working at a small biotech company in the Sacramento area while finishing her PhD. From there, she spent 12 years at the University of Idaho, where part of her role was running the Statistical Consulting Centre. During this time, she also spent a period in Australia as a senior statistician at the Murdoch Children’s Research Institute before leaving the university to return to industry. At the start of the podcast, Michelle explains that the desire to work more closely with patients was a big driver behind the move.
Michelle’s interview gives an in-depth perspective on the practical considerations of building datasets for clinical trials and the role of machine learning to facilitate this. However, if you are looking for a quick read, our five highlights are:
Further reading: Michelle has a number of papers on a broad range of topics from her time as an associate professor at the University of Idaho. You can also read more about work at Freenome through publications listed on their website.
One of the most interesting aspects of the conversation was discussing her first-hand experience working with the FDA and its attitudes to machine learning.
In the past 12 months, the FDA has been vocal about how it intends to address the rise of machine learning (ML) within drug discovery. The regulatory authority has noted a significant increase in the number of drug and biologic application submissions using AI/ML components, with more than 100 such submissions reported in 2021. Part of its response to this growing trend was a discussion paper published in May 2023 examining current and potential future uses of AI/ML, along with the possible concerns and risks associated with these innovations.
One of the big areas of concern, and part of our discussion with Michelle, was the challenge of avoiding bias in the data used to train algorithms, and how that bias can be mitigated. The discussion paper also emphasizes the need for human involvement, and asks how to embrace these technologies without increasing the risk to patients who use the products of an innovative approach.
This prudent approach can also be seen in a more recent FDA announcement, which states its proposed position on laboratory-developed tests, or LDTs.
LDTs are in vitro diagnostic products (IVDs), and the FDA plans to “make explicit that IVDs are devices under the Federal Food, Drug, and Cosmetic Act, including when the manufacturer of the IVD is a laboratory. Along with this amendment, the FDA is proposing a policy under which the agency intends to provide greater oversight of LDTs, through a phaseout of its general enforcement discretion approach to LDTs.”
The motivation for tighter regulation around LDTs is to encourage responsible innovation, but it is widely acknowledged that this stance could have unwanted side effects, such as increasing the cost of developing new and potentially beneficial LDTs.
This puts more pressure on designers and developers not to waste resources during the development of their diagnostic tools, which will inevitably deter them from exploring more speculative paths. There is a distinct possibility that tighter regulation could stifle innovation.
However, the FDA's intention is not to hamper innovation, but to balance it with safety, and using data effectively can help achieve that balance. Balanced experimental processes and datasets, adherence to best practices, and structured, repeatable methods for your data team all result in an efficient learning process. This lowers overhead and supports compliant practices that tread the line between safety and progress set out by the FDA. More and better-curated data can lead to more efficient innovation.
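As a small illustration of one such repeatable practice, the sketch below shows a stratified train/test split in plain Python. Keeping case/control proportions consistent across training and evaluation data is one simple, auditable way to reduce sampling bias when building a diagnostic classifier. This is a hypothetical example for illustration, not a description of any specific company's pipeline; the function name and data are invented.

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=42):
    """Return (train_indices, test_indices) such that each class
    contributes roughly test_frac of its samples to the test set.

    Using a fixed seed makes the split repeatable, which matters
    for audit trails and regulatory documentation."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)

    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_frac))
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# Hypothetical cohort: 80 controls and 20 cases (a 4:1 imbalance).
labels = ["control"] * 80 + ["case"] * 20
train, test = stratified_split(labels)
```

A naive random split of an imbalanced cohort like this can easily leave the test set with too few cases to evaluate sensitivity meaningfully; stratifying guarantees both splits preserve the 4:1 ratio.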
Although we can see that AI and ML are an area of concern for the FDA from a safety perspective, when used in a sensible and well-structured way, they can further scientific progress while supporting the FDA's commitment to patient safety. If you want to explore the potential of data within your organization with experts in implementing data projects within the biotech industry, get in touch for a free analysis and consultation.
Want to hear the full podcast? Listen here: