In our latest Minds Behind VICT3R, we’re catching up with Heinz Schwerdtfeger from Syncwork AG, who leads WP9 of the project. Heinz and his team are making sure that the tools developed in VICT3R are properly validated and meet Good Laboratory Practice (GLP) standards, using what’s known as Computer System Validation (CSV).
1. Could you briefly introduce yourself and your role in the VICT3R project?
Four decades ago, my career started as a Computer Science Assistant at the Hahn-Meitner-Institute in Berlin, Germany. Initially, I worked as an application programmer in the laboratory and manufacturing environment before transitioning into IT consulting for supply chain within the pharmaceutical industry. After completing the International Project Leadership Academy, I assumed global project leadership roles in pharmacovigilance IT initiatives. Over the past 15 years, I have gained extensive expertise in computer system validation, including IT test management. Since November 2024, I have been serving Syncwork AG as a Management Consultant for Life Sciences in Berlin and acting as Computer System Validation lead within the VICT3R project. I take pride in contributing to this project and collaborating with highly skilled consortium members who are developing innovative methods for virtual control groups delivered by the VICT3R application.
2. Syncwork brings strong expertise in computer system validation (CSV). For those less familiar, could you explain what CSV is and why it’s important in the context of VICT3R?
CSV is a documented process driven by GxP (good practice) guidelines, ensuring that computerized systems in pharmaceutical environments consistently perform as intended, safeguard data integrity, and meet regulatory requirements. For VICT3R, validation confirms that all components (hardware, software, databases, algorithms) are correctly installed, successfully tested, maintain data integrity, operate reliably and safely, and achieve compliance and regulatory acceptance by agencies such as the EMA and FDA.
As VICT3R plans to incorporate artificial intelligence (AI), new validation challenges arise. AI systems involve complex training, adaptive algorithms, and data-driven decisions, requiring extra steps to ensure accuracy, reliability, and freedom from bias. This includes rigorous testing on representative datasets, periodic re-validation to mitigate model drift, and thorough documentation to meet regulatory and quality expectations.
3. GLP compliance is essential for regulatory acceptance. What do you see as the biggest challenges in implementing CSV for innovative, data-driven tools like Virtual Control Groups in the framework of a consortium?
a) Harmonization:
Not all consortium members were familiar with CSV, its terminology, and the associated documents. A quality guidance document serving as a CSV framework was established to harmonize and align the members involved.
b) Transparency & Regulatory Acceptance:
Regulatory frameworks may not be directly suited to the characteristics of innovative tools, as AI systems can be complex and act as “black boxes”.
Traceability of decisions is essential when those decisions influence regulatory outcomes.
Demonstrating that such tools reliably produce accurate, unbiased results requires extensive documentation. CSV and GLP standards demand a careful, risk-based approach and may push the boundaries of regulatory expectations.
A consortium can strive for regulatory acceptance more effectively than any single company could alone.
Balancing the transformative benefits of data-driven AI with the established protocols required for regulatory compliance ensures that innovation does not compromise patient safety or data integrity.
4. In your view, what needs to be done to achieve validation and compliance for AI-based tools in toxicology and safety science?
A comprehensive approach is essential and includes:
– clearly defining the tool’s intended use,
– rigorous testing with real-world data,
– ensuring transparency and explainability,
– maintaining high data integrity,
– applying a risk-based monitoring strategy,
– engaging with regulators early, and
– documenting the entire AI lifecycle.
These steps build scientific credibility, regulatory trust, and long-term acceptance of AI in these areas.