From EHDS requirements to runnable secondary-use pipelines — secure, auditable, and Spark-scalable.

WHAT PONTEGRA PROVIDES
EHDS-ready secondary-use patterns with FHIR-native Spark pipelines
Pontegra helps organisations implement EHDS-aligned secondary-use workflows:
- Secure analysis inside a Secure Processing Environment (SPE)
- Auditable access and processing
- Controlled outputs
All while enabling cohort building and dataset/feature extraction at scale.
Powered by: Spark-on-FHIR, Pontegra’s FHIR-native Spark toolkit.
WHAT THIS MEANS OPERATIONALLY
The EHDS secondary-use model
EHDS enables the re-use of electronic health data for defined secondary purposes through national governance structures (Health Data Access Bodies) and processing in secure environments.
Data users access data under a data permit / authorisation issued via the HDAB process.
Processing is performed in an SPE, with appropriate security measures in place and identifiable logs of access and activities retained.
Outputs are controlled (often via an “airlock” / export review) to minimise disclosure risk.
WHO IS IT FOR
Health data access bodies
Research hospitals
University medical centers
National registries
Public health agencies
Life sciences / RWE teams working under data permits
HOW PONTEGRA HELPS
Secure Processing Environment (SPE) pattern
- Analysts work in notebooks (Jupyter, Zeppelin, or Databricks) inside the SPE
- Spark-on-FHIR runs on the Spark cluster in the SPE
- FHIR source access is brokered and policy-controlled
- All access and output actions are audited; outputs land in session storage and can be promoted via controlled export
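To make this concrete, here is a minimal PySpark sketch of what an analyst session inside the SPE can look like. It assumes the brokered FHIR source is exposed to Spark as flattened resource tables (Parquet in this example); the paths, column names, and condition code are illustrative, not Spark-on-FHIR's actual API.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spe-cohort-session").getOrCreate()

# Brokered, policy-controlled read of the permitted dataset (paths are illustrative).
patients = spark.read.parquet("/spe/permitted/fhir/Patient")
conditions = spark.read.parquet("/spe/permitted/fhir/Condition")

# Simple cohort: distinct patients with an illustrative condition code (E11, type 2 diabetes).
cohort = (
    conditions
    .filter(F.col("code_coding_code") == "E11")
    .select(F.col("subject_patient_id").alias("patient_id"))
    .distinct()
    .join(patients.withColumnRenamed("id", "patient_id"), "patient_id")
)

# Outputs stay in session-scoped storage until promoted via controlled export.
cohort.write.mode("overwrite").parquet("/spe/session/outputs/cohort")
```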

Capabilities mapped to EHDS-ready patterns
Secure processing with auditability
- SPE-friendly execution: Spark pipelines run where the data is, under environment controls
- Audit events & run manifests: who ran what, on which dataset, with what outputs
- Log retention alignment: EHDS expects identifiable logs of SPE access and activity (minimum one year for access logs)
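As an illustration of the kind of information a run manifest can capture, the sketch below records who ran what, on which inputs, with which outputs. The field names, permit reference, and paths are assumptions made for the example, not the toolkit's actual manifest schema.

```python
import json
import datetime
import getpass

# Illustrative run-manifest record: who ran what, on which dataset, with what outputs.
# Field names, the permit reference, and paths are assumptions, not a fixed schema.
manifest = {
    "run_id": "2024-06-01-cohort-e11",                 # illustrative identifier
    "user": getpass.getuser(),
    "permit_id": "HDAB-PERMIT-12345",                  # hypothetical data-permit reference
    "inputs": ["/spe/permitted/fhir/Patient", "/spe/permitted/fhir/Condition"],
    "outputs": ["/spe/session/outputs/cohort"],
    "started_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

# Keep the manifest next to the output so audit and log-retention processes can pick it up.
with open("/spe/session/outputs/cohort_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```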
Controlled outputs (airlock-compatible)
- Outputs written to session-scoped storage
- Support export workflows where outputs can be reviewed/approved before leaving the SPE
- Aligned with TEHDAS SPE guidance that activities are logged and subject to audit.
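The sketch below shows one way an airlock-compatible step might be arranged: the artifact is staged from session storage together with a request record, and a reviewer approves or rejects the export outside the notebook. The paths, staging convention, and request fields are purely illustrative, not a prescribed HDAB or Pontegra mechanism.

```python
import json
import shutil
import datetime
import pathlib

# Stage an artifact from session storage for export review instead of exporting it directly.
artifact = pathlib.Path("/spe/session/outputs/cohort")
staging = pathlib.Path("/spe/export-requests/2024-06-01-cohort")
staging.mkdir(parents=True, exist_ok=True)
shutil.copytree(artifact, staging / "payload", dirs_exist_ok=True)

# Request record for the reviewer; approval or denial happens outside this notebook.
request = {
    "requested_by": "analyst@example.org",
    "requested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "justification": "Curated study dataset for the permitted analysis purpose.",
    "status": "pending_review",
}
(staging / "request.json").write_text(json.dumps(request, indent=2))
```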
FHIR-native analytics at scale
- Load from FHIR server APIs or FHIR lakehouse storage
- Filter using FHIR search and FHIRPath
- Extract analysis-ready tables (SQL-on-FHIR style)
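A minimal sketch of a SQL-on-FHIR-style extraction is shown below. It assumes Observation resources are already flattened into a Spark table; the table path and column names are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fhir-feature-extraction").getOrCreate()

# Assumes Observation resources are available as a flattened table; columns are illustrative.
spark.read.parquet("/spe/permitted/fhir/Observation").createOrReplaceTempView("observation")

# Analysis-ready table: one row per patient per LOINC-coded lab result.
labs = spark.sql("""
    SELECT subject_patient_id   AS patient_id,
           code_coding_code     AS loinc_code,
           value_quantity_value AS value,
           value_quantity_unit  AS unit,
           effective_datetime   AS observed_at
    FROM observation
    WHERE code_coding_system = 'http://loinc.org'
""")

labs.write.mode("overwrite").parquet("/spe/session/outputs/labs")
```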
What EU researchers actually need
- Cohort extraction: entry/exit/eligibility logic, repeatable runs
- Sampling: periodic or event-aligned timepoints
- Dataset & feature extraction: aggregations over windows, longitudinal feature tables for RWE/AI
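For example, a longitudinal feature table with monthly sampling can be produced with standard Spark aggregations, as in the sketch below (paths and column names are illustrative; it reuses the cohort and labs tables from the earlier sketches).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("longitudinal-features").getOrCreate()

# Reuses the cohort and labs tables produced in the earlier sketches (paths illustrative).
cohort = spark.read.parquet("/spe/session/outputs/cohort")
labs = spark.read.parquet("/spe/session/outputs/labs")

# Periodic sampling: one row per cohort member per calendar month with observations.
monthly = (
    labs.join(cohort.select("patient_id"), "patient_id")
        .withColumn("month", F.date_trunc("month", F.col("observed_at")))
        .groupBy("patient_id", "month")
        .agg(
            F.avg("value").alias("mean_value"),
            F.count("*").alias("n_measurements"),
        )
)

monthly.write.mode("overwrite").parquet("/spe/session/outputs/monthly_features")
```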
Typical EHDS secondary-use workflows
Workflow 1
Registry / observational study dataset
1. Define cohort
2. Sample timepoints (monthly or event-based)
3. Compute features and outcomes
4. Produce a curated dataset table for analysis
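A possible closing step for this workflow, joining the monthly features with a simple outcome flag into one curated table, is sketched below. The outcome definition (any inpatient encounter) and all column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-dataset").getOrCreate()

features = spark.read.parquet("/spe/session/outputs/monthly_features")

# Hypothetical outcome: any inpatient encounter during the study period, per patient.
encounters = spark.read.parquet("/spe/permitted/fhir/Encounter")
outcome = (
    encounters.filter(F.col("class_code") == "IMP")
              .select(F.col("subject_patient_id").alias("patient_id"))
              .distinct()
              .withColumn("hospitalised", F.lit(True))
)

# Curated, analysis-ready dataset table: features plus outcome flag.
curated = (
    features.join(outcome, "patient_id", "left")
            .fillna(False, subset=["hospitalised"])
)
curated.write.mode("overwrite").parquet("/spe/session/outputs/curated_dataset")
```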
Workflow 2
Multi-site / cross-domain data preparation
PACKAGES & PRICING
Our packages and pricing options will be published soon…